
How Close Is Too Close? The Role of Feature Attributions in Discovering Counterfactual Explanations

  • Conference paper

Case-Based Reasoning Research and Development (ICCBR 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13405)

Abstract

Counterfactual explanations describe how an outcome can be changed to a more desirable one. In XAI, counterfactuals are “actionable” explanations that help users understand how model decisions can be changed by adapting the features of an input. A case-based approach to counterfactual discovery harnesses nearest-unlike neighbours (NUNs) as the basis for identifying the minimal adaptations needed to change an outcome. This paper presents the DisCERN algorithm, which uses the query, its NUN and substitution-based adaptation operations to create a counterfactual explanation case. DisCERN uses Integrated Gradients (IntG) feature attributions as adaptation knowledge to order the substitution operations and bring about the desired outcome with as few changes as possible. We present a novel use of IntG in which the NUN serves as the baseline against which the feature attributions are calculated. DisCERN also uses feature attributions to identify a NUN closer to the query, thereby minimising the total change needed, although results suggest that the number of feature changes can increase. Overall, DisCERN outperforms other counterfactual algorithms such as DiCE and NICE in generating valid counterfactuals with fewer adaptations.
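The idea described in the abstract can be made concrete with a short sketch: find the query's nearest unlike neighbour, compute feature attributions using the NUN as the Integrated Gradients baseline, then substitute query feature values with NUN values in decreasing order of attribution magnitude until the model's prediction flips. The snippet below is a minimal illustration of that idea, not the authors' implementation; the logistic-regression model, the breast-cancer dataset and the helper names are assumptions chosen so the Integrated Gradients path integral can be approximated directly from the model's weights.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=5000).fit(X, y)

def nearest_unlike_neighbour(query, X, y_pred, query_class):
    # Closest training instance whose predicted class differs from the query's.
    unlike = X[y_pred != query_class]
    return unlike[np.argmin(np.linalg.norm(unlike - query, axis=1))]

def integrated_gradients(query, baseline, w, b, steps=50):
    # Integrated Gradients for a logistic-regression output, with the NUN as
    # the baseline; the path integral is approximated by a Riemann sum.
    grads = np.zeros_like(query)
    for alpha in np.linspace(0.0, 1.0, steps):
        z = baseline + alpha * (query - baseline)
        p = 1.0 / (1.0 + np.exp(-(z @ w + b)))   # model output at this point on the path
        grads += w * p * (1.0 - p)               # gradient of the sigmoid w.r.t. each feature
    return (query - baseline) * grads / steps

def discern_counterfactual(query, X, clf):
    query_class = clf.predict(query.reshape(1, -1))[0]
    nun = nearest_unlike_neighbour(query, X, clf.predict(X), query_class)
    attributions = integrated_gradients(query, nun, clf.coef_[0], clf.intercept_[0])
    cf = query.copy()
    # Substitute query values with NUN values, most relevant feature first,
    # stopping as soon as the predicted class flips.
    for i in np.argsort(-np.abs(attributions)):
        cf[i] = nun[i]
        if clf.predict(cf.reshape(1, -1))[0] != query_class:
            break
    return cf

counterfactual = discern_counterfactual(X[0], X, clf)
print(clf.predict(X[0].reshape(1, -1)), "->", clf.predict(counterfactual.reshape(1, -1)))

The algorithm and evaluation reported in the paper are described in reference 9 and in the repository linked under Notes; this sketch only illustrates the ordering-and-substitution mechanism the abstract refers to.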

This research is funded by the iSee project (https://isee4xai.com), which received funding from EPSRC under grant number EP/V061755/1.


Notes

  1. https://github.com/RGU-Computing/DisCERN-XAI.

References

  1. Li, O., Liu, H., Chen, C., Rudin, C.: Deep learning for case-based reasoning through prototypes: a neural network that explains its predictions. In: 32nd AAAI Conference on Artificial Intelligence, pp. 3530–3537 (2018)

  2. Arrieta, A.B., et al.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020)

  3. Kenny, E.M., Keane, M.T.: Twin-systems to explain artificial neural networks using case-based reasoning: comparative tests of feature-weighting methods in ANN-CBR twins for XAI. In: IJCAI-19, pp. 2708–2715. IJCAI (2019)

  4. Wettschereck, D., Aha, D.W., Mohri, T.: A review and empirical evaluation of feature weighting methods for a class of lazy learning algorithms. Artif. Intell. Rev. 11(1), 273–314 (1997)

  5. Craw, S., Massie, S., Wiratunga, N.: Informed case base maintenance: a complexity profiling approach. In: AAAI, pp. 1618–1621 (2007)

  6. Byrne, R.M.: Counterfactuals in explainable artificial intelligence (XAI): evidence from human reasoning. In: IJCAI, pp. 6276–6282 (2019)

  7. Keane, M.T., Smyth, B.: Good counterfactuals and where to find them: a case-based technique for generating counterfactuals for explainable AI (XAI). In: Watson, I., Weber, R. (eds.) ICCBR 2020. LNCS (LNAI), vol. 12311, pp. 163–178. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58342-2_11

  8. Brughmans, D., Martens, D.: NICE: an algorithm for nearest instance counterfactual explanations. arXiv preprint arXiv:2104.07411 (2021)

  9. Wiratunga, N., Wijekoon, A., Nkisi-Orji, I., Martin, K., Palihawadana, C., Corsar, D.: DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. In: 33rd ICTAI, pp. 1466–1473. IEEE (2021)

  10. Craw, S., Wiratunga, N., Rowe, R.C.: Learning adaptation knowledge to improve CBR. Artif. Intell. 170(16–17), 1175–1192 (2006)

  11. Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. In: International Conference on Machine Learning, pp. 3319–3328. PMLR (2017)

  12. Mothilal, R.K., Sharma, A., Tan, C.: Explaining machine learning classifiers through diverse counterfactual explanations. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 607–617 (2020)

  13. Karimi, A.-H., Barthe, G., Balle, B., Valera, I.: Model-agnostic counterfactual explanations for consequential decisions. In: International Conference on Artificial Intelligence and Statistics, pp. 895–905. PMLR (2020)

  14. Ribeiro, M.T., Singh, S., Guestrin, C.: Why should I trust you? Explaining the predictions of any classifier. In: 22nd ACM SIGKDD, pp. 1135–1144 (2016)

  15. Lundberg, S.M., Lee, S.-I.: A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 30, 4765–4774 (2017)

  16. Li, J., Zhang, C., Zhou, J.T., Fu, H., Xia, S., Hu, Q.: Deep-LIFT: deep label-specific feature learning for image annotation. IEEE Trans. Cybern. (2021)

  17. Qi, Z., Khorram, S., Li, F.: Visualizing deep networks by optimizing with integrated gradients. In: CVPR Workshops, vol. 2 (2019)


Author information


Corresponding author

Correspondence to Anjana Wijekoon.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wijekoon, A., Wiratunga, N., Nkisi-Orji, I., Palihawadana, C., Corsar, D., Martin, K. (2022). How Close Is Too Close? The Role of Feature Attributions in Discovering Counterfactual Explanations. In: Keane, M.T., Wiratunga, N. (eds) Case-Based Reasoning Research and Development. ICCBR 2022. Lecture Notes in Computer Science (LNAI), vol. 13405. Springer, Cham. https://doi.org/10.1007/978-3-031-14923-8_3


  • DOI: https://doi.org/10.1007/978-3-031-14923-8_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-14922-1

  • Online ISBN: 978-3-031-14923-8

  • eBook Packages: Computer Science, Computer Science (R0)
