Abstract
Counterfactual explanation is a post-hoc technique that shows users how to modify input features in order to obtain a desired model decision. This study employs counterfactual explanations to help users understand predictive outcomes and the reasons behind them: by identifying the feature changes that would reverse a model's decision, users can act to resolve their issues. We developed a visual analysis framework combining machine learning algorithms with visual analytics. Our primary contributions are as follows: (1) we establish a visual analysis framework that integrates machine learning algorithms with visual analysis; (2) using counterfactual explanation, we improve model interpretability and help users understand prediction results; (3) we design visualization views according to the visual analysis tasks derived from user needs around machine learning models, including a counterfactual explanation view for operating on individual model decision instances; and (4) we integrate these views into CFEVis, an interactive visual analysis system built on a credit approval data prediction model.
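The core idea described above, perturbing an instance's features until the model's decision flips, can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a generic greedy counterfactual search over a toy logistic "credit approval" model, with hypothetical feature names and weights chosen purely for illustration.

```python
import numpy as np

# Toy "credit approval" model: logistic regression with fixed weights.
# Hypothetical features: [income (10k units), debt_ratio, years_employed]
WEIGHTS = np.array([0.8, -2.0, 0.5])
BIAS = -1.0

def predict_proba(x):
    """Probability of approval for feature vector x."""
    return 1.0 / (1.0 + np.exp(-(WEIGHTS @ x + BIAS)))

def counterfactual(x, target=0.5, step=0.05, max_iter=1000):
    """Greedy search: nudge one feature at a time in the direction that
    most increases approval probability, until the decision flips."""
    cf = x.astype(float).copy()
    for _ in range(max_iter):
        if predict_proba(cf) >= target:
            return cf  # decision flipped: this is the counterfactual
        # Try a +/- step on each feature; keep the best single change.
        best, best_p = None, predict_proba(cf)
        for i in range(len(cf)):
            for delta in (step, -step):
                trial = cf.copy()
                trial[i] += delta
                p = predict_proba(trial)
                if p > best_p:
                    best, best_p = trial, p
        if best is None:
            break  # no single-feature change improves the score
        cf = best
    return cf

x = np.array([2.0, 0.6, 1.0])       # a rejected applicant
cf = counterfactual(x)
print(predict_proba(x) < 0.5)       # original decision: rejected
print(predict_proba(cf) >= 0.5)     # counterfactual: approved
print(np.round(cf - x, 2))          # the suggested feature changes
```

The difference `cf - x` is exactly the kind of actionable feedback a counterfactual explanation view would present to the user (e.g. "reduce your debt ratio slightly to be approved"); the paper's system surfaces such changes interactively rather than via this brute-force search.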
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Lan, H. et al. (2024). Enhancing Model Interpretability Through Interactive Visual Analysis and Counterfactual Explanation Methods. In: Luo, Y. (eds) Cooperative Design, Visualization, and Engineering. CDVE 2024. Lecture Notes in Computer Science, vol 15158. Springer, Cham. https://doi.org/10.1007/978-3-031-71315-6_4
Print ISBN: 978-3-031-71314-9
Online ISBN: 978-3-031-71315-6