Abstract
As opaque decision systems are increasingly adopted across application fields, their lack of transparency and human readability is a concrete concern for end-users. Among the existing proposals to combine human-interpretable knowledge with the accurate predictions of opaque models are rule extraction techniques, which distil symbolic knowledge from such models. However, the quantitative assessment of the quality of the extracted knowledge remains an open issue. For this reason, we provide here a first approach to measuring knowledge quality, encompassing several indicators and yielding a compact score that reflects the readability, completeness and predictive performance associated with a symbolic knowledge representation. We also discuss the main criticalities of our proposal, related to the assessment and evaluation of readability, so as to push future research towards a more robust score formulation.
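To fix ideas, the following is a minimal illustrative sketch of such a compact score, assuming (as a pure assumption, not the formulation proposed in the paper) that the readability, completeness and fidelity indicators are each normalised to \([0, 1]\) and aggregated via a weighted mean; all names, fields and weights below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class ExtractionMetrics:
    """Indicators measured on the knowledge extracted from an opaque model."""
    readability: float   # e.g. decreasing with rule count/length, in [0, 1]
    completeness: float  # share of the input space covered by the rules, in [0, 1]
    fidelity: float      # agreement between the rules and the opaque model, in [0, 1]


def quality_score(m: ExtractionMetrics,
                  w_read: float = 1.0,
                  w_comp: float = 1.0,
                  w_fid: float = 1.0) -> float:
    """Aggregate the indicators into a single compact score via a weighted mean."""
    total = w_read + w_comp + w_fid
    return (w_read * m.readability
            + w_comp * m.completeness
            + w_fid * m.fidelity) / total


# Example: highly faithful and complete, but hardly readable, knowledge
print(quality_score(ExtractionMetrics(readability=0.3,
                                      completeness=0.9,
                                      fidelity=0.95)))
```

A weighted mean keeps the score in \([0, 1]\) and makes the trade-off between readability and predictive performance explicit through the weights.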
Data availability
Data are publicly available.
Notes
We remark that \(pre_i\) and \(post_i\) are the precondition and postcondition, respectively, associated with the i-th rule of the list.
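As a minimal sketch of this notation (with hypothetical feature names and thresholds, not taken from the paper), an ordered rule list can be encoded as precondition/postcondition pairs, where prediction applies the first rule whose precondition holds:

```python
# Illustrative only: each rule i pairs a precondition pre_i (a test on the
# inputs) with a postcondition post_i (the predicted output).
rules = [
    (lambda x: x["petal_length"] < 2.5, "setosa"),      # pre_1 -> post_1
    (lambda x: x["petal_width"] < 1.8,  "versicolor"),  # pre_2 -> post_2
    (lambda x: True,                    "virginica"),   # pre_3 -> post_3 (default)
]


def predict(x: dict) -> str:
    """Apply the first rule of the ordered list whose precondition is satisfied."""
    for pre, post in rules:
        if pre(x):
            return post
    raise ValueError("no applicable rule")


print(predict({"petal_length": 4.0, "petal_width": 1.3}))  # -> versicolor
```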
Acknowledgements
This work has been partially supported by the EU ICT-48 2020 project TAILOR (No. 952215).
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Sabbatini, F., Calegari, R. On the evaluation of the symbolic knowledge extracted from black boxes. AI Ethics 4, 65–74 (2024). https://doi.org/10.1007/s43681-023-00406-1