Abstract
Explainable Artificial Intelligence (XAI) is a relatively new approach to AI that places special emphasis on the ability of machines to give sound motivations for their decisions and behavior. Since XAI is human-centered, it has tight connections with Granular Computing (GrC) in general, and Fuzzy Modeling (FM) in particular. However, although FM was originally conceived to provide easily understandable models to users, this property cannot be taken for granted but requires careful design choices. Furthermore, full integration of FM into XAI requires further processing, such as Natural Language Generation (NLG), which is a matter of current research.
Notes
- 1. The full history has been reported by The New York Times on May 2, 2017, p. A22. See https://nyti.ms/2qoe8FC.
- 4. See note (71) in the preamble of the GDPR. Actually, the GDPR is quite timid in affirming the right to explanation [36], hence the need for more precise regulations on the subject in the future.
References
Alcalá-Fdez, J., Alonso, J.M.: A survey of fuzzy systems software: taxonomy, current research trends, and prospects. IEEE Trans. Fuzzy Syst. 24(1), 40–56 (2016). https://doi.org/10.1109/TFUZZ.2015.2426212
Alonso, J.M., Magdalena, L.: Generating understandable and accurate fuzzy rule-based systems in a Java environment. In: Fanelli, A.M., Pedrycz, W., Petrosino, A. (eds.) WILF 2011. LNCS (LNAI), vol. 6857, pp. 212–219. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-23713-3_27
Alonso, J.M., Conde-Clemente, P., Trivino, G.: Linguistic description of complex phenomena with the rLDCP R package. In: Proceedings of the 10th International Conference on Natural Language Generation, pp. 243–244 (2017)
Alonso, J.M., Magdalena, L.: HILK++: an interpretability-guided fuzzy modeling methodology for learning readable and comprehensible fuzzy rule-based classifiers. Soft Comput. 15(10), 1959–1980 (2011). https://doi.org/10.1007/s00500-010-0628-5
Alonso, J.M., Magdalena, L., González-Rodríguez, G.: Looking for a good fuzzy system interpretability index: an experimental approach. Int. J. Approx. Reason. 51(1), 115–134 (2009). https://doi.org/10.1016/j.ijar.2009.09.004
Alonso, J.M., Magdalena, L., Guillaume, S.: HILK: a new methodology for designing highly interpretable linguistic knowledge bases using the fuzzy logic formalism. Int. J. Intell. Syst. 23(7), 761–794 (2008). https://doi.org/10.1002/int.20288
Alonso, J.M., Castiello, C., Mencar, C.: Interpretability of fuzzy systems: current research trends and prospects. In: Kacprzyk, J., Pedrycz, W. (eds.) Springer Handbook of Computational Intelligence, pp. 219–237. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-43505-2_14
Alonso, J.M., Ramos-Soto, A., Castiello, C., Mencar, C.: Hybrid data-expert explainable beer style classifier. In: IJCAI/ECAI Workshop on Explainable Artificial Intelligence (XAI 2018), pp. 1–5 (2018). https://www.dropbox.com/s/jgzkfws41ulkzxl/proceedings.pdf?dl=0
Bargiela, A., Pedrycz, W.: Human-Centric Information Processing Through Granular Modelling. SCI, vol. 182. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-540-92916-1
Biran, O., Cotton, C.: Explanation and justification in machine learning: a survey. In: Workshop on Explainable AI (XAI), IJCAI 2017, pp. 8–13 (2017). http://www.intelligentrobots.org/files/IJCAI2017/
Bustince, H., Barrenechea, E., Fernández, J., Pagola, M., Montero, J.: The origin of fuzzy extensions. In: Kacprzyk, J., Pedrycz, W. (eds.) Springer Handbook of Computational Intelligence, pp. 89–112. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-43505-2_6
Casillas, J., Cordón, O., Triguero, F.H., Magdalena, L.: Interpretability Issues in Fuzzy Modeling, vol. 128. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-37057-4
Castiello, C., Mencar, C., Lucarelli, M., Rothlauf, F.: Efficiency improvement of DC* through a genetic guidance. In: 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), pp. 1–6. IEEE, Naples, July 2017. https://doi.org/10.1109/FUZZ-IEEE.2017.8015585
Doran, D., Schulz, S., Besold, T.R.: What does explainable AI really mean? A new conceptualization of perspectives. In: Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML 2017 co-located with 16th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2017). CEUR Workshop Proceedings, vol. 2071 (2017). http://ceur-ws.org/Vol-2071/CExAIIA_2017_paper_2.pdf
Fernandez, A., del Jesus, M.J., Cordon, O., Marcelloni, F., Herrera, F.: Evolutionary fuzzy systems for explainable artificial intelligence: why, when, what for, and where to? IEEE Comput. Intell. Mag. 14(1), 69–81 (2019). https://doi.org/10.1109/MCI.2018.2881645
Gacto, M.J., Alcalá, R., Herrera, F.: Interpretability of linguistic fuzzy rule-based systems: an overview of interpretability measures. Inf. Sci. 181(20), 4340–4360 (2011). https://doi.org/10.1016/j.ins.2011.02.021
Gatt, A., Krahmer, E.: Survey of the state of the art in natural language generation: core tasks, applications and evaluation. J. Artif. Intell. Res. 61, 65–170 (2018). https://doi.org/10.1613/jair.5477
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. 51(5), 1–42 (2018). https://doi.org/10.1145/3236009
Guillaume, S., Charnomordic, B.: Learning interpretable fuzzy inference systems with FisPro. Inf. Sci. 181(20), 4409–4427 (2011). https://doi.org/10.1016/j.ins.2011.03.025
John, R., Coupland, S.: Type-2 fuzzy logic: challenges and misconceptions [discussion forum]. IEEE Comput. Intell. Mag. 7(3), 48–52 (2012). https://doi.org/10.1109/MCI.2012.2200632
Magdalena, L.: Do hierarchical fuzzy systems really improve interpretability? In: Medina, J., et al. (eds.) IPMU 2018. CCIS, vol. 853, pp. 16–26. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91473-2_2
Mamdani, E.H., Assilian, S.: An experiment in linguistic synthesis with a fuzzy logic controller. Int. J. Man-Mach. Stud. 7(1), 1–13 (1975). https://doi.org/10.1016/S0020-7373(75)80002-2
Mencar, C., Castiello, C., Cannone, R., Fanelli, A.M.: Design of fuzzy rule-based classifiers with semantic cointension. Inf. Sci. 181(20), 4361–4377 (2011). https://doi.org/10.1016/j.ins.2011.02.014
Mencar, C., Castiello, C., Cannone, R., Fanelli, A.M.: Interpretability assessment of fuzzy knowledge bases: a cointension based approach. Int. J. Approx. Reason. 52(4), 501–518 (2011). https://doi.org/10.1016/j.ijar.2010.11.007
Mencar, C., Fanelli, A.M.: Interpretability constraints for fuzzy information granulation. Inf. Sci. 178(24), 4585–4618 (2008). https://doi.org/10.1016/j.ins.2008.08.015
Mendel, J.: Fuzzy sets for words: a new beginning. In: The 12th IEEE International Conference on Fuzzy Systems, FUZZ 2003, vol. 1, pp. 37–42 (2003). https://doi.org/10.1109/FUZZ.2003.1209334
Michalski, R.S.: A theory and methodology of inductive learning. Artif. Intell. 20, 111–161 (1983). https://doi.org/10.1016/0004-3702(83)90016-4
Minsky, M.: Society of Mind. Simon and Schuster, New York (1988)
Pinker, S.: How the Mind Works, vol. 882. Wiley/Blackwell (1999). https://doi.org/10.1111/j.1749-6632.1999.tb08538.x
Razak, T.R., Garibaldi, J.M., Wagner, C., Pourabdollah, A., Soria, D.: Interpretability indices for hierarchical fuzzy systems. In: Proceedings of IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2017) (2017). https://doi.org/10.1109/FUZZ-IEEE.2017.8015616
Revell, T.: Computer says “no comment”. New Sci. 238(3173), 40–43 (2018). https://doi.org/10.1016/S0262-4079(18)30664-X
Schacter, D.L., Gilbert, D.T., Wegner, D.M.: Psychology, 2nd edn. Worth, New York (2011)
Sugeno, M., Kang, G.: Structure identification of fuzzy model. Fuzzy Sets Syst. 28(1), 15–33 (1988). https://doi.org/10.1016/0165-0114(88)90113-3
Takagi, T., Sugeno, M.: Fuzzy identification of systems and its applications to modeling and control. IEEE Trans. Syst. Man Cybern. SMC–15(1), 116–132 (1985). https://doi.org/10.1109/TSMC.1985.6313399
Trivino, G., Sugeno, M.: Towards linguistic descriptions of phenomena. Int. J. Approx. Reason. 54(1), 22–34 (2013). https://doi.org/10.1016/j.ijar.2012.07.004
Wachter, S., Mittelstadt, B., Floridi, L.: Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. Int. Data Priv. Law 7(2), 76–99 (2017). https://doi.org/10.1093/idpl/ipx005
Wang, Y.: On cognitive informatics. Brain Mind 4(2), 151–167 (2003). https://doi.org/10.1023/A:1025401527570
Yao, Y.: The rise of granular computing. J. Chongqing Univ. Posts Telecommun. Nat. Sci. Ed. 20(3), 299–308 (2008)
Yao, Y.: A triarchic theory of granular computing. Granul. Comput. 1(2), 145–157 (2016). https://doi.org/10.1007/s41066-015-0011-0
Zadeh, L.A.: Information granulation and its centrality in human and machine intelligence. In: 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, vol. 1, pp. 486–487, October 1997. https://doi.org/10.1109/ICSMC.1997.625798
Zadeh, L.A.: From computing with numbers to computing with words. From manipulation of measurements to manipulation of perceptions. IEEE Trans. Circ. Syst. I: Fundam. Theory Appl. 46(1), 105–119 (1999). https://doi.org/10.1109/81.739259
Zadeh, L.A.: A new direction in AI: toward a computational theory of perceptions. AI Mag. 22(1), 73–84 (2001). https://doi.org/10.1609/aimag.v22i1.1545
Zadeh, L.A.: Is there a need for fuzzy logic? Inf. Sci. 178(13), 2751–2779 (2008). https://doi.org/10.1016/j.ins.2008.02.012
Zadeh, L.A.: Toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets Syst. 90(2), 111–127 (1997). https://doi.org/10.1016/S0165-0114(97)00077-8
Zhong, N., et al.: Web intelligence meets brain informatics. In: Zhong, N., Liu, J., Yao, Y., Wu, J., Lu, S., Li, K. (eds.) WImBI 2006. LNCS (LNAI), vol. 4845, pp. 1–31. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-77028-2_1
Acknowledgments
Supported by the Spanish “Ministerio de Economía y Competitividad” through the Ramón y Cajal Program (RYC-2016-19802).
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Mencar, C., Alonso, J.M. (2019). Paving the Way to Explainable Artificial Intelligence with Fuzzy Modeling. In: Fullér, R., Giove, S., Masulli, F. (eds.) Fuzzy Logic and Applications. WILF 2018. Lecture Notes in Computer Science, vol. 11291. Springer, Cham. https://doi.org/10.1007/978-3-030-12544-8_17
DOI: https://doi.org/10.1007/978-3-030-12544-8_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-12543-1
Online ISBN: 978-3-030-12544-8