Fanoos: Multi-resolution, Multi-strength, Interactive Explanations for Learned Systems

  • Conference paper
Verification, Model Checking, and Abstract Interpretation (VMCAI 2022)

Abstract

Machine learning is increasingly used to control the behavior of safety- and financially-critical components in sophisticated environments, where the inability to understand learned components in general, and neural nets in particular, poses serious obstacles to their adoption. Explainability and interpretability methods for learned systems have gained considerable academic attention, but current approaches focus on only one aspect of explanation, at a fixed level of abstraction, and offer limited, if any, formal guarantees; this prevents their explanations from being digestible by the relevant stakeholders (e.g., end users, certification authorities, engineers), with their diverse backgrounds and situation-specific needs. We introduce Fanoos, a framework that combines formal verification techniques, heuristic search, and user interaction to explore explanations at the desired level of granularity and fidelity. We demonstrate the ability of Fanoos to produce and adjust the abstractness of explanations in response to user requests on a learned controller for an inverted double pendulum and on a learned CPU usage model.

This material is based upon work supported by the United States Air Force and DARPA under Contract No. FA8750-18-C-0092. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force and DARPA.


Notes

  1. “Fanoos” means lantern in Farsi. Our approach shines a light on black-box AI. Source code can be found at [7], and an extended exposition is in [8].

  2. In the case of interval arithmetic, this over-approximation and inclusion of additional elements is often called the “wrapping effect” [42]; a small illustration appears after these notes.

  3. Elements of our abstract state refinement algorithm may be analogous to CEGAR and its standard extensions; for instance, we perform sampling-based feasibility checks prior to SAT-checks (sketched after these notes), which may be comparable to spuriousness checks in CEGAR. However, to avoid implying a stringent adherence to canon (i.e., [16] verbatim), we use a different name.

  4. Strictly speaking, we could discuss a box containing \(S_I\) (i.e., a superset), but introducing an auxiliary, potentially larger definition domain might add confusion while giving little benefit.

  5. Setting thresholds low enough to ensure each transaction is described would result in a deluge of highly redundant, low-precision rules with little practical value, a phenomenon known as the “rare itemset problem” [50].

  6. Dataset at https://www.openml.org/api/v1/json/data/562.
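
A minimal illustration of the wrapping effect mentioned in note 2 (our sketch, not code from the paper): evaluating a 45-degree rotation of the box [-1, 1] x [-1, 1] with interval arithmetic yields the axis-aligned enclosure [-sqrt(2), sqrt(2)] x [-sqrt(2), sqrt(2)] of area 8, although the true image is merely the original square rotated, still of area 4; the enclosure "wraps in" infeasible corner points.

    # Wrapping effect [42] demonstrated with hand-rolled interval arithmetic.
    import math

    def iv_add(a, b):
        # [a1, a2] + [b1, b2] = [a1 + b1, a2 + b2]
        return (a[0] + b[0], a[1] + b[1])

    def iv_scale(c, a):
        # c * [a1, a2]; min/max handles negative scalars
        lo, hi = c * a[0], c * a[1]
        return (min(lo, hi), max(lo, hi))

    c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
    x = y = (-1.0, 1.0)

    # Rotation by theta: x' = x cos(theta) - y sin(theta); y' = x sin(theta) + y cos(theta)
    x_new = iv_add(iv_scale(c, x), iv_scale(-s, y))
    y_new = iv_add(iv_scale(s, x), iv_scale(c, y))

    print(x_new, y_new)  # both approx (-1.4142, 1.4142): an enclosing box of area 8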
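
Likewise for note 3, a hypothetical sketch of the check ordering it describes, with Z3 [19] standing in as the solver: random samples are tried first, since any satisfying sample proves feasibility cheaply, and the complete (and costlier) SMT query runs only when sampling finds no witness. The function and parameter names below are illustrative stand-ins, not Fanoos's actual interface.

    # Sampling-based feasibility check before an SMT call (illustrative only).
    import random
    from z3 import And, Real, Solver, sat

    def feasible(box, predicate, z3_predicate, samples=100):
        # box: dict mapping variable name -> (lower, upper) interval bounds
        # Phase 1: cheap random sampling inside the box.
        for _ in range(samples):
            point = {k: random.uniform(lo, hi) for k, (lo, hi) in box.items()}
            if predicate(point):
                return True  # witness found; no solver call needed
        # Phase 2: complete check with Z3 over the same box.
        zvars = {k: Real(k) for k in box}
        solver = Solver()
        for k, (lo, hi) in box.items():
            solver.add(And(zvars[k] >= lo, zvars[k] <= hi))
        solver.add(z3_predicate(zvars))
        return solver.check() == sat

    # Example: is x + y >= 1.9 feasible inside [0, 1] x [0, 1]?
    box = {"x": (0.0, 1.0), "y": (0.0, 1.0)}
    print(feasible(box,
                   lambda p: p["x"] + p["y"] >= 1.9,
                   lambda v: v["x"] + v["y"] >= 1.9))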

References

  1. Adadi, A., Berrada, M.: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160 (2018)

  2. Adam, S.P., Karras, D.A., Magoulas, G.D., Vrahatis, M.N.: Reliable estimation of a neural network’s domain of validity through interval analysis based inversion. In: 2015 International Joint Conference on Neural Networks, IJCNN 2015, Killarney, Ireland, 12–17 July 2015, pp. 1–8 (2015). https://doi.org/10.1109/IJCNN.2015.7280794

  3. Agrawal, R., Imieliński, T., Swami, A.: Mining association rules between sets of items in large databases. In: Proceedings of the 1993 ACM SIGMOD International Conference on Management of Data, Washington, DC, USA, 26–28 May 1993, vol. 22, pp. 207–216. ACM (1993)

  4. Andrews, R., Diederich, J., Tickle, A.: Survey and critique of techniques for extracting rules from trained artificial neural networks. Knowl.-Based Syst. 6, 373–389 (1995). https://doi.org/10.1016/0950-7051(96)81920-4

  5. Anjomshoae, S., Najjar, A., Calvaresi, D., Främling, K.: Explainable agents and robots: results from a systematic literature review. In: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 1078–1088. International Foundation for Autonomous Agents and Multiagent Systems (2019)

  6. Arya, V., et al.: One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. CoRR abs/1909.03012 (2019)

  7. Bayani, D.: Code for Fanoos: multi-resolution, multi-strength, interactive explanations for learned systems (2021). https://doi.org/10.5281/zenodo.5513079. Distributed via Zenodo, October 2021

  8. Bayani, D., Mitsch, S.: Fanoos: multi-resolution, multi-strength, interactive explanations for learned systems. CoRR abs/2006.12453 (2020)

  9. Benz, A., Jäger, G., Van Rooij, R.: Game Theory and Pragmatics. Springer, Heidelberg (2005). https://doi.org/10.1057/9780230285897

  10. Biere, A., Cimatti, A., Clarke, E.M., Strichman, O., Zhu, Y., et al.: Bounded model checking. Adv. Comput. 58(11), 117–148 (2003)

  11. Biran, O., Cotton, C.: Explanation and justification in machine learning: a survey. In: IJCAI-17 Workshop on Explainable AI (XAI), vol. 8, p. 1 (2017)

  12. Bunel, R., Turkaslan, I., Torr, P.H.S., Kohli, P., Mudigonda, P.K.: A unified view of piecewise linear neural network verification. In: Bengio, S., Wallach, H.M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3–8 December 2018, Montréal, Canada, pp. 4795–4804 (2018)

  13. Chakraborti, T., Kulkarni, A., Sreedharan, S., Smith, D.E., Kambhampati, S.: Explicability? Legibility? Predictability? Transparency? Privacy? Security? The emerging landscape of interpretable agent behavior. In: Proceedings of the International Conference on Automated Planning and Scheduling, vol. 29, pp. 86–96 (2019)

  14. Chuang, J., Ramage, D., Manning, C., Heer, J.: Interpretation and trust: designing model-driven visualizations for text analysis. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 443–452. ACM (2012)

  15. Clarke, E., Fehnker, A., Han, Z., Krogh, B., Stursberg, O., Theobald, M.: Verification of hybrid systems based on counterexample-guided abstraction refinement. In: Garavel, H., Hatcliff, J. (eds.) TACAS 2003. LNCS, vol. 2619, pp. 192–207. Springer, Heidelberg (2003). https://doi.org/10.1007/3-540-36577-X_14

  16. Clarke, E., Grumberg, O., Jha, S., Lu, Y., Veith, H.: Counterexample-guided abstraction refinement. In: Emerson, E.A., Sistla, A.P. (eds.) CAV 2000. LNCS, vol. 1855, pp. 154–169. Springer, Heidelberg (2000). https://doi.org/10.1007/10722167_15

  17. Cousot, P., Cousot, R.: Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: Proceedings of the 4th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages, pp. 238–252 (1977)

  18. David, Q.: Design issues in adaptive control. IEEE Trans. Autom. Control 33(1), 50–58 (1988)

  19. de Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78800-3_24

  20. Driescher, A., Korn, U.: Checking stability of neural NARX models: an interval approach. IFAC Proc. Vol. 30(6), 1005–1010 (1997)

  21. Dvijotham, K., Stanforth, R., Gowal, S., Mann, T.A., Kohli, P.: A dual approach to scalable verification of deep networks. In: Globerson, A., Silva, R. (eds.) Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, UAI 2018, Monterey, California, USA, 6–10 August 2018, pp. 550–559. AUAI Press (2018)

  22. Ehlers, R.: Formal verification of piece-wise linear feed-forward neural networks. In: D’Souza, D., Narayan Kumar, K. (eds.) ATVA 2017. LNCS, vol. 10482, pp. 269–286. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68167-2_19

  23. Etchells, T.A., Lisboa, P.J.: Orthogonal search-based rule extraction (OSRE) for trained neural networks: a practical and efficient approach. IEEE Trans. Neural Netw. 17(2), 374–384 (2006)

  24. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing directive 95/46/EC (General Data Protection Regulation) (2016)

  25. Fern, A.: Don’t get fooled by explanations. Invited Talk, IJCAI-XAI (2020). Recording at: https://ijcai20.org/w41/. Schedule at: https://sites.google.com/view/xai2020/home

  26. Floridi, L.: The method of levels of abstraction. Mind. Mach. 18(3), 303–329 (2008)

  27. Friedrich, G., Zanker, M.: A taxonomy for generating explanations in recommender systems. AI Mag. 32(3), 90–98 (2011)

  28. García, J., Fernández, F.: A comprehensive survey on safe reinforcement learning. J. Mach. Learn. Res. 16(1), 1437–1480 (2015)

  29. Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., Vechev, M.T.: AI2: safety and robustness certification of neural networks with abstract interpretation. In: 2018 IEEE Symposium on Security and Privacy, SP 2018, Proceedings, 21–23 May 2018, San Francisco, California, USA, pp. 3–18. IEEE Computer Society (2018). https://doi.org/10.1109/SP.2018.00058

  30. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. (CSUR) 51(5), 93 (2019)

  31. Gunning, D., Aha, D.: DARPA’s explainable artificial intelligence (XAI) program. AI Mag. 40(2), 44–58 (2019). https://doi.org/10.1609/aimag.v40i2.2850

  32. Hailesilassie, T.: Rule extraction algorithm for deep neural networks: a review. arXiv preprint arXiv:1610.05267 (2016)

  33. Han, J., Fu, Y.: Discovery of multiple-level association rules from large databases. In: VLDB, vol. 95, pp. 420–431. Citeseer (1995)

  34. Han, J., Fu, Y.: Mining multiple-level association rules in large databases. IEEE Trans. Knowl. Data Eng. 11(5), 798–805 (1999)

  35. Hayes, B., Scassellati, B.: Autonomously constructing hierarchical task networks for planning and human-robot collaboration. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 5469–5476. IEEE (2016)

  36. Hayes, B., Shah, J.A.: Improving robot controller transparency through autonomous policy explanation. In: 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 303–312. IEEE (2017)

  37. Hipp, J., Güntzer, U., Nakhaeizadeh, G.: Algorithms for association rule mining - a general survey and comparison. SIGKDD Explor. 2(1), 58–64 (2000)

  38. Hostetler, J., Fern, A., Dietterich, T.G.: Progressive abstraction refinement for sparse sampling. In: Meila, M., Heskes, T. (eds.) Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, UAI 2015, 12–16 July 2015, Amsterdam, The Netherlands, pp. 365–374. AUAI Press (2015)

  39. Huang, S.H., Held, D., Abbeel, P., Dragan, A.D.: Enabling robots to communicate their objectives. Auton. Robot. 43(2), 309–326 (2019). https://doi.org/10.1007/s10514-018-9771-0

  40. Huang, X., Kwiatkowska, M., Wang, S., Wu, M.: Safety verification of deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 3–29. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_1

  41. Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017, Part I. LNCS, vol. 10426, pp. 97–117. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_5

  42. Kearfott, R.B.: Interval computations: introduction, uses, and resources. Euromath Bull. 2(1), 95–112 (1996)

  43. Kellert, S.H.: In the Wake of Chaos: Unpredictable Order in Dynamical Systems. University of Chicago Press (1993)

  44. Kim, J., Rohrbach, A., Darrell, T., Canny, J.F., Akata, Z.: Textual explanations for self-driving vehicles (2018). https://doi.org/10.1007/978-3-030-01216-8_35

  45. Koul, A., Fern, A., Greydanus, S.: Learning finite state representations of recurrent policy networks. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 6–9 May 2019. OpenReview.net (2019)

  46. Levien, R., Tan, S.: Double pendulum: an experiment in chaos. Am. J. Phys. 61(11), 1038–1044 (1993)

  47. Lipton, Z.C.: The mythos of model interpretability. arXiv preprint arXiv:1606.03490 (2016)

  48. Liskov, B.: Keynote address - data abstraction and hierarchy. In: Addendum to the Proceedings on Object-Oriented Programming Systems, Languages and Applications (Addendum), pp. 17–34 (1987)

  49. Liskov, B.H., Wing, J.M.: A behavioral notion of subtyping. ACM Trans. Program. Lang. Syst. (TOPLAS) 16(6), 1811–1841 (1994)

  50. Liu, B., Hsu, W., Ma, Y.: Mining association rules with multiple minimum supports. In: Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 337–341 (1999)

  51. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2018)

  52. Miller, T., Howe, P., Sonenberg, L.: Explainable AI: beware of inmates running the asylum or: how I learnt to stop worrying and love the social and behavioural sciences. arXiv preprint arXiv:1712.00547 (2017)

  53. Mirman, M., Gehr, T., Vechev, M.: Differentiable abstract interpretation for provably robust neural networks. In: International Conference on Machine Learning, pp. 3575–3583 (2018)

  54. Mohseni-Kabir, A., Rich, C., Chernova, S., Sidner, C.L., Miller, D.: Interactive hierarchical task learning from a single demonstration. In: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, pp. 205–212. ACM (2015)

  55. Moore, R.E.: Interval Analysis, vol. 4. Prentice-Hall, Englewood Cliffs (1966)

  56. Muggleton, S.: Inductive logic programming: issues, results and the challenge of learning language in logic. Artif. Intell. 114(1–2), 283–296 (1999)

  57. Murdoch, W.J., Szlam, A.: Automatic rule extraction from long short term memory networks. arXiv preprint arXiv:1702.02540 (2017)

  58. Neema, S.: Assured autonomy (2017). https://www.darpa.mil/attachments/AssuredAutonomyProposersDay_Program%20Brief.pdf

  59. Papadimitriou, A., Symeonidis, P., Manolopoulos, Y.: A generalized taxonomy of explanations styles for traditional and social recommender systems. Data Min. Knowl. Disc. 24(3), 555–583 (2012)

  60. Perera, V., Selvaraj, S.P., Rosenthal, S., Veloso, M.M.: Dynamic generation and refinement of robot verbalization. In: 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016, New York, NY, USA, 26–31 August 2016, pp. 212–218. IEEE (2016). https://doi.org/10.1109/ROMAN.2016.7745133

  61. Pulina, L., Tacchella, A.: An abstraction-refinement approach to verification of artificial neural networks. In: Touili, T., Cook, B., Jackson, P. (eds.) CAV 2010. LNCS, vol. 6174, pp. 243–257. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14295-6_24

  62. Raffin, A.: RL baselines zoo (2018). https://web.archive.org/web/20190524144858/https://github.com/araffin/rl-baselines-zoo

  63. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. In: Krishnapuram, B., Shah, M., Smola, A.J., Aggarwal, C.C., Shen, D., Rastogi, R. (eds.) Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016, pp. 1135–1144. ACM (2016). https://doi.org/10.1145/2939672.2939778

  64. Richardson, A., Rosenfeld, A.: A survey of interpretability and explainability in human-agent systems. In: XAI Workshop on Explainable Artificial Intelligence, pp. 137–143 (2018)

  65. Roberts, M., et al.: What was I planning to do. In: ICAPS Workshop on Explainable Planning, pp. 58–66 (2018)

  66. Rosenthal, S., Biswas, J., Veloso, M.: An effective personal mobile robot agent through symbiotic human-robot interaction. In: Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, vol. 1, pp. 915–922. International Foundation for Autonomous Agents and Multiagent Systems (2010)

  67. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017)

  68. Setiono, R., Liu, H.: Understanding neural networks via rule extraction. In: IJCAI, vol. 1, pp. 480–485 (1995)

  69. Singh, G., Ganvir, R., Püschel, M., Vechev, M.T.: Beyond the single neuron convex barrier for neural network certification. In: Wallach, H.M., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E.B., Garnett, R. (eds.) Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8–14 December 2019, Vancouver, BC, Canada, pp. 15072–15083 (2019)

  70. Sreedharan, S., Madhusoodanan, M.P., Srivastava, S., Kambhampati, S.: Plan explanation through search in an abstract model space. In: International Conference on Automated Planning and Scheduling (ICAPS) Workshop on Explainable Planning, pp. 67–75 (2018)

  71. Srikant, R., Agrawal, R.: Mining generalized association rules. In: Dayal, U., Gray, P.M.D., Nishio, S. (eds.) VLDB 1995, Proceedings of 21st International Conference on Very Large Data Bases, 11–15 September 1995, Zurich, Switzerland, pp. 407–419. Morgan Kaufmann (1995)

  72. International Organization for Standardization: ISO/IEC 7498-1:1994, Information technology - Open Systems Interconnection - Basic Reference Model: The Basic Model (1996)

  73. Taylor, B.J., Darrah, M.A.: Rule extraction as a formal method for the verification and validation of neural networks. In: Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, vol. 5, pp. 2915–2920. IEEE (2005)

  74. Tennent, R.D.: The denotational semantics of programming languages. Commun. ACM 19(8), 437–453 (1976). https://doi.org/10.1145/360303.360308

  75. Thrun, S.: Extracting rules from artificial neural networks with distributed representations. In: Tesauro, G., Touretzky, D.S., Leen, T.K. (eds.) Advances in Neural Information Processing Systems 7, NIPS Conference 1994, Denver, Colorado, USA, pp. 505–512. MIT Press (1994)

  76. Tjeng, V., Xiao, K., Tedrake, R.: Verifying neural networks with mixed integer programming. CoRR abs/1711.07356 (2017). http://arxiv.org/abs/1711.07356

  77. Towell, G.G., Shavlik, J.W.: Extracting refined rules from knowledge-based neural networks. Mach. Learn. 13(1), 71–101 (1993)

  78. Vanschoren, J., van Rijn, J.N., Bischl, B., Torgo, L.: OpenML: networked science in machine learning. SIGKDD Explor. 15(2), 49–60 (2013). https://doi.org/10.1145/2641190.2641198

  79. Veloso, M.M., Biswas, J., Coltin, B., Rosenthal, S.: CoBots: robust symbiotic autonomous mobile service robots. In: IJCAI, p. 4423 (2015)

  80. Ventocilla, E., et al.: Towards a taxonomy for interpretable and interactive machine learning. In: XAI Workshop on Explainable Artificial Intelligence, pp. 151–157 (2018)

  81. Walter, E., Jaulin, L.: Guaranteed characterization of stability domains via set inversion. IEEE Trans. Autom. Control 39(4), 886–889 (1994). https://doi.org/10.1109/9.286277

  82. Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Formal security analysis of neural networks using symbolic intervals. In: 27th USENIX Security Symposium, USENIX Security 2018, Baltimore, MD, USA, 15–17 August 2018, pp. 1599–1614 (2018)

  83. Wellman, H.M., Lagattuta, K.H.: Theory of mind for learning and teaching: the nature and role of explanation. Cogn. Dev. 19(4), 479–497 (2004)

  84. Wen, W., Callahan, J.: Neuralware engineering: develop verifiable ANN-based systems. In: Proceedings IEEE International Joint Symposia on Intelligence and Systems, pp. 60–66. IEEE (1996)

  85. Wen, W., Callahan, J., Napolitano, M.: Towards developing verifiable neural network controller. Technical report (1996)

  86. Wen, W., Callahan, J., Napolitano, M.: Verifying stability of dynamic soft-computing systems. Technical report NASA-IVV-97-002, WVU-CS-TR-97-005, NASA/CR-97-207032, WVU-IVV-97-002 (1997)

  87. Weng, L., et al.: Towards fast computation of certified robustness for ReLU networks. In: Dy, J., Krause, A. (eds.) Proceedings of the 35th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 80, pp. 5276–5285. PMLR, 10–15 July 2018

  88. Xiang, W., Johnson, T.T.: Reachability analysis and safety verification for neural network control systems. CoRR abs/1805.09944 (2018)

  89. Yasmin, M., Sharif, M., Mohsin, S.: Neural networks in medical imaging applications: a survey. World Appl. Sci. J. 22(1), 85–96 (2013)

Acknowledgments

We thank: Nicholay Topin for supporting our spirits at some key junctures of this work; David Held for pointing us to the rl-baselines-zoo repository; David Eckhardt for his proof-reading of earlier versions of this document; the anonymous reviewers for their thoughtful feedback.

Author information

Corresponding authors

Correspondence to David Bayani or Stefan Mitsch.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Bayani, D., Mitsch, S. (2022). Fanoos: Multi-resolution, Multi-strength, Interactive Explanations for Learned Systems. In: Finkbeiner, B., Wies, T. (eds) Verification, Model Checking, and Abstract Interpretation. VMCAI 2022. Lecture Notes in Computer Science, vol 13182. Springer, Cham. https://doi.org/10.1007/978-3-030-94583-1_3

  • DOI: https://doi.org/10.1007/978-3-030-94583-1_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-94582-4

  • Online ISBN: 978-3-030-94583-1

  • eBook Packages: Computer Science, Computer Science (R0)
