Abstract
Neural network (NN) verification is a problem that has drawn the attention of many researchers. The specific nature of neural networks does away with the conventional assumption that a static program is given for verification: with NNs, multiple models can be used, and if one fails verification, a new one can be trained. This leads to an approach called continuous verification, referring to the loop between training and verification. One tactic for improving a network’s performance is through “constraint-based loss functions”: a method that uses a differentiable logic (DL) to translate logical constraints into loss functions, which can then be used to train the network specifically to satisfy those constraints. In this paper we present a uniform way of defining a translation from logic syntax to a differentiable loss function, and then examine and compare the existing DLs. We explore the mathematical properties desired in such translations and discuss the design space, identifying possible directions for future work.
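To make the idea of a constraint-based loss function concrete, the sketch below shows one possible translation in PyTorch, in the style of the DL2 differentiable logic (Fischer et al., 2019): atomic inequality constraints map to non-negative penalty terms that are zero exactly when the constraint holds, and logical connectives map to arithmetic over those terms. This is a minimal illustration, not the paper’s formal translation; the helper names leq, conj and disj are ours, not part of any library.

```python
# Minimal DL2-style sketch: compile a logical constraint into a
# differentiable loss. Loss is 0 iff the constraint is satisfied.
import torch

def leq(t1, t2):
    # Atomic constraint t1 <= t2: penalty is the violation amount.
    return torch.clamp(t1 - t2, min=0.0)

def conj(l1, l2):
    # Conjunction: both losses must vanish, so sum them.
    return l1 + l2

def disj(l1, l2):
    # Disjunction: it suffices that one loss vanishes, so multiply.
    return l1 * l2

# Example constraint on a network output y: 0 <= y and y <= 1.
y = torch.tensor(1.3, requires_grad=True)
loss = conj(leq(torch.tensor(0.0), y), leq(y, torch.tensor(1.0)))
loss.backward()  # gradient steers y back towards [0, 1]
print(loss.item(), y.grad.item())  # ~0.3 (violation), 1.0
```

A fuzzy-logic-based DL would instead interpret the connectives over truth values in [0, 1], e.g. Łukasiewicz conjunction max(0, a + b − 1); the translation presented in the paper abstracts over such choices.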
Acknowledgement
The authors acknowledge support of the EPSRC grant AISEC (EP/T026952/1) and the NCSC grant “Neural Network Verification: in search of the missing spec”.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Ślusarz, N., Komendantskaya, E., Daggitt, M.L., Stewart, R. (2022). Differentiable Logics for Neural Network Training and Verification. In: Isac, O., Ivanov, R., Katz, G., Narodytska, N., Nenzi, L. (eds) Software Verification and Formal Methods for ML-Enabled Autonomous Systems. NSV FoMLAS 2022. Lecture Notes in Computer Science, vol 13466. Springer, Cham. https://doi.org/10.1007/978-3-031-21222-2_5
DOI: https://doi.org/10.1007/978-3-031-21222-2_5
Print ISBN: 978-3-031-21221-5
Online ISBN: 978-3-031-21222-2