Abstract
The wide adoption of machine learning in industry, government, medicine, and science has renewed interest in interpretable machine learning: many decisions are too important to be delegated to black-box techniques such as deep neural networks or kernel SVMs. Historically, the problem of learning interpretable classifiers, including classification rules and decision trees, has been approached with greedy heuristics, since essentially all exact optimization formulations are NP-hard. Our primary contribution is a MaxSAT-based framework, called \(\mathcal {MLIC}\), which allows a principled search for interpretable classification rules expressible in propositional logic. Our approach benefits from revolutionary advances in the constraint-satisfaction community that make it possible to solve large-scale instances of such problems. In experimental evaluations over a collection of benchmarks arising from practical scenarios, we demonstrate its effectiveness: the formulation can solve large classification problems with tens or hundreds of thousands of examples and thousands of features, and it provides a tunable balance between accuracy and interpretability. Furthermore, we show that in many problems interpretability can be obtained at only a minor cost in accuracy.
The primary objective of the paper is to show that recent advances in the MaxSAT literature make it realistic to find optimal (or very high-quality near-optimal) solutions to large-scale classification problems. We also hope to encourage researchers in both the interpretable-classification and constraint-programming communities to take this further and develop richer formulations and bespoke solvers attuned to the problem of interpretable machine learning.
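To make the abstract's claim concrete, the sketch below shows how learning even a single conjunctive rule can be posed as a weighted partial MaxSAT instance, in the spirit of the paper's encoding: selector variables choose which features enter the rule, noise variables absorb misclassified examples, and soft clauses trade rule size against errors. It uses the off-the-shelf RC2 solver from the PySAT library; all names, the weight scheme, and the single-clause restriction are illustrative assumptions on our part, not the authors' implementation (see note 5 below for their source code).

```python
# A minimal sketch, not the authors' code: learning one conjunctive rule
# AND_{j in S} x_j over binary features as weighted partial MaxSAT.
# Requires the PySAT library (pip install python-sat).
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

def learn_clause(X, y, lam=10):
    n, d = len(X), len(X[0])
    b = list(range(1, d + 1))            # b[j]: feature j is in the rule
    eta = list(range(d + 1, d + n + 1))  # eta[i]: example i may be an error
    wcnf = WCNF()
    for i, (x, label) in enumerate(zip(X, y)):
        zeros = [j for j in range(d) if x[j] == 0]
        if label == 1:
            # Cover positive example i, or pay the noise penalty:
            # hard clause (eta_i OR NOT b_j) for each feature that is 0 in x.
            for j in zeros:
                wcnf.append([eta[i], -b[j]])
        else:
            # Reject negative example i, or pay the noise penalty:
            # hard clause (eta_i OR some selected feature is 0 in x).
            wcnf.append([eta[i]] + [b[j] for j in zeros])
        wcnf.append([-eta[i]], weight=lam)   # each error costs lam
    for j in range(d):
        wcnf.append([-b[j]], weight=1)       # each literal in the rule costs 1
    model = set(RC2(wcnf).compute())
    return [j for j in range(d) if b[j] in model]

# Toy usage: the true rule is (x0 AND x2), with one deliberately noisy label.
X = [[1, 0, 1], [1, 1, 1], [0, 1, 1], [1, 1, 0], [1, 0, 1]]
y = [1, 1, 0, 0, 0]                      # last label flipped on purpose
print(learn_clause(X, y))                # expected: [0, 2]
```

Raising `lam` makes errors costlier relative to rule size, which is exactly the accuracy-versus-interpretability knob the abstract describes; the paper's full formulation generalizes this single clause to multi-clause rules.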
Change history
13 February 2019
In the original version of this chapter there was a typographical error in the family name of the first author. This has now been corrected.
Notes
- 1. We discuss in Sect. 3 that such a restriction can be achieved without loss of generality.
- 2. The framework proposed in this paper allows generalization to other forms of rules, as we discuss in Sect. 3.6.
- 3. Cost-sensitive classification is defined analogously by allowing a separate parameter each for false positives and false negatives; one possible instantiation is sketched after this list.
- 4. A detailed evaluation among different MaxSAT solvers is beyond the scope of this work and is left for future work.
- 5. The source code of \(\mathcal {MLIC}\) and the benchmarks are available at https://github.com/meelgroup/mlic.
- 6. Note, however, that the objective functions for the integer program and the LP relaxation in these papers are not the same as sparsity-penalized cost-sensitive classification error.
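For concreteness, here is one plausible reading of note 3, written in our own notation (the paper's exact objective is given in Sect. 3): rule size \(|R|\) is penalized alongside separately weighted false positives and false negatives.

```latex
% Illustrative cost-sensitive objective (our notation, not verbatim from
% the paper): \lambda_+ prices false positives, \lambda_- prices false
% negatives, and |R| measures rule size (number of literals).
\min_{R}\; |R|
  \;+\; \lambda_{+}\,\bigl|\{\, i : y_i = 0,\ R(x_i) = 1 \,\}\bigr|
  \;+\; \lambda_{-}\,\bigl|\{\, i : y_i = 1,\ R(x_i) = 0 \,\}\bigr|
```

Setting \(\lambda_{+} = \lambda_{-}\) recovers the single-parameter, cost-insensitive variant.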
Acknowledgements
This work was supported in part by the NUS ODPRT grant R-252-000-685-133 and an IBM PhD Fellowship. The computational work for this article was performed on resources of the National Supercomputing Centre, Singapore (https://www.nscc.sg).
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Malioutov, D., Meel, K.S. (2018). MLIC: A MaxSAT-Based Framework for Learning Interpretable Classification Rules. In: Hooker, J. (ed.) Principles and Practice of Constraint Programming (CP 2018). Lecture Notes in Computer Science, vol. 11008. Springer, Cham. https://doi.org/10.1007/978-3-319-98334-9_21
DOI: https://doi.org/10.1007/978-3-319-98334-9_21
Print ISBN: 978-3-319-98333-2
Online ISBN: 978-3-319-98334-9