Abstract
Various commercial and open-source tools, developed by both industry and academic groups, can detect various types of security bugs in application source code. However, most of these tools suffer from non-negligible rates of false positives and false negatives, since they are designed to detect a priori specified types of bugs, and their analysis often does not scale to large programs. To address these problems, we present a new source code analysis technique based on execution path classification, and we develop a prototype tool to test the method's ability to detect different types of information-flow-dependent bugs. Our approach classifies the Risk of likely exploits in source code execution paths using two measuring functions: Severity and Vulnerability. For an Application Under Test (AUT), we analyze every pair of input vector and program sink in an execution path, which we call an Information Block (IB). Severity quantifies the danger level of an IB using static analysis and a variation of the Information Gain algorithm, whereas an IB's Vulnerability rank quantifies how certain the tool is that an exploit exists on a given execution path; the Vulnerability function is based on tainted object propagation. The Risk of each IB combines its computed Severity and Vulnerability measurements through an aggregation operation over two fuzzy sets in a Fuzzy Logic system. An IB is characterized as high-risk when both its Severity and Vulnerability rankings are above the low zone; in this case, our prototype tool, called Entroine, reports a detected code exploit. The tool was tested on 45 vulnerable Java programs from NIST's Juliet Test Suite, which implement three different types of exploits. All existing code exploits were detected without any false positives.
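The fuzzy Risk aggregation described in the abstract can be sketched in Java as follows. This is a minimal illustration, not Entroine's actual implementation (which uses a full Fuzzy Logic system such as jFuzzyLogic): the `lowMembership` function, the minimum t-norm aggregation, the 0.5 report threshold, and the class and method names are all illustrative assumptions.

```java
// Illustrative sketch of fuzzy Risk aggregation for an Information Block (IB).
// Severity and Vulnerability scores are assumed normalized to [0, 1]; the
// membership function and threshold below are assumptions, not the paper's.
public class RiskSketch {

    // Degree to which a score belongs to the "low" fuzzy zone over [0, 1]:
    // fully low at 0.0, no longer low at or above 0.5 (illustrative shape).
    static double lowMembership(double x) {
        return Math.max(0.0, 1.0 - 2.0 * x);
    }

    // Risk as a fuzzy AND (minimum t-norm) of the degrees to which
    // Severity and Vulnerability are each "above the low zone".
    static double risk(double severity, double vulnerability) {
        double sevAboveLow = 1.0 - lowMembership(severity);
        double vulAboveLow = 1.0 - lowMembership(vulnerability);
        return Math.min(sevAboveLow, vulAboveLow);
    }

    // An IB is reported as a likely exploit only when BOTH rankings
    // clear the low zone; the 0.5 cutoff is an illustrative choice.
    static boolean reportExploit(double severity, double vulnerability) {
        return risk(severity, vulnerability) > 0.5;
    }

    public static void main(String[] args) {
        System.out.println(reportExploit(0.8, 0.9)); // both high: reported
        System.out.println(reportExploit(0.8, 0.1)); // vulnerability low: not reported
    }
}
```

The minimum t-norm captures the abstract's "both above the low zone" condition: a high Severity alone cannot raise the Risk if the taint analysis found no credible path (low Vulnerability), and vice versa.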
References
Okun, V., Delaitre, A., Black, P.: Report on the Static Analysis Tool Exposition (SATE) IV. NIST Special Publication 500-297 (2013)
Rutar, N., Almazan, C., Foster, S.: A comparison of bug finding tools for Java. In: Proceedings of the 15th International Symposium on Software Reliability Engineering. IEEE Computer Society, USA (2004)
Livshits, V., Lam, M.: Finding security vulnerabilities in Java applications with static analysis. In: Proceedings of the 14th USENIX Security Symposium (2005)
Ayewah, N., Hovemeyer, D., Morgenthaler, J., Penix, J., Pugh, W.: Using static analysis to find bugs. IEEE Softw. 25(5), 22–29 (2008)
CodePro. https://developers.google.com/java-dev-tools/codepro/doc/
UCDetector. http://www.ucdetector.org/
Tripathi, A., Gupta, A.: A controlled experiment to evaluate the effectiveness and the efficiency of four static program analysis tools for Java programs. In: Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering. ACM (2014)
Hovemeyer, D., Pugh, W.: Finding bugs is easy. SIGPLAN Not. 39(12), 92–106 (2004)
Jovanovic, N., Kruegel, C., Kirda, E.: Static analysis for detecting taint-style vulnerabilities in web applications. J. Comput. Sec. 18(5), 861–907 (2010). IOS Press
Weiser, M.: Program slicing. In: Proceedings of the International Conference on Software Engineering, pp. 439–449 (1981)
Stergiopoulos, G., Tsoumas, B., Gritzalis, D.: On business logic vulnerabilities hunting: the APP_LogGIC framework. In: Lopez, J., Huang, X., Sandhu, R. (eds.) NSS 2013. LNCS, vol. 7873, pp. 236–249. Springer, Heidelberg (2013)
Zhang, X., Gupta, N., Gupta, R.: Pruning dynamic slices with confidence. In: Proceedings of the Conference on Programming Language Design and Implementation, pp. 169–180 (2006)
Cingolani, P., Alcala-Fdez, J.: jFuzzyLogic: a robust and flexible Fuzzy-Logic inference system language implementation. In: Proceedings of the IEEE International Conference on Fuzzy Systems, pp. 1–8 (2012)
Doupe, A., Boe, B., Vigna, G.: Fear the EAR: discovering and mitigating execution after redirect vulnerabilities. In: Proceedings of the 18th ACM Conference on Computer and Communications Security, pp. 251–262. ACM, USA (2011)
Balzarotti, D., Cova, M., Felmetsger, V., Vigna, G.: Multi-module vulnerability analysis of web-based applications. In: Proceedings of the 14th ACM Conference on Computer and Communications Security, pp. 25–35. ACM, USA (2007)
Albaum, G.: The Likert scale revisited. Market Res. Soc. J. 39, 331–348 (1997)
Ugurel, S., Krovetz, R., Giles, C., Pennock, D., Glover, E., Zha, H.: What’s the code? automatic classification of source code archives. In: Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 632–638. ACM, USA (2002)
Abramson, N.: Information Theory and Coding. McGraw-Hill, New York (1963)
Etzkorn, L., Davis, C.: Automatically identifying reusable OO legacy code. IEEE Comput. 30(10), 66–71 (1997)
Glover, E., Flake, G., Lawrence, S., Birmingham, W., Kruger, A., Giles, L., Pennock, D.: Improving category specific web search by learning query modification. In: Proceedings of the IEEE Symposium on Applications and the Internet, pp. 23–31. IEEE Press, USA (2001)
Stoneburner, G., Goguen, A.: SP 800-30. Risk management guide for information technology systems. Technical report. NIST, USA (2002)
OWASP: The OWASP Risk Rating Methodology. www.owasp.org/index.php/OWASP_Risk_Rating_Methodology
Leekwijck, W., Kerre, E.: Defuzzification: criteria and classification. Fuzzy Sets Syst. 108(2), 159–178 (1999)
Java API: Java Standard Edition 7 API Specification. http://docs.oracle.com/javase/7/docs/api/
Gosling, J., Joy, B., Steele, G., Bracha, G., Buckley, A.: The Java Language Specification. Java SE 8 Edition, Oracle America Inc. and/or its affiliates, March 2015
Harold, E.: Java I/O, Tips and Techniques for Putting I/O to Work. O’Reilly, New York (2006)
National Security Agency: On Analyzing Static Analysis Tools. Center for Assured Software, NSA, Washington (2011)
National Security Agency: CAS Static Analysis Tool Study-Methodology. Center for Assured Software, National Security Agency (December 2012). https://samate.nist.gov/docs/CAS%202012%20Static%20Analysis%20Tool%20Study%20Methodology.pdf
Yang, Y., Pedersen, J.: A comparative study on feature selection in text categorization. In: Proceedings of the 14th International Conference on Machine Learning (ICML 1997), pp. 412–420 (1997)
BCEL, Apache Commons BCEL project page. http://commons.apache.org/proper/commons-bcel/
Dahm, M., van Zyl, J., Haase, E.: The Bytecode Engineering Library (BCEL) (2003)
Boland, T., Black, P.: Juliet 1.1 C/C++ and Java test suite. Computer 45(10), 88–90 (2012)
Stergiopoulos, G., Tsoumas, B., Gritzalis, D.: Hunting application-level logical errors. In: Barthe, G., Livshits, B., Scandariato, R. (eds.) ESSoS 2012. LNCS, vol. 7159, pp. 135–142. Springer, Heidelberg (2012)
Stergiopoulos, G., Katsaros, P., Gritzalis, D.: Automated detection of logical errors in programs. In: Lopez, J., Ray, I., Crispo, B. (eds.) CRiSIS 2014. LNCS, vol. 8924, pp. 35–51. Springer, Heidelberg (2014)
Coverity SAVE audit tool. http://www.coverity.com
Mell, P., Scarfone, K., Romanosky, S.: Common vulnerability scoring system. IEEE Secur. Priv. 4(6), 85–89 (2006)
The Common Weakness Enumeration (CWE), Office of Cybersecurity and Communications, US Department of Homeland Security. http://cwe.mitre.org
Stergiopoulos, G., Theoharidou, M., Gritzalis, D.: Using logical error detection in Remote Terminal Units to predict initiating events of Critical Infrastructures failures. In: Tryfonas, T., Askoxylakis, I. (eds.) HAS 2015. LNCS, vol. 9190. Springer, Heidelberg (2015)
CWE - CWSS. https://cwe.mitre.org/cwss/cwss_v1.0.1.html
CVSS 3.0 scoring system. https://www.first.org/cvss/specification-document
National Vulnerability Database (NVD). https://nvd.nist.gov/
© 2016 Springer International Publishing Switzerland
Stergiopoulos, G., Katsaros, P., Gritzalis, D. (2016). Execution Path Classification for Vulnerability Analysis and Detection. In: Obaidat, M., Lorenz, P. (eds) E-Business and Telecommunications. ICETE 2015. Communications in Computer and Information Science, vol 585. Springer, Cham. https://doi.org/10.1007/978-3-319-30222-5_14
Print ISBN: 978-3-319-30221-8
Online ISBN: 978-3-319-30222-5