Execution Path Classification for Vulnerability Analysis and Detection

  • Conference paper
E-Business and Telecommunications (ICETE 2015)

Abstract

Various commercial and open-source tools, developed by both industry and academic groups, can detect several types of security bugs in application source code. However, most of these tools suffer from non-negligible rates of false positives and false negatives, since they are designed to detect a priori specified types of bugs, and their analysis often does not scale to large programs. To address these problems, we present a new source code analysis technique based on execution path classification, and we develop a prototype tool to test our method's ability to detect different types of information-flow-dependent bugs. Our approach classifies the Risk of likely exploits inside source code execution paths using two measuring functions: Severity and Vulnerability. For an Application Under Test (AUT), we analyze every pair of input vector and program sink in an execution path, which we call an Information Block (IB). Severity quantifies the danger level of an IB using static analysis and a variation of the Information Gain algorithm, whereas an IB's Vulnerability rank quantifies how certain the tool is that an exploit exists on a given execution path; the Vulnerability function is based on tainted object propagation. The Risk of each IB combines its computed Severity and Vulnerability measurements through an aggregation operation over two fuzzy sets in a Fuzzy Logic system. An IB is characterized as high risk when both its Severity and Vulnerability rankings are above the low zone; in this case, our prototype tool, Entroine, reports a detected code exploit. The tool was tested on 45 vulnerable Java programs from NIST's Juliet Test Suite, which implement three different types of exploits. All existing code exploits were detected without any false positives.
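
The Risk aggregation described above can be pictured with a short, self-contained Java sketch. This is only an illustration under assumptions, not the Entroine implementation: the triangular membership functions, the three rules, the defuzzification weights, and every class and method name below (RiskSketch, tri, risk) are hypothetical, chosen to show how a Severity and a Vulnerability ranking, each normalized to [0, 1], could be fused into one Risk score and compared against the low zone.

// Minimal, hypothetical sketch -- not the Entroine implementation.
// Illustrates fusing a Severity and a Vulnerability ranking (both
// normalized to [0, 1]) into a single Risk score with simple
// triangular fuzzy memberships and a weighted-average defuzzification.
public class RiskSketch {

    // Triangular membership function with feet at a and c and peak at b.
    static double tri(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    // Assumed "low", "medium" and "high" fuzzy sets over [0, 1].
    static double low(double x)    { return tri(x, -0.01, 0.0, 0.5); }
    static double medium(double x) { return tri(x,  0.0, 0.5, 1.0); }
    static double high(double x)   { return tri(x,  0.5, 1.0, 1.01); }

    // Mamdani-style min/max rule aggregation:
    //   Risk is HIGH   if Severity is high AND Vulnerability is high,
    //   Risk is MEDIUM if one ranking is high and the other medium,
    //   Risk is LOW    if either ranking falls in the low zone.
    static double risk(double severity, double vulnerability) {
        double highRule = Math.min(high(severity), high(vulnerability));
        double mediumRule = Math.max(
                Math.min(medium(severity), high(vulnerability)),
                Math.min(high(severity), medium(vulnerability)));
        double lowRule = Math.max(low(severity), low(vulnerability));
        // Defuzzify as a weighted average of representative output values.
        double num = 0.2 * lowRule + 0.5 * mediumRule + 0.9 * highRule;
        double den = lowRule + mediumRule + highRule;
        return den == 0.0 ? 0.0 : num / den;
    }

    public static void main(String[] args) {
        // Both rankings above the low zone -> high Risk; such an
        // Information Block would be reported as a likely exploit.
        System.out.printf("Risk = %.2f%n", risk(0.8, 0.9));
    }
}

With these assumed memberships, an IB whose Severity and Vulnerability both lie above the low zone (e.g. 0.8 and 0.9) defuzzifies to a Risk near the high end and would be reported, mirroring the reporting rule stated in the abstract; the paper itself defines the actual fuzzy sets, rules, and defuzzification that Entroine uses.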

Author information

Corresponding author

Correspondence to Dimitris Gritzalis.

Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Stergiopoulos, G., Katsaros, P., Gritzalis, D. (2016). Execution Path Classification for Vulnerability Analysis and Detection. In: Obaidat, M., Lorenz, P. (eds) E-Business and Telecommunications. ICETE 2015. Communications in Computer and Information Science, vol 585. Springer, Cham. https://doi.org/10.1007/978-3-319-30222-5_14

  • DOI: https://doi.org/10.1007/978-3-319-30222-5_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-30221-8

  • Online ISBN: 978-3-319-30222-5

  • eBook Packages: Computer Science (R0)
