
Towards Adversarially Superior Malware Detection Models: An Adversary Aware Proactive Approach using Adversarial Attacks and Defenses

Information Systems Frontiers

Abstract

The Android ecosystem (smartphones, tablets, etc.) has grown manifold in the last decade. However, an exponential surge in Android malware is threatening the ecosystem. The literature suggests that Android malware can be detected using machine learning and deep learning classifiers; however, these detection models may be vulnerable to adversarial attacks. This work investigates the adversarial robustness of twenty-four diverse malware detection models developed using two features and twelve learning algorithms across four categories (machine learning, bagging classifiers, boosting classifiers, and neural networks). We stepped into the adversary’s shoes and proposed two false-negative evasion attacks, namely GradAA and GreedAA, to expose vulnerabilities in the above detection models. The evasion attack agents transform malware applications into adversarial malware applications by adding minimal noise (at most five perturbations) while maintaining the modified applications’ structural, syntactic, and behavioral integrity. These adversarial malware applications force misclassifications and are predicted as benign by the detection models. The evasion attacks achieved average fooling rates of 83.34% (GradAA) and 99.21% (GreedAA), which reduced the average accuracy of the twenty-four detection models from 90.35% to 55.22% (GradAA) and 48.29% (GreedAA). We also proposed two defense strategies, namely Adversarial Retraining and Correlation Distillation Retraining, as countermeasures to protect the detection models from adversarial attacks. The defense strategies slightly improved detection accuracy but drastically enhanced the adversarial robustness of the detection models. Finally, investigating the robustness of malware detection models against adversarial attacks is an essential step before their real-world deployment and can help in developing adversarially superior detection models.
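To make the greedy evasion idea concrete, the sketch below is a minimal, illustrative implementation and not the authors’ GreedAA agent: it assumes binary feature vectors (e.g., extracted Android permissions), a scikit-learn-style classifier whose class 1 denotes malware, and a budget of at most five perturbations; features are only added, never removed, so the application’s structural and behavioral integrity is preserved. The function name greedy_evasion and the surrounding setup are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def greedy_evasion(model, x, max_perturbations=5):
    """Greedily flip absent binary features (0 -> 1) in a malware feature
    vector so that the classifier's malware probability drops, stopping as
    soon as the sample is predicted benign (class 0) or the budget is spent.
    Only additions are made, so the original app's behavior is untouched."""
    x_adv = x.copy()
    for _ in range(max_perturbations):
        if model.predict(x_adv.reshape(1, -1))[0] == 0:      # already evades detection
            break
        # Assumes model.classes_ == [0, 1], i.e., column 1 is the malware probability.
        best_prob = model.predict_proba(x_adv.reshape(1, -1))[0, 1]
        best_idx = None
        for i in np.where(x_adv == 0)[0]:                    # candidate features to add
            x_try = x_adv.copy()
            x_try[i] = 1
            p_mal = model.predict_proba(x_try.reshape(1, -1))[0, 1]
            if p_mal < best_prob:                            # keep the flip with the largest drop
                best_idx, best_prob = i, p_mal
        if best_idx is None:                                 # no single flip reduces the malware score
            break
        x_adv[best_idx] = 1
    return x_adv

# Hypothetical usage with a binary permission matrix X (n_samples x n_features)
# and labels y (1 = malware, 0 = benign):
#   clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
#   x_adv = greedy_evasion(clf, X_test[y_test == 1][0])
```

Under the same assumptions, Adversarial Retraining would then amount to appending such adversarial vectors, still labeled as malware, to the original training set and refitting the classifier, which is what makes the retrained model harder to fool.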




Author information


Corresponding author

Correspondence to Hemant Rathore.

Ethics declarations

Conflict of Interests

We declare that we have no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Rathore, H., Samavedhi, A., Sahay, S.K. et al. Towards Adversarially Superior Malware Detection Models: An Adversary Aware Proactive Approach using Adversarial Attacks and Defenses. Inf Syst Front 25, 567–587 (2023). https://doi.org/10.1007/s10796-022-10331-z

