Metamorphic Testing for Plant Identification Mobile Applications Based on Test Contexts

  • Conference paper
Mobile Computing, Applications, and Services (MobiCASE 2020)

Abstract

With the rapid growth of artificial intelligence and big data technologies, AI-based mobile apps are widely used in daily life. However, their quality problems are becoming increasingly prominent: many AI-based mobile apps behave inconsistently on the same input data when context conditions change. Nevertheless, existing work seldom addresses testing and quality validation of AI-based mobile apps under different context conditions. To automatically test AI-based plant identification mobile apps, this paper introduces TestPlantID, a novel metamorphic testing approach based on test contexts. First, TestPlantID constructs seven test contexts that mimic contextual factors in plant identification usage scenarios. Next, it defines test-context-based metamorphic relations for detecting inconsistent behaviors through metamorphic testing. Then, it generates follow-up images with various test contexts by applying image transformations and by photographing real-world plants. Finally, a case study on three plant identification mobile apps shows that TestPlantID can reveal more than five thousand inconsistent behaviors and can differentiate how effective different test contexts are at detecting them.
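
The full text is paywalled, but the abstract already outlines the testable core: feed a source image and a context-transformed follow-up image to the same app, and report a violation of the metamorphic relation when the identification results disagree. Below is a minimal sketch of that check, assuming Python with OpenCV for the transformations. All names are illustrative: `identify_plant` is a hypothetical stand-in for the app under test, and the three transformations only approximate a few plausible test contexts; the paper's actual seven contexts and relations are not reproduced here.

```python
# Minimal sketch of test-context-based metamorphic testing for a plant
# identification app. identify_plant is a hypothetical driver for the app
# under test; the transformations mimic a few plausible test contexts
# (camera tilt, illumination, shooting distance).
import cv2


def rotate(img, angle=30.0):
    # Mimic a tilted-camera context with an affine rotation about the center.
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    return cv2.warpAffine(img, m, (w, h))


def brighten(img, delta=40):
    # Mimic a strong-illumination context by shifting pixel intensities.
    return cv2.convertScaleAbs(img, alpha=1.0, beta=delta)


def shrink(img, factor=0.5):
    # Mimic a distant-shot context by downscaling the image.
    return cv2.resize(img, None, fx=factor, fy=factor)


def identify_plant(img):
    # Hypothetical stand-in for the app under test: should return the
    # app's top-1 species label for the given image.
    raise NotImplementedError("wire this to the app or API being tested")


def metamorphic_test(source_img):
    # Metamorphic relation: changing the test context should not change
    # the top-1 species; any disagreement is an inconsistent behavior.
    source_label = identify_plant(source_img)
    violations = []
    for name, transform in [("rotate", rotate),
                            ("brighten", brighten),
                            ("shrink", shrink)]:
        follow_up_label = identify_plant(transform(source_img))
        if follow_up_label != source_label:
            violations.append((name, source_label, follow_up_label))
    return violations


if __name__ == "__main__":
    img = cv2.imread("plant.jpg")  # source test image
    for context, expected, got in metamorphic_test(img):
        print(f"MR violated under '{context}': {expected} -> {got}")
```

In practice the driver would exercise the real app (for example, via a UI automation script feeding images through the camera roll), and violations would be tallied per context, which is how a study like the paper's could compare the detection capability of different test contexts.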

Supported by National Key R&D Program of China (2018YFB1003900), National Natural Science Foundation of China (61602267, 61402229), Open Fund of the State Key Laboratory for Novel Software Technology (KFKT2018B19), Fundamental Research Funds for the Central Universities (No. NS2019058), and China Postdoctoral Science Foundation Funded Project (No. 2019M651825).

Author information

Corresponding author

Correspondence to Chuanqi Tao.

Copyright information

© 2020 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Guo, H., Tao, C., Huang, Z. (2020). Metamorphic Testing for Plant Identification Mobile Applications Based on Test Contexts. In: Liu, J., Gao, H., Yin, Y., Bi, Z. (eds) Mobile Computing, Applications, and Services. MobiCASE 2020. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 341. Springer, Cham. https://doi.org/10.1007/978-3-030-64214-3_15

  • DOI: https://doi.org/10.1007/978-3-030-64214-3_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-64213-6

  • Online ISBN: 978-3-030-64214-3

  • eBook Packages: Computer Science, Computer Science (R0)
