Abstract
[Context and Motivation] A system must be tested to ensure that its requirements are met; consequently, tests are often derived manually from requirements. However, requirements representations are diverse: from traditional IEEE-style text, to models, to agile user stories, the RE community of research and practice has explored many ways to capture requirements. [Question/problem] Do these different representations influence the quality or coverage of the derived test suites? The state of the art does not provide insights into whether the representation of requirements has an impact on the coverage, quality, or size of the resulting test suite. [Results] In this paper, we report on a family of three experiment replications, conducted with 148 students, that examines the effect of different requirements representations on test creation. We find that, in general, the requirements representation has no statistically significant impact on the number of derived tests, but that specific affordances of each representation affect test quality: for example, traditional textual requirements make it easier to derive less abstract tests, whereas goal models yield less inconsistent test purpose descriptions. [Contribution] Our findings give insights into the effects of requirements representation on test derivation by novice testers. Our work is limited by its use of students as participants.
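To make the studied task concrete: given a requirement such as the user story "As a customer, I want to withdraw cash so that I can pay in stores", a tester manually derives test cases that exercise it. The sketch below is purely illustrative and not from the paper; the Atm class, its methods, and the balance values are hypothetical assumptions, and the experiment's participants wrote natural-language test descriptions rather than executable code.

# Hypothetical illustration (pytest style): test cases derived manually from
# the user story "As a customer, I want to withdraw cash so that I can pay
# in stores." Names (Atm, withdraw, balance) are assumptions, not from the paper.

class Atm:
    """Minimal stand-in for the system under test."""
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> bool:
        # Dispense cash only for positive amounts within the balance.
        if 0 < amount <= self.balance:
            self.balance -= amount
            return True
        return False

def test_withdraw_within_balance_dispenses_cash():
    # Test purpose: the happy path stated directly in the user story.
    atm = Atm(balance=100)
    assert atm.withdraw(40) is True
    assert atm.balance == 60

def test_withdraw_beyond_balance_is_rejected():
    # A less abstract test: a concrete boundary (no overdraft) of the kind
    # the paper reports textual requirements make easier to derive.
    atm = Atm(balance=100)
    assert atm.withdraw(150) is False
    assert atm.balance == 100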
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
de Oliveira Neto, F.G., Horkoff, J., Svensson, R., Mattos, D., Knauss, A. (2020). Evaluating the Effects of Different Requirements Representations on Writing Test Cases. In: Madhavji, N., Pasquale, L., Ferrari, A., Gnesi, S. (eds.) Requirements Engineering: Foundation for Software Quality. REFSQ 2020. Lecture Notes in Computer Science, vol. 12045. Springer, Cham. https://doi.org/10.1007/978-3-030-44429-7_18
Print ISBN: 978-3-030-44428-0
Online ISBN: 978-3-030-44429-7