Abstract
The interrelationship between software faults and failures is intricate, and a meaningful characterization of it would help the testing community decide on efficient and effective test strategies. Towards this objective, we have investigated and classified failures observed in a large, complex telecommunication-industry middleware system during 2003–2006. In this paper, we describe the process used in our study for tracking faults back from failures, along with the details of the failure data. We present the distribution and frequency of the failures, together with some interesting findings unravelled while analyzing their origins. First, although "simple" faults do occur, together they account for less than 10% of the total. The majority of faults stem from missing code or paths, or from superfluous code; these are all faults that manifest themselves for the first time at the integration/system level, not at the component level. Such faults are more frequent in the early versions of the software, and can plausibly be attributed to the difficulty of comprehending and specifying the context (and adjacent code) and its dependencies well enough in a large, complex system under time-to-market pressure. This exposes the limitations of component testing in such complex systems and underlines the need to allocate more resources to higher-level integration and system testing.
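To make the "missing path" category concrete, consider a minimal, hypothetical C++ sketch (the studied system's code is not public; all names and states here are invented for illustration). A component handles every input its own specification mentions and passes its component-level tests, yet lacks a branch for a state that only a peer component can produce, so the fault first manifests at integration level:

```cpp
#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical middleware component: decides what to do with a message
// given the state of the destination node.
enum class NodeState { Up, Down, Rerouted };

// "Missing path" fault: the Rerouted state, produced only by a peer
// load-balancing component, was never specified here, so no branch
// handles it.
std::string route(NodeState s, const std::string& msg) {
    if (s == NodeState::Up)   return "deliver:" + msg;
    if (s == NodeState::Down) return "queue:" + msg;
    throw std::logic_error("unhandled node state");  // the missing path
}

int main() {
    // Component-level test: exercises only the states the component's
    // own specification mentions -- the fault is invisible here.
    std::cout << route(NodeState::Up, "m1") << "\n";

    // Integration-level scenario: a peer component hands over a node in
    // the Rerouted state, and the fault manifests for the first time.
    try {
        std::cout << route(NodeState::Rerouted, "m2") << "\n";
    } catch (const std::logic_error& e) {
        std::cout << "failure: " << e.what() << "\n";
    }
}
```

The sketch illustrates why such faults resist component testing: no test derived from the component's own specification would ever supply the missing input, which is exactly the argument for shifting resources towards integration and system testing.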
Copyright information
© 2007 IFIP International Federation for Information Processing
About this paper
Cite this paper
Eldh, S., Punnekkat, S., Hansson, H., Jönsson, P. (2007). Component Testing Is Not Enough - A Study of Software Faults in Telecom Middleware. In: Petrenko, A., Veanes, M., Tretmans, J., Grieskamp, W. (eds.) Testing of Software and Communicating Systems, TestCom/FATES 2007. Lecture Notes in Computer Science, vol. 4581. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73066-8_6
DOI: https://doi.org/10.1007/978-3-540-73066-8_6
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-73065-1
Online ISBN: 978-3-540-73066-8
eBook Packages: Computer Science