Abstract.
In the last 50 years, the field of scientific computing has undergone rapid change – we have experienced a remarkable turnover of technologies, architectures, vendors, and the usage of systems. Despite all these changes, the long-term evolution of performance seems to be steady and continuous, following Moore’s Law rather closely. In 1965, Gordon Moore, one of the founders of Intel, conjectured that the number of transistors per square inch on integrated circuits would roughly double every year. The doubling period turned out to be not 12 months but roughly 18 months [8]. Moore predicted that this trend would continue for the foreseeable future. In Figure 1, we plot the peak performance over the last five decades of computers that have been called “supercomputers.” A broad definition of a supercomputer is that it is one of the fastest computers currently available: a system that provides significantly greater sustained performance than mainstream computer systems. The value of supercomputers derives from the value of the problems they solve, not from the innovative technology they showcase. By performance we mean the rate of execution of floating point operations. Here we chart KFlop/s (Kilo-flop/s, thousands of floating point operations per second), MFlop/s (Mega-flop/s, millions), GFlop/s (Giga-flop/s, billions), TFlop/s (Tera-flop/s, trillions), and PFlop/s (Peta-flop/s, quadrillions of floating point operations per second). This chart shows clearly how well Moore’s Law has held over almost the complete lifespan of modern computing – we see an increase in performance averaging two orders of magnitude every decade.
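As a quick arithmetic check, an 18-month doubling period compounds over a decade (120 months) to

\[
2^{120/18} = 2^{20/3} \approx 102 \approx 10^{2},
\]

which is precisely the two orders of magnitude of growth per decade observed in Figure 1.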
Jack Dongarra: dongarra@cs.utk.edu
References
[1] Bilmes, J., Asanović, K., Chin, C.-W., Demmel, J.: Optimizing Matrix Multiply Using PHiPAC: A Portable, High-Performance, ANSI C Coding Methodology. In: Proceedings of the International Conference on Supercomputing, Vienna, Austria (July 1997)
[2] Whaley, R.C., Petitet, A., Dongarra, J.: Automated Empirical Optimizations of Software and the ATLAS Project. Parallel Computing 27(1-2), 3–35 (2001)
[3] Anderson, E., Bai, Z., Bischof, C., Blackford, S., Demmel, J., Dongarra, J., Du Croz, J., Greenbaum, A., Hammarling, S., McKenney, A., Sorensen, D.: LAPACK Users’ Guide, 3rd edn. SIAM, Philadelphia (1999) ISBN 0-89871-447-8
[4] Top500 Report, http://www.top500.org/
[5] Im, E.-J., Yelick, K.: Optimizing Sparse Matrix Computations for Register Reuse in SPARSITY. In: Alexandrov, V.N., Dongarra, J.J., Juliano, B.A., Renner, R.S., Tan, C.J.K. (eds.) Computational Science – ICCS 2001, Part I, pp. 127–136. Springer, Heidelberg (2001)
[6] Browne, S., Dongarra, J., Garner, N., Ho, G., Mucci, P.: A Portable Programming Interface for Performance Evaluation on Modern Processors. International Journal of High Performance Computing Applications 14(3), 189–204 (2000)
[7] Snir, M., Otto, S., Huss-Lederman, S., Walker, D., Dongarra, J.: MPI: The Complete Reference. MIT Press, Boston (1996)
[8] Moore, G.E.: Cramming More Components onto Integrated Circuits. Electronics 38(8), 114–117 (1965)
[9] Dongarra, J.J.: Performance of Various Computers Using Standard Linear Equations Software (Linpack Benchmark Report). University of Tennessee Computer Science Technical Report CS-89-85 (2003), http://www.netlib.org/benchmark/performance.pdf
[10] Frigo, M., Johnson, S.: FFTW: An Adaptive Software Architecture for the FFT. In: Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Seattle, Washington (May 1998)
[11] DeRose, L., Reed, D.A.: SvPablo: A Multi-Language Architecture-Independent Performance Analysis System. In: Proceedings of the International Conference on Parallel Processing (ICPP 1999), Fukushima, Japan (September 1999)
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Dongarra, J. (2003). High Performance Computing Trends and Self Adapting Numerical Software. In: Veidenbaum, A., Joe, K., Amano, H., Aiso, H. (eds) High Performance Computing. ISHPC 2003. Lecture Notes in Computer Science, vol 2858. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39707-6_1
DOI: https://doi.org/10.1007/978-3-540-39707-6_1
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-20359-9
Online ISBN: 978-3-540-39707-6