Abstract
A popular U.S. talk show host uses "top 10" lists to critique events and culture every night. Our HPC industry is captivated by another list, the TOP500, which ranks HPC systems by the FLOP/s they achieve on a single, long-lived benchmark: Linpack. The TOP500 list has grown in influence because of its value as a marketing tool, yet it describes the performance of HPC systems simplistically and unrealistically, and its proponents have advocated for it for different reasons at different times. This paper critiques the top 10 problems with the TOP500 list and suggests how to correct those shortcomings. It discusses why the TOP500 list is limiting the impact of HPC systems on real problems, and considers other metrics that may more meaningfully and usefully represent the real effectiveness and value of HPC systems.
Original language | English (US)
---|---
Pages (from-to) | 223-230
Number of pages | 8
Journal | Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT
State | Published - 2012
Event | 21st International Conference on Parallel Architectures and Compilation Techniques, PACT 2012, Minneapolis, MN, United States (Sep 19-23, 2012)
Keywords
- Benchmarks
- HPC
- Linpack
- PERCU
- Performance
- Supercomputing
- System evaluation
- Top500
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Hardware and Architecture