TY - CONF
T1 - AI Benchmarking for Science
T2 - 37th International Conference on High Performance Computing, ISC High Performance 2022
AU - Thiyagalingam, Jeyan
AU - von Laszewski, Gregor
AU - Yin, Junqi
AU - Emani, Murali
AU - Papay, Juri
AU - Barrett, Gregg
AU - Luszczek, Piotr
AU - Tsaris, Aristeidis
AU - Kirkpatrick, Christine
AU - Wang, Feiyi
AU - Gibbs, Tom
AU - Vishwanath, Venkatram
AU - Shankar, Mallikarjun
AU - Fox, Geoffrey
AU - Hey, Tony
N1 - Acknowledgements. We would like to thank Samuel Jackson from the Scientific Machine Learning Group at the Rutherford Appleton Laboratory (RAL) of the Science and Technology Facilities Council (STFC) (UK) for his contributions towards the Cloud Masking benchmark. This work was supported by Wave 1 of the UKRI Strategic Priorities Fund under the EPSRC grant EP/T001569/1, particularly the ‘AI for Science’ theme within that grant, by the Alan Turing Institute and by the Benchmarking for AI for Science at Exascale (BASE) project under the EPSRC grant EP/V001310/1, along with the Facilities Funding from the Science and Technology Facilities Council (STFC) of UKRI, NSF Grant 2204115, and DOE Award DE-SC0021418. This manuscript has been authored in part by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The publisher acknowledges the US government license to provide public access under the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). This research also used resources from the Oak Ridge and Argonne Leadership Computing Facilities, which are DOE Office of Science user facilities, supported under contracts DE-AC05-00OR22725 and DE-AC02-06CH11357, respectively, and from the PEARL AI resource at the RAL, STFC. This work would not have been possible without the continued support of MLCommons and MLCommons Research, and in particular, we thank Peter Mattson, David Kanter and Vijay Janapa Reddi for their leadership and help.
PY - 2022
Y1 - 2022
N2 - With machine learning (ML) becoming a transformative tool for science, the scientific community needs a clear catalogue of ML techniques, and of their relative benefits on various scientific problems, if it is to make significant advances in science using AI. Although this falls under the purview of benchmarking, conventional benchmarking initiatives focus on performance, and science often becomes a secondary criterion. In this paper, we describe a community effort by the MLCommons Science Working Group to develop science-specific AI benchmarking for the international scientific community. Since its inception in 2020, the working group has collaborated closely with a number of national laboratories, academic institutions and industrial partners across the world, and has developed four science-specific AI benchmarks. We describe the overall process and the resulting benchmarks, along with some initial results. We foresee that this initiative is likely to be transformative both for the AI for Science community and for performance-focused communities.
AB - With machine learning (ML) becoming a transformative tool for science, the scientific community needs a clear catalogue of ML techniques, and of their relative benefits on various scientific problems, if it is to make significant advances in science using AI. Although this falls under the purview of benchmarking, conventional benchmarking initiatives focus on performance, and science often becomes a secondary criterion. In this paper, we describe a community effort by the MLCommons Science Working Group to develop science-specific AI benchmarking for the international scientific community. Since its inception in 2020, the working group has collaborated closely with a number of national laboratories, academic institutions and industrial partners across the world, and has developed four science-specific AI benchmarks. We describe the overall process and the resulting benchmarks, along with some initial results. We foresee that this initiative is likely to be transformative both for the AI for Science community and for performance-focused communities.
KW - AI for Science
KW - Benchmarks
KW - Machine learning
KW - Science
UR - http://www.scopus.com/inward/record.url?scp=85148700325&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85148700325&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-23220-6_4
DO - 10.1007/978-3-031-23220-6_4
M3 - Conference contribution
AN - SCOPUS:85148700325
SN - 9783031232190
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 47
EP - 64
BT - High Performance Computing. ISC High Performance 2022 International Workshops - Revised Selected Papers
A2 - Anzt, Hartwig
A2 - Bienz, Amanda
A2 - Luszczek, Piotr
A2 - Baboulin, Marc
PB - Springer
Y2 - 29 May 2022 through 2 June 2022
ER -