Supporting High-Performance and High-Throughput Computing for Experimental Science

E. A. Huerta, Roland Haas, Shantenu Jha, Mark Neubauer, Daniel S. Katz

Research output: Contribution to journal › Review article › peer-review


The advent of experimental science facilities—instruments and observatories such as the Large Hadron Collider, the Laser Interferometer Gravitational Wave Observatory, and the upcoming Large Synoptic Survey Telescope—has brought about challenging, large-scale computational and data processing requirements. Traditionally, the computing infrastructure supporting these facilities' requirements was organized into separate infrastructures: one serving their high-throughput computing needs and another serving their high-performance computing needs. We argue that to enable and accelerate scientific discovery at the scale and sophistication that is now needed, this separation between high-performance computing and high-throughput computing must be bridged and an integrated, unified infrastructure provided. In this paper, we discuss several case studies where such infrastructure has been implemented. These case studies span different science domains, software systems, and application requirements, as well as levels of sustainability. A further aim of this paper is to provide a basis for determining the common characteristics and requirements of such infrastructure, and to begin a discussion of how best to support the computing requirements of existing and future experimental science facilities.

Original language: English (US)
Article number: 5
Journal: Computing and Software for Big Science
Issue number: 1
State: Published - Dec 2019

Keywords

  • Blue Waters
  • CMS
  • Containers
  • HPC
  • HTC
  • LIGO
  • OSG
  • Titan

ASJC Scopus subject areas

  • Software
  • Computer Science (miscellaneous)
  • Nuclear and High Energy Physics

