High performance sound technologies for access and scholarship (HiPSTAS) in the digital humanities

Tanya E. Clement, David Tcheng, Loretta Auvil, Tony Borries

Research output: Contribution to journal › Article › peer-review


Humanists interested in accessing and analyzing spoken-word audio collections currently have few means to use, or to learn how to use, advanced technologies for analyzing sound. The HiPSTAS (High Performance Sound Technologies for Access and Scholarship) project introduces humanists to ARLO (Adaptive Recognition with Layered Optimization), software developed to perform spectral visualization, matching, classification, and clustering on large sound collections. As this paper addresses, the project has yielded three significant results for developing tools that facilitate machine learning with spoken-word collections of keen interest to the humanities: (1) an assessment of user requirements; (2) an assessment of the technological infrastructure needed to support a community tool; and (3) preliminary experiments with these advanced resources that demonstrate the efficacy, both in terms of user needs and of the computational resources required, of using machine learning tools to improve discovery in unprocessed audio collections.
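The spectral techniques the abstract names (visualization and matching) rest on computing spectrograms from audio frames. As a minimal, purely illustrative sketch in plain Python — not ARLO's actual implementation, whose internals this record does not describe, and with hypothetical function and parameter names — a magnitude spectrogram can be computed with a naive short-time DFT over overlapping frames:

```python
import math
import cmath

def spectrogram(signal, frame_size=64, hop=32):
    """Magnitude spectrogram via a naive DFT over overlapping frames.

    Returns a list of frames, each a list of magnitudes for the
    non-negative frequency bins 0 .. frame_size // 2.
    """
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size]
        mags = []
        for k in range(frame_size // 2 + 1):  # non-negative frequencies only
            s = sum(x * cmath.exp(-2j * math.pi * k * n / frame_size)
                    for n, x in enumerate(frame))
            mags.append(abs(s))
        frames.append(mags)
    return frames

# Example: a 440 Hz tone sampled at 8 kHz.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(512)]
spec = spectrogram(tone)

# With frame_size=64 the bin spacing is sr / 64 = 125 Hz, so the tone's
# energy should concentrate near bin 440 / 125 ≈ 3.5.
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
```

Real systems would use an FFT and a tapered window rather than this O(N²) DFT; the sketch only shows the representation on which spectral matching and clustering operate.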

Original language: English (US)
Journal: Proceedings of the ASIST Annual Meeting
Issue number: 1
State: Published - 2014


Keywords

  • Audio databases
  • Audio user interfaces
  • Data mining
  • Data visualization
  • Digital humanities
  • Machine learning
  • Spectral matching

ASJC Scopus subject areas

  • Information Systems
  • Library and Information Sciences


