Improvising musical structure with hierarchical neural nets

Benjamin D. Smith, Guy E. Garnett

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Neural networks and recurrent neural networks have been employed to learn, generalize, and generate musical examples and pieces. Yet, these models typically suffer from an inability to characterize and reproduce the long-term dependencies of musical structure, resulting in products that seem to wander aimlessly. We describe and examine three novel hierarchical models that explicitly operate on multiple structural levels. A three-layer model is presented, then a weighting policy is added with two different methods of control attempting to maximize global network learning. While the results do not exhibit sufficient structure beyond the phrase or section level, they do evince autonomous generation of recognizable medium-level structures.
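The core idea the abstract describes — a hierarchy in which a lower layer tracks fast, note-level events while a higher layer updates on a slower timescale to capture phrase-level structure — can be sketched as follows. This is a hypothetical illustration only, not the authors' model: the dimensions, the random weights, the top-down feedback term, and the `stride` parameter are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions chosen for the sketch (not from the paper).
n_in, n_low, n_high = 8, 16, 8
stride = 4  # the upper layer updates once every `stride` input steps

# Randomly initialized weights stand in for trained parameters.
W_in   = rng.standard_normal((n_low, n_in))   * 0.1
W_low  = rng.standard_normal((n_low, n_low))  * 0.1
W_up   = rng.standard_normal((n_high, n_low)) * 0.1
W_high = rng.standard_normal((n_high, n_high)) * 0.1
W_top  = rng.standard_normal((n_low, n_high)) * 0.1  # top-down context

def run(inputs):
    """Run the two-timescale hierarchy over a sequence of input vectors."""
    h_low = np.zeros(n_low)
    h_high = np.zeros(n_high)
    outputs = []
    for t, x in enumerate(inputs):
        # Lower layer: fast timescale, sees the input plus top-down context
        # from the slower layer, analogous to note-level processing.
        h_low = np.tanh(W_in @ x + W_low @ h_low + W_top @ h_high)
        # Upper layer: slow timescale, updated only every `stride` steps,
        # so its state can summarize phrase-level rather than note-level events.
        if (t + 1) % stride == 0:
            h_high = np.tanh(W_up @ h_low + W_high @ h_high)
        outputs.append(h_low.copy())
    return np.stack(outputs)

seq = rng.standard_normal((12, n_in))
out = run(seq)
```

Because the upper state changes only every few steps, its recurrence effectively spans a longer horizon than the lower layer's, which is one common way hierarchical recurrent models attempt to hold on to long-term dependencies.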

Original language: English (US)
Title of host publication: Musical Metacreation - Papers from the 2012 AIIDE Workshop, Technical Report
Pages: 63-67
Number of pages: 5
State: Published - 2012
Event: 8th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2012 Workshop - Stanford, CA, United States
Duration: Oct 9 2012 – Oct 9 2012

Publication series

Name: AAAI Workshop - Technical Report
Volume: WS-12-16

Other

Other: 8th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2012 Workshop
Country/Territory: United States
City: Stanford, CA
Period: 10/9/12 – 10/9/12

ASJC Scopus subject areas

  • General Engineering
