While generative text and image models like ChatGPT and DALL-E 2 have garnered popular attention, generative music creation has only begun to blossom.
This talk presents a case study of an independent composer navigating the infrastructure and coding challenges of building, optimizing, and training a deep learning model for audio generation for use in her own compositions. Molly Jones, a composer and software engineer, works to create music that is aesthetically, conceptually, and ethically sound while grappling with the problems of data augmentation, solo coding, and limited resources.
The case study offers not immediate solutions to the problems independent artists face, but rather a catalog of areas for future research aimed at easing the use of deep learning models in creative contexts. Molly is an Associate Software Consultant at Spantree, LLC, and a Composer in Residence at the University of Toronto, where she explores the intersections of AI and music.