MASOM: Musical Agent based on Self-Organizing Maps
Can we create an AI that listens to more music than a human could?
Musical agents are AI systems that make music. Musical Agent based on Self-Organizing Maps (MASOM) is a musical software agent for live performance. MASOM plays experimental music and free improvisation. It learns by listening to audio files, such as recordings of performances or compositions. We can train a MASOM agent on a corpus so large that listening to it would take more than a human lifetime. Likewise, we can train MASOM agents on the works of deceased composers, converting their fixed-media pieces into interactive musical agents.
MASOM also extracts higher-level features such as eventfulness (arousal), pleasantness (valence), and timbre to understand the musical form of what it hears. MASOM is limited to the style of what it has listened to and reacts in real time to what it is hearing. The agent can listen to itself and to other performers to decide what to play next.
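The core idea behind the SOM component can be illustrated with a small sketch: feature vectors extracted from audio slices (here, hypothetical arousal/valence/timbre descriptors) are clustered onto a 2-D grid, and an incoming sound is mapped to its best-matching unit. This is a minimal, generic SOM in NumPy, not MASOM's actual implementation; the grid size, learning schedule, and feature choice are illustrative assumptions.

```python
import numpy as np

def train_som(features, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny Self-Organizing Map on feature vectors (one per audio slice).

    Illustrative sketch only: MASOM's real features and training schedule differ.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = features.shape[1]
    weights = rng.random((h, w, dim))
    # Grid coordinates of each node, used by the neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(features)
    step = 0
    for _ in range(epochs):
        for x in features[rng.permutation(len(features))]:
            t = step / n_steps
            lr = lr0 * (1 - t)               # learning rate decays to 0
            sigma = sigma0 * (1 - t) + 1e-3  # neighborhood radius shrinks
            # Best-matching unit: node whose weight vector is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), (h, w))
            # Pull the BMU and its grid neighbors toward x.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
            step += 1
    return weights

def best_matching_unit(weights, x):
    """Map a feature vector to the grid node it lands on."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

In a usage like `best_matching_unit(weights, np.array([arousal, valence, brightness]))`, the returned grid cell stands in for a cluster of similar-sounding audio slices; a sequence model over such cells can then choose what to play next.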
The research and development of MASOM is ongoing. What follows is the documentation of MASOM's previous versions and public presentations.
->Tatar, K., Ens, J., Kraasch, J., Fan, J., & Pasquier, P. A Comparison of Statistical Sequence Models in Musical Agents based on Self-Organizing Maps. Submitted to the Artificial Intelligence Journal. In review.
->Tatar, K., Pasquier, P., & Siu, R. (2019). Audio-based Musical Artificial Intelligence and Audio-Reactive Visual Agents in Revive. Accepted to the International Computer Music Conference and New York City Electroacoustic Music Festival 2019 (ICMC-NYCEMF 2019).
->Tatar, K., Pasquier, P., & Siu, R. (2018). REVIVE: An audio-visual performance with musical and visual Artificial Intelligence Agents. CHI'18, April 21–26, 2018, Montreal, QC, Canada. ACM 978-1-4503-5621-3/18/04.
->Tatar, K., & Pasquier, P. (2017). MASOM: A Musical Agent Architecture based on Self-Organizing Maps, Affective Computing, and Variable Markov Models. In Proceedings of the 5th International Workshop on Musical Metacreation (MuMe 2017).
The documentation of previous versions and performances is here: (ノ ˘_˘)ノ----︎
This work has been supported by the Canada Council for the Arts, the Natural Sciences and Engineering Research Council of Canada, and the Social Sciences and Humanities Research Council of Canada.