Video Demos by Daniel Oore for…

Jason d’Eon (Dalhousie, Vector), Sri Harsha Dumpala (Dalhousie, Vector), Chandramouli Sastry (Dalhousie, Vector), Daniel Oore (IICSI, MUN), Mengyu Yang (U of T), Sageev Oore (Dalhousie, Vector), “A Speech-Based Music Composition Tool With Transformer” and “Musical Speech: A Transformer-based Composition Tool,” NeurIPS 2020 (34th Conference on Neural Information Processing Systems)

BREAK DOWN: http://dani.oore.ca/moocow/

The speech in these video clips was fed into an AI transformer-based composition tool, which output raw MIDI files that were then used to trigger all the digital instrument sounds heard.
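As a rough illustration of this kind of pipeline output, the sketch below converts a list of generated notes with relative durations into timestamped MIDI-style note events. All names, numbers, and the data format here are hypothetical; this is not the authors' actual tooling or output format.

```python
# Hypothetical sketch: turn (pitch, relative_duration) pairs -- the kind of
# "notes & their relative durations" a generator might emit -- into flat
# note_on/note_off events in MIDI ticks. Illustrative only, not the
# authors' system.

TICKS_PER_BEAT = 480  # a common MIDI resolution (assumption)

def to_midi_events(notes):
    """Convert (pitch, relative_duration) pairs into a time-sorted list
    of (tick, event_type, pitch) tuples."""
    events, time = [], 0
    for pitch, rel_dur in notes:
        dur = int(rel_dur * TICKS_PER_BEAT)
        events.append((time, "note_on", pitch))
        events.append((time + dur, "note_off", pitch))
        time += dur  # monophonic: next note starts when this one ends
    return sorted(events)

# e.g. a short fragment: C4, then D4 at half length, then E4
fragment = [(60, 1.0), (62, 0.5), (64, 1.0)]
print(to_midi_events(fragment))
```

Events like these could then be written to a standard MIDI file and routed to any digital instrument, which is the role raw MIDI plays in the videos above.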

The initial fragment of ordinary speech at the beginning of the video below was fed into an AI transformer-based composition tool; the tool's raw MIDI output was used to trigger all the instrument sounds heard, and the resulting audio was edited together with the video to create this piece:

All (MIDI triggered) sounds orchestrated, arranged, and edited by Daniel Oore
Audio & video edited by Daniel Oore
All raw MIDI files (notes & their relative durations) generated by a machine learning system by Jason d’Eon, Sri Harsha Dumpala, Sageev Oore