Music AI has finally moved beyond generating MIDI notes and into generating actual sound. There is now a publicly available model, OpenAI's Jukebox, that lets anyone experiment with the potential this tech offers.
I’ve been feeding short audio snippets from the Listening Experience project to the AI, both to see what happens and to learn more about how the process works. The goal is to end up with about one hour of sound snippets and then build some sort of piece from them.
I’m not really sure yet how to present this stuff. Right now there is a trend on YouTube of videos that play the original audio source followed by the AI’s reinterpretations.