Instead of just accepting all this recent machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live. Magenta provides a pretty graspable way to get started in a field of research that can get a bit murky. Because it gives you easy access to machine learning models for musical patterns, you can generate and modify rhythms and melodies.
Magenta Studio: Free AI tools for Ableton Live
Download Magenta Studio for free
Requires Ableton Live 10.1 and Max for Live
Magenta Studio made its first appearance in public at Ableton's Loop conference in LA in November. There, in a talk entitled "The Computer as Collaborator," the Magenta team joined artists in exploring what machine learning means for creativity. Artists YACHT and lucky dragons reflected on what humanness and listening mean to music, and how they've experimented with those media, with YACHT going as far as generating lyrics and melodies. Jesse Engel and Adam Roberts brought perspective from the Google Brain research team, melding engineering and music.
After some more polishing, Magenta Studio saw its full release earlier this year and is now ready for primetime. If you're working with Ableton Live, you can use Magenta Studio as a set of devices. Because they're built with Electron (a popular cross-platform JavaScript tool), though, there's also a standalone version. And if you're a developer, you can dig far deeper into the tools and modify them for your own purposes; even with just a little comfort at the command line, you can also train your own models. (More on that in a bit.)
I got to sit down with the developers in LA, and I've also been playing with the latest builds of Magenta Studio. But let's back up and first talk about what this means.
AI?
"Artificial Intelligence"? Well, apologies: I could have fit the letters "ML" into the headline above, but no one would know what I was talking about.
Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. "TensorFlow" may sound like some kind of stress exercise ball you keep at your desk. But it's really about creating an engine that can very quickly process lots of tensors: geometric units that can be combined into, for example, artificial neural networks.
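To make that concrete, here's a minimal Python sketch (an illustration of the idea, not Magenta's actual code): a tensor is just an n-dimensional array of numbers, and combining tensors with operations like matrix multiplication is the basic move of a neural network layer.

```python
import tensorflow as tf

# A "tensor" is just an n-dimensional array of numbers.
notes = tf.constant([[60.0, 62.0, 64.0],
                     [65.0, 67.0, 69.0]])   # a 2x3 tensor of MIDI pitches
weights = tf.random.normal([3, 4])          # a 3x4 tensor of weights

# Chaining operations like this one is the basic building block
# of an artificial neural network layer.
layer_output = tf.matmul(notes, weights)
print(layer_output.shape)                   # (2, 4)
```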
Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you've been doing in music software with tools like grids, and lets you use a mathematical model that's more sophisticated, one that gives you different results you can hear.
You may know Magenta from its involvement in the NSynth synthesizer. That also has its own Ableton Live device, from a couple years back.
NSynth uses models to map sounds to other sounds and interpolate between them; it actually applies the techniques we'll see in this case (for notes/rhythms) to audio itself. Some of the grittier artifacts produced by this process even proved desirable to some users, since they sound a bit unique, and you can again play around with them in Ableton Live.
But even if that particular application (finding new instrument timbres) didn't impress you, the note/rhythm-based ideas make this effort worth a new look.
Recurrent neural networks are a kind of mathematical model that loops over and over algorithmically. We say they're "learning" in the sense that there are some parallels to very low-level conceptions of how neurons work in biology, but this is on a more basic level: running the algorithm repeatedly means you can predict sequences more and more effectively given a particular data set.
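As a rough sketch of what that looping means (plain Python with NumPy, nothing Magenta-specific): the same small function runs at every timestep, with a hidden state carrying along what the network has "seen" so far. Training adjusts the weights so that hidden state becomes useful for predicting the next step.

```python
import numpy as np

def rnn_step(x, h, W_xh, W_hh, b):
    """One step of a vanilla RNN: mix the current input with the
    hidden state carried over from previous steps."""
    return np.tanh(x @ W_xh + h @ W_hh + b)

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(8, 16))   # input-to-hidden weights
W_hh = rng.normal(size=(16, 16))  # hidden-to-hidden weights (the recurrence)
b = np.zeros(16)

h = np.zeros(16)                  # hidden state starts empty
melody = rng.normal(size=(4, 8))  # a toy "melody" of 4 timesteps
for x in melody:
    h = rnn_step(x, h, W_xh, W_hh, b)  # the loop is what makes it recurrent
```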
Magenta's "musical" library applies a set of learning principles to musical note data. That means it needs a set of data to "train" on, and the results you get depend partly on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you'll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.
One reason it's cool that Magenta and Magenta Studio are open source is that you're totally free to dig in and train models on your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn't judge Magenta Studio on these initial results alone.)
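If you do want to go down that road, the first step is getting your MIDI into Magenta's training format. A minimal Python sketch using the note_seq library that accompanies Magenta (the file name here is a placeholder):

```python
import note_seq

# NoteSequence is the protocol-buffer format Magenta models train on.
sequence = note_seq.midi_file_to_note_sequence('my_melody.mid')  # placeholder path

# Each note carries the attributes the models learn from.
for note in sequence.notes[:4]:
    print(note.pitch, note.velocity, note.start_time, note.end_time)
```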
What's in Magenta Studio?
Magenta Studio has a few different tools. Many are based on MusicVAE, a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones, which is why this gets interesting for music software.
Crucially, you don't have to understand or even much care about the math and analysis going on here; expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it's a lot better to dive in and see what the results are like first. And now, instead of just watching a YouTube demo video or a song snippet example, you can play with the tools interactively.
Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You'll make new clips, sometimes starting from existing clips you input, and the device will spit out the results as MIDI you can use to control instruments and drum racks. There's also a slider called "Temperature," which determines how the model is sampled mathematically. It's not quite like adjusting randomness (hence the new name), but it gives you some control over how predictable or unpredictable the results will be, if you accept that the relationship may not be entirely linear. And you can choose the number of variations, and the length in bars.
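Here's roughly what a temperature control does, as a sketch in plain Python (the real models are more involved, but the principle is the same): the model's scores for each candidate next note are divided by the temperature before sampling, so low values sharpen the distribution and high values flatten it.

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    # Dividing by temperature reshapes the distribution before sampling:
    # below 1.0 it gets peakier (predictable), above 1.0 flatter (surprising).
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

rng = np.random.default_rng(1)
next_note_scores = np.array([2.0, 1.0, 0.2])  # toy scores for three candidate notes
print(sample_with_temperature(next_note_scores, 0.1, rng))  # nearly always note 0
print(sample_with_temperature(next_note_scores, 2.0, rng))  # far less predictable
```

Notice it's not randomness being dialed up and down, just how strictly the model's own preferences are followed, which is also why the effect isn't linear.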
The data these tools were trained on represents millions of melodies and rhythms. That is, they've chosen a dataset that will give you fairly generic, vanilla results, in the context of Western music, of course. (And Live's interface is largely set up with expectations about what a drum kit is, and with melodies arranged around a 12-tone equal-tempered piano, so this fits that interface… not to mention, arguably there's some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)
Here are your options:
Generate
This makes a new melody or rhythm with no input required; it's the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.
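If you're curious what Generate is doing under the hood: the devices themselves are built in JavaScript, but the same models are exposed in Magenta's Python API. A sketch, assuming the published 'cat-mel_2bar_big' MusicVAE melody checkpoint (the checkpoint path is a placeholder you'd download from the Magenta project):

```python
import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

# Load a pre-trained MusicVAE melody model (checkpoint path is a placeholder).
model = TrainedModel(configs.CONFIG_MAP['cat-mel_2bar_big'],
                     batch_size=4,
                     checkpoint_dir_or_path='cat-mel_2bar_big.ckpt')

# Sample four new 2-bar melodies from the model, no input clip required.
melodies = model.sample(n=4, length=32, temperature=1.0)
for i, melody in enumerate(melodies):
    note_seq.sequence_proto_to_midi_file(melody, f'generated_{i}.mid')
```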
Continue
This is actually a bit closer to what Magenta Studio's research was meant to do: punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it, or generate a bunch of variations/continuations of an idea quickly.
Interpolate
Instead of one clip, use two clips and merge/morph between them.
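In API terms, this encodes both clips into the model's latent space and decodes points along the line between them. A sketch, reusing the model loaded in the Generate example above (the file paths are placeholders, and the input clips need to fit what the 2-bar melody model can encode):

```python
import note_seq

clip_a = note_seq.midi_file_to_note_sequence('clip_a.mid')  # placeholder paths
clip_b = note_seq.midi_file_to_note_sequence('clip_b.mid')

# Returns a series of sequences morphing from clip_a to clip_b.
morphs = model.interpolate(clip_a, clip_b, num_steps=8, length=32)
for i, morph in enumerate(morphs):
    note_seq.sequence_proto_to_midi_file(morph, f'morph_{i}.mid')
```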
Groove
Adjust timing and velocity to "humanize" a clip to a particular feel. This is possibly the most interesting of the lot, because it's a bit more focused and immediately solves a problem that software hasn't solved terribly well in the past. Since the data set is built around 15 hours of real drummers playing, the results here sound more musically specific. And you get a "humanize" that's (arguably) closer to what your ears expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.
Drumify
Same dataset as Groove, but this creates a new clip based on the groove of the input. It's… sort of like if Band-in-a-Box rhythms weren't awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that "accompanies" an input.
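Groove and Drumify sit on the GrooVAE family of MusicVAE models. As a hedged sketch of the humanize idea (the 'groovae_2bar_humanize' config name comes from the Magenta repository; the checkpoint path and drum file are placeholders): encode the rigid clip, then decode it back, letting the decoder reintroduce the timing and velocity feel it learned from real drummers.

```python
import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

# Load a pre-trained GrooVAE model (checkpoint path is a placeholder).
groove_model = TrainedModel(configs.CONFIG_MAP['groovae_2bar_humanize'],
                            batch_size=1,
                            checkpoint_dir_or_path='groovae_2bar_humanize.ckpt')

quantized = note_seq.midi_file_to_note_sequence('stiff_drums.mid')  # placeholder

# Encode the quantized clip into the latent space, then decode it back
# with learned micro-timing and velocity.
z, _, _ = groove_model.encode([quantized])
humanized = groove_model.decode(z, length=32)[0]
note_seq.sequence_proto_to_midi_file(humanized, 'humanized_drums.mid')
```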
So, is it useful?
It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument or read notation, you're working with a model of music. And that model will impact how you play and think.
The more pointed question with something like Magenta is: do you really get musically useful results? Groove, to me, is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is compelling for the same reason.
Generate is also fun, though even in the case of Continue, the issue is that these tools don't particularly solve a problem so much as give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, among others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise, even if you're alone in a studio or some other work environment.
One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music deals with weight, expression, and timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models. But take chant music, for example: composers were working with less quantifiable elements as they worked, too, like the meaning and sound of the text, positions in the liturgy, and multi-layered quotes and references to other compositions. And that's the simplest case; music from punk to techno to piano sonatas will challenge these models in Magenta.
I bring this up not because I want to dismiss the Magenta project. On the contrary, if you're aware of these things, having a musical game like this is even more fun.
The moment you begin using Magenta Studio, you're already extending some of the statistical prowess of the machine learning engine with your own human input. You're choosing which results you like. You're adding instrumentation. You're adjusting the Temperature slider using your ear, when in fact there's often no real mathematical indication of where it "should" be set.
And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven't changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.
And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that's a benefit.
Where this could go next
There are lots of people out there selling you "AI" solutions, and, yeah, of course, with this much buzz, a lot of it is snake oil. But that's not the experience you have talking to the Magenta team, partly because they're engaged in pure research. That puts them in line with pure technical inquiries of the past, like the team at Bell Labs (with Max Mathews) that first created computer synthesis. Magenta is open, but also open-ended.
As Jesse Engel of Magenta tells us: "We're a research group (not a Google product group), which means that Magenta Studio is not static, and a lot more interesting models are probably on the way. Things like more impressive MIDI generation, state-of-the-art transcription, and new controller paradigms.
"Our goal is just for someone to come away with an understanding that this is an avenue we've created to close the gap between our ongoing research and actual music making, because feedback is important to our research."
So okay, music makers, have at it:
Download Magenta Studio for free
Requires Ableton Live 10.1 and Max for Live
Text: Peter Kirn
A version of this article appeared on Create Digital Music