A.Squared: Extend the Voice
A cappella music places the human voice front and center. It throws out instruments and synthesized sounds, leaving vocals in a raw form or layered in lush harmonies. The Yale University students who make up a cappella group A.Squared are taking that paradigm and pushing it in an experimental direction: all sounds originate with the human voice, but performances also utilize Push, Live, and Max for Live for on-the-spot effects processing and live loop layering. Take a listen to A.Squared’s gorgeous cover of James Blake’s “Retrograde” below, then read on to hear from group member Jacob Reske on how it all works:
How does an A.Squared performance work? What is the role of the singers, and how are the voices processed?
We're a 6-person electronic vocal group: five singers and one beatboxer/producer with a Push (that's me). I write all of the arrangements and do production; I'll often co-write the songs with the singers. Here's the basic concept: using Ableton Live to augment the human voice, live and in an ensemble setting.
So we're just like your typical six-person vocal group, except that each of our members has the power to loop their voices and add effects in real time, individually. What we're interested in doing is exploring that grey area between vocal and instrumental music, using software as a tool to facilitate that exploration. But we still want to work within the a cappella tradition, as all of our singers (and a lot of our audience) come from that style. So, to that end, we've adopted some of the same constraints on our music that a cappella musicians usually have: 1) everything is made with the voice (no instruments), and 2) everything is performed live.
We use Live as the platform for the creation/production process, rehearsals, and performances. A lot of my friends who are electronic musicians kind of look at me funny when I say that we have an a cappella group that performs entirely in Ableton Live. But that's exactly what we do -- Live facilitates every part of the process. It's a much more flexible tool than a couple of loop pedals and some guitar effects, and you can make some truly crazy sounds with voices and DSP alone, even without typical instruments.
To that end, because we constrain ourselves by requiring everything to be live, we construct the Live Set very specifically beforehand to serve as the roadmap for the arrangement. This lets us trigger the right effects/loops for each section of the song and fit within the Live paradigm.
The great thing about this workflow is that the performance aspect is very flexible -- we can have one person control the whole arrangement, or give control over to the singers individually. Since so much is going on in Live, it can be empowering for the singers to have control of when to punch in loops/effects themselves, and it can make for some very cool creative choices.
At the end of the day, the singers get to just perform the songs like they normally would, except that there's a click track in their ear to sync with Live, and they have to be extra cognizant of when parts are getting looped and not looped.
How are you using Push live with A.Squared?
We use the Push pretty extensively, both in the creation/improvisation phase and in rehearsals and performance. For the larger and more complex pieces, I use it to trigger scenes, which are programmed ahead of time to serve as the skeleton for the arrangement.
We're very keen on ensuring that everything in the performance is live -- in other words, there's no pre-recorded material. So at the start of our songs, we don't have any active audio clips in the Live Set. But there is a lot of structure already present. We map out our arrangements so that Scenes trigger two things: 1) slots for audio to be recorded from the singers' live mics, and 2) dummy clips that select which effects chain gets mapped to the voice, along with automation. With this system, one person can manage the looping and effects of five singers at the same time. The Push serves as the guide for the whole piece, since everything in our arrangement can be controlled from there.
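To make that structure concrete, here's a rough Python sketch of the idea as plain data (not Ableton's API, just an illustration): each Scene pairs per-singer record slots with dummy-clip effect selections, so one launch gesture touches every track at once. The singer, slot, and chain names are invented.

```python
# Hypothetical data model of a scene-based arrangement like the one described:
# each scene lists which singer mics get recorded into clip slots and which
# "dummy clip" effects chain each voice should switch to.

SINGERS = ["S1", "S2", "S3", "S4", "S5"]

arrangement = [
    {"name": "Verse 1",
     "record": {"S1": "slot 1", "S2": "slot 1"},           # loop these live mics
     "effects": {"S3": "dry", "S4": "dry", "S5": "dry"}},   # dummy-clip chains
    {"name": "Chorus 1",
     "record": {},                                          # nothing new recorded
     "effects": {s: "wide harmony chain" for s in SINGERS}},
]

def fire_scene(scene):
    """Print what launching this scene would do across all singer tracks."""
    print(f"== {scene['name']} ==")
    for singer, slot in scene["record"].items():
        print(f"  {singer}: start recording live mic into {slot}")
    for singer, chain in scene["effects"].items():
        print(f"  {singer}: dummy clip selects effects chain '{chain}'")

for scene in arrangement:
    fire_scene(scene)
```

Firing a scene just walks those two maps, which mirrors how a single press on Push can re-route all five voices at once.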
It's nice to have control of every singer's loops/effects from one place, but sometimes it's more fun to give control back to the singers and see what they come up with. We have some songs where each singer has an iPad running TouchAble, a program that lets them control their segment of the session independently. This can be very fun in an improvisatory setting, or just for coming up with new material as a group.
Some of our other songs use the Push as an instrument to play back vocal samples that we record on the fly, like those we record with Granulator's live mode. I'm also using it constantly to tweak things in rehearsal, or to apply effects and remix the master track if we want to break down the beat a bit.
Which Max for Live devices are you using with A.Squared?
Since all of our material is live vocals, we're more on the effects side, but a couple of big ones come to mind: Granulator and Buffer Shuffler are staples for us. Granulator's live mode lets us sample vocals in real time and pull them into an instrument so we can "play" that voice just after sampling it. Very processor-intensive, but well worth it! Buffer Shuffler is the big one that we use in "Holocene", and it's insanely flexible. I'll often just pull it into an arrangement that we want to remix and see where the parameters take me. Another big one is Multimap, which I only discovered a few months ago and which has saved me endless amounts of time. Having one dummy track map effects automation to 6 independent singer tracks is really, really useful.
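To give a sense of what that one-to-many mapping buys you, here's a small, hypothetical Python sketch of the general idea (not Multimap's actual implementation): one macro value fans out to several destinations, each scaled into its own range. All of the parameter names below are invented.

```python
# Sketch of one-to-many macro mapping: a single 0..1 control value is scaled
# into each destination parameter's own range, so one knob moves many tracks.

mappings = [
    # (destination parameter, min value, max value); names are made up
    ("Singer 1 reverb send",    0.0,    0.8),
    ("Singer 2 reverb send",    0.0,    0.8),
    ("Singer 3 delay feedback", 0.1,    0.6),
    ("Singer 4 filter cutoff",  200.0,  8000.0),
    ("Singer 5 filter cutoff",  200.0,  8000.0),
    ("Beat track drive",        0.0,    1.0),
]

def apply_macro(macro):
    """Scale one macro value (0..1) into every mapped parameter's range."""
    for name, lo, hi in mappings:
        value = lo + macro * (hi - lo)
        print(f"{name}: {value:.2f}")

apply_macro(0.5)  # one gesture updates all six destinations at once
```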
Some of our other M4L patches are custom, since we need them to do very specific jobs. We've built one that does transposition, and I'm working on building a 4-part diatonic vocal harmonizer. For an all-vocal group, this is like the holy grail, since a single singer can maneuver through chord changes and sound shockingly close to synths and electric guitars. Plus, playing with harmonizers is insanely fun.
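As a sketch of the music theory behind a diatonic harmonizer (this is not the actual patch, just the interval math), the Python snippet below works out the semitone shifts that keep stacked harmonies inside a major key. A real-time device would apply shifts like these to pitch-shifted copies of the live voice.

```python
# Toy diatonic-harmonizer math: given a key and a sung scale tone, compute the
# pitch shifts (in semitones) that land a third, fifth, and seventh above it
# while staying on notes of the key.

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def diatonic_shifts(note_midi, key_root=0, degrees_up=(2, 4, 6)):
    """Return semitone shifts for diatonic intervals above a scale tone."""
    rel = (note_midi - key_root) % 12
    if rel not in MAJOR_SCALE:
        return []  # non-scale tone; a real device might snap it or pass it through
    degree = MAJOR_SCALE.index(rel)
    shifts = []
    for up in degrees_up:
        octaves, wrapped = divmod(degree + up, 7)
        shifts.append(MAJOR_SCALE[wrapped] + 12 * octaves - rel)
    return shifts

# A sung C in C major harmonizes to E, G, and B (a Cmaj7 from one voice)...
print(diatonic_shifts(60, key_root=0))  # [4, 7, 11]
# ...while a sung D gets F, A, and C, so the harmony follows the key.
print(diatonic_shifts(62, key_root=0))  # [3, 7, 10]
```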
Figured I'd mention some of the lesser-known standard Live effects that we use extensively. Corpus is a BIG one for us; it's responsible for probably half of our kick drum sounds. We use Vocoder a lot to do formant shifts. And Saturator is like the Swiss army knife of distortion plug-ins, but everyone knows that!
In your performance of “Holocene”, how are you processing the voices to get the electronic-sounding pitch-shift effects on the loops?
A closely-guarded secret ;) JK, that one was fun to make, and I honestly just stumbled upon that sound one day. In the second verse, I record the voices individually to separate (muted) tracks, and then I have effects on those tracks that push the vocals to some extremes. It's a combination of the transposition M4L patch, Antares' "Throat" plug-in, and a healthy dose of auto-tune. From there, the vocals get mixed down and thrown into Buffer Shuffler, which creates that new, shuffled-around sample, just in time for the chorus. It's all pre-programmed and automated so that the processing happens live, but the singers REALLY have to be on their mark to get it to sound right!
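For anyone curious about the buffer-shuffling step itself, here's a toy Python sketch in the spirit of that idea (not Buffer Shuffler's actual algorithm): chop a recorded buffer into beat-sized slices and play them back in a scrambled order.

```python
# Minimal "shuffle a recorded buffer" illustration: slice the audio into
# beat-synced chunks and reorder them. A real device does this in sync with
# the transport and with per-slice effects; this just shows the reordering.

import random

def shuffle_buffer(samples, sample_rate, bpm, slices_per_beat=2, seed=None):
    """Cut `samples` into equal beat-fraction slices and return them reordered."""
    slice_len = int(sample_rate * 60 / bpm / slices_per_beat)
    slices = [samples[i:i + slice_len] for i in range(0, len(samples), slice_len)]
    order = list(range(len(slices)))
    random.Random(seed).shuffle(order)  # seeded, so the shuffle is repeatable
    out = []
    for idx in order:
        out.extend(slices[idx])
    return out

# Fake "audio" (a simple ramp) makes the reordering easy to see when printed.
buffer = list(range(32))
print(shuffle_buffer(buffer, sample_rate=8, bpm=60, slices_per_beat=2, seed=1))
```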
When I'm working on producing the arrangements, I'll often have the singers come in one by one and lay down scratch takes of vocal parts. Then, I'll pull one of the several effects chains I have into the Live Set and start seeing how far I can stretch things using just the voices. If you plan everything right, though, the performance can sound just as produced as it does in the studio, except that in performance all of the material is live.