You’ve collaborated with electronic musicians throughout your career. How did these collaborations start?

My work with electronic artists started in the early 90s, and was in a way driven by what happened with sampling at that time. I was aware that there was this shared space between the academic electronic music I was used to, coming out of places like IRCAM in Paris, and people like Future Sound of London, who were doing something very creative, abstract, coloristic, and tactile. It felt like there was an open space between these two traditions, which on the face of it looked miles apart. For me that was very exciting, and at that time London was full of this stuff, happening mostly in Shoreditch and at Strongroom Studios. I got together with FSoL and started working with them as a note wrangler. Because they were producers, they were not really trained, so it was fascinating to me that we would have a very different understanding of the same sound.

They would say, “We love that sound, we love the way that feels,” and I would say, “Yes, that’s great, but that C needs to be a C# and then it would be great.” And they looked at me and said, “What are you talking about? It’s already great!” It was hilarious, because I was thinking of it in terms of chords and scales and modes. It was a lighthearted, odd moment really, because it was a different way of looking at a sound.

When and why did you begin to incorporate electronic elements in your music?

I see it as an extension of what’s been going on within classical music for hundreds of years. If you go back to the 18th century, the orchestra was this little group of a few instruments, and through the centuries it just got bigger and bigger. What’s happening is that composers are looking for new colors [and] new ways of telling stories. Incorporating electronics into my music is just an extension of that. We’ve got all these amazing new tools, so why would we not use them?

Which were the first electronic instruments you used in your music?

The first electronic instruments I used were the ones I built. The first synthesizer I built was called the Transcendent 2000, which I think is also the first synthesizer Thomas Dolby built, so it has a kind of pedigree. It’s basically a Minimoog clone. It was designed by Tim Orr, who also designed the OSCar, so he’s kind of a legend in synth design. That was my first instrument, and later on I started to get my hands on as many toys as possible, which is what we all do, and I guess I gravitated toward that analog Moog sound.

When did you start using plug-ins?

When the plug-in era began, you were able to make effects in Pro Tools and Logic. That was a game changer, because you started to have real repeatability, a real ability to work accurately, and to recall stuff. At the beginning the plug-ins sounded really bad, although there was a brilliant Diva plug-in for Pro Tools which had an infinite setting, which is like the holy grail of reverb. I mean, it just carried on for days, and had amazing color. I used it on The Blue Notebooks as a sort of pad, not as a reverb.

I got into REAKTOR quite early on. I liked the idea of a space where you can build your own instrument. Working with electronic music, there is the composing of sounds, but there’s also another dimension, which is the composing of the instrument that makes the sounds. A lot of people have done amazing things like that. If you think of Autechre, there’s a complete continuity between building the stuff that makes the noises and making the noises with it.

I went through a whole REAKTOR craze. The Blue Notebooks and Songs from Before were my REAKTOR Ensemble-building high point. For me it’s always been a bit of an accidental ideas generator. A lot of electronic music practice is a sort of experiment, where you try something. It’s quite different to composing on paper. Composing on paper is: ‘I’m gonna have an idea and I’m gonna write it down’. Electronic music is more like ‘try this’, and REAKTOR is fantastic for that.

The KONTAKT environment I use all day long, every day. KONTAKT is where all the sampled libraries live: the Spitfire stuff, Heavyocity, the Native Instruments stuff obviously, and it’s just a very solid machine. In terms of libraries I probably use the Spitfire stuff the most, because there’s a great continuity in sonic fingerprint between the Spitfire libraries and my recordings. I use a lot of the Heavyocity stuff, I think they’re great. But honestly there are so many. There’s Pendle in Brighton, who makes Sound Dust. Amazing! I mean, it’s a guy in his house just making instruments for KONTAKT. There are so many people doing that; it’s a community.

What was different on Sleep compared to previous projects?

There are a few things that happened on Sleep that are different. The first is that I finally went nuts and bought the TC 6000, which for me is the holy grail of reverbs. I consider that an essential instrument on the project. The second is that I’ve used a lot of filtering within the audio, like the orchestral stuff: processing it through the filters and the hardware synthesizers, just for texture. The other thing about Sleep is that it’s the first piece I recorded digitally. Every record I’ve made up to this point has been a 2” 16-track [recording], done in a very nerdy sort of workflow, with everything how I wanted. I’m completely obsessed with all of that stuff, but with Sleep you could not record on tape. So we had the recordings in Pro Tools. In a way Sleep is a digital work, and can only really be played in its entirety as a stream or on Blu-ray.

How did you get into film/tv scoring?

That happened by accident, really. I made Memoryhouse back in 2002, and The Blue Notebooks in 2004. After that, people started asking me if they could put them in their movies, which was wonderful. That happened for a while and then I started getting requests, mostly from crazy people, to write new music for their films. One of these crazy people – who actually turned out not to be crazy, but rather brilliant, in fact – was Ari Folman, who had written Waltz With Bashir in Israel while listening to The Blue Notebooks on repeat for about a week. So he said, “I’ve written a film and you have to score it.” Ari is so soulful, and so interesting, and the script was amazing – I had to say yes. For me, it was the perfect film and I had a wonderful time working with him. People kept asking from then on, but a lot of them wanted me to do Middle Eastern war movies – I had to turn down a few of those.


“The Blue Notebooks” (2004)


What are the challenges when working on those scores?

Scoring is different from writing an album; it has to be. Within a film, the music is just one part of the storytelling. It’s not a symphony, but rather a puzzle-solving process a lot of the time. You have a lot of things to consider: the character, the story, the setting, the dynamics, how it’s shot, how the camera is moving, what the actors are doing, what their intention is, what we do and don’t want the audience to know at this point in the story, whether we’re trying to support the action or trying to mislead and surprise… there are a million different questions around any given moment. And it’s really interesting to get into that and figure out the answers in a creative way. That’s what I like about it.

The other thing is that filmmaking is a collaborative endeavor. It’s about conversations, which is great. Making an album is more about me sitting in a room on my own and slowly going crazy!

Deadlines? Reworks? How does it work for you?

Film music is an industrial process. You have to be realistic about what’s going on. Of course we want to be as creative as possible, but at the same time, in 90 days or whatever, you’re recording that score. It’s a collision between a poetic venture and a hardcore industry, and there’s the challenge.

Film scoring is very technical, as you’re writing to a time-coded picture. Life can be very difficult if you do it on paper, so it lives very much in a sequencer. If it needs to be played by an orchestra you need to generate parts, so it goes from Logic into Sibelius. We do the scoring, then the parts go to the band, and meanwhile Pro Tools sessions are set up to record into. Then we record. There is the mixing process, where we mix into various stems, and all of that goes to the dub. That’s the incremental industrial workflow, but hopefully we’ve got a white light of creativity before we get to that.

What’s your current live set-up?

The current live setup is a little bit unusual because it’s specifically about Sleep. Sleep is just a monster in terms of the amount of data being used. The multi-tracks are 24-bit/96 kHz, eight-and-a-half hours, and something like 20 tracks. It’s the most data I’ve ever seen in my life. We are using quite a lot of that material in various forms in the live show, because there’s a lot of electronic music in Sleep.
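To put those figures in perspective, here is a back-of-the-envelope estimate of the raw size of a session like that. The track count, bit depth, sample rate, and duration are the numbers quoted above; the assumption that everything is stored as uncompressed 24-bit PCM (3 bytes per sample) is mine.

```python
# Rough size of 20 tracks of uncompressed 24-bit / 96 kHz audio
# running eight and a half hours, as described in the interview.
SAMPLE_RATE_HZ = 96_000
BYTES_PER_SAMPLE = 3   # 24-bit PCM (assumed framing)
TRACKS = 20
HOURS = 8.5

bytes_total = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * TRACKS * HOURS * 3600
print(f"{bytes_total / 1e9:.0f} GB of raw audio")  # prints "176 GB of raw audio"
```

Roughly 176 GB before any processing or stems, which makes the tape comment above concrete: no analog format could hold a continuous recording of that length.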


How do you see the evolution of music technology influencing your work?

We started with very un-evolved tools which were hard to use. Over time there have been more and more users, more input, and more development, so we started to get tools which are more flexible, and not so prescriptive in how you use them. That’s really the interesting thing: we got stuff which is more like a blank space which you can fill in. I think that a lot of these hybrid systems, where you get a sort of physical interface to something in the machine, are an interesting conversation, and there’s been a lot of that going on in the last few years.

What advice do you have for young composers?

Everybody has something that no one else has, and it’s about trying to find out what that is. We’re unique creatures with unique life experiences, likes, and dislikes. That’s what we’ve got: our individuality, a sort of special little place in the world. Finding out what that thing is, is difficult, as it’s a noisy world.