Wednesday, June 26, 2013

An Interview with James McCartney, the developer of SuperCollider, a programming language for real-time sound, music and algorithmic synthesis
By Ira G. Liss

SuperCollider is a programming language for real-time audio production and composition. It is free, open-source software popular among artists, musicians and developers, and it is taught in various college electronic music classes. An enthusiastic worldwide community produces work with it and contributes to its development.

During the week of May 20, the 2013 SuperCollider Symposium was hosted in the Roser ATLAS building on the CU-Boulder campus. Evening concerts were performed by a variety of artists and developers from around the world. Discussions covering various aspects of development, invention and creativity were presented during the day.

I had the opportunity and privilege to interview one of the attendees, the creator and developer of SuperCollider, James McCartney. He first released the software in 1996, and over the years it has undergone several upgrades. Currently a senior engineer at Apple in Cupertino, California, he works in the CoreAudio group, responsible for operating-system-level audio services for Mac OS X and iOS.

To provide a more direct experience of the software, I asked two CU artist/composers to comment on it:

“For me, SuperCollider is the fastest and most expressive way to achieve the music I create. Its openness and flexibility provide the artist with the ability to create anything they could imagine: from traditional composed works to endless algorithmic structures; from dance music to experimental noise. Although there are various alternatives out there that can achieve the same basic result, I feel that the power of a programming language, rather than a graphical environment, allows for a higher degree of artistic control, despite the initial learning curve.” – Cole Ingraham, DMA graduate, College of Music, CU-Boulder and lead organizer/producer of the 2013 SuperCollider Symposium

Screenshot of the SuperCollider interface – The code (visible on left half of screen) was written for a performance of the Boulder Laptop Orchestra (BLOrk) by CU music faculty John Drumheller. The middle grey window is the graphical user interface (GUI) generated by the code that gives each BLOrk musician control over the sound coming from their workstation during a concert.

“To me, SuperCollider is a toolkit for building environments for sound art creation. Because of the vastness of possibilities with this toolkit, I see it as a blank canvas with a large set of brushes and pigments, or a large pile of scrap metal with state-of-the-art welding torches. SuperCollider has allowed me to explore sound in a way that was not available before and has opened new paths and ways of thinking about sound, music and image.” – John Drumheller, instructor and director of music technology, College of Music, CU-Boulder

The following interview was conducted on May 23, 2013.

Ira G. Liss: How did you begin making electronic music?

James McCartney: I had an interest in computer science, but the reason for considering computer science was because I was interested in electronic music. I’ve always done audio.

I was using Csound and stuff like that. First I was using analog synthesizers, then I learned Csound. I didn’t really like it that much, so I started writing my own tools, I guess, from that point.

James McCartney
So there were several different iterations. I wrote something else, Synth-O-Matic, then I wrote SuperCollider Version 1, and then Version 2, which sort of established the language the way it is now.

And then, I wanted to change the architecture of it, so I wrote SuperCollider Version 3. That’s what people are using now.

The idea was just to make a language that made it possible to create sounds – to write code that would make whole classes of sound or whole ranges of sonic possibilities from not-so-much code. That was the idea basically.
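To give a flavor of that terseness, here is a hypothetical one-liner in SuperCollider's own language (an illustrative sketch, not an example from the interview): a few tokens describe not one fixed sound but a whole family of them.

```supercollider
// A wandering sine tone: LFNoise0 picks a new random frequency
// 4 times per second, anywhere between 200 and 800 Hz, so every
// run of this single line yields a different stream of pitches.
{ SinOsc.ar(LFNoise0.kr(4).range(200, 800), 0, 0.2) }.play;
```

Changing one number (the rate, the range, the oscillator type) opens up a different region of that sonic space, which is the "whole classes of sound from not-so-much code" idea in miniature.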

IGL: Did you have a conscious desire to free yourself from the sounds of traditional, acoustic instruments?

JM: I think that’s been the goal of electronic or computer music; maybe not the goal, but certainly the freedom that it offers – the ability to make any sound possible.
The loudspeaker is the most flexible sonic instrument.

IGL: It is amazing that the vibrating diaphragm can create almost anything.

JM: And then if you have a computer that can create any waveform going into something that can produce any sound – theoretically, there’s no limit to what sound you can make.

All electronic and computer music is in that realm – trying to exploit that realm of possibility, or has that potential to exploit it.

IGL: Could you talk about what you were striving for in creating SuperCollider?

JM: There’s a range of techniques that people have discovered, from all the analog stuff – oscillators and filters – all the tools that have been used. The idea was to bring it all into this software that made it easy to configure it all dynamically, so that while it’s running, it can be creating these sound structures that can be unique to the moment.
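A minimal sketch of that dynamic configuration, in SuperCollider (the names `\ping`, `freq` and `amp` are illustrative, not from the interview): an instrument is defined once as code, and then unique instances of it can be spawned, with fresh parameters, while the system is running.

```supercollider
// Define a simple percussive instrument on the server...
SynthDef(\ping, { |freq = 440, amp = 0.2|
    // A sine oscillator shaped by a short percussive envelope;
    // doneAction: 2 frees the synth automatically when it fades out.
    var sig = SinOsc.ar(freq) * EnvGen.kr(Env.perc(0.01, 0.5), doneAction: 2);
    Out.ar(0, sig * amp);
}).add;

// ...then, at any moment during performance, create instances on the fly,
// each with parameters chosen in that moment:
Synth(\ping, [\freq, exprand(200, 2000)]);
```

Because instruments, patterns and routines are all just values in a running language, the sound structures really can be assembled and reconfigured live, "unique to the moment."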

IGL: Kind of analogous or similar to traditional musical instruments where you make changes and get instant feedback or instant gratification; you’re not waiting months for the code to run somewhere on a mainframe.

JM: There’s a story about that for me. I was working on some music for modern dance and had a program that I was running on my computer, and it wasn’t real time. At the time, computers were not fast enough. While I was working on this piece (and it was taking me forever because it was taking so long to calculate the audio), the PowerPC chip came out, which had floating point; it was the first RISC microprocessor that had really fast floating point.

Apple was making one using this IBM chip; I went on the day that it was available and I bought it. I recompiled my program and it ran 32 times faster, which was faster than real time. So I went from maybe a third of real time to faster than real time, literally overnight.

I finished that piece and then decided I was going to rewrite all my code to work in real time. Before that, I’d written it so that you set it all up and then it would make the sound for you in a file.

I changed it so that it would send the sound directly to the speaker in real time; that was the beginning of making a tool that could do live music.

(Editor’s note: Apple made the PowerPC chip available in their Mac computers in 1994.)

IGL: It sounds like you were given wings or went from a horse-drawn carriage to a V8 engine.

JM: That was a good event. It took me some years more to make it where it was really easier to use. There were a lot of limitations in the first version to be overcome. That’s been a constant process, even now, thinking about how I could do this, and what would make it even more flexible or better, more intuitive to use.

IGL: When that moment happened, where you realized it was faster than real time, was it a celebration kind of moment? Did you tell your colleagues, wow, this chip is fabulous! Or did you just take it in stride?

JM: I wasn’t really communicative about it. I just thought, Great! I can get my piece done in time, was the main thing at that moment. It made me excited to be working on it. I wasn’t really shouting about it.

IGL: Did you come to computer-generated sound with a traditional music background?

JM: No, I listened to music when I was a kid and I heard Walter Carlos (Editor’s note: Name changed to Wendy Carlos, co-creator of the album “Switched-On Bach.”) and thought, oh, that’s kind of interesting, and I heard German electronic groups in the late ’70s and I thought, oh, this is something that I’m really interested in. I wasn’t that much interested in music until I heard electronic music. I do like a lot of music, but for myself, I’m not interested in acoustic music for what I want to do.

IGL: That’s very clear-cut then. For each of us, our preferences are pretty much things we’re born with. We don’t really choose them. So it makes sense then that electronic music is really where your heart led you to listen and make tools.

JM: I was very mathematical. Walter (Wendy) Carlos had a box set of LPs that told how a synthesizer worked. It had a bunch of examples of “this is frequency modulation.” It was pretty easily accessible for me and I understood it very quickly and I liked the sound.

IGL: Did you understand it easily partly because mathematics is your medium as well, a part of your comfort zone?

JM: I’m not really a mathematician. I never studied math at the higher levels. It was always just very intuitive to me. The plasticity of electronic sound – the fact that you can make it sound like anything or that it had a really wide range – that’s what attracted me to it.

IGL: Do you have ideas or visions of where electronic music is going?

JM: No. (laughter)

IGL: Well, we don’t have crystal balls.

JM: I was in Austin, Texas. There was very active antagonism towards electronic music there. I didn’t anticipate that electronica, which is oriented around dance music, would take off. Now, if you say electronic music to someone, they think dance music, but I don’t mean that.

IGL: Could you say what electronic music is for you?

JM: For me, it’s just exploiting the possibility of sound – having any sound available to compose with. That’s the main thing. Sound is a pretty psychoactive medium.

IGL: Sometimes I see sound as being sculptural and almost literally sculpture. Was it just last night, the piece that was played in darkness? (Editor’s note: There were several concerts during the week of the SuperCollider Symposium 2013. The piece being referred to was an 8-channel fixed-media piece “Lucent Voids” by London-based artist Erik Nyström.) It was so unpredictable! There was no knowing from moment to moment where it would go and what texture it might have. For me, I was seeing a space that crackled or fell or tumbled, so the artist, through sound, was creating a 3D environmental physical experience, like sculpture or architecture.

JM: I think that happens for me a lot more in these multi-channel pieces that are really enveloping. You feel like you’re in this environment. I tend to have more visual associations with the sound in that context, rather than the stereophonic pieces. I don’t get that visual association with stereophonic pieces; they don’t quite immerse me in it.

IGL: For me, it’s easier to experience in the Black Box (the theater space of ATLAS Institute) where the speakers surround the audience, that is, when the composer, artist and/or sound designer has anticipated the near and far of the speaker setup.

How do you feel about a symposium or gathering of people to talk about a work of software that you personally created? Is that a very satisfying experience?

JM: Yeah, of course I like it. (laughter) I’m glad people are finding it interesting enough to have a symposium on it. That’s good.

IGL: Are there pieces that you are working on or that you might be performing that you can speak about?

JM: Well, no. Once I’ve done more development, it’s been a lot of thinking about the tool. I have ideas of how to change the tool in order to do it the way I want to do it. There’s been a lot of that kind of thing. I’m still looking for more flexible ways of composing sounds.  

There’s all this live stuff happening where you create a whole bunch of material and destroy it in the piece. I’m interested in saving that history while you’re making it, so you can go back at any point, get what you had, save that, and maybe use it again in the future. It’s like keeping a history of what you did while you were coding and being able to go back to it.

IGL: I could see how that would be useful.

JM: And also making the code into a graph that can be manipulated more semantically than just as text.

IGL: When you say graph, you’re speaking of a graphical interface where users might push a line or cursor to make changes?

JM: Right, it’s still a programming language; it’s still code but the code is represented as a structure you can manipulate rather than just as text.

IGL: What language or languages were the precursors to what SuperCollider is now?

JM: SuperCollider is based a lot on a language called Smalltalk. It’s object-oriented. Smalltalk has some features of Lisp. I’ve added more features of Lisp and just a few features from other languages as well.
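That heritage is visible in the language itself. As a hedged illustration (these snippets are mine, not McCartney's): like Smalltalk, everything in sclang is an object that responds to messages, and like Lisp, functions are first-class values you can pass around.

```supercollider
// Smalltalk influence: collections are objects, and methods like
// collect are messages sent to them.
[1, 2, 3].collect { |x| x * x };   // -> [1, 4, 9]

// Lisp influence: a function is an ordinary value, stored in a
// variable and invoked later.
f = { |x| x + 1 };
f.value(10);                       // -> 11
```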

IGL: So you’ve been creative in finding existing code and language to help build SuperCollider?

JM: Well, the semantics. It’s mostly Smalltalk, but also some from other languages, in order to have a theory of operation. It gives me a solid foundation for the language, basically.

IGL: Do you have any advice for musicians, composers or developers who might be in the beginnings of their studies or who might like to create something new?

JM: I don’t really know. I sort of followed a path and I’m not sure I have a good idea of what a good way to go is. Do you mean for creating another tool?

IGL: Let’s rephrase the question. If someone is looking to create a tool, and I imagine it’s not an overnight process. . .

JM: I think the thing is, in order to create a tool, you have to experience some existing construct of how you’re doing something and then have an idea of what you find is difficult for you with it. You also need to have skill to implement something to get around that difficulty.

IGL: There’s a problem/solution aspect to the process. You began it by working in software that you didn’t find easy or satisfactory.

JM: Right.

IGL: And so you invented something easier or better. It’s an ongoing journey.

JM: Right. At first it was someone else’s software that I found difficult. Now it’s my own software that I’m finding difficult and needing to route around. (laughter)

IGL: Do you see your software in a certain way in the future? Let’s say anticipating ahead in five or ten years?

JM: This particular construction of it, I think, probably won’t be used in five years.

IGL: Oh, it will be something else; that’s how rapidly it’s changing.

JM: I don’t know. It’s hard to say. Things have a certain lifespan until the structure becomes unwieldy and then a new thing replaces it. This always happens. I mean I might be the one to replace it with something else or someone else might be the one to replace it. But eventually, there are better ways to do things. So, things get replaced.

IGL: Well, that’s sort of the design of things, of the universe, evolution and biology.

JM: Right.

IGL: So when you imagine your software being replaced, does that thought make you sad?

JM: No, it doesn’t.

IGL: That seems pretty healthy. (laughter)

JM: Because I’m already working on something new, I’ve already abandoned it myself.

IGL: No one can reject it because you’ve already rejected it. You’re ahead of the curve.

JM: (laughter) Yes.

~  ~  ~  ~
related links:
- Ira G. Liss, the writer/interviewer, is assistant communications director at ATLAS Institute, University of Colorado Boulder. He is also a musician, songwriter and performing artist. See his video artwork here.

- Learn about upcoming events at ATLAS here.

- Read more details about the SuperCollider Symposium 2013.

- See photos of the SuperCollider Symposium 2013.

- Stay informed about upcoming performing arts, talks and special events at ATLAS. Get on the email list by emailing with “email list” in the subject line.

- Download SuperCollider.