Meet our crew! Every edition of Output Afterhours gives a glimpse into the people who make our tools come to life. On deck is Spencer, Output’s chief technology officer, professor at California Institute of the Arts, and granular sampling enthusiast.
What do you do at Output?
I’m an audio software engineer, which means my colleagues and I write the code touching every sound that goes through the system. So, it’s an exciting role with lots of responsibility as we deal with sound at the lowest possible level. There’s a different set of rules and theories in music technology than you find in normal computer programming. Unlocking the nature of sound at this level is truly fascinating work, and it has fueled my curiosity and career as a musician, researcher, and engineer for the past decade.
What does a day in the life at Output look like?
A lot of my time is spent inside the grid staring at walls of C++ code and making sense of the software program it represents. Recently, I’ve been experimenting with a profiler, which is a programming tool that reveals the slowest parts of your code. Audio software is very sensitive to fluctuations in timing, so making sure that the whole thing runs smoothly is crucial. Even a small change in code, like rearranging the order of some data, can result in huge improvements in the speed of audio processing and the overall experience of using the software.
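To make the "rearranging the order of some data" idea concrete, here is a minimal sketch (all names are hypothetical, not Output's actual code) of the same per-voice gain loop with two data layouts. Interleaving rarely-touched state next to the hot field drags cold bytes through the cache on every iteration; splitting the hot field into its own contiguous array keeps the inner loop cache-friendly, which is exactly the kind of change a profiler tends to surface.

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t kVoices = 64;

// Array-of-structs: the gain we read every sample sits next to cold state.
struct Voice {
    float gain;
    float phase;     // untouched in this loop
    char  name[56];  // untouched in this loop
};

float processAoS(const std::array<Voice, kVoices>& voices, float in) {
    float out = 0.0f;
    for (const auto& v : voices) out += v.gain * in;  // strides over cold bytes
    return out;
}

// Struct-of-arrays: the hot field is packed contiguously.
struct VoiceBank {
    std::array<float, kVoices> gain;
    // cold state would live in separate arrays
};

float processSoA(const VoiceBank& bank, float in) {
    float out = 0.0f;
    for (float g : bank.gain) out += g * in;  // dense, cache-friendly reads
    return out;
}
```

Both functions compute the same result; only the memory layout differs, which is why this kind of change can speed up audio processing without touching the math.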
On the fun side, my team and I enjoy our day drinking yerba mate, playing ping pong, and exploring the many flavors of downtown Los Angeles. Some notable favorites are Chimney Coffee House and Nick’s Cafe, or, if we’re making an event of it, Cosa Buona in Echo Park.
What was your starting point in music?
I grew up playing the piano for 10 years. In high school, before the days of Ableton and the laptop production movement, I began looking for ways to merge my classical training with my inclination towards computers.
Serendipitously, I was pursuing my bachelor’s degree in computer science at Princeton at the time when both a computer music programming language called ChucK and the concept of the “laptop orchestra” were being conceived there. ChucK was invented by Ge Wang and Perry Cook, prolific researchers and leading innovators who became great friends and mentors of mine. Cook and a composer named Dan Trueman created the first laptop orchestra. I became deeply involved with these creative endeavors, and these formative years greatly influenced my thoughts and perception as a musician and audio software engineer.
Following a stint in Silicon Valley where I worked on T-Pain’s Autotune iPhone app among many others, I started the Ph.D. program at Stanford’s legendary CCRMA, founded by John Chowning, a brilliant composer and the inventor of FM sound synthesis. Surrounded by legends such as Max Mathews (one of the first to create music using a computer), Bill Verplank (software designer behind the first commercial GUI and mouse), and Julius Smith (pioneer of physical modeling synthesis), I was figuring out how to make my own mark.
My earliest experiments focused on an instrument called the Fragment String. It’s a live granular sampler controlled by a magnificently weird interface known as the Gametrak: two strings pull out of a box on the ground, and you move them around in space to control the sounds while performing alongside an instrumentalist.
What are you working on?
These experiments laid the foundation for my research in musical interaction and for the development of my brainchild: Auraglyph.
Auraglyph is an infinite modular music sketchpad built for the iPad. You start from a blank canvas where you can scroll infinitely in any direction to draw out your musical ideas. Every starting point is unique, but I usually start with some kind of sound-producing object like an oscillator, then draw connections to various pieces of a synthesis system, like filters, envelopes, modulators, sequencers, and so on. Apart from these musical elements, you can draw anything you want, whether it’s to decorate your patch or explain your ideas.
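One way to picture the "draw a connection from a sound-producing object into a filter" idea is as a chain of processing nodes, each pulling samples from its input. This is a minimal hypothetical sketch, not Auraglyph's actual code: a sine oscillator feeding a simple one-pole low-pass filter.

```cpp
#include <cmath>

// Assumed node interface: each object produces one sample per tick.
struct Node {
    virtual ~Node() = default;
    virtual float tick() = 0;
};

// A sound-producing object: a sine oscillator.
struct SineOsc : Node {
    float phase = 0.0f;
    float freq, sampleRate;
    SineOsc(float f, float sr) : freq(f), sampleRate(sr) {}
    float tick() override {
        constexpr float kTwoPi = 6.2831853f;
        float s = std::sin(phase);
        phase += kTwoPi * freq / sampleRate;
        return s;
    }
};

// A downstream element: a one-pole low-pass filter smoothing its input.
struct OnePoleLP : Node {
    Node& input;   // the "drawn connection": this node pulls from its source
    float a;       // smoothing coefficient in [0, 1)
    float z = 0.0f;
    OnePoleLP(Node& in, float coeff) : input(in), a(coeff) {}
    float tick() override {
        z = a * z + (1.0f - a) * input.tick();
        return z;
    }
};
```

Wiring up a patch is then just construction: `SineOsc osc(440.0f, 44100.0f); OnePoleLP lp(osc, 0.9f);` and calling `lp.tick()` once per output sample pulls audio through the whole chain.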
Ultimately, Auraglyph was made to inspire music-makers to think about and construct music in this vast sonic world. Drawing and moving these musical building blocks with your fingertips gives rise to a totally tangible, physical interaction with music that’s been an absolute blast to build and will be amazing to share with the world later this year.
How do you balance all your creative endeavors?
It’s really about finding yourself in the right creative environment. Teaching as a professor at the California Institute of the Arts lets me explore more abstract directions in music technology; working at Output brings me back to how that research can apply to musicians all around the world.
What is the most beautiful sound in the world?
As a musician, I explore and find beauty in every sound, imagining and understanding the relationships sounds have with each other. The real fun kicks in when you dig deeper into how to pick them apart and recreate them.
I made this patch recently in Auraglyph using two filtered oscillators that are drenched in modulation, wave-shaping, and delay. I’m really happy with how rich it sounds to me and how many layers there are. When I want to chill out, I’ll open up this patch and play around with arpeggios for an hour.