NYC Music Hackathon

I had a great time this Saturday at the NYC Music Hackathon, where I hacked, coded, and generally made a lot of very strange sounds for many hours.  I teamed up with my fellow Knewton intern Dylan Sherry to hack together some cool signal processing software using SuperCollider, an open-source programming language for sound synthesis and signal processing.

Dylan and I created software that allows a physical instrument or voice to control the sound of a synthesizer.  The program works by processing the input signal to extract the volume envelope and dominant frequency, and then applying these characteristics to a synthesized sound generated with a set of oscillators.
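A minimal sketch of this idea in SuperCollider (not our actual hackathon code) might look like the following, assuming a mono signal on the default input bus and a plain sine as a stand-in for the synth voice:

```supercollider
// Track the input's amplitude envelope and dominant pitch,
// then use them to drive a synthesized oscillator.
(
SynthDef(\follower, {
    var in, amp, freq, hasFreq, synth;
    in = SoundIn.ar(0);                        // live instrument or voice
    amp = Amplitude.kr(in, attackTime: 0.01, releaseTime: 0.1); // volume envelope
    # freq, hasFreq = Pitch.kr(in);            // dominant-frequency estimate
    synth = SinOsc.ar(freq);                   // placeholder for a richer synth
    Out.ar(0, (synth * amp) ! 2);              // input dynamics shape the output
}).add;
)
Synth(\follower);
```

The key point is that `Amplitude.kr` and `Pitch.kr` run continuously at control rate, so the synthesized tone follows the player's dynamics and pitch in real time.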

What’s cool about this is that the sound produced has all the harmonic properties of the chosen synth sound (the combination of oscillators determines its frequency spectrum), but all the dynamic properties of the original sound (because the original sound determines the volume envelope).

It’s a really cool effect.  Dylan and I used a set of pure sine-wave oscillators superimposed at octaves for our base synth, which on its own sounds a little bit like a church organ.  When I plugged in my Stratocaster, the organ sound suddenly took on its twangy attack, and even became bluesy as I tried some pitch bends.  When Dylan plugged in his EWI (a very cool saxophone-like synthesizer), the synth swelled and vibrated through a sustained tremolo note, and danced over the scale through some jazz sax runs.
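An octave-stacked sine synth along these lines can be sketched in a few lines of SuperCollider; the number of octaves and the amplitude weighting here are my own assumptions, not our exact voicing:

```supercollider
// Organ-like base tone: sine oscillators superimposed at octaves
// (freq, 2*freq, 4*freq, 8*freq), with quieter upper partials.
(
SynthDef(\octaveSines, { |freq = 220, amp = 0.2|
    var sig = Mix.fill(4, { |i|
        SinOsc.ar(freq * (2 ** i)) / (i + 1)   // roll off higher octaves
    });
    Out.ar(0, (sig * amp) ! 2);
}).add;
)
Synth(\octaveSines, [\freq, 220]);
```

Because every partial sits exactly an octave above the last, the result is hollow and steady, which is why it reads as organ-like until a real instrument's envelope is imposed on it.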

This is a challenging signal processing problem, and when we set out to tackle it we thought we were going to have to build some really serious tools from scratch.  Fortunately, we discovered SuperCollider, which has great libraries for extracting features like pitch and amplitude from an input signal.  SuperCollider is a pretty fantastic (and free!) tool with a good support base, so if you’re interested in playing with sound and computers, definitely check it out.

If we ever get the chance in the future, Dylan and I would like to continue this project and see where else we can take it.  One possibility is extracting dynamic changes in timbre from the input signal (belting while singing, for example, or overblowing a wind instrument) and using them to control a filter on the output signal, giving a musician even greater control over the musical quality of the synth sound produced.  I personally would also really like to try to package this project as an Ableton plugin, so that I can use it in my own music projects.
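One hypothetical way that filter idea could look in SuperCollider: estimate the input's brightness with a spectral centroid and let it sweep a low-pass cutoff.  The UGen choices and ranges here are illustrative assumptions, not a worked-out design:

```supercollider
// Hypothetical extension: the input's spectral brightness
// (its spectral centroid) controls a low-pass filter on the synth.
(
SynthDef(\brightFilter, { |freq = 220|
    var in, chain, centroid, cutoff, sig;
    in = SoundIn.ar(0);
    chain = FFT(LocalBuf(2048), in);
    centroid = SpecCentroid.kr(chain);         // rough brightness measure
    cutoff = centroid.clip(200, 8000);         // keep the cutoff in a sane range
    sig = Saw.ar(freq);                        // placeholder synth voice
    Out.ar(0, (RLPF.ar(sig, cutoff, 0.3) * 0.2) ! 2);
}).add;
)
Synth(\brightFilter);
```

Playing brighter (overblowing, belting) would then open the filter, while a darker tone would close it down.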
