Programming the music to “Jupiter and Beyond”

This past week, I helped make a production for the Revision 2013 Demoparty. It ended up placing 3rd in the “Wild” demo competition. For this production, “Jupiter & Beyond,” I composed the music and wrote a large portion of the synthesizer/playback code. Take a look:

The production was led by my friends WAHa_06x36 and halcy, with whom I’ve collaborated many times before. ryx joined us again to contribute additional code, and Forcer of TRSi contributed some awesome 2D artwork. This was our second demo for the STM32F4DISCOVERY, an extremely low-cost ARM-based microcontroller board. It has better specs than an Arduino, but enough limitations to make programming for it interesting: a 168MHz clock, 192KB of RAM, 1MB of flash, and a VGA connector that the group made out of eleven resistors.

As amazing as the other parts of the demo are, I’ll just talk about the music aspect of it here.


I’ve written a handful of goofy web-based music tools in the past. WAHa ended up porting them to various platforms, such as a robot. The bitbin engine in particular (which is actually a tiny subset of the Impulse Tracker format) ended up being used in SVatG’s first microcontroller demo, “Peridiummmm”.

Because bitbin was intended to be a “chiptune web tracker”, the music for Peridiummmm ended up being purely chiptune—square waves, triangle waves, and some short drum samples. Though there was really no practical reason for the web-based tracker to feature these sounds, it worked out well for the microcontroller. The music engine is called repeatedly via an interrupt and must fill a small buffer as fast as possible so as to not stall the effects.
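The core of an engine like this is just a bank of phase accumulators advanced once per output sample. Here’s a minimal sketch of the idea—the function name, chunk size, and amplitude are my own assumptions, not taken from the actual Peridiummmm source:

```c
#include <stddef.h>
#include <stdint.h>

#define CHUNK 64 /* samples rendered per timer interrupt (assumed value) */

/* Fill one chunk with a square wave. The top bit of the 32-bit phase
 * accumulator selects the high or low half of the cycle, so frequency
 * is just the per-sample step size. */
void render_chunk(int16_t *out, size_t n, uint32_t *phase, uint32_t step)
{
    for (size_t i = 0; i < n; i++) {
        *phase += step;
        out[i] = (*phase & 0x80000000u) ? 8000 : -8000;
    }
}
```

A real mixer would loop this over several channels and sum into the buffer, but the per-sample cost is essentially one addition and one table lookup (or branch) per oscillator—which is why this style of synthesis suits a microcontroller so well.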

Improving the engine

When the decision was made to create a second STM32F4 demo, I knew immediately that I wanted a “less chiptuney” sound. Since the existing music code was a straight port of my JavaScript to C by WAHa, I was tasked with improving it. Based on initial art by Forcer, I tried to plan what kind of song would fit best with the visuals, and then which features would best serve that song and the overall aesthetic. The number and complexity of features I could add were limited by the capabilities of the microcontroller and the demands the rest of the demo placed on it—I had to be careful not to add anything with a huge per-sample overhead, since I didn’t have a board on hand to test the code with.

Forcer is great

The simplest feature I added was a unison/chorus. Instead of a single oscillator phase, four are stored per channel, which only costs a few extra additions per sample. One of the waveforms was changed from a square wave to a 16KB string sample (which, being read-only, is streamed from flash, not RAM). The next feature I added was a single resonant lowpass filter. It was designed by madbrain, who wrote it for me in four lines over IRC after I explained how I had spent a day trying to convert some existing filter code from floating-point to fixed-point.
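To make the shape of these two features concrete, here is a rough sketch of both—the detune offsets and the 8.8 fixed-point coefficients are illustrative values I picked, and the filter below is a generic fixed-point state-variable lowpass in the same spirit, not madbrain’s actual four-line version:

```c
#include <stdint.h>

/* Four slightly detuned phase accumulators per channel: a cheap unison.
 * The detune offsets here are made up for illustration. */
typedef struct { uint32_t phase[4]; } Unison;

int32_t unison_square(Unison *u, uint32_t step)
{
    static const uint32_t detune[4] = { 0, 3000, 6500, 9800 };
    int32_t mix = 0;
    for (int i = 0; i < 4; i++) {
        u->phase[i] += step + detune[i];
        mix += (u->phase[i] & 0x80000000u) ? 2000 : -2000;
    }
    return mix;
}

/* Fixed-point state-variable (Chamberlin-style) lowpass. f and q are
 * 8.8 fixed point, so f = 64 means a coefficient of 0.25. No
 * floating-point anywhere—just multiplies and shifts. */
typedef struct { int32_t low, band; } SVF;

int32_t svf_lowpass(SVF *s, int32_t in, int32_t f, int32_t q)
{
    s->low += (f * s->band) >> 8;
    int32_t high = in - s->low - ((q * s->band) >> 8);
    s->band += (f * high) >> 8;
    return s->low;
}
```

The appeal of the state-variable form for fixed-point work is that it has only two state words and two multiplies per sample, and its coefficients stay in a range where integer truncation doesn’t blow up the filter.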

Filter and delay

Finally, I wanted a delay effect, which would require a circular buffer in RAM. Since there is only about 64KB of RAM to work with for the entire demo, I bitcrushed the delay to 8 bits instead of 16. It also runs at a 5.5kHz sample rate instead of 22kHz, so it only takes 1024 bytes in total. This had the additional effect of creating some oldschool (but not chiptuney) noisiness that fits the demo aesthetic.
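A sketch of how such a delay might look—only the sizes (1024 bytes, 8 bits, a quarter of the mix rate) come from the description above; the struct layout, feedback amount, and clamping are my own assumptions:

```c
#include <stdint.h>

#define DELAY_LEN 1024 /* 1024 one-byte samples at ~5.5 kHz: ~0.19 s */

typedef struct {
    int8_t buf[DELAY_LEN];
    uint16_t pos;
    uint8_t divider; /* update every 4th sample: 22 kHz -> ~5.5 kHz */
} Delay;

int32_t delay_process(Delay *d, int32_t in)
{
    /* Expand the stored 8-bit sample back to ~16-bit range. Between
     * updates the same slot is re-read, which is what gives the
     * downsampled, crunchy character. */
    int32_t out = d->buf[d->pos] << 8;
    if (++d->divider == 4) {
        d->divider = 0;
        /* Mix in feedback, clamp, and crush back down to 8 bits. */
        int32_t fb = in + out / 2;
        if (fb > 32767) fb = 32767;
        if (fb < -32768) fb = -32768;
        d->buf[d->pos] = (int8_t)(fb >> 8);
        d->pos = (d->pos + 1) % DELAY_LEN;
    }
    return out;
}
```

Crushing to 8 bits before writing (rather than after reading) means the quantization noise itself is fed back through the delay line, which is a large part of the lo-fi texture.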

With all these effects in place, I now had a workflow where I could convert an OpenMPT .IT song to C code, compile and run the result, and finally get the synthesized output as a WAV. Unfortunately, with the bitcrushed delay and the other crazy features, the output barely sounded like the original song I was working with. I decided to simulate it roughly by patching together some VST effects in OpenMPT.

OpenMPT setup. Here’s what it sounds like before conversion!

When it came to composing the song, I already had a decent idea of what I wanted—something unique, downtempo, and Blade Runner-esque. I wrote the song in a couple of hours and then handed it off to halcy to sync, so that ryx could quickly record the demo before everyone left for the actual demoparty. As the audio output had a lot of EMF interference, the software synthesizer output was dubbed back into the video for the presentation.

So that’s basically how the music of “Jupiter and Beyond” came to be composed and implemented. I’m really happy with how it turned out, and proud to have been a part of such a cool production. If you have any questions, feel free to leave a comment, and thanks for reading.

2 thoughts on “Programming the music to ‘Jupiter and Beyond’”

  1. Hey this “making of” was great. How about getting the crew to write a similar one about the graphics programming, too?
