When we talk about an audio signal generated by an analogue (or virtual analogue) oscillator, we often describe it using three characteristics: its waveform, its frequency, and its amplitude. These, to a good approximation, determine its tone, its perceived pitch, and its volume, respectively. But there is a fourth characteristic that is less commonly discussed, and this is called the ‘phase’ of the signal.
Consider the humble 100Hz sine wave. You might think that this can be described completely by its frequency and its amplitude and, in practice, this is true provided that you hear it in isolation. But now consider two of these waves, each having the same frequency and amplitude. You can generate these by taking a single sine wave and splitting its output, passing one path through a delay unit as shown in figure 1. If no delay is applied, the two waves are said to be ‘in phase’ with one another (or, to express it another way, they have a phase difference of 0°) and, as you would imagine, you could mix them together to produce the same sound, but louder.
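A minimal sketch of this in code: a 100Hz sine has a period of 10ms, so a delay of 0ms leaves the copies in phase and the mix is simply louder, while a delay of half a period (5ms, a 180° phase difference) makes the copies cancel completely. The sample rate and helper names here are my own, purely illustrative choices.

```python
import math

SR = 44100           # sample rate in Hz (an assumption for this sketch)
FREQ = 100.0         # our humble 100Hz sine
PERIOD = 1.0 / FREQ  # 10ms

def sine(t, delay=0.0):
    """The 100Hz sine, optionally passed through a delay (in seconds)."""
    return math.sin(2 * math.pi * FREQ * (t - delay))

# Sample one full period and mix the original with a delayed copy.
times = [n / SR for n in range(int(SR * PERIOD))]

in_phase   = [sine(t) + sine(t, delay=0.0)        for t in times]  # 0° difference
anti_phase = [sine(t) + sine(t, delay=PERIOD / 2) for t in times]  # 180° difference

print(max(abs(s) for s in in_phase))    # ~2.0: the same sound, but louder
print(max(abs(s) for s in anti_phase))  # ~0.0: the two waves cancel
```

The second result is the extreme case hinted at above: with the right delay, two identical waves can silence each other entirely.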
Most physical objects vibrate at frequencies determined by their size, shape, materials and construction, and the specific frequencies for each object are known as its resonant frequencies. Nonetheless, simply adding energy to an object doesn’t guarantee that you’ll obtain an output. Imagine an object having a single resonance of 400Hz placed in front of a speaker emitting a continuous note at, say, 217Hz. If you can picture it, the object tries to vibrate when the sound first hits it, but each subsequent pressure wave is received at the ‘wrong’ time, so no sympathetic vibration is established. Conversely, imagine the situation in which the speaker emits a note at 400Hz. The object is now in a soundfield that is pushing and pulling it at exactly the frequency at which it wants to vibrate, so it does so, with enthusiasm.
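You can see this behaviour in a crude numerical sketch: model the object as a lightly damped mass-spring system with a 400Hz resonance, push it with a sinusoidal force, and compare how far it swings when driven at 400Hz versus 217Hz. The damping value, time step and integration scheme (semi-implicit Euler) are all my own assumptions for illustration, not anything from the article.

```python
import math

def driven_amplitude(drive_hz, resonant_hz=400.0, damping=0.02, seconds=0.5):
    """Peak displacement of a lightly damped mass-spring system pushed by a
    sinusoidal force, measured over the last 100ms once motion has settled."""
    w0 = 2 * math.pi * resonant_hz
    dt = 1.0 / 96000                   # small step for numerical stability
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(seconds / dt)):
        force = math.sin(2 * math.pi * drive_hz * t)
        a = force - 2 * damping * w0 * v - w0 * w0 * x
        v += a * dt                    # semi-implicit Euler update
        x += v * dt
        t += dt
        if t > seconds - 0.1:          # wait for the transient to die away
            peak = max(peak, abs(x))
    return peak

on  = driven_amplitude(400.0)   # driven at the object's resonant frequency
off = driven_amplitude(217.0)   # driven at the 'wrong' frequency
print(on / off)                 # the object swings many times further at 400Hz
```

The exact ratio depends on how heavily the object is damped, but the point stands: energy arriving at the resonant frequency accumulates, while energy at the ‘wrong’ frequency largely doesn’t.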
There’s one thing you’ll never hear when synthesiser enthusiasts wax lyrical about their instruments: an argument about which has the sweetest or fattest high-pass filter. They’ll debate endlessly the benefits of Moog’s discrete component low-pass filters, argue about the pros and cons of CEM and SSM low-pass filter chips, and possibly come to blows over whether the 12dB/octave low-pass filter in the early ARP Odysseys is better or worse (whatever that means) than the 24dB/octave low-pass filter in the later models. But nobody ever got punched because they insulted someone’s high-pass filter.
What’s more, there was a time when you had to work quite hard to find a high-pass filter on an integrated (i.e. not a modular) synth. The groundbreaking instruments of the late ’60s and early ’70s – Minimoogs, ARP2600s and EMS VCS3s – didn’t have them and, by and large, it was left to emerging manufacturers such as Korg, Yamaha and Roland to bring them to the public’s attention.
So why is the high-pass filter such a poor relation when compared with its twin, the low-pass filter? To understand this, we again have to consider the nature of natural sounds.
To understand what filters do, and why they are one of the most important building blocks in synthesis, you have to understand a little about the nature of sound itself and, in particular, the nature of waveforms. So I’m going to introduce this series of tutorials about Thor’s filters by talking first about what constitutes the sound generated by an oscillator, whether analogue, virtual analogue or sample-based.
Mathematics tells us that any waveform can be represented either as a wave whose amplitude is plotted against time or as a series of components whose amplitudes are plotted against frequency. Often, these components will lie at integer multiples of the lowest frequency present – for example, 100Hz, 200Hz, 300Hz… and so on – and these are then known as the fundamental and its overtones, or as harmonics. In more complex sounds, there may be components that lie at non-integer frequencies, and these are sometimes called enharmonics. The sound may also include noise that is distributed over a range of frequencies and whose precise nature is determined by all manner of factors that we need not discuss here. Figure 1 illustrates all of these.
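The two representations are linked by the Fourier transform: build a wave from a known fundamental and overtones, analyse it, and the same components come back out. Here is a small stdlib-only sketch using a naive DFT over exactly one period; the sample rate and amplitudes are illustrative choices of mine.

```python
import cmath
import math

SR = 4800            # samples per second (an assumption for this sketch)
F0 = 100.0           # fundamental frequency
N = int(SR / F0)     # exactly one period of the fundamental: 48 samples

def signal(n):
    """A 100Hz fundamental plus harmonics at 200Hz and 300Hz."""
    t = n / SR
    return (1.00 * math.sin(2 * math.pi * 100 * t)
          + 0.50 * math.sin(2 * math.pi * 200 * t)
          + 0.25 * math.sin(2 * math.pi * 300 * t))

samples = [signal(n) for n in range(N)]

def amplitude(k):
    """Amplitude of harmonic k, via a naive discrete Fourier transform."""
    s = sum(samples[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
    return 2 * abs(s) / N

for k in (1, 2, 3, 4):
    print(f"{int(k * F0)}Hz: {amplitude(k):.3f}")
# 100Hz: 1.000, 200Hz: 0.500, 300Hz: 0.250, 400Hz: 0.000
```

Plotting those amplitudes against frequency gives exactly the second representation described above: a series of components at integer multiples of the lowest frequency present.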
Why were wavetables developed?
We now hold early wavetable synths such as the PPGs in high esteem and think of them as expensive examples of early digital synthesisers. However, wavetables were actually invented to make possible relatively low-cost instruments that avoided the shortcomings of existing digital synthesis methods such as FM and additive synthesis, and to overcome the immense technological limitations of the day.
To understand this, imagine that you want to use a piece of audio equipment today to record, store and replay the sound of someone saying “wow”. You choose a suitable recorder or sampler, capture the sound and, without any need to understand how the equipment does what it does, you can then replay it. If you used a digital recorder, you do so simply by pressing the Play button; if you used a sampler, you allocate the sample to a key or pad, then press that to listen to the recording that you’ve made.
However, we don’t have to travel back many years to a time when none of this was practical. The problem was two-fold. Firstly, early memory chips were extremely limited in capacity, and storing anything more than a fraction of a second of audio was very expensive. Secondly, even if you could store the audio, the primitive microprocessors available at the dawn of digital synthesis were barely able to address the memory and replay it at an adequate speed.
Let’s consider the word “wow”, which takes about a second to say. Using the sampling specification introduced for the audio CD (44,100 16-bit samples per channel per second) you would require 88.2KB of RAM to record the word in mono, and double that if it were recorded in stereo. Nowadays, we wouldn’t blink at that sort of requirement, but when the earliest digital audio equipment appeared, you needed as many as eight chips for just 2KB of RAM. Sure, samplers were shipped with boards stuffed full of dozens of these, but you would have needed no fewer than 352 of them to store the word “wow” at CD quality!
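The arithmetic above can be reproduced in a few lines. Note that the figures work out as quoted when 1KB is taken as 1,000 bytes, which is the assumption made here.

```python
# The CD sampling specification: 44,100 16-bit (2-byte) samples per second.
sample_rate = 44_100
bytes_per_sample = 2
seconds = 1                    # roughly how long "wow" takes to say

mono_bytes = sample_rate * bytes_per_sample * seconds
print(mono_bytes)              # 88200 bytes -> the 88.2KB quoted above
print(mono_bytes * 2)          # 176400 bytes -> double that for stereo

# Early RAM: eight chips held just 2KB between them, i.e. 250 bytes each.
bytes_per_chip = 2_000 / 8
print(mono_bytes / bytes_per_chip)   # 352.8 -> "no fewer than 352" chips
```

Put another way, a single second of CD-quality mono audio would have swallowed more than forty of those fully stuffed eight-chip groups.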
Clearly, this was impractical so, while various digital tape formats were able to provide the hundreds of megabytes of audio data storage needed to edit and master whole albums of music, developers of digital musical instruments were looking at much more efficient ways to record, store and replay sounds for use in synthesis. The wavetable was one such method.