Mastering Mastering

Version alert! This article was written before the release of Reason 3.0 with its MClass Mastering Suite. Though the Reason mastering tricks have changed with that addition, the general mastering techniques are still valid.
This month we will be taking a break from Reason device dissection, and instead focus on a hot topic - mastering. Traditionally, mastering has been an isolated domain outside the music production perimeter, but today, more and more aspects of production and distribution are brought closer to home - mastering included. Artists are exploring ways of bypassing the traditional music distribution channels altogether, opting instead for mp3 files or home-made CDs - and if you're planning on going all the way with the do-it-yourself working model, you also need to master this final step of the process (as if being a composer, musician, producer and mixing engineer wasn't enough...!) Needless to say, there's a reason why people can make a career and a living on audio refinement - and if you're dead serious about your material you should consider taking it to a professional, as mastering would be considered by some as a "don't try this at home" thing. But if you're one of those adventurous spirits, here's a basic primer in the art and science of audio mastering - have your MasterCard ready and step up to the counter.
First, let's get this out of the way: If your burning question is "Why doesn't my music sound as loud as my commercial CDs? The peak meter tells me both sources are equally loud!", there are two things you should know: 1) This article will answer your question; in fact, it was written for you. 2) While we will focus on the subject of volume (real and imagined), there's so much more to mastering than just loudness. In fact, many mastering engineers resent the "loudness race" and favor a more conservative approach - but what's a poor home studio owner to do when every other CD you play is so loud it jumps straight out of the speakers and lumbers around the room like the Incredible Hulk? Let's crank it up!
Perceptual loudness - blessing or curse?
Have you ever found yourself jumping out of the sofa to hit the volume button on the remote whenever they cut to commercials? Audio tracks for commercials are usually "macho mastered", heavily treated with compression and limiting enough to suppress a nuclear blast. This is done to get the message across despite your vain attempts to seek refuge in the kitchen during the break - there's no escape! But, assuming the regular programming is played at maximum audio volume, how can commercials appear at least twice as loud? The long and short of it is: The human ear judges loudness not by peaks, but by average. Meet the concept of "perceptual loudness". One of the ear's imperfections is that it isn't fast enough to pick up extremely transient (=short) sounds in the 1-10 millisecond range and make an accurate interpretation of the volume. Modern audio science has taught engineers to exploit this shortcoming by developing techniques that ensure delivery of maximum sonic impact. "Normalization", however, is not one of them.
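The peak-versus-average distinction is easy to demonstrate in a few lines of code. Here's a minimal Python sketch (the signals are invented for the illustration) comparing a digital peak reading with an RMS average, which tracks perceived loudness far more closely:

```python
import math

def peak(samples):
    """Highest absolute sample value - what a digital peak meter reports."""
    return max(abs(s) for s in samples)

def rms(samples):
    """Root-mean-square level - a much better match for perceived loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Two signals with identical peaks: a lone 1-sample click vs. a dense full-level wave
transient = [0.0] * 99 + [1.0]
dense = [1.0 if i % 2 else -1.0 for i in range(100)]

print(peak(transient), peak(dense))  # both 1.0 - the peak meter can't tell them apart
print(rms(transient), rms(dense))    # ~0.1 vs 1.0 - the ear hears a huge difference
```

Both signals peg the peak meter at exactly the same value, yet their average energy differs by a factor of ten - which is roughly how differently your ears will judge them.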
Normalization doesn't make it "normal"
You may have been offered the advice to "normalize" your tracks. All wave editors offer a normalization function. But what does normalization actually do? It searches for the highest peak in the audio file and adjusts the overall volume accordingly. If you've made a Reason track that stays barely at the safe side of the clipping limit, the highest peak is probably around 0 dB already, which means that normalization will accomplish absolutely nothing.
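In code, peak normalization boils down to a single gain multiplication. Here's a rough Python sketch (the sample values are made up, mimicking a mix that already peaks around -0.2 dB):

```python
def normalize(samples, target_db=0.0):
    """Scale the whole file so its highest peak lands at target_db (dBFS)."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return list(samples)                 # silence - nothing to scale
    gain = 10 ** (target_db / 20.0) / peak   # dB -> linear amplitude, then ratio
    return [s * gain for s in samples]

# A mix already peaking at roughly -0.2 dB (0.977 linear) gains next to nothing:
mix = [0.2, -0.5, 0.977]
print(max(abs(s) for s in normalize(mix)))   # peak moves to 1.0 - barely any louder
```

Note that every sample is scaled by the same factor, so the relationship between peaks and average level is untouched - which is exactly why normalization alone can't make a track sound louder.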
Let's cut right to the chase and look at a simple but effective demonstration before we get down to the nitty-gritty of it all (throughout the article we will be using snippets of well-known Reason demo songs for "target practice").
Note: The mp3 audio examples in this article are loops, and we therefore recommend that you configure your default mp3 player to loop/repeat playback mode.
The left picture shows the audio after normalization - but as stated above, normalization is pointless if the level is already close to 0 dB (in this case, -0.21 dB, a negligible difference), so it is virtually identical to the original. Looking at it, you can identify three peaks (those spikey, needle-like things sticking out over the red -6 dB lines). In this case, they are caused by the bass drum. They're virtually irrelevant in terms of musical information, but they pose a problem in that they prevent you from increasing the average level. In the middle picture we have used a limiter to chop off everything above -6 dB. This may or may not be brutal, all depending on the material you're working on, but it serves the purpose here and now. Try listening to the original and compare it to the processed version - can you tell the difference? If not, you've made a bargain here, since a whopping 6 dB previously held hostage by the peaks has now been released. This leads us to the third picture (right), illustrating the processed sound subsequently normalized to 0 dB. How's that for loudness?
This was not a complete mastering procedure by any stretch of the imagination, but it illustrates the fact that loudness is a very relative thing. Normalization is useful, but only after the appropriate processing has been done.
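The limit-then-normalize trick from the pictures can be sketched in a few lines of Python. The sample values below are invented, and the "limiter" is a crude hard clip at 0.5 linear amplitude (-6 dB) - real limiters are far gentler:

```python
def limit(samples, ceiling):
    """Crude brick-wall 'limiter': hard-clip everything above the ceiling."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def normalize(samples):
    """Scale so the highest peak hits 1.0 (0 dBFS)."""
    peak = max(abs(s) for s in samples)
    return [s / peak for s in samples] if peak else list(samples)

# The body of the mix sits around 0.5 (-6 dB); a few drum spikes reach 1.0 (0 dB)
mix = [0.5, -0.5, 1.0, 0.4, -1.0, 0.5, 1.0, -0.45]
loud = normalize(limit(mix, 0.5))   # chop the spikes, then normalize

avg = lambda s: sum(abs(x) for x in s) / len(s)
print(avg(mix), avg(loud))  # average climbs from ~0.67 to ~0.96 at the same 1.0 peak
```

Both versions show an identical 0 dB reading on a peak meter, but the processed one carries a much higher average level - the 6 dB that the spikes were holding hostage.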
Clipping and meters
When analog was king, the most feared 'level enemy' was at the bottom of the scale - noise. Analog tape recorders could take moderate abuse at the top of the level ladder, but as soon as levels dropped, the noise was laid bare. Overloading an analog tape recorder didn't produce the nasty clipping artifacts you get in digital audio - in fact, a slight overload would often produce a pleasant sound. In the digital domain, a low audio signal isn't exactly a blessing either, but the side effects are nowhere near as destructive as digital overload is. Once the signal clips, the damage is irreversible - it's like an over-exposed photo, you can't bring back the parts of the picture that have already dissolved into white. So whatever you do, make sure that the raw, unprocessed audio doesn't clip. Keep an eye on the meter, but let the ear be the final judge - sometimes you can get away with clipping. However, if you don't have complete confidence in your ears, stay on the safe side and trust the red light.
The old-school VU meters found on analog equipment were actually closer to the human ear's perception of sound level, because their response time was intentionally slow - around 0.3 seconds. A digital peak meter is something completely different. It is generally lightning fast - sample accurate - thus the tiniest, most transient level spike will make it shoot straight to that dreaded red light, even though you could swear you didn't hear a peak - and in all likelihood, you didn't. A digital peak meter serves the interests of the digital audio device it speaks for, so perhaps "peak alarm" would be a more appropriate name. In other words, take it with a grain of salt.
We love it loud, don't we? During the long hours of a studio session you turn it up a notch every time the ears have gone numb. It makes the music sound better, more powerful, and brings subtle details out in the open. A word of caution: Don't. First of all, the human ear has a built-in compressor/limiter that works in mysterious ways (part self-preservation mechanism, part imperfection); at near ear-splitting levels your ears will smooth out the roughness and give you the impression that the mix is reasonably balanced when it's not. The best way to discover if anything in the mix shoots straight off the charts is in fact to listen at very low levels. Only then will you discover that, for example, the bass drum is twice as loud as everything else. Another trick is to listen from a nearby room rather than being right in front of the speakers. Second, the louder the sound, the more bass you will hear - this is because the ear's response to bass energy is non-linear. Consequently, monitoring too loud will prompt you to cut away some bass when in fact you should leave it as it is, or even boost it.
Are you happy with the sound of your song, or do you expect the mastering process to solve all problems? Even the most deft mastering guru cannot save a sonic disaster from an eternity in hell. There are many things to be mindful of during the actual music production and mixing; this is where you lay the foundation for a professional sound - subsequent mastering is only the icing on the cake. Here are but a few issues worth considering long before you reach the mastering stage:
There's a long way to go between 20 and 20,000 Hz, but the frequency spectrum can only take so much abuse in one place before the mix becomes muddy. Keep an eye on the low midrange; it's usually the first to become crowded. Don't forget the equalizer's ability to cut rather than just using it to boost. Take your time to analyze each sound and examine its characteristics - what does it add to the mix? Does it bring something undesirable with the desirable? If so, can the undesirable aspects of it be eliminated?
Hands off those low octaves.
What? You mean... no bass? Of course not. But when keyboardists play piano sounds or pad/string sounds, they often play the chords with the right hand and 'show the bass' with the left. This can become a bad habit and is a classic source of muddy bass, simply because that pad/string/piano sound (or whatever it is you're playing the chords with) will compete with the bass line for the lower tonal range. That left hand is best left in your pocket! Generally speaking, it's good arrangement practice not to have too many instruments fooling around in the same tonal range - as with frequencies, try to distribute evenly.
Less is more
Hate it or love it, this old cliché always applies. If you feel that a song (or a certain part of a song) lacks energy, the best solution might be to take away rather than add. Every addition to an arrangement will suck energy out of the existing arrangement - occasionally for the better, but often for the worse. Granted, it is possible to pull off a wall-of-sound type production, but it's a delicate balancing act - it takes a great producer, a masterful mixing engineer and an elite mastering engineer to get it right.
Stack with care.
With today's unlimited polyphony and never-ending supply of instruments, stacking sounds is a luxury that anyone can afford. Why choose between two snares when you can have three, four, eight at once? Be careful though, because the bitter equation still remains: you can't add something without taking something else away. If you stack two sounds, make sure that they complement each other, not collide with each other. If you play a sampled drum loop over programmed drums, maybe you can cut some frequencies on the loop to make room for the programmed drums?
Mix with focus.
All sounds cannot be upfront. An inherent problem in arranging and mixing is that you often concentrate on one sound at a time. When you turn your attention to one sound you will be tempted to bring it out in the open, enhance it, nurture it, make it stand out from the crowd. Soon enough you will have given all sounds this special treatment, and as a result, no sound stands out, instead you find yourself knee-deep in mud. Treat the music as a painting - you want to turn the viewer's focus to one particular spot. All else is secondary and must be treated accordingly - don't be afraid to sacrifice a sound by abstracting it or even removing it altogether, it will always be to the benefit of the sound you want to turn the spotlight on.
Careful with those subsonics.
Frequencies that you can't really hear on most systems are generally a waste of bandwidth. They will force the average level of your music down, and all you gain is, well, no gain. What's worse, a booming subsonic bass that happens to sound good over your speakers because they can handle it, may turn the speaker cones inside out on lesser systems, particularly those that boast "MegaBass" or some other voodooish pseudo-bass tomfoolery - cheap lightweight headphones and ghettoblasters that by all physical accounts should be unfit to deliver any bass whatsoever. You can't build dance floor rattling earthquake bass into the actual mix, that task will be taken care of by a dance floor rattling earthquake P.A. system when the time comes. Conversely, don't try too hard to emulate a hi-fi sound by boosting the high frequencies - keep it neutral.
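If you want to see what rolling off subsonics looks like in practice, here is a Python sketch of a simple one-pole high-pass filter (the cutoff and test tones are arbitrary values chosen for the demonstration, not a recommendation):

```python
import math

def highpass(samples, cutoff_hz, sample_rate=44100):
    """One-pole high-pass filter - a gentle 6 dB/octave subsonic roll-off."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# A 10 Hz subsonic rumble buried under a 100 Hz bass tone, one second of audio
n = 44100
rumble = [0.5 * math.sin(2 * math.pi * 10 * t / n) for t in range(n)]
bass = [0.5 * math.sin(2 * math.pi * 100 * t / n) for t in range(n)]
mix = [r + b for r, b in zip(rumble, bass)]

# A 30 Hz cutoff knocks the rumble down hard while leaving the bass mostly intact
filtered = highpass(mix, 30)
```

The rumble eats headroom without contributing anything audible on most systems; filtering it out frees that headroom for the bass you can actually hear.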
Mastering software tools
There is an abundance of software products capable of performing mastering tasks, whether they were tailor-made for mastering or not. The first thing you need is a good wave editor. For Mac, there are Peak and Spark; for Windows there are WaveLab, SoundForge, CoolEdit Pro and others. In addition to this there are many VST and DirectX plugins, including...
- BBE Sonic Maximizer
- Steinberg Mastering Edition - featuring Compressor (a multiband compressor), Loudness Maximizer, Spectralizer, PhaseScope, SpectroGraph and FreeFilter.
- Waves Native Gold Bundle - featuring C4 Multiband Parametric Processor, Renaissance Reverberator, Renaissance Compressor, Renaissance Equalizer, L1 Ultramaximizer, MaxxBass, Q10 Paragraphic, S1 Stereo Imager, C1 Parametric Compander, DeEsser, AudioTrack, PAZ Psychoacoustic Analyzer and much more.
- db-audioware Mastering bundle - featuring dB-M Multiband Limiter, dB-L Mastering Limiter, dB-D Dynamics Processor, dB-S De-Esser.
There is also T-RackS, a stand-alone mastering suite available for both Mac and Windows.
Of course, a plugin doesn't need to have "mastering" written all over it to be a worthy mastering tool - compressors, de-essers and dynamic processors are commonplace and can be used for mastering as well as other chores.
As an example of what kind of results you can expect from plugins like these, let's do an experiment.
First, we exported a snippet of the Reason demo track "Why Red" from Reason. Then we brought it into WaveLab. You can listen to the original, unprocessed sound here: We then used BBE Sonic Maximizer to bring more clarity and brilliance to the sound, and Loudness Maximizer to increase the perceived loudness. The result is here: You might also want to listen to this A>B comparison which alternates between the original and processed waveform every two bars.
This was a typical example of "macho mastering" to illustrate the huge difference you can make by processing a sound for maximum perceived loudness. If this is what you're after, look no further than the smart 'maximizer' plugins, but be careful not to over-use them - even those that are controlled by one single parameter are not foolproof. Unfortunately there's no "blanket procedure" you can apply to any and all tracks. You must listen to each track and identify its strengths and weaknesses. Play a commercial CD (preferably one you think sounds great...!) over the same system you're using for mastering - once that sound has been imprinted in your mind, it's easier to go back to your own track(s) and spot the problems. A good eye-opener tool is Steinberg's FreeFilter (part of the Mastering Edition bundle). It's an equalizer that features a Learning function. You can play back a reference track ("source") and FreeFilter will analyze the sound characteristics. You then repeat the same procedure for your own track ("destination"). You will then be able to look at the difference between the frequency curves of the source and the destination, which gives you an opportunity to spot problems in the way you apply EQ in your tracks.
The quickfix FAQ
So, now we've established the fact that there is no generic, failsafe, magic method for mastering. Where do we go from here? Perhaps the best approach is to turn the tables and provide some 'quickfixes' in the form of an FAQ which addresses some classic issues that can be fixed in mastering:
Q: My tracks seem dull and quiet, they lack punch. What do I do?
A: This is likely an issue with volume. Raw, unprocessed tracks produced solely in the digital domain can have a dry, lacklustre, almost 'papery' quality to them. Use a multiband compressor, or - if you're not sure what all those parameters do - an advanced loudness plugin such as the Loudness Maximizer, L1 Ultramaximizer or BBE Sonic Maximizer. These generally produce better results than compressors because they don't add those unmistakable compression artifacts like 'pumping' etc. Keep in mind that the harder you push a Maximizer or Compressor, the more you squash the dynamic range. That carefully programmed hihat velocity might end up totally flattened, to the point where you might as well have used fixed velocity.
Q: My songs seem to lack high end. I tried boosting the high frequencies but it just didn't sound good. Help...?
A: Boosting the high frequencies will often make matters worse, as it can add unpleasant hiss. Experiment with an exciter-type plugin instead; these are designed to add pleasant-sounding brilliance. Try BBE Sonic Maximizer, or the free High Frequency Stimulator by RGC Audio.
Q: My track sounds harsh. My ears hurt. Can this be fixed?
A: Possibly. The nasty frequencies you're looking for are usually between 1 and 3 kHz. Use an equalizer to locate and cut.
Q: I'm happy with the loudness, but the sound still lacks 'presence'. How...?
A: The magic frequencies you're looking for are between 6 and 12 kHz. You can try a moderate boost in this range. You can also try Steinberg's Spectralizer, another 'magic box' that adds transparency and clarity using harmonic generators to produce synthesized overtones.
Q: Not enough bass. Give me bass!
A: As usual, go the equalizer way or the magic plugin way. Waves MaxxBass and BBE Sonic Maximizer are both capable of generating booming bass. If you prefer using a regular equalizer, there are multiple approaches and you might have to try them all to find the appropriate one. The problem could be that there is too much going on in the lower midrange, which gives a certain 'boxiness' to the sound. The culprit is in the 100-400 Hz range - try cutting. If the bass seems OK in terms of loudness but you'd like it to be deeper, try boosting gently in the 30-40 Hz range and cutting gently around 100-120 Hz.
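All of the equalizer moves suggested in this FAQ - cutting harshness at 1-3 kHz, boosting presence at 6-12 kHz, taming boxiness at 100-400 Hz - are jobs for a peaking (bell) EQ. As a sketch of how such a filter works under the hood, here is a Python implementation based on the well-known RBJ audio EQ cookbook formulas (the frequency, gain and Q values are just example numbers):

```python
import math

def peaking_eq(samples, freq, gain_db, q=1.0, sample_rate=44100):
    """Peaking (bell) EQ after the RBJ cookbook.
    Positive gain_db boosts around freq, negative gain_db cuts."""
    a = 10 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * freq / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        out.append(y)
        x2, x1 = x1, x
        y2, y1 = y1, y
    return out

# Cut 6 dB around 250 Hz to relieve lower-midrange "boxiness":
tone = [0.5 * math.sin(2 * math.pi * 250 * t / 44100) for t in range(44100)]
cut = peaking_eq(tone, 250, -6.0)
# Once the filter settles, the 250 Hz tone comes out at roughly half its amplitude
```

The same function handles all the fixes above: pass a negative gain_db at 250 Hz for boxiness, a positive one around 8 kHz for presence, and so on.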
Can Reason do it?
Reason is by no means a mastering tool; it should be used for composition and mixing, after which you should render the audio files at the highest bit depth and sampling frequency your software and hardware can handle, for further processing outside Reason. Having said that, Reason does feature the tools required for advanced EQing and compression - this can be useful if you're a fan of the RPS publishing format and want to add that 'finalized' quality to songs played straight out of Reason. You will find that many songs in the song archive utilize a single COMP-01 as master compressor and a PEQ-2 as master EQ. However, a single compressor might not produce satisfactory results, especially not on extremely intense material. Since it works over the entire frequency spectrum you will often get a "ducking" effect - i.e. one sound pushes other sounds out of the way. For example, a dominant bass drum that looms high above the rest of the soundscape will prompt the compressor to inflict heavy damage on every beat, to the effect that all other sounds disappear out of 'hearsight' every time the bass drum is triggered. To overcome this problem you need a multiband compressor - it works like a battery of compressors, each handling its own slice of the frequency spectrum. Three bands (low, mid and high) are usually more than enough. Thanks to a couple of new devices in Reason 2.5, you can now build your own multiband compressor in the Reason rack. The procedure is as follows:
- Create a 14:2 Mixer.
- Create a Spider Audio Merger & Splitter. This will be used to split the stereo signal from the master mixer output; a second Spider will merge the signals again after compression.
- Create three BV512 Vocoders. Hold down Shift to avoid auto-routing (Reason will not guess right here).
- Set all three Vocoders to Equalizer mode, 512 bands (FFT).
- To divide up the three frequency ranges, use the sliders on each BV512's display. You need to cut all the bands except the ones that the Vocoder/EQ will be handling, so for the low range unit, leave the leftmost third (e.g. bands 1-10) as-is and pull the remaining sliders down to zero.
- Repeat the band assignment for the mid- and high-range BV512 units: assign the bands in the middle of the display (e.g. bands 11-22) to one and the remaining bands on the right side of the display (e.g. bands 23-32) to the other. (See illustration and .rns example below.)
- Create three COMP-01 compressors.
- Create a Spider Audio Merger & Splitter.
- Routing time: Mixer out to Spider Audio split input / Spider Audio split output 1-3 to BV512 #1-#3 carrier inputs / BV512 #1-#3 outputs to compressors #1-#3 inputs / compressors #1-#3 outputs to Spider Audio merge inputs 1-3 / Spider Audio merge output to Hardware interface.
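The steps above split the signal into bands, compress each band on its own, and sum them back together. The same idea can be sketched in Python - note that this is a toy model, not the BV512/COMP-01 signal path: the crossovers here are simple one-pole filters, the compressor has instant attack and release, and the 200 Hz / 3 kHz split points and 0.4 threshold are arbitrary example values:

```python
import math

def onepole_lp(samples, cutoff_hz, sr=44100):
    """One-pole low-pass; subtracting its output from the input gives the rest."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sr)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def compress(samples, threshold, ratio):
    """Instant-attack compressor: scale down anything that exceeds the threshold."""
    out = []
    for x in samples:
        level = abs(x)
        if level > threshold:
            x *= (threshold + (level - threshold) / ratio) / level
        out.append(x)
    return out

def multiband_compress(samples, sr=44100):
    """Split into low/mid/high at 200 Hz and 3 kHz, compress each band, sum."""
    low = onepole_lp(samples, 200, sr)
    rest = [x - l for x, l in zip(samples, low)]
    mid = onepole_lp(rest, 3000, sr)
    high = [x - m for x, m in zip(rest, mid)]
    bands = [compress(band, 0.4, 4.0) for band in (low, mid, high)]
    return [sum(vals) for vals in zip(*bands)]
```

Because each band has its own compressor, a bass drum slamming the low band past the threshold no longer drags the hi-hats down with it - and since the bands sum back to the original signal, material that stays below the threshold in every band passes through essentially untouched.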
For variation on this theme, try adding another pair of COMP-01 and BV512 and you get a multiband compressor with four bands (for example, low + low midrange + high midrange + high).
Or, you can try replacing each COMP-01 unit with a Scream 4 set to the Tape preset...
Here is a template Reason song with the multiband master compressor setup.
As a bonus, you can of course adjust the EQ bands on the Vocoders - together they serve as a master equalizer.
Further reading

- Digital Domain
- An Introduction to Mastering by Stephen J. Baldassarre
- 20 Tips on Home Mastering by Paul White
- What to Expect from Mastering by John Vestman
Text & Music by Fredrik Hägglund
Part 3 - Mastering Mastering | Part 2 - Dial R for Redrum | Part 1 - Ask Dr.REX