So you finally finished recording all your vocal tracks, but unfortunately you didn't get one take that was perfect all the way through. You're also wondering what to do about some excessive sibilance, a few popped "P"s, more than a few pitchy lines and some words that are all but inaudible - don't worry, there's hope! And hey, welcome to the world of vocal editing.
A Little History...
Since the beginning of musical performance, singers (and instrumentalists) have craved the possibility of re-singing that one "if only" note or line. You know the one: "if only I had hit that pitch, if only I had held that note out long enough, if only my voice hadn't cracked", etc. With the advent of early recording technologies, these 'if only' moments were now being captured, and performers were forced to face reliving those 'if only' moments forever! One 'if only' moment could ruin an entire take.
With the popularity of analog tape recording in the mid 20th century came the popularity of splice editing. Now you could record the same song two different times, and choose the first half of one take and the second half of another. Next came multi-track recording, where you didn't even have to sing the vocal with the band!
Multi track recording introduced punching in and out, which allowed re-recording of just the "if only" moments on an individual track. But more importantly as it relates to the subject at hand, multi-track recording also introduced the idea of recording more than one pass or 'take' of a lead vocal, leading to what is now known as "vocal comping". More on that in just a bit.
But before we get into the nitty-gritty, here's a brief outline of the typical vocal editing process for lead and background vocals. Of course, much of this is subject to change according to production direction, or the vocalist's skills and past experience.
Recording: This ranges from getting the first take, to punching in on a take, to recording multiple takes for comping.
Comping: Combining various takes into one final track, tweaking edits to fit, crossfading if needed.
Basic Cleaning: Listen in solo one time through. Typical tasks include removing the obvious things like talking, coughing, mouth 'noises' etc., checking all edits/crossfades, fading in/out where necessary.
Performance Correction: Timing and pitch correction takes place after you have a solid final comp track to work with.
Final Prep: This includes everything from basic compression/EQ, to de-essing, reducing breaths, filtering out low frequencies, etc.
Leveling: During the final mix, automating the vocal level (if needed) to sit correctly in the mix throughout the song.
Note that at many stages along the way you will be generating a new 'master vocal' file (while still holding on to the original files, just in case!). For example, let's say you record 4 vocal 'takes' which become the current 'masters'. Then you comp those takes together to create a new "Comp Master" vocal track, and then you tune/time the Comp Master and sometimes create a "Tuned Vocal Master" track (which is then EQ'd and compressed to within an inch of its life while simultaneously being drowned in thick, gooey FX, all before being unceremoniously dumped into what we like to call the mix).
Recording Vocals for Comping
In order to comp a vocal, you must first have multiple vocal tracks to choose from. Recording comp tracks can be slightly different from recording a 'single take' vocal. For one thing, you don't have to stop when you make a mistake — in fact, many times a performer gets some great lines shortly after making a mistake!
I tend to ask for multiple 'full top to bottom' takes from the vocalist, to preserve the performance aspects and to help keep things from getting over-analytical. Then I use comping to work around any mistakes and 'lesser' takes, choosing the best take for each line. Often the vocalist will be involved with the comping choices, so be prepared to be a good diplomat (and don't be too hard on yourself if you're comping your own vocals)!
How many tracks?
This will be different for every singer, but for comping I generally suggest recording around three to five tracks. Any less and I don't feel that I have enough choices when auditioning takes — any more and it becomes difficult to remember how the first one sounded by the time you've heard the last take.
When recording tracks that I know will be comped, I usually let the singer warm up for a few takes (while setting levels and getting a good headphone mix) until we get a 'keeper' take that is good enough to be called 'take one'. From there, simply continue recording new takes until you feel you have enough material to work with. If you find yourself on take seven or eight and you're still not even getting close, it may be time to take a break!
In Reason, when tracking vocals for future comping, you simply record each 'take' on the same track. With 'tape' recording this would erase the previous take, but with 'non-destructive' recording you are always keeping everything (with the newest take lying on 'top' of the previous take). When you enter Comp Mode, you will see each take just below the Clip Overview area (with the newest take above the older takes). The 'takes' order can easily be rearranged by dragging them up or down. Double-click on any 'take' to make it the 'active take' (it will appear in color and in the Clip Overview, and this is the take you will hear if you hit play). Now comes the fun part.
Vocal Takes in Comp Mode
Vocal takes in comp mode.
To combine or 'comp' different parts of different takes together, use the Razor tool as a 'selector' for the best lines/words. After creating cut lines with the Razor tool, you can easily move them earlier or later by dragging the 'Cut Handles' left or right. You can delete any edit by deleting the Cut Handle (click on it and hit the 'delete' key). Create a crossfade by clicking/dragging just above the Cut Handle. Silence can be inserted by using the Razor to make a selection in the "Silence" row, located below the Clip Overview and above the Comp Rows.
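For the curious, the arithmetic behind a crossfade is simple: the outgoing take fades out while the incoming one fades in over the same short span. Here's a conceptual sketch in Python - purely illustrative, not how Reason implements its crossfades, and the equal-power curves shown are just one common choice:

```python
import math

def equal_power_crossfade(take_a, take_b, fade_len):
    """Conceptual sketch of what happens at an edit point: the end of
    take_a fades out while the start of take_b fades in across
    fade_len samples. (Illustrative only, not Reason's actual code.)"""
    assert len(take_a) >= fade_len and len(take_b) >= fade_len
    out = []
    for i in range(fade_len):
        t = i / (fade_len - 1) if fade_len > 1 else 1.0
        # Equal-power curves keep the perceived level steady mid-fade:
        # gain_out**2 + gain_in**2 is always 1
        gain_out = math.cos(t * math.pi / 2)  # applied to outgoing take
        gain_in = math.sin(t * math.pi / 2)   # applied to incoming take
        out.append(take_a[-fade_len + i] * gain_out + take_b[i] * gain_in)
    return out

# Two hypothetical 'takes' of constant level, crossfaded over 16 samples
faded = equal_power_crossfade([1.0] * 64, [1.0] * 64, 16)
```

The takeaway: a crossfade is just two complementary gain ramps, which is why lengthening it (dragging above the Cut Handle) smooths an edit that would otherwise click.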
Comping (short for compositing): picking and choosing the best bits from among multiple takes, and assembling them into one continuous 'super take'.
Now that you have your vocal tracks recorded, how do you know which parts to use? I've approached this process differently through the years. Previously, I'd listen to each take in its entirety, making arcane notes on a lyric sheet along the way — this was how others were doing it at the time that I was learning the ropes. More recently I've taken another approach that makes more sense to me and seems to produce quicker, smoother, and better comps.
Currently, my auditioning/selection process consists of listening to one line at a time, quickly switching between the different takes and not stopping for discussion or comments. This is the basic technique you will see me demonstrate in our first video (see below).
Now it's time for a little thing I like to call a Video Detour. Enjoy De-tour (a-hem). Follow along in this 'made for internet' production as I comp the first verse of our demo song "It's Too Late" (by singer/songwriter Trevor Price).
Note: watch your playback volume - the music at the top comes in soft, but it gets louder when the vocals are being auditioned.
Comping a Vocal using Reason's "Comp Mode"
The three most common issues with vocals are pitch, timing, and level/volume. All three are easy to correct with today's digital tools and just a little bit of knowledge on your part.
After comping, I usually move on to correcting any timing issues. You may also jump straight into dealing with any tuning issues if you prefer. Oftentimes there isn't a lot of timing work that needs to be done on a lead vocal. But when you start stacking background vocals (BGVs), things can get 'messy' very quickly. User discretion is advised.
In our next video example (it's coming, I promise), I will show you how to line up a harmony vocal track with the lead vocal. I will use the lead vocal as the timing reference, moving the harmony track to match the lead. Since you can only see one track at a time when editing, I use the playback cursor (Song Position Pointer in Reason) to 'mark' the lead vocal's timing, then edit the harmony track using this reference point to line it up with the lead vocal.
I will also use the following editing techniques:
Trim Edit, where you simply trim either end of a selected clip to be shorter or longer as desired, which will expose or hide more or less of the original recording that is inside the clip.
Time Stretch (called Tempo Scaling in Reason), where you use a modifier key [Ctrl](Win) or [Opt](Mac) when trimming an audio clip, allowing you to stretch or shrink any clip (audio, automation, or MIDI) which changes the actual length of the audio within the clip.
Clip Sliding (my term), where (in Comp Edit mode) you use the Razor to isolate a word or phrase, and you slide just that clip right or left to align it - using this technique allows you to slide audio forward or backwards in time without leaving any gaps between the clips!
OK, thanks for waiting - here's the video:
Possibly an entire subject in itself, as everyone has their own take on vocal tuning. Of course, it's always best to 'get it right the first time' if you can. But sometimes you are forced to choose between an initial performance that is emotionally awesome (but may have a few timing or pitch flaws), and one that was worked to death (but is perfect in regards to pitch and timing). If only you could use the first take with all its emotion and energy. Well now you can!
Neptune Pitch Adjuster on the lead vocal
In Reason, using Neptune to naturally correct minor pitch issues is about as simple as it gets. The following video demonstrates using Neptune for simple pitch correction, as well as using it in a few more advanced situations.
Vocal "Rides" (so called for 'riding the fader/gain') have been common from almost the beginning of recording itself. In rare cases, you may have to actually ride the vocal while recording it(!) - this is the way it was done back with 'direct to disk' and 'direct to two-track' recordings. But luckily you can now do these 'rides' after the vocal is recorded, or you can even draw in these moves with a mouse (with painstaking detail, if you are so inclined). Most of the time I use a combination of both techniques.
The basic idea with vocal rides is to smooth out the overall vocal level by turning up the soft parts and turning down the loud parts (in relation to the overall mix). The end game is to get the vocal to sit 'evenly' at every point in the song, in a way that is meaningful to you. Or as I like to say, to get the vocal to ride ON the musical wave, occasionally getting some air but never diving too far under the musical water.
Great engineers learn the song line by line and 'perform' precision fader moves with the sensitivity and emotion of a concert violinist. It really can be a thing of beauty to watch, in an audio-geeky sort of way. For the rest of us, just use your ears, take your time, and do your best (you'll get better!).
There's no right or wrong way to edit vocal levels, only a few simple rules to follow: Obviously, you don't want to ever make an abrupt level change during a vocal (but you can have somewhat abrupt automation changes between words/lines), and you don't want to be able to actually hear any changes that are being made. All level rides should ideally sound natural in the end.
As for techniques, there are three approaches you can take in Reason. The most familiar is probably Fader Automation, which can be recorded in real time as you 'ride' the fader. You can also draw in these moves by hand if you prefer. Additionally, you can do what I call "Clip Automation", which involves using the Razor to create new clips on any word, breath or even an "S" that is too loud or too soft. Since each separate clip has its own level, you simply use the Clip Level control to make your vocal 'ride'. Alternatively, you can use the clip inspector to enter a precise numeric value, increase/decrease level gradually in a 'fine tune' way, or simultaneously control a selection of clips (even forcing them all to the same level if desired).
The 'pros' of Clip Automation are that it is fast, you can see the waveform change with level changes, you can see the change in decibels, and you can adjust multiple clips at once. The main con is that you can't draw a curve of any sort, so each clip will be at a static level. All I know is it's good to have options, and there's a time and place for each technique!
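If you're wondering what a static clip-level change actually does to the audio, it's just a dB-to-linear conversion applied to every sample in the clip. A hypothetical sketch in Python (the function name and the -6 dB value are my own examples, not anything from Reason):

```python
def apply_clip_gain(samples, gain_db):
    """What a static clip-level change effectively does: convert the
    dB offset to a linear factor and scale every sample by it.
    A -6 dB change roughly halves the amplitude."""
    gain = 10 ** (gain_db / 20.0)
    return [s * gain for s in samples]

# A too-loud "S" isolated with the Razor (made-up sample values)
ess = [0.8, -0.7, 0.9]
tamed = apply_clip_gain(ess, -6.0)
```

This is why the waveform visibly shrinks or grows when you adjust a clip's level: the stored samples are simply being multiplied by one constant, which is also why a single clip can't follow a curve.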
Using "Clip Automation" to reduce multiple "S"s on a Vocal Track
As a 'fader jockey' myself, I prefer to begin vocal rides with a fader (real or on-screen). From there I'll go into the automation track to make some tweaks, or to perform more 'surgical' nips and tucks (if needed) on the vocal track. It's these smaller/shorter duration level changes that are more often ideally created with a mouse rather than a fader. Reducing the level of a breath or an "S" sound comes to mind as a good example of a 'precision' level change that benefits from being drawn by hand.
Vocal Track with Level Automation (with the first clip ready for editing...)
Leveling the vocal must ultimately be done in context, which means while listening to the final mix that the vocal is supposed to be 'sitting' in (or the 'bed' it is supposed to 'lay' on, or choose your own analogy!). This is because you are ultimately trying to adjust the vocal level so that it 'rides' smoothly 'on' the music track at all times (ok, so I'm apparently going with a railroad analogy for now), which doesn't necessarily imply that it should sit at a static level throughout the song.
You would think that a compressor would be great at totally leveling a vocal, but it can only go so far. A compressor can and will control the level of a vocal above a certain threshold, but this doesn't necessarily translate into a vocal that will sit evenly throughout a dynamic mix. Speaking of compression, this is probably a good time to mention that all processing (especially dynamics) should be in place before beginning the vocal riding process, as changing any of these can change the overall vocal level (as well as the level of some lines in relation to others). Bottom line - do your final vocal rides (IF needed) last in the mixing process.
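To see why, it helps to look at the static gain curve of an idealized downward compressor: everything below the threshold passes through untouched, so a compressor can tame the loud lines but can't bring up a verse that was sung too softly. A rough sketch (the threshold and ratio values are arbitrary examples, not settings from any particular device):

```python
def compressor_output_db(in_db, threshold_db=-18.0, ratio=4.0):
    """Static gain curve of an ideal downward compressor: below the
    threshold the signal is unchanged; above it, the overshoot is
    divided by the ratio. (Example values, not a real device's.)"""
    if in_db <= threshold_db:
        return in_db
    return threshold_db + (in_db - threshold_db) / ratio

# A word hitting 8 dB over threshold is pulled back to 2 dB over...
loud = compressor_output_db(-10.0)   # -> -16.0 dB
# ...but a line sung 10 dB under threshold is untouched,
# which is why fader rides are still needed for the quiet parts:
soft = compressor_output_db(-28.0)   # -> -28.0 dB
```

In other words, the compressor only acts on one side of the threshold; evening out the soft lines against a dynamic mix is still a job for rides or clip gain.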
Let's begin - set your monitors to a moderate level and prepare to focus on the vocal in the mix. Oftentimes I prefer smaller monitors or even mono monitoring for performing vocal rides - you gotta get into the vocal 'vibe' however you can.
Things to look for:
Before you get into any actual detail work, listen to the overall vocal level in the mix throughout the entire song. Sometimes you will have a first verse where the vocal may actually be too loud, or a final chorus that totally swallows up the vocal. Fix these 'big picture' issues first before moving on to riding individual lines and words.
When actually recording the fader moves (as in the video), I'll push the fader up or down for a certain word and then want it to quickly jump back to the original level. In the "Levels" video, you will see me hit 'Stop' to get the fader level to jump back to where it was before punching in. The reason I do it this way is that if you simply punch out (without stopping), the fader won't return to its original level (even though it's recording correctly). Long story short, it's the quickest way I've found to create my desired workflow, and it works for me (although it may look a bit weird at first)!
Oftentimes you will find that it is the last word or two in a line that needs to be ridden up in level (sometimes the singer has run low on air by the end of a line). Also watch for the lowest notes in a vocal melody - low notes require more 'air' to make them as loud as the higher notes, so they tend to be the quieter notes in a vocal track. Another thing to listen for is any louder instrument that may 'mask' the vocal at any time - sometimes the fix is to raise the vocal, other times you can get better results by lowering the conflicting instrument's level momentarily. In extreme cases, a combination of both may be required!
Other problems that rear their heads from time to time are sibilance, plosives, and other 'mouth noises'. These can all be addressed by using creative level automation, or by using a device designed specifically for each issue - a 'de-esser' for sibilance or a high-pass filter for plosives, for example.
Now, enjoy a short video interlude demonstrating the various techniques for vocal level correction, including the fader technique as well as automation techniques including break-point editing, individual clip level adjustments, and some basic dynamic level control concepts including de-essing and multi-band compression.
Controlling Vocal Levels in Reason.
Multi-bands for Multi Processes
I will leave you with one final tip: you can use a multi-band compressor on a vocal track to deal with multiple issues at once. The high band is good for a bit of 'de-essing', the mid band can be set as a 'smoother' to only reduce gain when the singer gets overly harsh sounding or 'edgy', and the low band can be used to simply smooth the overall level of the 'body' of the vocal. If there are four bands available, you can turn the level of the bottom-most band totally off, thus replicating a high pass filter for 'de-popping' etc. Additionally, adjusting the level of each band will act like a broad EQ!
Setting the crossover frequencies with this setup becomes more important than ever, so take care and take your time. Remember you are actually doing (at least) four different processes within a single device, so pay attention not only to each process on its own but to the overall process as a whole. When it works, this may be the only processor you need on the vocal track.
Multi-band Compressor as 'Multi Processor'
All of the techniques in this article, however helpful they can be, are not always required - do I even need to remind you to 'use your ears' at all times? Using vocal rides as an example, I've mixed two songs in a row (by the same artist): one where the vocal automation looked like a city skyline, and the very next mix where the vocal needed no automation whatsoever!
As always; "listen twice, automate once"!
Thanks to Annex Recording and Trevor Price (singer/songwriter) for the use of the audio tracks.
Giles Reaves is an Audio Illusionist and Musical Technologist currently splitting his time between the mountains of Salt Lake City and the valleys of Nashville. Info @http://web.mac.com/gilesreaves/Giles_Reaves_Music/Home.html and on AllMusic.com by searching for “Giles Reaves” and following the first FIVE entries (for spelling...).
Drums are probably the oldest musical instrument in existence, as well as being one of the most popular. Drums are also one of the most basic instruments, having evolved little in concept through the years: at their most basic, drums are anything you strike which makes a sound!
As simple as they are, drums can be difficult to master. The same can be said of properly recording drums. While most folks may recommend that you go to a 'real studio' to record drums, that isn't always a possibility. They will also tell you that drums are difficult to record properly, which is at least partly true. But it's also true that there's a lot you can do, even with a very limited setup - if you know some very basic techniques.
To introduce you to the world of drum recording at home, I've gathered some of my favorite tips and recording techniques in hopes of encouraging you to try your hand at recording some drums in your personal home studio. I'll cover a few different scenarios from the single microphone approach on up to the many options that become available to you when you have multiple microphones.
Drums in 'da House
There are many ways to approach recording drums besides the ‘mic everything that moves' approach, including many time honored 'minimalist' approaches. Sometimes all it takes is a well placed mic or two to capture a perfectly usable drum recording. Luckily, this 'minimal' approach works well in the home studio environment, especially considering the limited resources that are typically available.
It's worth mentioning that there are as many drum 'sounds' as there are musical styles. Certain drum sounds can require certain drums/heads and certain recording gear to accurately reproduce. Other drum sounds are easier to reproduce with limited resources, mainly because that's how they were produced in the first place. Try to keep your expectations within reason regarding the equipment and space you have available!
Issues to be Aware of:
First, let's cover some of the potential issues you may run into when bringing drums into your home studio:
The first issue is that drums (by design) make noise - LOUD noise. Some folks just don't like noise. This is usually the first hurdle to overcome when considering recording drums at home. The best advice may simply be to be considerate of others and be prepared to work around their schedules. There is little you can do (outside of spending loads of cash) to totally isolate the drums from the outside world.
While it is unlikely, you may run into a situation where a noise from outside will intrude on your recording. As already mentioned, there is little you can do about this other than work around the schedules of others. Most home recordists will likely have already run into these issues before, and have learned to work around them!
The second hurdle is usually not having enough microphones to 'do it right'. There are some time-tested ways to get great drum sounds using fewer mics, or even just one good mic. Rather than looking at this as an obstacle to overcome, I prefer instead to call this the purist approach!
A possible third hurdle is the sound of the room you're recording in. It can be too small (or even too big), too live or too dead, too bright or too dark. Some of these issues can be dealt with by instrument placement or hanging packing blankets, some you try to avoid with close miking! Generally speaking, a smaller/deader/darker room will be easier to deal with than the opposite. The thing to understand here is that the room itself will almost always be a factor, since the farther you move a mic from the source of the sound, the more of the room sound you will pick up.
Finally, you should also be prepared to provide headphones (at least the drummer will want phones, but will often bring their own), and make sure you have all the cables you need and that they are long enough to reach where they have to reach.
Options are good - multiple cymbal choices, a few different snares to choose from, or alternate drum heads or sticks/mallets, or even different mics are all good options to have on hand (but not absolutely essential).
Ask the drummer to bring a small rug to set the drums on (a common 'accessory'), and be prepared to provide one if they don't have one (assuming you don't already have carpet). Also consider having a few packing blankets on hand to temporarily tame any 'overly live' walls or other surfaces.
One thing before I forget - a drum kit is only as good as the drummer that is tuning and playing it. A drummer should have decent gear (no 'pitted' heads, unexpected rattles, or malfunctioning hardware please), the basic skills to tune the kit, good time/meter, and be able to hit the drums consistently. Many folks overlook this last quality, but the sound of a drum can change drastically with different stick position and velocity. The more consistent a drummer is (both with timing and with dynamics), the more 'solid' the sound will be in the end (and the better it will make you look as well!).
And finally, the actual drum part is important too - not every drummer will share your musical vision and it's up to you to keep the drum part 'musical' (whatever that means to you) and not too 'drummery' (overly busy and showing off). It may be helpful in some circumstances for you to program the drum part ahead of time (either alone or with the drummer) so that you have a reference point and are all on the same page. Let the drummer listen to this track to prepare for the session, and let them know how strictly you'll need them to stick to the programmed part.
To Recap: Issues to address prior to a drum session:
Drum/Cymbal Choice and Tuning
Drummer's Timing and Dynamics
Sound of the Room
The Drum Part/Pattern
Space is the Place
If this is the first time you're recording drums in your space, you may hear things you never heard before. This is where the packing blankets can come in handy, especially if there is ringing (flutter echoes) or if the space is just too bright or 'roomy' sounding. If you hear these things, try to cover any large flat surfaces, especially glass or mirrors. As with every other aspect of recording, you will have to experiment a bit to see which locations help with your specific issues. You may be able to locate the obvious problems ahead of time by simply clapping (and listening) while walking around your studio space.
The physical placement of the kit in your space may be dictated by available space, but if you do have the option, try moving just the kick around and listen in the room to how it sounds. You will probably find that you prefer one location over another - I suggest choosing the position that produces the most low end, as this is the toughest frequency to add if not present in the original source. Also listen to the snare, but keep in mind you'll have to compromise in placement between the sound of all the drums in the room. You're looking for the place where the entire kit sounds its best. Don't forget to move yourself around with each new kick position. If you find a spot that sounds particularly good, put a mic there!
Once you settle on placement for the kit, let the drummer finish setting it up and fine tuning it before you begin to place microphones. You may have to guess at the placement at first, then tweak it by listening. When recording drums in the same room as your speakers, you can better judge the sound by recording the drums first and then listening to playback to make any decisions. Even when drums are in the next room, the "bleed" you hear through the wall, being mostly low end and coming from outside of the speakers, will give you a false sense of 'largeness'. So be prepared: the first 'playback' can often come as a bit of a disappointment! It may help to have a reference recording of drums that you like as a 'sonic comparison' to refer back to from time to time when getting initial drum sounds.
Now let's move on to discussing where to put the mics, once you get the drums all setup, tuned, and ready to rock. Now may be a good time to tell the drummer to get ready to play the same beat over and over for the foreseeable future!
If you only have one mic:
[NOTE: Choosing the Microphone: Any microphone that is a good vocal mic will be a great place to start when miking the drum kit with a single mic.]
There are not many options to consider when you only have one microphone to mic an entire drum kit - however, this can actually be a good thing! First off, you don't have to worry about mic selection as the decision has already been made for you. Second, there is no chance in the world for any phasing issues to be a factor! That leaves mic placement as the only concern, and that's where the fun begins.
Sometimes you have limitations in space that prevent certain mic positions (low ceilings, close walls), sometimes there may be one drum or cymbal in the kit that is louder or softer than the rest and may dictate mic position - you never know what you may run into. But if you can find the 'sweet spot', you'd be amazed at how good one mic can sound!
It's best to have a friend help with this next part: have them move the mic around the drum kit as the drummer plays a simple beat. Listen to how the 'perspective' changes. You can learn a lot about how a drum kit sounds (generally and specifically) by listening to a single microphone moving around a kit. You may have to record this first, and then listen on playback - if so, be sure to 'voice annotate' the movement, describing where the mic is as it's moved.
One mic moving from front to back of drum kit
When you listen to this recording, you can hear the emphasis change from a 'kick heavy' sound in front of the kit, to a more balanced sound in the back of the kit. The microphone, a Lawson L-47 (large diaphragm tube condenser), is about four feet off the ground. You can faintly hear me describe my position as I move the mic.
If I had to pick just one microphone position, I'd say my favorite single mic position is just over the drummer's right shoulder (and slightly to their right), pointing down at the kick beater area. Use the drummer's head to block the hi hat if it's too loud. Raise the mic higher if you have the space and want a more distant sound.
For an even more distant sound, position your single mic out in front of the kit at waist height (to start). Moving the mic up and down can dramatically change the tone of the kit, helping you to find the spot with the best balance between drums and cymbals.
Further options with a single microphone:
Consider recording each drum separately (kick, then snare, then hi hat), one at a time. The "Every Breath You Take" approach. Or at least take samples of each drum, and program patterns using these sounds.
In fact, if you take the time to bring drums into your home studio, you should at least record a few hits of each drum - you can cut the samples out later if time is a concern. No time like the present to start building or add to your personal drum sample library.
If you only have a few mics:
Two Mics:
First Choice: Right Shoulder (RS) position, plus Kick (K) or possibly Snare (S)
Second Choice: Stereo Overheads
Three Mics:
First Choice: RS plus K & S
Second Choice: Kick, plus Stereo Overheads
Four Mics:
Stereo Overheads plus K & S
With four mics you can have stereo overheads plus close mics (spot mics) on Kick and Snare. Having two mics for overheads doesn't mean they have to be exactly the same model microphone (but they should be as similar as possible). With two mics for overheads, you have many choices of microphone configurations including A-B (spaced pair), X-Y (coincident), ORTF (near coincident), M-S (using one cardioid and one figure 8 mic), the Glyn Johns or "RecorderMan" approach, or you can even try a Blumlein Pair if you have two mics that can do a 'figure 8' pickup pattern.
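With a spaced (A-B) pair, it's largely the time-of-arrival differences between the two mics that create the stereo image. A quick back-of-the-envelope calculation shows the scale of those delays (the speed-of-sound figure is approximate, and the 3-foot example is hypothetical):

```python
SPEED_OF_SOUND_FT_S = 1125.0  # approximate, at room temperature

def arrival_delay_ms(extra_path_ft):
    """Time-of-arrival difference for a source that sits extra_path_ft
    farther from one mic of a spaced (A-B) pair than from the other.
    These few-millisecond offsets are what build the stereo image."""
    return extra_path_ft / SPEED_OF_SOUND_FT_S * 1000.0

# Example: a floor tom 3 ft closer to the right overhead than the left
delay = arrival_delay_ms(3.0)  # a bit under 3 ms
```

Coincident techniques like X-Y and Blumlein put both diaphragms in (nearly) the same spot, so they trade these time differences for level differences - which is why they tend to collapse to mono more gracefully than a wide spaced pair.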
Beyond Four Mics
Going beyond 4 or so mics means you will begin to mic toms or even hi hats or ride cymbals. You may also opt to record more distant 'room' mics if you have enough microphones, preamps, and inputs to your recorder. The sky's the limit, but don't be too concerned if you try a mic position that ends up being discarded in the end.
Close miking with 'spot mics':
Obviously, with only one or two microphones to cover an entire drum kit, you can't place the mics very close to any one drum. But when you have more mics at your disposal you may begin to use what are sometimes called 'spot mics', or more commonly 'close mics'.
[NOTE: For drums, dynamic mics with cardioid or hyper-cardioid pickup patterns are preferred for close miking, while large and small diaphragm condensers are preferred for overhead and room mics.]
With close mics on a drum kit, you are attempting to isolate each drum from the rest of the kit - this is not a precise science, as you will always have a bit of the other drums 'bleeding' into every other close mic. By positioning the mic close to the desired drum, and also paying attention to the pickup pattern of the mic you can achieve a workable amount of isolation.
When considering the position of a microphone, the most important aspect of close miking is the actual position of the mic's diaphragm in the 3D space. The second most important aspect is the pickup pattern of the mic, and how you are 'aiming' it. Most of the time, when close miking a drum kit, you are not only aiming the mic AT the desired source but also AWAY from all 'undesired' ones. Every directional mic has a 'null' point where it is least sensitive, usually at the back of the mic. By aiming this 'null' point at the potential offenders you can reduce the level of the offending instruments. One common example is aiming the back of the snare mic at the hi hats to minimize the amount of hi hat bleed (a common problem with a close snare mic).
If there's a hole in the front head of the kick, placing the mic diaphragm just inside this hole is a great place to start. With the mic further inside the drum, you can sometimes find a 'punchier' position. With the mic outside the front head, you can get a bigger/fuller sound.
The best place to start when miking a snare up close is a few inches above the drum head and just inside of the rim when viewed from above. I usually aim the mic down at the center of the drum, which also helps to aim the 'null' at the hi hat. But remember, it's the position of the diaphragm in the 3D space that contributes most to the sound of the snare when the mic is this close. Moving the entire mic up and down, or in and out will produce a more dramatic change than simply 'aiming' the mic differently.
Overhead Mic Options:
Overhead microphone ‘cluster’ for comparing different positions/techniques
Probably the most common overhead setup is a spaced pair of cardioid condenser mics facing down, about 6-8 or more feet above the ground (2-4 feet above the drums and cymbals), and as wide as required for the kit (follow the 3:1 rule for better mono compatibility, see below). ORTF and X-Y configurations are also common, but we will demonstrate all of the above approaches so you can hear the differences for yourself.
There are two different general approaches to overhead drum mics: capturing the entire kit or capturing just the cymbals. With the first approach, you go for the best overall drum sound/balance from the overheads. With the second, you only worry about capturing the cymbals and usually filter out much of the low frequencies. The following techniques can be applied to either approach, with varying degrees of success.
If you have fewer overall mics on a drum kit, you will most likely need to capture the entire kit with the overhead mics. In fact, it's often best to begin with just the overhead mics and get the best possible sound there first. Then you add the kick and snare 'close mics' to bring out the missing aspects (attack, closeness) to fill out the sound coming from the overheads. So with fewer total mics, the overhead mics become VERY important.
Here are the various overhead techniques we will explore, with a short description of the technique. Also listed is the gear used to record the examples of each technique. Where possible we used the type of microphone typically used for that miking technique.
X-Y, or Coincident Pair
Rode NT-5s, Digidesign "Pre" mic pre
With this approach you are placing two mics as close together as possible, but aimed at a 90° angle to each other. The mono compatibility is second to none, but the stereo image isn't that wide. (see illustration below)
ORTF, or Near Coincident Pair
Rode NT-5s, Digidesign "Pre" mic pre
ORTF allows you to combine the best of a spaced pair and an X-Y pair. You get decent mono compatibility, but a wider stereo image. Like X-Y, one advantage is that you can use a 'stereo bar' to mount both mics to the same stand. This saves space and makes setup a breeze as you can 'pre-configure' the mics on the stereo bar before you even put them on the stand. (see illustration above)
Rode NT-5s mounted on the “Stereo Bar” attachment, set to ORTF
A-B, or Spaced Pair
AKG c3000, Digidesign "Pre" mic pre
This common miking approach can be used for mainly cymbals or for the entire kit. Either way, you may want to be familiar with the 3:1 rule for multiple mics: for every "one" unit of distance from the sound source to the mic, the two mics should be three times this distance from each other. If the mics are one foot above the cymbals, they should be three feet from each other. The main reason for this 'rule' is to help with mono compatibility, so don't sweat it too much if you can't hit these numbers precisely. If you check for mono compatibility (assuming it's important in your work) and you don't hear a problem, you're fine! By the way, in our example the mics are about two feet from the cymbals and three feet from each other, and that doesn't seem to be a problem.
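Since the 3:1 rule is just arithmetic, it can be expressed in a couple of lines of Python (a hypothetical helper for illustration, not part of any recording software):

```python
def min_mic_spacing(source_distance: float) -> float:
    """Minimum spacing between two mics per the 3:1 rule:
    the mics should be at least three times as far from each
    other as each one is from its sound source."""
    return 3.0 * source_distance

# Overheads one foot above the cymbals: space them at least 3 feet apart.
print(min_mic_spacing(1.0))  # 3.0
# Two feet above the cymbals: at least 6 feet apart.
print(min_mic_spacing(2.0))  # 6.0
```

Note that the spacing in our example above (two feet from the cymbals, three feet apart) doesn't strictly satisfy the rule, and it still sounds fine - proof that the rule is a guideline, not a law.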
Glyn Johns Approach
Lawson L-47, API mic pre
This is a four mic approach, which uses a close mic on kick and snare, and two overheads in a 'non-standard' configuration. The first overhead is centered directly over the snare, between three and four feet away. The second is aimed across the drums from the floor tom area, and must be exactly the same distance from the snare. Some folks pan the two overhead mics hard left/right; others suggest bringing the 'over snare' mic in halfway (or even both mics in halfway).
RecorderMan Approach
Rode NT-5s, Digidesign "Pre" mic pre
Named after the screen name of the engineer who first suggested this approach, it is similar to the Glyn Johns approach in that you begin with a mic directly over the snare drum. But it diverges from that approach with the second overhead mic, placing it in the "Right Shoulder" position. This can also be considered an extension of the one mic 'over the right shoulder' approach. Fine tuning is achieved by measuring the distance from each mic to both kick and snare, and making each mic equidistant from each drum. This is easily accomplished by using a string, but difficult to describe in writing. For a further explanation of this technique, check out this YouTube video.
Blumlein Pair
Royer 122 ribbon mic (figure 8), Focusrite mic pre
Named after Alan Blumlein, a "Blumlein Pair" is configured using two 'figure 8' microphones at 90° to each other and as close together as possible. This approach sounds great for room mics, by the way.
Mid-Side (M-S)
Lawson L-47s, API mic pres
The Mid-Side technique is the most intriguing mic configuration in this group. In this approach, you use one cardioid (directional) mic and one 'figure 8' (bi-directional) mic for the recording. But you need to use an M-S 'decoder' to properly reproduce the stereo effect. The decoder lets you control the levels of the mid and side microphones, allowing you to 'widen' the stereo image by adding more of the 'side' mic. This technique (along with X-Y and Blumlein) has great mono compatibility. This is because with M-S, to get mono you just drop the 'side' mic altogether and you're left with a perfect single-microphone recording in glorious mono.
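The 'decoding' itself is nothing more than a sum and a difference. Here is a minimal sketch in Python (the function name and width parameter are ours, for illustration; they don't correspond to any particular decoder plug-in):

```python
def ms_decode(mid, side, width=1.0):
    """Decode a Mid-Side recording to Left/Right. width scales the
    'side' mic: 0.0 is pure mono (mid only); higher values widen
    the stereo image."""
    left = [m + width * s for m, s in zip(mid, side)]
    right = [m - width * s for m, s in zip(mid, side)]
    return left, right

# Two samples of a toy M-S signal:
left, right = ms_decode([0.75, 0.5], [0.25, -0.25])
# Summing L and R cancels the side mic completely, leaving only the
# mid mic -- which is why M-S collapses to mono so gracefully.
mono = [(l + r) / 2 for l, r in zip(left, right)]
```

Dropping width to zero leaves only the mid mic, which is exactly the 'perfect mono' behavior described above.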
I invited a few engineer friends to the Annex Studio for a 'drum day' to record the examples for this article. It's always more fun to do this stuff with some friends! It's a good idea to have someone move the mics while you listen - sometimes the mic doesn't end up in a position that 'looks right' (even though it may sound perfect!). We took the time to get each approach set up as precisely as possible, and recorded all of them in a single pass so they could be compared side by side.
The recording space is a large, irregularly shaped room, roughly 24 by 30 feet with 9 foot ceilings. There are wood floors throughout (carpet under the drums) and we hung one large stage curtain to tame the room a bit for this recording. The overhead mics, for the most part, were about 6-7 feet above the floor (2-3 feet from the ceiling).
The Reason Song File
I've provided the Song File because it's easier to compare between the different miking positions when you can switch as a track plays. I've set it up so that there are "Blocks" with the title of each section. Just click on a block and hit "P" on the keyboard and that section will begin loop playback. As it is currently set up, you must mute and un-mute tracks in the sequencer - you could also do this in the SSL Mixer by un-muting all the sequencer tracks and using the Channel mutes instead.
Single Mic Sweep, front to back
The first track is a single microphone starting from in front of the kit, and slowly moving around to the back and ending up in the "Right Shoulder" position. Listen closely and you'll hear me describing my position as I move.
Compare Overhead Mic Positions
Next you will find a few bars of drums with close mics on Kick and Snare, and the following overhead tracks: X-Y, ORTF, A-B, RecorderMan, Glyn Johns, Blumlein. Playing this clip allows you to explore the different miking techniques and blend in the close mics at will. All the "stereo" overhead tracks are designed to be heard one at a time, although the mics are all in phase, so they certainly could be used in combination with each other if you're feeling creative. But the main purpose of this clip is to let you hear the difference between the various miking techniques presented.
Moved the Royers to a Room Mic Position
The third clip is a similar drum pattern, with the Royer ribbon microphones (Blumlein Pair) moved to 15 feet in front of the drums. This is our typical 'room mic' position and mic choice, and is the only difference between the previous clip and this clip. In my opinion, the sound of this miking technique combined with the 'color' of a ribbon mic makes the perfect 'room' sound. For a room mic to work, the room must sound great, of course. But it also has to be more diffused and a bit 'out of focus' compared to the close mics, which produces an effect similar to the 'blurry' background of a photo. As in the photo example, having a blurry background can help to put more focus on the foreground (close mics).
Fun with Mid-Side - Adjust M-S in Rack
Finally we have a Mid-Side recording (plus the Kick and Snare close mics) to play with. We didn't have enough mics to include it in the first round, but wanted to present it as an additional track. In addition to drum overheads, the Mid-Side approach also works well with room mics, because you can increase or reduce 'width' after the recording. I've inserted an M-S decoder on the Insert for this channel in the mixer, and by going to 'rack view' you can use the M-S combi to adjust the balance between the Mid and the Sides.
Kick: Sennheiser 421, API mic pre
Snare: Shure SM57, API mic pre
X-Y, ORTF, RecorderMan: Rode NT5s, Digidesign "Pre" mic pre
A-B: AKG c3000, Digidesign "Pre" mic pre
Blumlein: Royer 122 ribbon mics, Focusrite mic pre
Glyn Johns, Mid-Side: Lawson L-47s, API mic pres
1967 Gretsch kit
22x14 Kick
16x16 Floor Tom
13x9 Rack Tom
14" Pearl Snare
Zildjian and Paiste Cymbals
There are always other ways to record drums. Here are a few slightly out-of-the-box approaches for your consideration.
The "Every Breath You Take" Approach:
You don't necessarily need to record the entire kit at once - this can help if you only have one mic. One thing to plan for: the drummer must know about this in advance. It's not as easy as you would think to play only one instrument at a time! This approach can work especially well if you're building up a rhythm track, much like you'd program a track with a drum machine. Start with the kick, then add snare, then hi hat. Move on to the next beat. Then for fun you can use one of the 'One Mic' approaches.
The Quiet Approach...shhhhh:
Sometimes in the studio, less actually IS more! Case in point: recording drums that are lightly tapped can sometimes produce huge sounds when played back at loud levels. This approach works best if you can record one drum at a time, and it will certainly help with neighbor issues as well! You can also apply this technique to sampling. Consistency is the key when playing softly - sampling can help if you can't play softly at a consistent level.
Sampling, Why Not!?:
Sometimes you don't have all the ingredients for a full drum session. Don't overlook sampling as a way to get around some of these issues - and why not do it anyway! Don't forget to record multiple hits at multiple levels, even if all you need at first is one good single sample - these additional samples may come in handy later, and you never know when you'll have the drums all tuned and setup again (and it only takes a few minutes)!
The 'shaker' family of percussion can be recorded with any mic, depending on the sound you're going for. As a starting point, any mic that's good on vocals or acoustic guitar will work fine for the 'lighter' percussion like shakers and bells etc. For hand drums like Djembes and Dumbeks, or Congas and Bongos, you can approach them like kicks/snares/toms. A good dynamic mic on the top head, and sometimes (for Djembes in particular) a good kick drum mic on the bottom. Watch for clipping - these drums can be VERY dynamic!
Annex Recording (Rob Duffin, Josh Aune, Perry Fietkau, Trevor Price), and Zac Bryant (for playing drums) with Victoria
Giles Reaves is an Audio Illusionist and Musical Technologist currently splitting his time between the mountains of Salt Lake City and the valleys of Nashville. Info @http://web.mac.com/gilesreaves/Giles_Reaves_Music/Home.html and on AllMusic.com by searching for “Giles Reaves” and following the first FIVE entries (for spelling...).
In the Tools for Mixing series here at Record U, we discuss a number of useful types of effects and processing in other articles: dynamics such as compression and gating, EQ types such as shelving and parametric, send effects such as reverb and delay, and master effects such as maximizing and stereo imaging. Most importantly, these other articles cover how you can use these effects and processors to make your mixes sound great.
What's different about insert effects, the subject of this article? Well, in some ways, absolutely nothing. Insert effects can be compressors, EQs, reverbs, delays, and any other kind of processor. Like these effects and processors, insert effects are very effective tools to give each track its own sonic space and to make your mix sound better — it's these goals that we'll focus on in this article.
In addition to their fundamental assignment as mix improvement devices, insert effects can be used as sound design and arrangement tools as well. Sound design means altering the sound of something, such as taking the sound of a guitar and making it sound a little different, significantly different, or even unrecognizable. Doing so can have a profound effect on the emotion of a track and even an entire song. In an arrangement, you can use insert effects to create subtle and not so subtle changes to the sound of an instrument or voice as the song progresses from one section to another. Using automation, you can have an effect get more intense in the choruses when the instruments are playing at full volume, then less intense in the quieter verses.
What?? You haven't used automation before? It's easy as pie. It's very similar with all multitrack recording software, but it's particularly easy in Reason. Let's digress for a quick lesson on automation.
To record automation data for an effect parameter, simply alt-click or option-click on the knob, dial, or slider you want to automate. In this illustration, we alt-clicked on the Feedback knob on the DDL-1 Digital Delay device. A green box outlines the control, as you see here, to let you know that the control is now automated. At the same time, a Feedback parameter automation lane has been created for this device in the sequencer, as you can see at the bottom of the illustration. The parameter record button is red, indicating the parameter lane is record-ready.
Next, all you have to do is hit the record button in the Transport Controls, and then move the Feedback knob with your mouse to change the sound as your song progresses. You can see the cursor has been dragged up, and a parameter display appears indicating that the Feedback parameter has been increased to 44. In the sequencer, you can see automation curves being recorded.
When you finish a pass, you can easily edit and change the automation data you've recorded. Just double-click on the automation clip, and whether you're in arrangement mode or in edit mode, you can edit the data points with the editing tools, such as the Pencil tool shown here.
There. Couldn't be simpler.
For our purposes as budding mix engineers, the most significant differences between insert effects and other effect applications are the following:
Insert effects work only on one channel, not across multiple channels.
Insert effects process the entire channel; there is no un-effected track signal once the effect is active.
This means that though you have tremendous power over the fate of a single track with insert effects, the worst thing that could happen to your mix is that only one track might sound crappy. And you can always delete the effect and go back to your unprocessed track.
Adding insert effects is usually one of the last things you do when you're mixing — unless you think like a sound designer, in which case adding insert effects to your tracks is one of the first things you do. With a song featuring acoustically recorded instruments, the order of business in a mix session usually flows like this:
Set gain staging.
Apply basic EQ to individual tracks.
Apply basic dynamics.
Apply send effects.
Apply insert effects.
Sometimes you'll apply more than one tool simultaneously. Sometimes you'll do things in a different order. The point is that insert effects are usually what you add to fine-tune your mix, to give individual tracks something special. As with all of the tools we discuss in Tools for Mixing, you could achieve a decent basic mix using insert effects alone; they're that powerful.
Do This Before You Apply Insert Effects
No matter how you like to approach your mix, before you apply insert effects, make sure you do two things to your entire mix:
Using a high pass filter on the channel EQ, roll off all frequencies below 150 Hz on all your tracks, especially those tracks that you don't think have any energy at those frequencies. The exceptions are the kick and the bass. Even on the bass, roll off everything below 80 Hz. Doing this will make your mix sound much more open right away, and it's critical to remove unnecessary energy in these frequencies before your insert effects amplify them.
Listen to each track carefully, and remove any sounds that you don't want, including buzzes, bumps, coughing, fret noise, stick noise, or any sound made by a careless musician. Any unwanted sounds may get amplified by the application of insert effects.
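The first of those two steps, the low-frequency roll-off, is simply a high-pass filter at work. Here's a rough first-order sketch in Python, far simpler than Reason's actual channel EQ but enough to show the idea:

```python
import math

def highpass(signal, cutoff_hz, sample_rate=44100):
    """First-order high-pass filter: attenuates content below cutoff_hz.
    A rough sketch of the 'roll off the lows' step, not Reason's EQ."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = []
    prev_in = prev_out = 0.0
    for x in signal:
        y = alpha * (prev_out + x - prev_in)  # one-pole HPF recurrence
        out.append(y)
        prev_in, prev_out = x, y
    return out

# A constant (0 Hz) signal is almost entirely removed by a 150 Hz high-pass:
filtered = highpass([1.0] * 2000, 150.0)
```

Any energy below the cutoff is what the filter strips away; that's the 'unnecessary energy' the tip above is telling you to remove before your insert effects get a chance to amplify it.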
Now that you're at this stage of your mastery of mixing, it's time to reveal what the deal is with signal routing. Most mixing boards and recording programs let you choose the order in which each track signal passes through the various processors and effects. At the top of each channel in Reason's mixing board, you can see a little LCD diagram, entitled Signal Path. Just above the LCD diagram, there are two buttons: Insert Pre and Dyn Post EQ. When engaged, the Insert Pre button places the insert effects prior to the dynamics and the EQ; when disengaged, the insert effects come after the dynamics and the EQ. When the Dyn Post EQ button is engaged, it means the compressor and gate follow the EQ; when disengaged, the dynamics are in front of the EQ.
Fortunately, the wonderful mixer gives a very clear picture of how the signal path options work. Let's look at this LCD picture as we discuss the musical reasons for choosing one signal path over another. And keep in mind that when you change the signal path, the compressor and gate don't jump to the bottom of the channel strip, and the insert effects don't slide on up to the top. In the channel strips themselves, the controls stay in their place. It's just the audio that follows its various courses behind the scenes.
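To make the four button combinations concrete, here's the routing logic reduced to a Python sketch (purely conceptual; each processing stage is just a function):

```python
def route(signal, insert_fx, dynamics, eq, insert_pre, dyn_post_eq):
    """Run a channel signal through its stages in the order set by
    Reason's Insert Pre and Dyn Post EQ buttons (conceptual sketch)."""
    # Dyn Post EQ engaged: the compressor and gate follow the EQ.
    chain = [eq, dynamics] if dyn_post_eq else [dynamics, eq]
    # Insert Pre engaged: insert effects come before dynamics and EQ.
    if insert_pre:
        chain.insert(0, insert_fx)
    else:
        chain.append(insert_fx)
    for stage in chain:
        signal = stage(signal)
    return signal

# Trace the order with stages that just record their names:
stage = lambda name: (lambda s: s + [name])
order = route([], stage("insert"), stage("dyn"), stage("eq"),
              insert_pre=False, dyn_post_eq=False)
print(order)  # ['dyn', 'eq', 'insert']
```

With both buttons disengaged, for example, the order is dynamics, then EQ, then inserts - the first of the four configurations discussed below.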
When you're gaining experience as a mix engineer, it's always nice to have a compressor at the end of your signal path to attenuate any extreme boosts in the signal you may inadvertently cause. Without the compressor in that catchall position, a severe peak might get all the way to the main outputs, where it could cause clipping — and it might take a long time to figure out what's causing the overage. If you're an experienced mix engineer, you're likely to be vigilant for such accidents, and can therefore choose any signal path option for any reason you like. Here are musical reasons for choosing each of the four options for signal routing in Reason's mixer over the others.
In this configuration, the compressor comes first, so it tames the peaks in the track. The EQ is next, opening up the options for sound sculpting — but with no compressor or limiter following it to compensate for any boosts that might lead to clipping farther down. The most musical choice of an insert effect here would be another compressor, in order to get a super-compressed sound that moves forward in the mix. You'll see this in action in Audio Example 20, later on.
Similar to configuration No. 1, this puts the EQ first, and then the compression. The most musical choice for an insert effect here would be another EQ, in order to get a very specific frequency curve dialed in.
This is the best configuration for less experienced mixers. You can make bold choices in the insert effect slot and really get some fun into the track, be aggressive with dialing in the frequencies you want with EQ, and then the compressor will be there to even out anything that gets out of hand.
This configuration puts the compressor after the insert effect, which is good for safety, but with the EQ at the end, this configuration is best for an experienced engineer who uses EQ as their primary mixing tool even when fine-tuning a mix. This means the artistry such an engineer puts in to their EQ won't get squashed by a compressor farther down the signal path.
And Now, the Insert Effects
Insert effects themselves are simple as can be. But their interaction with other processors and mixing tools can have surprising results from seemingly subtle changes. That's why the preceding introduction is important: Any changes you make to your music with an insert effect will have an impact on the changes you've made using other tools. The difference, as noted before, is that insert effects impact the individual tracks more than the overall mix.
Insert effects come in several types:
Dynamics (loudness) processors: You're already familiar with compressors, limiters, maximizers, and gates. They can be used as insert effects, too.
Timbre effects are also familiar to you already, as they include EQ and filters.
Modulation effects include tremolo, ring modulation, chorus, flanging, vibrato, and vocoders. These effects use variations in pitch, loudness, and time to get their sound.
Time-based effects include reverb and delay.
Distortion effects include overdrive, vinyl effects, exciters, tube distortion, distressing, downsampling, and bit conversion.
Pitch correction effects include Auto Tune and vocoders.
Combo effects: Amp simulators use a combination of effects in one package, including distortion, delay, compression, and EQ. The Scream 4 device uses a combination of distortion, EQ, formant processing, and compression to make its impressive sounds.
As you explore the effects available in your recording software or hardware multitrack recorder or mixer, experiment like crazy with insert effects. Try out anything and everything, and don't be afraid to just delete the effect and start over. In Reason, it's quite easy to explore effects that have already been designed for particular musical applications; just click on the folder icon (Browse Insert Effect Patch) at the bottom of the Insert Effect section and find the Effects folder in the Reason Sound Bank.
Click this early, and often, to discover what insert effects can do for your music.
In the examples that follow, we're focusing on acoustically-recorded tracks (that is, tracks that aren't created from virtual instruments) that are intended to support a song, since creating good mixes of acoustic tracks is one of the biggest challenges that the majority of songwriters face. The examples here illustrate applications of a wide variety of effects to solve common mix problems: They're designed to show the results various types of insert effects can have on the way the affected track sits in the mix. In most cases, we're applying the effects in excessive ways, going a bit over the top to make the results obvious. It's possible to solve problems in a mix using only insert effects. But in practice, you should use all the tools available to you — EQ, dynamics, send effects, panning, and inserts — to make your mix sound the way you want it to.
This example features a chicken-pickin' rhythm section of drums, bass, and electric guitar, with a solo female vocal. No effects have been applied yet, but in addition to just setting the basic gain staging, we've carved out a little space for the vocal with a bit of EQ on the instrument channels. The vocal is audible enough to begin working more on the mix.
Make a vocal thicker by doubling
When we apply a simple doubling of the vocal track and a little detuning to the double using the UN-16 Unison device, the vocal suddenly expands to occupy a much more prominent place in the mix without any perceptible increase in level. Well, it's actually twice as many voices as a simple double, since we used the Unison's lowest voice count of four (it also does eight and 16!). We set the detuning fairly low and the wet/dry mix so that the original signal is the most prominent.
Adding a UN-16 Unison device to a vocal track is easy. Just select the track, channel, or audio device and select UN-16 Unison from the Create menu. All connections are made automatically. All you have to do is decide if you want four, eight, or 16 voices.
Thicken up a vocal with delay
Running the vocal through a basic delay with just two repeats and a fairly low feedback setting makes a huge difference in the spread and level of the vocal. When you add delay, you should always watch that the volume doesn't get carried away. This example uses the DDL-1 Digital Delay device with the wet/dry balance set rather dry, so the original signal comes through clearly. Even with just two repeats, the effect is very much like a reverb.
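Under the hood, a delay like the DDL-1 is essentially a circular buffer with a feedback path and a wet/dry mix. A bare-bones Python sketch (illustrative only; the real device adds tempo sync, filtering, and more):

```python
def feedback_delay(dry, delay_samples, feedback=0.3, wet=0.25):
    """Feedback delay line: each repeat arrives delay_samples later,
    reduced by the feedback amount; wet sets the effect level in the mix."""
    buf = [0.0] * delay_samples  # circular delay buffer
    idx = 0
    out = []
    for x in dry:
        delayed = buf[idx]
        buf[idx] = x + feedback * delayed  # write input plus feedback
        idx = (idx + 1) % delay_samples
        out.append((1.0 - wet) * x + wet * delayed)
    return out
```

A low feedback setting is what gives you the quickly fading repeats described above: at 0.3, the second repeat is already down to less than a tenth of the first one's level, and keeping the wet amount low keeps the original vocal out front.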
Ducking a vocal delay
The trouble with putting a delay on a vocal is that the repeats can get in the way of the clarity of the lyrics. You could adjust the wet/dry mix by hand, riding it towards the wet signal when each phrase ends. Or you could put a compressor on the delay and trigger it from the vocal signal. This ducks (lowers the volume of) the delay effect as long as the original vocal signal is above the threshold on the compressor. When the vocal dips below the threshold, the full delay signal comes back up in volume. This gives the vocal a bigger place in the mix, while keeping the effect out of the way of the lyrics.
It's easy to set up a ducking delay. In this excerpt, we want the delay to be softer while the vocal is present, but to come back up when the vocal phrase ends. First, create a Spider Audio Merger & Splitter on the vocal channel. Run the Insert FX outputs into the input on the splitter (right) side of the Spider and connect one set of Spider outputs back to the Insert FX inputs. Create a new audio track for the delay, and create a DDL-1 Digital Delay and an MClass Compressor for this track. Take a second set of outputs from the Spider, and connect them to the inputs of the delay. Take a third set of outputs from the Spider and connect them to the Sidechain In on the compressor — this is what will trigger the ducking effect. Connect the delay outputs to the compressor inputs, and run the compressor outputs to the delay channel Insert FX inputs. Presto! Your ducks are now in a row.
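In code terms, the sidechained compressor in that chain behaves roughly like this envelope-follower sketch in Python (the function and parameter names are made up for illustration; the MClass Compressor is considerably more sophisticated):

```python
def duck_delay(delay_sig, vocal_sig, threshold=0.1,
               reduction_db=-12.0, release=0.999):
    """Lower the delay signal while the vocal (the sidechain key) is
    above threshold; let it back up to full level once the vocal
    falls silent."""
    ducked_gain = 10.0 ** (reduction_db / 20.0)
    env = 0.0
    out = []
    for d, v in zip(delay_sig, vocal_sig):
        env = max(abs(v), env * release)  # simple peak envelope follower
        out.append(d * (ducked_gain if env > threshold else 1.0))
    return out
```

While the vocal is above the threshold, the repeats sit about 12 dB lower; the release coefficient controls how quickly they swell back up after each phrase ends.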
Thicken a vocal with chorus
Another way to fatten up a vocal track is to run it through a chorus effect. This doesn't necessarily make the track more prominent, but it does seem to take up more of the frequency spectrum. For this example, we used the CF-101 Chorus/Flanger, with very little delay, and just enough modulation to make the effect obvious while not overwhelming the vocal.
Extreme reverb effects for setting off vocals
Adding reverb to a vocal track is a sure way to give it some of its own space, even apart from the reverb you might apply with the send effects. You can even get some extreme sound design effects with reverb, such as this reverse reverb algorithm on the R7000 Advanced Reverb. This may not be the most appropriate use of reverb for this particular track, but you can easily hear how the reverb imparts its own space and EQ to the vocal, setting it apart from the instruments.
Use multieffects to bring out vocals
For vocals, the best approach to insert effects often involves a subtle combination of reverb, delay, chorus, compression, and EQ. Here is our track again, this time with a blend of two reverbs, delays set to different timing in the left and right channels, a mono delay set to yet a third delay rate, and a chorus, all routed through a submixer in a handy effects preset called “Vox Vocal FX.” The four channel insert effects controls have been programmed to control the various delay times and levels, the reverb decay, the delay EQ, and the dry/wet balance. The vocal sounds sweeter, all the lyrics are clear, and the track seems to float in its own space in the mix. Sweet!
Distressing effects for vocals
At the other end of the spectrum, there are distortion effects. For the vocal track we've been working on, distortion may be rather inappropriate — but we won't tell the vocalist if you won't. The result is powerful. We inserted a Scream 4 device, set the Destruction mode to Scream, set the EQ to cut the low end, and tweaked the Body size and resonance to get a very distressed bullhorn sound. It'd be more effective if it were Tom Waits singing something that would tear your heart out, but you can hear how this may or may not be just the ticket for your own songs.
Let's listen to the excerpt we'll be working with for the next few examples, before we start mangling the guitar. This track has mono electric guitar and mono electric bass, both recorded direct, with no amps and with no effects. The drums are in stereo, and they have a bit of room reverb on them. Although the guitar has a decent bit of sonic real estate in which to sit, it sounds kind of thin. Let's see what we can do to beef it up a bit.
Spread out a guitar with phaser
Adding a phaser to the guitar pushes it back in the mix a bit. But the gentle swirling of the phased overtones gives it a new frequency range to hang out in. The PS-90 is a stereo effect, so our mono guitar is now a stereo guitar. Even though the track is panned straight up the center, the phaser gives it a wider dimension.
When vocals or lead instruments collide with guitar
What if there's a vocal or lead instrument that shares some of the same frequency range as the phased guitar? Is the phase effect enough to make the guitar distinct from the lead? As you can hear in this example, without further adjustment the guitar and trumpet are competing. Does this mean more work with EQ and dynamics to solve the problem? Check out the next example to find out.
Making room for a vocal or lead by spreading a phased guitar
Since our phased guitar track is now stereo, let's see what happens when we widen the stereo spread. As you can hear, the two sides of the PS-90 effect are panned hard right and left, leaving the center of the track virtually guitar-free, and the trumpet part now sounds like it's all by itself. This is a great technique to use to make room for a vocal or lead instrument; sometimes just putting a time-based effect such as a phaser on a guitar or keyboard part and then spreading the sound wide is all you need to do to make the vocal stand out clearly.
Add presence to guitar with chorus
Chorus is another modulation effect that gets a similar result to a phaser when applied to a guitar, in that the guitar seems pushed back in the mix. We've taken the stereo channels of the CF-101 Chorus/Flanger and spread them wide, left and right. The guitar definitely has more presence, and seems to float in a shimmery kind of way above the drums and bass.
Re-amping for sonic flexibility
Re-amping is a great technique for working with direct-recorded guitar tracks. In a nutshell, you send the signal of a direct-recorded guitar to an amp, then you record the sound of the amp. Re-amping gives you a lot of flexibility in guitar tone. If your recording program has amp models, you can use them in a similar way. Reason features guitar and bass amp models from Line6, which you can add to a track just like you would with any other effect, and it can make a huge difference to the presence and tone of the guitar. Here, we've selected a very clean amp model, and without boosting the bass, the low strings are a lot more audible. The guitar now has a much better location in the mix.
Distortion and overdrive to bring out the guitar
Then there's the time-honored tradition of making more room for a guitar in the mix by using an amp that's overdriven and distorted like crazy. Here's the same track with the Line6 guitar amp inserted and the “Treadplate” preset selected. The correct phrase is, “My goodness, that certainly cuts through the mix now, doesn't it?” But it doesn't overpower the drums and bass, either.
Multiple amps for huge yet flexible guitar sounds
Amp modeling plug-ins such as the Line6 amps give you lots of sonic flexibility and options for getting a guitar track to sit in a mix. But sometimes you want even more from a guitar sound. In this example, we're running the exact same guitar track through three separate Line6 amp simulators. To get the maximum flexibility, we inserted a Line6 amp on each of three mixer channels, then split the dry signal using the Spider Audio Merger & Splitter device and sent it to each of the other two amps. This setup lets us use the channel dynamics and EQ on each of the amps, which allows us to roll off the low end of one amp to use just its highs, and to dial out the high and low end on the third amp so it projects only the middle frequencies. It's a great way to build up a massive guitar sound while giving you more options for making it all work in the mix.
Here are the three Line6 amps, each in their own track.
To connect the three amps in parallel, create a Spider Audio Merger & Splitter on the guitar track. Since we're dealing with a mono signal, run the Insert FX Left out into the Spider Left input. Run one Left output to each of the amps. Run the amp outputs back to the Insert FX inputs. Now you're ready for some serious guitar sound sculpting — as well as some powerful mix crafting.
Set the guitar apart with tremolo
Tremolo is a classic modulation effect that not only helps give a track its own sonic space, but also imparts a whole new character to the performance. This tremolo effect is created by a combination of effects in a preset called “Wobble,” which you can find in the Reason Sound Bank by clicking on the Browse Insert FX Patch button in the Insert section of any channel strip. “Wobble” uses a combination of limiters, EQ, and compression, the latter controlled by a CV signal triggered by the track volume. Reason has tons of effect patches that are designed to give you the effect you're looking for while helping to make the track fit into the mix. Just browse some of the effect patches to discover more.
Beef up the drums with delay
If your drum track isn't quite as full sounding as the rest of the instruments in your track, you can increase its sonic girth by adding a delay to it. The delayed signal should have no more than one or two repeats, the repeats should be so soft as to be almost inaudible, and they are most effective when timed to the music. Start with the delay timed to quarter-notes, then try eighth-notes, then 16ths, 32nds, 64ths, and even smaller values. If done well, the drums will just sound fuller, and you won't be able to distinguish the delay. For this example, we inserted a DDL-1 Digital Delay device on the drums and set its dry/wet ratio to 8, so that the repeats were almost subliminal.
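If your delay takes its time in milliseconds rather than note values, converting from tempo is simple arithmetic. Here's a minimal sketch of that conversion (the function name is ours, not a Reason feature):

```python
# Tempo-synced delay times: convert BPM and a note value to milliseconds.
# A note_value of 0.25 is a quarter note, 0.125 an eighth, and so on.

def delay_ms(bpm: float, note_value: float) -> float:
    """Delay time in ms for a given note value at a given tempo."""
    quarter_ms = 60_000.0 / bpm              # one quarter note in milliseconds
    return quarter_ms * (note_value / 0.25)  # scale relative to a quarter

# At 120 BPM: quarter = 500 ms, eighth = 250 ms, 16th = 125 ms, 32nd = 62.5 ms
for note in (0.25, 0.125, 0.0625, 0.03125):
    print(f"1/{int(1 / note)} note: {delay_ms(120, note):.1f} ms")
```

Start with the quarter-note value and halve it for each smaller subdivision until the repeats disappear into the drums.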
Give cymbals a psychedelic shimmer
Cymbals lend themselves to certain modulation effects. Applying a flanger to a drum track with lots of cymbals results in a trippy, swirling sound that also has the benefit of beefing up the track. For this example, we split the stereo drum track using the MClass Stereo Imager, setting the crossover frequency to around 1.8 kHz, and then sending the high band output to a CF-101 Chorus/Flanger device. We adjusted the flanger to get a slow swirl, then mixed the flanged sound back in with the direct signal using a Micromix submixer device. If you have multitracked drums, you can just slap a flanger right on the overhead channel.
Put the bass in your face
One of the more common inserts to apply to a bass track is compression, though any modulation or delay effect can sound great, too, depending on the material. Now that we've got the guitar tremolo and the drum cymbals swirling from the previous examples, the bass is sounding flabby and getting lost. Compression is a good way to attack this. We cranked up the channel compression, but it was still not quite enough. So we flipped the signal path so that the insert effects came after the dynamics in the channel. We applied an insert effect preset named “Super Bass Comp” to the bass track, cranked the ratio (which was mapped automatically to the insert effect knobs), and presto! A bass sound that's solid as a rock, and seems to be coming towards you as you listen, rather than hanging back. This is the litmus test for a hard-compressed track: If you've done it right, the track comes forward in the mix. This preset utilizes three MClass devices: Compressor, Equalizer, and Maximizer.
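The "solid as a rock" behavior comes from the compressor's static curve: above the threshold, only a fraction of the overshoot gets through, which evens out the bass notes. This sketch shows the textbook hard-knee curve, not the actual algorithm inside the MClass Compressor; the function name and figures are illustrative:

```python
def compressor_gain_db(input_db: float, threshold_db: float, ratio: float) -> float:
    """Output level (dB) of a hard-knee compressor's static curve."""
    if input_db <= threshold_db:
        return input_db  # below threshold: signal passes unchanged
    # Above threshold: only 1/ratio of the overshoot gets through
    return threshold_db + (input_db - threshold_db) / ratio

# A -6 dB peak through a 4:1 compressor with a -18 dB threshold:
out = compressor_gain_db(-6.0, threshold_db=-18.0, ratio=4.0)
print(out)  # -15.0 dB, i.e. 9 dB of gain reduction
```

Crank the ratio and the loud notes get pulled down hard toward the threshold; bring up the makeup gain afterward and the whole track steps forward in the mix.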
Based in the San Francisco Bay Area, Ernie Rideout is Editor at Large for Keyboard magazine, and is writing Propellerhead Record Power! for Cengage Learning.
Of all the tools we talk about in the Tools for Mixing articles here at Record U, reverb is unique in that it's particularly well suited to make it easy for you to create clear mixes that give each part its own sonic space.
Reverb derives its uniqueness from the very direct and predictable effect it has on any listener. Since we humans have binaural hearing, we can distinguish differences in the time between our perception of a sound in one ear and our perception of the same sound in our other ear. It's not a big distance from ear to ear, but it's enough to give our brains all they need to know to immediately place the location of a sound in the environment around us.
Similarly, our brains differentiate between the direct sound coming from a source and the reflections of the same sound that reach our ears after having bounced off of the floor, ceiling, walls, or other objects in the environment. By evaluating the differences in these echoes, our brains create an image accounting for the distances between the sound source, any reflective surfaces, and our own ears.
The good news for you: It's super easy to make your mixes clearer and more appealing by using this physiological phenomenon to your advantage. And you don't even need to know physiology or physics! We'll show you how to use reverb to create mixes that bring out the parts you want to emphasize, while avoiding common pitfalls that can lead to muddiness.
All the mixing tools we discuss in this series — EQ, gain staging, panning, and dynamics — ultimately have the same goal, which is to help you to give each part in a song its own sonic space. Reverb is particularly effective for this task, because of the physiology we touched on earlier. As with the other tools, the use of reverb has limitations:
It cannot fix poorly recorded material.
It cannot fix mistakes in the performance.
Any change you make to your music with reverb will affect changes you've made using the other tools.
As with all songwriting, recording, and mixing tools, you're free to use them in ways they weren't intended. In fact, feel free to use them in ways that no one has imagined before! But once you know how to use them properly, you can choose when to go off the rails and when to stay in the middle of the road, depending on what's best for your music.
Before we delve into the details of using reverb in a mix, let's back up a step and talk about what reverb is.
Reverb: Cause and Effect
At its most basic, a reverberation is an echo. Imagine a trombonist standing in a meadow, with a granite wall somewhere off in the distance. The trombonist plays a perfect note, and an echo follows:
Fig. 1. This is a visual representation of a basic type of reverb: a single echo. The listener hears a trombonist play a note, and then the subsequent echo. Even with eyes closed, the listener can picture how far away the reflecting wall might be, based on how long the sound took to reflect, which direction it seemed to come from, and its loudness.
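The distance estimate the listener makes in Fig. 1 is something you can compute directly: sound travels at roughly 343 m/s in air, and the echo covers the distance to the wall twice, out and back. A quick sketch (the names are ours):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def wall_distance_m(echo_delay_s: float) -> float:
    """Distance to a reflecting wall, given the round-trip echo delay."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0  # sound travels out and back

print(wall_distance_m(1.0))  # 171.5 m: a one-second echo puts the wall about 170 m away
```

The same arithmetic, run in reverse, is what your brain does instinctively when it hears an echo, and it's why longer predelay times on a reverb suggest a larger space.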
Now let's put the trombonist on the rim of a large canyon. Once again, the trombonist plays a perfect note, and this time several echoes come back, as the sound reflects off of stone walls at differing distances and differing angles.
Fig. 2. The trombonist plays the note again, this time from the rim of the Grand Canyon. The listener is also on the rim of the canyon, and hears the original note, followed by the subsequent echoes. With the diminished volume of each echo, the listener can easily picture how far away the canyon walls are. Even if each echo is a perfect copy of the original sound, as long as it diminishes in volume and seems to come from a location other than that of the original sound, the listener's mind places the trombonist in an imagined space.
Trombonists being highly sought after in Sweden, even to the point of being an imported commodity, let's put our trombonist in the Stockholm Konserthuset, one of the finest concert halls in Europe. This time, rather than producing a series of individual echoes, the note our trombonist plays generates numerous echoes that overlap in time, ultimately creating a wash of sound that decays gradually.
Fig. 3. Once onstage at the Konserthuset, the trombonist plays the note again. This time, the echoes are so numerous as to blend into a wash of sound. To the listener in the front row, still with eyes closed, the length of the predelay (the time from the initial note attack until the time the first reverberation occurs) and the length of the reverb tail (the gradual decay of the wash of echoes) provide enough information for them to imagine the size of the stage, the location of the walls, the height of the ceiling, and other characteristics.
Heading upcountry a bit, we'll put our trombonist in the Uppsala Cathedral, one of the largest medieval cathedrals in Scandinavia. Standing right in the middle of the cathedral, the trombonist blows a note and is immediately surrounded in a wash of reverberation that seems to last forever.
Fig. 4. In a cathedral, the note the trombonist plays seems to expand and reverberate endlessly as the sound reflects off of the many stone surfaces to cross and re-cross the vast space. The mind of the listener can picture not only the dimensions of the space, but also the material with which it's constructed, based on which overtones reverberate the longest.
Though simple, the reverb scenarios above represent aspects of how you can use reverb and delay to create sonic space around your tracks — and they explain why these effects are, well, effective. Plus, they also represent the real-world phenomena that inspired the creation of the reverb effects that are the basis of all studio reverbs. Let's check out some notable milestones in reverb development, as this knowledge will also make it easier for you to dial up the exact reverb effects you need.
Man-made Reverb: Chasing the Tail
For recording orchestral and chamber music, the simplest way to get a great reverb is to put the ensemble in a space that produces a great reverb, such as a concert hall or cathedral, and record the performance there. Of course, there are aspects of this that make it not so simple, such as the cost, the delays due to unwanted sounds caused by passing trucks or airplanes, and the lack of availability of such venues in general.
In the mid-20th Century, many recording studios were built with rooms big enough to hold large orchestras, in the hopes of re-creating that naturally occurring reverb. In some cases, these rooms definitely had a sweet sound. In many, however, though they could hold an orchestra, the sound was not reverberant enough.
There are many reasons that recording engineers are called engineers, and one of them is their resourcefulness. To overcome the reverberation situation, engineers would convert or build rooms in the same building as the studio, sometimes adjacent to it, and sometimes underneath it. These rooms would have a certain amount of reverberation caused by the surface material, the angles of the walls, and objects placed within the room to diffuse the sound. By placing a speaker in such a room and sending a recorded signal to the speaker, the signal would have the reverberation characteristics of the room. By placing a microphone in the room to return this reverb-processed sound to the mixing desk in the control room, the engineers could then add the processed signal to the original recording to give it the extra reverberative quality. This is the basis of the effects send and return capability found on almost all mixers, real and virtual. And when you see chamber or room listed as the type of reverb in a reverb processor, it's this kind of room they're trying to emulate.
Some studios succeeded in creating reverberation chambers that created a convincing reverb, such as those at Abbey Road in London and at Capitol Records in Los Angeles, which were used on the recordings of the Beatles and Frank Sinatra, respectively. But these didn't work for every kind of music, and you couldn't vary the amount of reverb time. There was definitely a market for some kind of device that would let engineers add reverb to a track without having to build an addition to their studio. Since steel conducts sound and transmits it well, plate reverbs were developed, in which a steel plate would be set to vibrate with sound introduced by a transducer at one end of the plate, and the processed sound would be captured by a pickup at the other end of the plate.
A German company called EMT produced the most popular of these in the late 1950s, which featured up to six seconds of reverb, a movable fiberglass panel that could vary the decay time, a mono send input, and stereo return outputs. Their sound was smoother than that of a real room, but also darker. Though these were attractive attributes when compared to the cost and inflexibility of a reverb chamber, they were far from convenient: In their cases they were eight feet long, four feet high, and one foot thick! Consider that the next time you dial up a plate reverb preset on your processor.
What recording studios did have on hand were lots of tape recording machines. By sending a signal from the mixing desk to a dedicated tape recorder, recording the signal with the record head, and then returning the signal via the playback head to the mixing desk, a single echo of the signal was created, due to the distance between the record and playback heads on the tape recorder. This delayed signal could then be blended with the original. This is called a slapback echo, and it's prevalent on lots of rock and roll recordings from the 1950s. Even though the effect was just of one echo, it still imparted a sense of space to the instrument or voice to which it was applied, setting it apart from the other parts.
This poor man's reverb was improved when engineers figured out how they could take the tape-delayed signal from the playback head and route it back to the record head, creating multiple delays. This became known as tape delay, and crafty engineers developed ways to keep the sound from building up too much (feedback), so the effect would be of three or four quick echoes that got quieter with each iteration. This added another dimension to the spatial effect, and when engineers started sending this multiple-delayed signal into their dedicated reverb chambers, they discovered yet another useful reverb application.
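The feedback routing described above (the playback head fed back into the record head, each pass a bit quieter) can be sketched as a simple delay line with a feedback coefficient. This is an illustrative model, not an emulation of any particular tape machine, and all the names are ours:

```python
def feedback_delay(signal, delay_samples, feedback, mix):
    """Mix a signal with a feedback delay line (tape-delay style echoes)."""
    out = []
    buf = [0.0] * delay_samples  # circular buffer standing in for the tape loop
    idx = 0
    for x in signal:
        delayed = buf[idx]
        buf[idx] = x + delayed * feedback   # route the "playback head" back to "record"
        idx = (idx + 1) % delay_samples
        out.append(x + delayed * mix)       # blend the echo with the dry signal
    return out

# A single impulse produces echoes that decay by the feedback factor each pass:
y = feedback_delay([1.0] + [0.0] * 9, delay_samples=3, feedback=0.5, mix=1.0)
print(y)  # [1.0, 0, 0, 1.0, 0, 0, 0.5, 0, 0, 0.25]
```

Keeping the feedback coefficient well below 1.0 is exactly what those crafty engineers were doing when they kept the sound from building up: each echo is quieter than the last, so the repeats die away instead of howling.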
Fortunately for you, there is Propellerhead Reason, so you don't have to build underground rooms or rewire reel-to-reel tape recorders. In fact, you don't need anything except your computer and Reason! No matter what hardware or software recording devices you work with, keep in mind the technology behind these historical developments, as well as our travelling trombonist, as we work with reverb applications in the next section.
It is Better to Send than to Insert
In each of the other Tools for Mixing articles, we created rough mixes using just the single tool featured in the article. We did this purely to explore the power each of these tools brings to your music, not to suggest that you create a final mix using only panning or EQ. In fact, in creating the rough mixes, we sometimes applied the tools to extreme levels, which you normally wouldn't do when crafting a mix. Normally, you'd use all your tools in equal amounts to make each track stand out just the way you want.
We're going to take a similar approach with reverb, though in some cases, we'll actually grab the channel faders and make adjustments to achieve the full effect of placing sounds in the soundstage.
Another difference between reverb and the other mixing tools is the point at which it's best to apply it in the signal path: as an insert effect or as a send effect. Here's the difference between the two.
Fig. 5. Insert effect signal path: When you add an insert effect to a mixer channel, the entire signal for that track is processed by the effect, whether you've chosen a reverb, EQ, compressor, or any other effect. The only control you have over the amount of signal processing is the effect's wet/dry mix, the balance between the unprocessed and processed signals. This method is best for effects that have more to do with the sound design of an individual track than with the sound of your overall mix, because while you're mixing, it's a pain to go into the controls of individual insert effects to change the wet/dry mix, which limits your flexibility.
Fig. 6. Insert effect section in Reason: Here is a mixer channel in Reason that has an insert effect applied to it, in this case a very wacky vocal processor effect. It's easy to tweak the four available controls, two of which are active here, but it's not easy to adjust the wet/dry mix from the mixer.
Fig. 7. Send effect signal path: To process a track with a send effect, you engage the send effect, which splits the signal. The Send Level knob lets you set the amount of signal that gets sent to the effect. The processed signal comes back through the Master Effects Return control, which lets you set the amount of processed signal you want to mix with the unprocessed signal via the effects return bus — a key element when it comes to mixing. With the channel effects send controls in their default state, the send occurs post-fader (green arrow). In this mode, you set your send level with the Send Level knob, and the channel fader then boosts or cuts the send level as you move it up or down, relative to the setting of the send knob. Any adjustments you make with the channel fader affect both the track volume and the send level, so the balance between the effect and the dry signal remains proportional as you move the fader. The Return Level knob on the master channel determines the global level of the return signal mixed in to the master bus. If you choose pre-fader by clicking on the Pre button, the channel fader has no effect on the send level (orange arrow); the level of the processed signal is determined only by the Send Level knob and the Master Return Level knob. Having these two options gives you a lot of control over how you blend processed and unprocessed sounds in your mix.
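The post-fader/pre-fader difference is easy to see in terms of linear gain. In this sketch (the function names are ours, not Reason's), the post-fader send is scaled by the fader, while the pre-fader send ignores it:

```python
def db_to_gain(db: float) -> float:
    """Convert a level in dB to a linear amplitude gain."""
    return 10.0 ** (db / 20.0)

def send_gain(send_db: float, fader_db: float, pre_fader: bool) -> float:
    """Linear gain feeding the effect, in pre- or post-fader mode."""
    if pre_fader:
        return db_to_gain(send_db)                     # fader has no influence
    return db_to_gain(send_db) * db_to_gain(fader_db)  # fader scales the send too

# Post-fader: pulling the fader down also pulls the reverb send down,
# so the wet/dry balance stays constant as you mix.
post = send_gain(send_db=-6.0, fader_db=-6.0, pre_fader=False)  # about -12 dB total
# Pre-fader: the send stays put no matter where the fader goes.
pre = send_gain(send_db=-6.0, fader_db=-6.0, pre_fader=True)    # about -6 dB total
```

This is why post-fader is the usual default for reverb sends: the reverb follows the track's level automatically, and you only reach for the Pre button when you want the effect level decoupled from the fader.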
Fig. 8. Send effects in Reason: This shows the overall effect send levels and effects in the Master Channel (1), the return levels in the Master Channel (2), an individual track send button and send level (3), and an individual track send button and send level with the pre-fader button engaged (4). In the examples that follow, we’ll be making most of our adjustments just with the individual channel controls.
Most engineers use reverb as a send effect, not as an insert effect. This allows much more flexibility and control during mixdown. You can easily achieve a more unified sound by sending multiple tracks to the same reverb, create individual locations for tracks by adjusting send levels, or distinguish different groups of tracks by applying different reverbs to all tracks in each group.
Since we're committed to giving you the best practices to adopt, we'll focus on using reverbs as send effects.
Create a Mix Using Reverb: Give a Single Track Its Own Space
This brief excerpt features a great Redrum pattern from the ReBirth 808 Mod Refill and a meandering Malström synth line triggered by an RPG-8 random arpeggiator pattern. The drum part is busy, to say the least. The arpeggio covers four octaves, and varies the gate length as the pattern progresses, resulting in staccato sections followed by legato sections. The basic levels are comparable. Give a listen:
The synth is certainly audible, but it gets lost in the Redrum. Let's see if we can create some sonic space for it. We'll activate the default RV7000 plate reverb in the Master FX Send section by clicking on the first send button in the Malström channel strip.
Wow. That made a huge difference in the presence of the synth part. Even the low staccato notes stand out, and the smooth plate reverb seems to reinforce not only the individual pitches, but also the loping random melody overall. And that's just with the default settings, not even with any adjustment to send or return levels! Let's try the next default send effect by activating the second send on the Malström channel strip, which feeds a room reverb also on the RV7000.
The room reverb definitely gives the synth notes some space and makes them more present. But it doesn't have the smooth sustain of the plate reverb. Let's tweak the send level on the channel strip by cranking it up about 10 dB and see what that sounds like.
Increasing the send level had two interesting effects: It gave the synth its own sonic space, but that space sounded like it was way behind the drums! This is the basic idea of how you can use reverb to make one track sound like it's toward the back of the soundstage, and another sound like it's toward the front. More effect = farther away from the listener. We'll experiment more with this a little later. Now let's try out the next send effect, which is a tape echo created with a Combinator patch that uses two DDL-1 delay instruments and the tape emulation effects from a Scream 4 sound destruction unit.
The multiple echoes reinforce the sound while giving it a very distinct sense of space. This kind of effect isn't for all types of music, but it works great with a nice melodic synth patch like this. Let's try out the fourth and last default send effect, which is a very simple 3-tap delay from a single DDL-1 instrument.
Very interesting. There is no reverb per se applied to the Malström track, yet it sounds like it has reverb on it. It's more present in the mix as well. This is because on the sustained notes, even though we can't hear the echoes distinctly, they definitely create a sense of sustained reverb. On the staccato notes, you can still hear the delayed echoes, but since they're rapid and they decay quickly, they continue the apparent effect of reverberation. This is why engineers use both reverb and delay to help give each track its sonic space; even when used by itself, delay can create a very convincing sense of space.
This also explains why we had our trombone player go to the Grand Canyon earlier: To demonstrate that echo, delay, and reverb are effective variations of the same tool. What else have we picked up on?
Increasing the send to a reverb makes a track recede from the listener
Delay can have the same overall effect as reverb in a mix
Create a Mix Using Reverb: Make Tracks Come Forward and Others Recede
This track is a wacky bit of big band jazz that has a brief feature for the saxophone section, which consists of soprano, alto, tenor, and baritone sax. Each part is angular and the harmonies are crunchy, to say the least. Let's have a listen.
You can hear that there are individual parts, but it's difficult to discern them, even though the tone is distinct for each instrument. They definitely don't blend, and there is not a clear sense of which instrument has the lead. Let's start by adding some room reverb to all four instruments, using the second send effect controls in each channel strip.
Putting the entire section in the same reverb space does help make them sound more distinct. It also makes the section sound like it was in the same room at the same time as the rhythm section, though they're obviously closer to the listener. It's a good start. Let's assume the soprano sax has the lead, and bring it to the fore a little bit more by increasing the send levels of the other three saxes.
Well, those three saxes sure sound like they're in the background. But they're still just as loud as the lead soprano sax. Let's adjust their respective levels just a bit using the channel faders, to see if we can create the sense that the soprano sax is stepping forward.
By reducing the three lower saxes by 3 dB and boosting the soprano 1 dB, we've created a pretty convincing audio image of the lead player standing closer to the listener than the rest of the section. Musically, we've probably gone a bit overboard, as this might not be the best mix. But what the heck, let's switch things up and make the tenor player step out front.
Tenor players unite! With the tenor at 1 dB on the fader and -12 dB at the send, while its colleagues are all at -3 dB on the faders and 0 dB at the sends, we've created a clear picture of one instrument coming forward toward the listener, even though it's not playing the highest part, while the others move away.
You can use these methods on background vocals, rhythm section tracks, and any tracks that you want to sound unified, yet at different distances away from the listener. The takeaway:
Sending a group of tracks to the same reverb gives a unifying sound
To bring a track forward, bring up the fader and reduce the effect send
To send a track away from the listener, bring down the fader and increase the effect send
Create a (Rough) Mix Using Reverb
This blues track you may recall from other Tools for Mixing articles. We'll try the X-Games mix on it, using reverb and a little bit of level adjustment only to create a rough mix. As we said before, this is not a “best practice” for creating a rough mix. It is, however, a good way to learn what best practices are for using reverb and delay to create sonic space for each track in your mix. The track levels are consistent, and the panning is all right up the center. Let's give a listen.
We can hear all the parts, but the individual tracks are very dry. There's no sense of blend, and all the instruments are right up front. Let's start by putting the horns and drums in the back of the soundstage by increasing the channel strip send levels to the default room reverb and bringing down the channel faders a bit on the horns.
All right, now we've got a stage going on. The drums are still very present, but their sound is defined by the size of the stage. The horns sound like they're behind the guitar and organ, right where we want them for now. The guitar needs its own space; let's try some delay to see if that sets it apart.
Sending the guitar track to the tape echo certainly sets it apart. We also sent the organ to the room reverb, though not so much as to make it recede. It's starting to sound like a band. But there might be a couple more things we can do to make it sound better. One danger to sending groups of instruments to the same reverb is that the muddy low-mid frequencies can start to build up, and this might be the case with this rough mix. Let's see if we can't bring those down by editing the EQ on the RV7000 that's producing the room reverb.
Fig. 9. To edit the EQ on the RV7000, go to the Rack View, click on the EQ Enable button, and click on the triangle to the left of the Remote Programmer. This opens up the Programmer. Click on the Edit Mode button in the lower left corner until the EQ mode light is lit. The RV7000 gives you two bands of parametric EQ to work with, and for our needs, the Low EQ is all we need. Set the Low Frequency to its highest setting, which is 1 kHz, and crank the Low Gain all the way down. This creates a highpass filter that removes all the low and mid frequencies that were bouncing around our virtual stage due to the effect of the room reverb.
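What the low-cut move accomplishes can be illustrated with the simplest possible filter: subtract a smoothed (lowpassed) copy of the signal from itself, and only the highs remain. This is a toy one-pole filter for illustration only, not the RV7000's actual EQ, and the names are ours:

```python
def one_pole_lowcut(signal, alpha=0.95):
    """A simple one-pole highpass: subtract a smoothed (lowpassed) copy."""
    out = []
    lp = 0.0
    for x in signal:
        lp = alpha * lp + (1.0 - alpha) * x  # running lowpass estimate of the signal
        out.append(x - lp)                   # what's left after the subtraction is the highs
    return out

# A constant (DC / very low frequency) input is removed over time:
y = one_pole_lowcut([1.0] * 200)
print(round(y[-1], 3))  # prints 0.0: the lows are gone
```

Applied to a reverb return, this kind of low cut removes the muddy build-up while leaving the airy tail intact, which is exactly why the takeaway below recommends it.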
Ah, cutting the low EQ on the RV7000 helped a lot to open things up, but it still leaves us with a sense of space. For fun, we also put some plate reverb on the guitar along with the tape echo, which really put it in a unique space that doesn't interfere with the other instruments. We sent the organ to the room reverb, to make it seem like it's part of the session, but still up front.
Perhaps not the mix we'd send to the mastering studio, but certainly one that shows how easy it is to use reverbs and delays to set up a virtual soundstage! The takeaway from this mix:
Cut the mids and lows out of the reverb using the built-in EQ on the reverb itself; this will help keep your mix from getting muddy
Send tracks to the back by increasing the send and bringing down the level
Blend tracks where possible by sending to the same reverb
Help individual tracks find their own space by sending them to a reverb or delay different from most other tracks
Combined with your mastery of the other mixing tools, your knowledge of how to use reverb and delay in a mix will help you get the mixes you want in a minimum amount of time!
When preparing a space for recording and mixing, we enter a potential minefield: no two rooms sound the same, so no one-size-fits-all instant fix is available. There are, however, a few systematic processes we can run through that will vastly improve our listening environment.
When putting together a home studio, it is very easy to spend sometimes large sums of money buying equipment, and then to neglect the most important aspect of the sound; namely the environment set up and used for recording. No matter how much we spend on computers, speakers, guitars, keyboards or amps etc., we have to give priority to the space in which they are recorded.
Whether it be a house, apartment, or just a room, the method is still based on our ability to soundproof and apply sound treatment to the area. It is extremely difficult to predict what will happen to sound waves when they leave the speakers. Every room is different and it’s not just the dimensions that dictate how a room will sound. Assorted materials which make up walls, floors, ceilings, windows and doors - not to mention furniture - all have an effect on what we hear emanating from our monitors.
Fig 1. A vocal booth with off the shelf acoustic treatment fitted.
Whether we have a large or a small budget to sort out our space, there are a number of off-the-shelf or DIY solutions we can employ to help remedy our problem. It should be pointed out at this stage that a high-end studio and a home project studio are worlds apart. Professional studio design demands far higher specification and uses far narrower criteria as its benchmark, and therefore costs can easily run into hundreds of thousands!
Why do we use acoustic treatment?
An untreated room - particularly if it is empty - will have inherent defects in its frequency response; this means any decisions we make will be based on the sound being ‘coloured’. If you can’t hear what is being recorded accurately then how can you hope to make informed decisions when it comes to mixing? Any recordings we make will inherit the qualities of the space in which they are recorded. Fine if it’s Abbey Road or Ocean Way, but maybe not so good if it’s your bedroom.
No matter how good the gear is, if you want your recordings or mixes to sound good elsewhere when you play them, then you need to pay attention to the acoustic properties of your studio space.
Begin with an empty room
When our shiny new equipment arrives in boxes, our instinct is always to set it up wherever it 'looks right', as if we are furnishing a new apartment.
Beware. Your main concern is not to place gear and furniture where they look most aesthetically pleasing, but where they sound best. The most important consideration is to position the one thing that takes up zero space but ironically consumes all the space. It is called the sound-field, or the position in the room where things sound best.
One of the things I have learned is that the most effective and reliable piece of test equipment is - surprise surprise - our ears! Of course we need more advanced test equipment to fine-tune the room but we need to learn to trust our ears first. They are, after all, the medium we use to communicate this dark art of recording.
Before you shift any furniture, try this game.
Ask a friend to hold a loudspeaker playing some music you are familiar with, and use a piece of string or similar to ensure he or she maintains a constant distance from you of, say, 2-3 metres. Get them to circle around you whilst you stand in the centre of the room, listening for the place where the room best supports the 'sound-field'. The bass is usually where you will hear the greatest difference, so as a guide listen for where the bass sounds most solid or hits you most firmly. Why focus on bass? Because if you get the bass right, the rest will usually fall into place.
Also, avoid areas where the sound is more stereo (we are after all holding up just one speaker, a mono source); this is usually an indication of phase cancellation. Beware of areas where the sound seems to disappear.
Finally, having marked a few potential positions for speaker placement, listen for where the speaker seems to sound closest at the furthest distance. We are looking for a thick, close, bassy and mono signal. When we add the second speaker this will present us with a different dilemma but we’ll talk about speakers later.
Remember: Though you may not have any control over the dimensions of your room, you do have a choice as to where you set up your equipment, and where you place your acoustic treatment. As well as the above techniques there are other things to consider.
It is generally a good idea to set up your speakers across the narrowest wall.
As a rule, acoustic treatment should be as symmetrical as possible in relation to your walls.
Ideally your speakers should be set up so that the tweeters are at head height.
The consistency of the walls has a huge bearing on the sound. If they are thin partition walls then the bass will disperse far more easily and be far less of a problem than if they are solid and prevent the bottom end from getting out. (This is a Catch-22, as thin walls will almost certainly not improve relations with neighbours!)
Audio 1.'Incredible' Front Room:
Audio 2.'Incredible' Center Room:
Audio 3.'Incredible' Back Room:
Three audio examples demonstrating the different levels of room ambience present on a vocal sample played 0.5m, 2.5m and 5m from the speakers in a wooden-floored room.
The Live Room
If you are lucky enough to have plenty of space and are able to use a distinct live area the rules we need to observe when treating a listening area don’t necessarily apply here. Drums, for example, often benefit from lots of room ambience, particularly if bare wood or stone make up the raw materials of the room. I’ve also had great results recording guitars in my toilet, so natural space can often be used to create a very individual sound. Indeed, I’ve often heard incredible drum sounds from rooms you wouldn’t think fit to record in.
Fig 2. Reflexion Filter made by SE Electronics.
It is often a good idea to designate a small area for recording vocals or instruments which require relatively dead space. It would be unnatural (not to mention almost impossible) to make this an anechoic chamber, devoid of any reflections, but at the same time the area needs to be controllable when we record. Most of us don't have the luxury of a separate room for this and have to resort to other means of isolating the sound source, like the excellent Reflexion Filter made by SE Electronics. This uses a slightly novel concept in that it seeks to stop the sound getting out into the room in the first place, so it never causes a problem with reflections. Failing this, a duvet fixed to a wall is often a good stopgap and the favourite of many a musician on a tight budget.
Time for Reflection
Every room has a natural ambience or reverb, and it should be pointed out at this stage that it is not our aim to destroy or take away all of this. If the control room is made too dry then there is a good chance that your mixes will have too much reverb, the opposite being true if the room is too reverberant.
The purpose of acoustic treatment is to create an even reflection time across all - or as many as possible - frequencies. It obviously helps if the natural decay time of this so called reverb isn’t too excessive in the first place.
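To get a feel for how absorption shortens that decay time, here is a minimal sketch of the classic Sabine RT60 estimate. The room dimensions and absorption coefficients below are hypothetical illustrative values, not measurements of any particular room:

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine estimate: RT60 = 0.161 * V / A, where A is the total
    absorption (surface area * absorption coefficient, summed)."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 5 x 4 x 2.5 m room (50 m3):
surfaces = [
    (45.0, 0.03),  # plastered walls (~45 m2, hard and reflective)
    (20.0, 0.03),  # plastered ceiling
    (20.0, 0.10),  # wooden floor
]
before = rt60_sabine(50.0, surfaces)
# Add 10 m2 of absorbent foam panels (coefficient ~0.8):
after = rt60_sabine(50.0, surfaces + [(10.0, 0.8)])
print(f"RT60 before treatment: {before:.2f} s, after: {after:.2f} s")
```

Even this rough back-of-an-envelope calculation shows why a bare, hard-surfaced room rings on for seconds while a modest amount of treatment brings the decay down to something workable.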
Higher frequency reflections, particularly from hard surfaces, need to be addressed as they tend to distort the stereo image, while lower frequency echoes, usually caused by standing waves, often accent certain bass notes or make others seem to disappear. High frequency "flutter echoes", as they are known, can often be lessened by damping the areas between parallel walls. A square room is the hardest to treat for this reason, which is why you generally see lots of angles, panels and edges in control room designs. Parallel walls accentuate any problems due to the sound waves bouncing backwards and forwards in a uniform pattern.
Fig 3. A graph showing different standing waves in a room
Standing, or stationary, waves occur when sound waves remain in a constant position. They arise when half the wavelength (or a whole-number multiple of it) fits exactly between two surfaces of the room. At those frequencies you will hear an increase in volume at some positions in the room and a marked decrease at others. They tend to affect the low end or bass (because of the magnitude of the wavelengths involved). For this reason they are the hardest problem to sort out, and, because of the amount of absorption and diffusion needed, generally the costliest. An example helps.
Suppose that the distance between two parallel walls is 4m. Half the wavelength (about 4m) of a note of 42.5 Hz (coincidentally around the pitch of the lowest note of a standard bass guitar - an open 'E') will fit exactly between these surfaces. As it reflects back and forth, the high and low pressure between the surfaces will stay constant - high pressure near the surfaces, low pressure halfway between. The room will therefore resonate at this frequency and any note of this frequency will be emphasized.
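The arithmetic above generalises neatly: the resonances between any pair of parallel surfaces sit at whole-number multiples of the speed of sound divided by twice the spacing. A quick sketch for checking your own room dimensions (the 4m spacing here matches the example above):

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def axial_modes(spacing_m, count=4):
    """First few standing-wave frequencies (Hz) between two parallel
    surfaces spacing_m apart: f_n = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2 * spacing_m) for n in range(1, count + 1)]

# 4 m walls: fundamental of ~42.9 Hz, right around a bass guitar's open E,
# with further resonances stacked at multiples above it
for n, freq in enumerate(axial_modes(4.0), start=1):
    print(f"mode {n}: {freq:.1f} Hz")
```

Plug in each of your room's three dimensions in turn and you have a shortlist of the bass notes most likely to boom or vanish at the listening position.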
Smaller rooms sound worse because the frequencies where standing waves are strong are well into the sensitive range of our hearing. Standing waves don't just happen between pairs of parallel surfaces. If you imagine a ball bouncing off all four sides of a pool table and coming back to where it started, a standing wave can easily follow this pattern in a room, or even bounce off all four walls, ceiling and floor too. Wherever there is a standing wave, there might also be a 'flutter echo'.
Next time you find yourself standing between two hard parallel surfaces, clap your hands and listen to the amazing flutter echo where all frequencies bounce repeatedly back and forth. It's not helpful either for speech or music.
Audio 4. Subtractor in Reason:
Here’s an ascending sequence created in Reason using Subtractor set to a basic sine wave. While in the listening position, play it back at normal listening level. In a good room the levels will be even, but if some notes are more pronounced or seem to disappear, this usually indicates a problem at certain frequencies in your room.
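If you don't have Reason to hand, a similar ascending test tone can be generated with a short script. This sketch uses only the Python standard library; the note range (bass guitar's low E up two octaves) and the output filename are arbitrary choices:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100
NOTE_SECONDS = 1.0

def note_freq(midi_note):
    """Equal-tempered frequency for a MIDI note number (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def render_sweep(path, start_note=28, end_note=52):
    """Write an ascending chromatic sine sequence (E1 up to E3) to a mono WAV."""
    frames = bytearray()
    for note in range(start_note, end_note + 1):
        freq = note_freq(note)
        for i in range(int(SAMPLE_RATE * NOTE_SECONDS)):
            t = i / SAMPLE_RATE
            # 10 ms fade in/out per note to avoid clicks between steps
            env = min(1.0, t / 0.01, (NOTE_SECONDS - t) / 0.01)
            sample = int(20000 * env * math.sin(2 * math.pi * freq * t))
            frames += struct.pack('<h', sample)
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit
        w.setframerate(SAMPLE_RATE)
        w.writeframes(bytes(frames))

render_sweep('room_test.wav')
```

Burn it to CD or drop it on a player, sit in the listening position, and note which steps of the sweep jump out or fall away - those are your problem frequencies.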
Sorting out sound problems comes down to finding the correct balance between the two main approaches: absorption and diffusion. While absorbers, as their name suggests, soak up part of the sound, diffusers scatter it and prevent uniform reflections from bouncing back into the room.
Absorbers tend to be made of materials such as foam or rockwool; their purpose is to soak up sound energy. Foam panels placed either side of the listening position help with mid and high frequencies, while traps positioned in corners help to contain the unwanted dispersion of bass.
Diffusers are more commonly made of wood, plastic or polystyrene. By definition they are any structure which has an irregular surface capable of scattering reflections. Diffusers also tend to work better in larger spaces and are less effective than absorbers in small rooms.
Companies such as Real Traps, Auralex and Primacoustic offer one-stop solutions to sorting out acoustic problems. Some even let you type in your room dimensions and come back with a suggested treatment package, including the best places to put it. These days I think these offer excellent value when you look at the problems they solve. What they won't give you is the sound of a high-end studio, where huge amounts of measurement and precise room tuning are required; but leaving science outside the door, they are perfect for most project studios.
The DIY approach can be viewed from two levels. The first, a stopgap, where we might just improvise and see what happens. The second, a more methodical, ‘let’s build our own acoustic treatment because we can’t afford to buy bespoke off the shelf tiles and panels’ approach.
This could simply be a case of positioning a sofa at the back of the room to act as a bass trap. Putting up shelves full of books which function admirably as diffusers. Hanging duvets from walls or placing them in corners for use as damping. I even know of one producer who used a parachute above the mixing desk to temporarily contain the sound!
Build your own acoustic treatment. I personally wouldn't favour this, as it is very time consuming and also presumes a certain level of ability in the amateur builder department. The relative cheapness of 'one solution' kits, where all the hard work is done for you, also makes me question this approach. However, there are numerous online guides for building your own acoustic panels and bass traps which can save you money.
Though speakers aren’t directly responsible for acoustic treatment their placement within an acoustic environment is essential. I’ve already suggested how we might find the optimum location in a room for the speakers; the next critical thing is to determine the distance between them. If they are placed too close together the sounds panned to the centre will appear far louder than they actually are. If they are spaced too far apart then you will instinctively turn things panned in the middle up too loud. The sound is often thin and without real definition.
Finally, speaker stands - or at least some means of isolating the speakers from the surface on which they rest - are always a good idea. The object is to prevent excessive speaker movement and solidify the bass end. MoPads or China Cones also produce great results.
The role of headphones in any home studio becomes important if you are unsure whether to trust the room sound and the monitors; in essence, they remove acoustics from the equation. Though I would never dream of using them as a replacement for loudspeakers, they are useful for giving us a second opinion. Pan placement can often be heard more easily, along with reverb and delay effects.
With only a small amount of cash and a little knowledge it is relatively easy to make vast improvements to the acoustics of a project studio. A science-free DIY approach can work surprisingly well, particularly if you use some of the practical advice available on the websites of the companies offering treatment solutions. Unfortunately, most musicians tend to neglect acoustic treatment and instead spend their money on new instruments or recording gear. When we don’t get the results we expect it is easy to blame the gear rather than look at the space in which they were recorded or mixed. Do yourself a favour - don’t be frightened, give it a go. Before you know it you’ll be hearing what’s actually there!
Gary Bromham is a writer/producer/engineer from the London area. He has worked with many well-known artists such as Sheryl Crow, Editors and Graham Coxon. His favorite Reason feature? “The emulation of the SSL 9000 K console in 'Reason' is simply amazing, the new benchmark against which all others will be judged!”