Nothing elevates a beat into a catchy, real-sounding song faster than a good vocal. But on the other hand, nothing can drag an otherwise great beat down into an amateur mess faster than a bad vocal. And sometimes, all that stands between the one and the other is mix technique.
In this video, Ryan shows us how he added vocals to his own song and made them sound every bit as polished and perfected as the instruments that make up the beat. You'll see how to make your vocals pop out of the mix, get natural-sounding tuning, and strike a balance with effects that are audible without being overpowering.
NU.F.O. stands for Newly Formed Objective, and that objective for Boston residents Moses and EP1C is to combine their past experience creating music across nearly all styles and genres into a whole new electronic animal. If you listen to their catalog of releases, you’ll find it filled with House, Drum & Bass and Electro beats with vocals at the forefront, sung by both of NU.F.O.’s members: Moses & EP1C. Their vocal style has roots in Hip Hop, Pop, Rock and even R&B, with lyrics that are both thoughtful and thought-provoking.
How do you use Reason in your music making? All of our music is written, produced, mixed and mastered in Reason. The software promotes a creative workflow through its intuitive interface and hands-on feel. In comparison to other DAWs, we have found that production in Reason just feels more organic, resulting in sounds and songs that stand out. Our identifiable sound wouldn't be possible without it.
You work a lot with vocals in your tracks, any tips for vocal production? We produce a rough version of each track to serve as a canvas for writing the vocals. Once we track them, we re-approach the song to make sure everything works together in the best way possible. Reason's tools and Rack Extensions make it easier to treat vocals more like a sequenced instrument that can be edited and effected in the same impressive ways.
How does your live set-up look? Coming from a band background, when we first set out to do this project we wanted to make sure that our live show had some form of musical performance beyond what is typically expected of EDM artists. We perform parts of each song using MIDI keyboards and controllers in addition to dual live vocals. We accomplish this by running Reason on laptops with Balance interfaces. We use our actual studio project files and remove the parts that we play live. We also control our own vocal mix by running the mics into the Balance interfaces, through Reason. The best thing about doing it this way is that all of the vocal fx, routing and automation we used in the studio version are retained and reproduced in real-time on our live vocals.
Do you have any favorite sound or patch? It's too hard to name just one since we try to use different sounds as often as possible. In general, though, our go-to synth is Thor; especially when you run it into an Etch Red with some drive gain.
What has been the best moment in your music making career thus far? The best moments we experience are those where we're on stage performing something we've worked tirelessly to create and know that in that moment we're connected with the audience in a way that is unlike anything else. It's incredibly powerful and will never get old for us.
Any words of wisdom for aspiring producers and musicians? Art doesn't have to follow rules. We believe you shouldn't be bound by the limitations of genres, styles, formulas or expectations. Write music that you want to hear, not what you think someone else wants to. In the end, your opinion is the only one that really matters.
In these Reason Tips videos Mattias gives you some valuable pointers on mixing vocals. Since vocals are often what carries the track, it's important to get them sitting right in the mix! Learn which frequencies to pay attention to, how to build a de-esser in Reason's mixer, and how to use doubling, compression and parallel processing to make sure your vocals sound great!
So you finally finished recording all your vocal tracks, but unfortunately you didn't get one take that was perfect all the way through. You're also wondering what to do about some excessive sibilance, a few popped "P"s, more than a few pitchy lines and some words that are almost too soft to be heard. Don't worry, there's hope! And hey, welcome to the world of vocal editing.
A Little History...
Since the beginning of musical performance, singers (and instrumentalists) have craved the possibility of re-singing that one "if only" note or line. You know the one: "if only I had hit that pitch, if only I had held that note out long enough, if only my voice hadn't cracked", etc. With the advent of early recording technologies, those 'if only' moments were suddenly being captured, and performers were forced to relive them forever! One such moment could ruin an entire take.
With the popularity of analog tape recording in the mid 20th century came the popularity of splice editing. Now you could record the same song two different times, and choose the first half of one take and the second half of another. Next came multitrack recording, where you didn't even have to sing the vocal with the band!
Multi track recording introduced punching in and out, which allowed re-recording of just the "if only" moments on an individual track. But more importantly as it relates to the subject at hand, multi-track recording also introduced the idea of recording more than one pass or 'take' of a lead vocal, leading to what is now known as "vocal comping". More on that in just a bit.
But before we get into the nitty-gritty, here's a brief outline of the typical vocal editing process for lead and background vocals. Of course, much of this is subject to change according to production direction, or the vocalist's skills and past experience.
Recording: This ranges from getting the first take, to punching in on a take, to recording multiple takes for comping.
Comping: Combining various takes into one final track, tweaking edits to fit, crossfading if needed.
Basic Cleaning: Listen in solo one time through. Typical tasks include removing the obvious things like talking, coughing, mouth 'noises' etc., checking all edits/crossfades, fading in/out where necessary.
Performance Correction: Timing and pitch correction takes place after you have a solid final comp track to work with.
Final Prep: this includes everything from basic compression/EQ, to de-essing, reducing breaths, filtering out Low Frequencies, etc.
Leveling: During the final mix, automating the vocal level (if needed) to sit correctly in the mix throughout the song.
Note that at many stages along the way you will be generating a new 'master vocal' file (while still holding on to the original files, just in case!). For example, let's say you record four vocal 'takes' which become the current 'masters'. Then you comp those takes together to create a new "Comp Master" vocal track, and then you tune/time the Comp Master and sometimes create a "Tuned Vocal Master" track (which is then EQ'd and compressed to within an inch of its life while simultaneously being drowned in thick, gooey FX, all before being unceremoniously dumped into what we like to call the mix).
Recording Vocals for Comping
In order to comp a vocal, you must first have multiple vocal tracks to choose from. Recording comp tracks can be slightly different from recording a 'single take' vocal. For one thing, you don't have to stop when you make a mistake — in fact, many times a performer gets some great lines shortly after making a mistake!
I tend to ask for multiple 'full top to bottom' takes from the vocalist, to preserve the performance aspects and to help keep things from getting over-analytical. Then I use comping to work around any mistakes and 'lesser' takes, choosing the best take for each line. Often the vocalist will be involved with the comping choices, so be prepared to be a good diplomat (and don't be too hard on yourself if you're comping your own vocals)!
How many tracks?
This will be different for every singer, but for comping I generally suggest recording around three to five tracks. Any less and I don't feel that I have enough choices when auditioning takes — any more and it becomes difficult to remember how the first one sounded by the time you've heard the last take.
When recording tracks that I know will be comped, I usually let the singer warm up for a few takes (while setting levels and getting a good headphone mix) until we get a 'keeper' take that is good enough to be called 'take one'. From there, simply continue recording new takes until you feel you have enough material to work with. If you find yourself on take seven or eight and you're still not even getting close, it may be time to take a break!
In Reason, when tracking vocals for future comping, you simply record each 'take' on the same track. With 'tape' recording this would erase the previous take, but with 'non-destructive' recording you are always keeping everything (with the newest take laying on 'top' of the previous take). When you enter Comp Mode, you will see each take just below the Clip Overview area (with the newest take above the older takes). The 'takes' order can easily be rearranged by dragging them up or down. Double-click on any 'take' to make it the 'active take' (it will appear in color and in the Clip Overview, and this is the take you will hear if you hit play). Now comes the fun part.
Vocal Takes in Comp Mode
To combine or 'comp' different parts of different takes together, use the Razor tool as a 'selector' for the best lines/words. After creating cut lines with the Razor tool, you can easily move them earlier or later by dragging the 'Cut Handles' left or right. You can delete any edit by deleting the Cut Handle (click on it and hit the 'delete' key). Create a crossfade by clicking/dragging just above the Cut Handle. Silence can be inserted by using the Razor to make a selection in the "Silence" row, located below the Clip Overview and above the Comp Rows.
Comping (short for compositing): picking and choosing the best bits from among multiple takes, and assembling them into one continuous 'super take'.
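Incidentally, the crossfades you drag in at comp edits are conceptually just two short gain ramps. Here's a minimal sketch in Python with NumPy (purely for illustration; this isn't Reason's actual code) of an equal-power crossfade between the end of one take and the start of another:

```python
import numpy as np

def equal_power_crossfade(take_a, take_b, fade_len):
    """Splice the end of take_a into the start of take_b,
    overlapping them for fade_len samples with an equal-power fade."""
    t = np.linspace(0.0, 1.0, fade_len)
    fade_out = np.cos(t * np.pi / 2)   # gain ramp on the outgoing take
    fade_in = np.sin(t * np.pi / 2)    # gain ramp on the incoming take
    overlap = take_a[-fade_len:] * fade_out + take_b[:fade_len] * fade_in
    return np.concatenate([take_a[:-fade_len], overlap, take_b[fade_len:]])
```

The cosine/sine pair keeps the summed power roughly constant through the fade, which is why equal-power fades tend to sound smoother than straight linear ones when the two takes aren't identical.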
Now that you have your vocal tracks recorded, how do you know which parts to use? I've approached this process differently through the years. Previously, I'd listen to each take in its entirety, making arcane notes on a lyric sheet along the way — this was how others were doing it at the time that I was learning the ropes. More recently I've taken another approach that makes more sense to me and seems to produce quicker, smoother, and better comps.
Currently, my auditioning/selection process consists of listening to one line at a time, quickly switching between the different takes and not stopping for discussion or comments. This is the basic technique you will see me demonstrate in our first video (see below).
Now it's time for a little thing I like to call a Video Detour. Enjoy De-tour (a-hem). Follow along in this 'made for internet' production as I comp the first verse of our demo song "It's Too Late" (by singer/songwriter Trevor Price).
Note: watch your playback volume - the music at the top comes in soft, but it gets louder when the vocals are being auditioned.
Comping a Vocal using Reason's "Comp Mode"
The three most common issues with vocals are pitch, timing, and level/volume. All three are easy to correct with today's digital tools and just a little bit of knowledge on your part.
After comping, I usually move on to correcting any timing issues. You may also jump straight into dealing with any tuning issues if you prefer. Oftentimes there isn't a lot of timing work that needs to be done on a lead vocal. But when you start stacking background vocals (BGVs), things can get 'messy' very quickly. User discretion is advised.
In our next video example (it's coming, I promise), I will show you how to line up a harmony vocal track with the lead vocal. I will use the lead vocal as the timing reference, moving the harmony track to match the lead. Since you can only see one track at a time when editing, I use the playback cursor (Song Position Pointer in Reason) to 'mark' the lead vocal's timing, and then when editing the harmony track I use this reference point to line it up with the lead vocal.
I will also use the following editing techniques:
Trim Edit, where you simply trim either end of a selected clip to be shorter or longer as desired, which will expose or hide more or less of the original recording that is inside the clip.
Time Stretch (called Tempo Scaling in Reason), where you use a modifier key [Ctrl](Win) or [Opt](Mac) when trimming an audio clip, allowing you to stretch or shrink any clip (audio, automation, or MIDI) which changes the actual length of the audio within the clip.
Clip Sliding (my term), where (in Comp Edit mode) you use the Razor to isolate a word or phrase, and you slide just that clip right or left to align it - using this technique allows you to slide audio forward or backwards in time without leaving any gaps between the clips!
OK, thanks for waiting - here's the video:
Possibly an entire subject in itself, as everyone has their own take on vocal tuning. Of course, it's always best to 'get it right the first time' if you can. But sometimes you are forced to choose between an initial performance that is emotionally awesome (but may have a few timing or pitch flaws), and one that was worked to death (but is perfect in regards to pitch and timing). If only you could use the first take with all its emotion and energy. Well now you can!
Neptune Pitch Adjuster on the lead vocal
In Reason, using Neptune to naturally correct minor pitch issues is about as simple as it gets. The following video demonstrates using Neptune for simple pitch correction, as well as using it in a few more advanced situations.
Vocal "Rides" (as they are called for 'riding the fader/gain'), have been common from almost the beginning of recording itself. In rare cases, you may have to actually ride the vocal while recording the vocal(!) - this is the way it was done back with ‘direct to disk' and ‘direct to two-track' recordings. But luckily you can now do these ‘rides' after the vocal is recorded, or you can even draw in these moves with a mouse (with painstaking detail, if you are so inclined). Most of the time I use a combination of both techniques.
The basic idea with vocal rides is to smooth out the overall vocal level by turning up the soft parts and turning down the loud parts (in relation to the overall mix). The end game is to get the vocal to sit ‘evenly' at every point in the song, in a way that is meaningful to you. Or as I like to say, to get the vocal to ride ON the musical wave, occasionally getting some air but never diving too far under the musical water.
Great engineers learn the song line by line and ‘perform' precision fader moves with the sensitivity and emotion of a concert violinist. It really can be a thing of beauty to watch, in an audio-geeky sort of way. For the rest of us, just use your ears, take your time, and do your best (you'll get better!).
There's no right or wrong way to edit vocal levels, only a few simple rules to follow: Obviously, you don't want to ever make an abrupt level change during a vocal (but you can have somewhat abrupt automation changes between words/lines), and you don't want to be able to actually hear any changes that are being made. All level rides should ideally sound natural in the end.
As for techniques, there are three approaches you can take in Reason. The most familiar is probably Fader Automation, which can be recorded in real-time as you 'ride' the fader. You can also draw in these moves by hand if you prefer. Additionally, you can do what I call "Clip Automation", which involves using the Razor to create new clips on any word, breath or even an "S" that is too loud or too soft. Since each separate clip has its own level, you simply use the Clip Level control to make your vocal 'ride'. Alternatively, you can use the clip inspector to enter a precise numeric value, increase/decrease the level gradually in a 'fine tune' way, or simultaneously control a selection of clips (even forcing them all to the same level if desired).
The ‘pros' to Clip Automation are that it is fast, you can see the waveform change with level changes, you can see the change in decibels, and you can adjust multiple clips at once. The main con is that you can't draw a curve of any sort, so each clip will be at a static level. All I know is it's good to have options, and there's a time and place for each technique!
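Under the hood, a static clip level is nothing more than a single multiplier derived from a dB value. This little Python sketch (illustrative only, not Reason's implementation) shows the math behind a Clip Automation move such as pulling an "S" down by 6 dB:

```python
import numpy as np

def apply_clip_gain(clip, gain_db):
    """Scale an audio clip by a static gain in decibels,
    like a per-clip level control: amplitude = 10^(dB/20)."""
    return clip * (10.0 ** (gain_db / 20.0))

# A hypothetical over-loud "S" sound, pulled down by 6 dB
ess = np.array([0.8, -0.8, 0.6])
quieter = apply_clip_gain(ess, -6.0)  # roughly halves the amplitude
```

Because the whole clip gets one static multiplier, the waveform display can simply redraw at the new size - which is exactly the 'pro' (visible change) and the 'con' (no curves) described above.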
Using "Clip Automation" to reduce multiple "S"s on a Vocal Track
As a 'fader jockey' myself, I prefer to begin vocal rides with a fader (real or on-screen). From there I'll go into the automation track to make some tweaks, or to perform more 'surgical' nips and tucks (if needed) on the vocal track. It's these smaller, shorter-duration level changes that are more often best created with a mouse rather than a fader. Reducing the level of a breath or an "S" sound comes to mind as a good example of a 'precision' level change that benefits from being drawn by hand.
Vocal Track with Level Automation (with the first clip ready for editing)
Leveling the vocal must ultimately be done in context, which means while listening to the final mix that the vocal is supposed to be 'sitting' in (or the 'bed' it is supposed to 'lay' on, or choose your own analogy!). This is because you are ultimately trying to adjust the vocal level so that it 'rides' smoothly 'on' the music track at all times (OK, so I'm apparently going with a railroad analogy for now), which doesn't necessarily imply that it should sit at a static level throughout the song.
You would think that a compressor would be great at totally leveling a vocal, but it can only go so far. A compressor can and will control the level of a vocal above a certain threshold, but this doesn't necessarily translate into a vocal that will sit evenly throughout a dynamic mix. Speaking of compression, this is probably a good time to mention that all processing (especially dynamics) should be in place before beginning the vocal riding process, as changing any of these can change the overall vocal level (as well as the level of some lines in relation to others). Bottom line - do your final vocal rides (IF needed) last in the mixing process.
Let's begin - set your monitors to a moderate level and prepare to focus on the vocal in the mix. Oftentimes I prefer smaller monitors or even mono monitoring for performing vocal rides - you gotta get into the vocal 'vibe' however you can.
Things to look for:
Before you get into any actual detail work, listen to the overall vocal level in the mix throughout the entire song. Sometimes you will have a first verse where the vocal may actually be too loud, or a final chorus that totally swallows up the vocal. Fix these 'big picture' issues first before moving on to riding individual lines and words.
When actually recording the fader moves (as in the video), I'll push the fader up or down for a certain word and then I will want it to quickly jump back to the original level. In the "Levels" video, you will see me hit 'Stop' to get the fader level to jump back to where it was before punching in. The reason I do it this way is that if you simply punch out (without stopping), the fader won't return to its original level (even though it's recording correctly). Long story short, it's the quickest way I've found to create my desired workflow, and it works for me (although it may look a bit weird at first)!
Oftentimes you will find that it is the last word or two in a line that needs to be ridden up in level (sometimes the singer has run low on air by the end of a line). Also watch for the lowest notes in a vocal melody - low notes require more 'air' to make them as loud as the higher notes, so they tend to be the quieter notes in a vocal track. Another thing to listen for is any louder instrument that may 'mask' the vocal at any time - sometimes the fix is to raise the vocal, other times you can get better results by lowering the conflicting instrument's level momentarily. In extreme cases, a combination of both may be required!
Other problems that rear their heads from time to time are sibilance, plosives, and other 'mouth noises'. These can all be addressed with creative level automation, or with a device designed specifically for each issue: a 'de-esser' for sibilance or a high-pass filter for plosives, for example.
Now, enjoy a short video interlude demonstrating the various techniques for vocal level correction, including the fader technique as well as automation techniques including break-point editing, individual clip level adjustments, and some basic dynamic level control concepts including de-essing and multi-band compression.
Controlling Vocal Levels in Reason.
Multi-bands for Multi Processes
I will leave you with one final tip: you can use a multi-band compressor on a vocal track to deal with multiple issues at once. The high band is good for a bit of 'de-essing', the mid band can be set as a 'smoother' to only reduce gain when the singer gets overly harsh sounding or 'edgy', and the lower band can be used to simply smooth the overall level of the 'body' of the vocal. If there are four bands available, you can turn the level of the bottom-most band totally off, thus replicating a high-pass filter for 'de-popping' etc. Additionally, adjusting the level of each band will act like a broad EQ!
Setting the crossover frequencies with this setup becomes more important than ever, so take care and take your time. Remember you are actually doing (at least) four different processes within a single device, so pay attention not only to each process on its own but to the overall process as a whole. When it works, this can be the only processor you need on the vocal track.
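To make the idea concrete, here's an illustrative Python/NumPy sketch of the 'broad EQ' aspect of this setup: the signal is split at crossover frequencies and a static gain is applied per band. (A real multi-band compressor computes each band's gain dynamically from its envelope; the crossover values and gains below are made-up examples, not settings from any particular device.)

```python
import numpy as np

def multiband_gains(signal, sample_rate, crossovers, band_gains_db):
    """Split a signal at the given crossover frequencies (Hz) and apply
    a static gain (dB) per band. Static gains give the 'broad EQ'
    behaviour; a compressor would vary them with the band's level."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    edges = [0.0] + list(crossovers) + [float(sample_rate)]
    out = np.zeros_like(spectrum)
    for (lo, hi), gain_db in zip(zip(edges[:-1], edges[1:]), band_gains_db):
        band = (freqs >= lo) & (freqs < hi)
        out[band] = spectrum[band] * 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(out, n=len(signal))
```

Setting the lowest band's gain to something like -120 dB effectively mutes it, which is the 'high-pass for de-popping' trick mentioned above.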
Multi-band Compressor as 'Multi Processor'
...all of the techniques in this article, however helpful they can be, are not always required - do I even need to remind you all to 'use your ears' at all times? Using vocal rides as an example, I've mixed two songs in a row (by the same artist), one where the vocal automation looked like a city skyline and the very next mix where the vocal needed no automation whatsoever!
As always; "listen twice, automate once"!
Thanks to Annex Recording and Trevor Price (singer/songwriter) for the use of the audio tracks.
Giles Reaves is an Audio Illusionist and Musical Technologist currently splitting his time between the mountains of Salt Lake City and the valleys of Nashville. Info at http://web.mac.com/gilesreaves/Giles_Reaves_Music/Home.html and on AllMusic.com by searching for “Giles Reaves” and following the first FIVE entries (for spelling...).
Performing a lead vocal is arguably the toughest job in the recording studio, which puts all the more emphasis on capturing and recording the vocal performance as perfectly as possible. Vocalists often tire easily, and generally their early takes tend to be the best (before the thinking and over-analyzing take over!).
Usually, and in a very short space of time, an engineer has to decide which mic and signal path (preamp, compressor, EQ, etc.) to use, set the correct recording level and headphone balance, create the right atmosphere for singing, and generally be subjected to, at best, minor grunts and, at worst, verbal abuse until the penny drops! Vocalists are a sensitive bunch and need nurturing, cuddling and whatever else it takes to make them feel like a superstar!
During this article I shall attempt to set out a strategy for accomplishing these goals and maybe throw in a tip or two I’ve picked up along the way to assist in capturing the perfect take.
Selecting a Microphone
Microphones come in all shapes and sizes but a basic understanding of how they work will help in any assessment of which one we choose.
All microphones work in a similar way. They have a diaphragm (or ribbon), which responds to changes in air pressure. This movement or vibration is in turn converted into an electrical signal, which is amplified to produce a sound. This is very simplistic but essentially the basic science behind making a sound with a mic.
There are three main types of microphone to choose from: dynamic, ribbon and condenser.
Dynamic microphones are generally used for close-miking purposes such as drums or guitar cabinets. Their sound is usually more mid-range focused and they can cope with higher sound pressure levels (SPLs).
Condenser, or capacitor mics, as they are also known, are more sensitive to sound pressure changes. They also tend to have a greater frequency response and dynamic range than dynamic mics. For this reason they tend to be the de facto choice for vocals. Condenser microphones require a power source, called phantom power, to function; it powers the built-in preamplifier and also polarizes (powers) the capsule. However, a condenser may not always be the choice. Bono from U2, for example, likes to use a Shure SM-58 dynamic mic as it allows him more freedom to move around and perform the vocal as if in a live environment! A condenser mic, due to its sensitivity, might not tolerate being held in the hand because of handling noise, or it may simply have too great a frequency range!
Ribbon mics are the curveball here, so to speak, as they are often richer in tone than both of the alternatives. They are softer and more subtle; they tend not to have the hyped top end of condenser mics, and unlike dynamic mics they are very sensitive to high SPLs. For this reason they have to be treated with care, as the ribbon will not tolerate excessive movement from either loud sound sources or being thrown around! They also generally have a much lower output level than the other two, and subsequently need more gain from a preamp.
Here’s a short snippet of a vocal I’m currently working on recorded with a Shure SM58:
The same recorded with a Neumann U-87:
And with a Coles 4038 ribbon mic:
When comparing models there are a number of important specifications we need to consider.
Frequency Response
When we look at the frequency response of the vocal mic we select, will it sound flat and natural, or will it boost certain frequencies? It is often preferable, particularly with vocals, for a microphone to enhance or accentuate certain frequencies which suit a particular singer. Check out www.microphone-data.com for a detailed look at different mics' specs. I love this site and spend hours trawling through the pages… does this mean I’m sad and need to get a life?
Sound Pressure Level (SPL)
How much dynamic range or level can the microphone cope with? This is the difference between the maximum sound pressure level and the noise floor, or in basic terms, the range of usable volume without distortion at high levels or noise at low levels. Dynamic mics are generally much better at dealing with loud source material than the condenser or ribbon variety.
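In spec-sheet terms, the usable dynamic range is simply the maximum SPL minus the self-noise. A one-line sketch (the figures below are made-up example specs, not from any particular microphone):

```python
def dynamic_range_db(max_spl_db, noise_floor_db):
    """Usable dynamic range: the loudest level a mic can handle
    minus the noise it generates by itself (both in dB SPL)."""
    return max_spl_db - noise_floor_db

# Hypothetical condenser: 130 dB max SPL, 14 dB self-noise
print(dynamic_range_db(130, 14))  # 116 dB of usable range
```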
Noise Floor or Noise level
How loud is the background noise created by the microphone itself? Obviously this is less of an issue for someone who sings rock music than for somebody who sings ambient jazz. As a rule of thumb, capacitor mics are more adept at capturing subtleties and nuances than dynamic mics.
Sensitivity
Scientifically, this is a measure of how efficiently a microphone converts sound pressure changes into electrical signals. Basically, this is how loud the microphone is capable of being. Remember I mentioned earlier that ribbon mics require a preamplifier with lots of gain to get the correct level to feed the mixer or recorder (in our case, Reason).
Don’t worry…nothing to do with global warming!
Our final consideration in choosing a microphone is the pickup pattern or, as it is more commonly known, the polar pattern. On a circular graph, this is a representation of the direction(s) from which a mic picks up sound. The diagram illustrates how this works.
Fig 2. The diagram shows three basic polar patterns. All other patterns are variations of these. The blue circle is an omni pattern, the red circles show a figure of eight and the green line shows the cardioid.
There are essentially three basic patterns for us to consider when understanding where the mic will pickup sound:
Omni-directional
As its name suggests, the microphone will pick up sound equally from all directions. Useful if you want to record all the ambience or space around the source.
Cardioid
Otherwise known as (and as its name suggests) 'heart-shaped'. Picks up the source mainly from the front, while rejecting most sound from the sides and rear. The advantage here is that the microphone captures only the source it is pointing at. Hyper-cardioid is similar and often cast in the same category; it simply has a narrower field of pickup than the normal cardioid, and is very well suited for singers where more isolation is required or where feedback is a problem.
Bi-directional
Also known as 'figure of eight'. Here sound is picked up equally from the front and rear, whilst signal from the sides is rejected.
Generally speaking, there is no rule as to which type of microphone or which pattern we should use when recording vocals, although most engineers tend to veer towards a condenser mic and use a cardioid pattern. There is a good argument that an omni pattern is the ultimate setting, but this poses a further question relating to the recording environment, which I shall touch on later in this article.
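For the mathematically curious, the three basic patterns have simple forms: omni is a constant, figure-of-eight is cos(θ), and cardioid is their average. This small Python sketch of the relative pickup gain at angle θ (0° = directly in front) is purely illustrative; real microphones only approximate these ideal curves:

```python
import numpy as np

def pattern_gain(pattern, theta_deg):
    """Ideal relative pickup gain at angle theta_deg off-axis
    for the three basic polar patterns."""
    theta = np.radians(theta_deg)
    if pattern == "omni":
        return 1.0                            # equal pickup everywhere
    if pattern == "cardioid":
        return 0.5 * (1.0 + np.cos(theta))    # full front, null at the rear
    if pattern == "figure8":
        return np.cos(theta)                  # front and rear (rear in opposite
                                              # polarity), nulls at the sides
    raise ValueError("unknown pattern: %s" % pattern)
```

Note the figure-of-eight's negative rear lobe: sound arriving from behind is picked up at full strength but with inverted polarity, which is why the pattern rejects only the sides.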
In summary our checklist when choosing a suitable microphone should look something like this:
Consider the frequency response. Is it flat and will it therefore produce a more natural result or does it boost particular frequencies and thus enhance our vocal sound?
Check the polar pattern. Does it have the pick up pattern we require?
Check the sensitivity. How much gain will we need on our preamp to get the required level for recording?
Check the dynamic range and the noise level. Can the mic handle the softest and loudest levels for capturing the vocal performance?
Practical tips and further considerations
It is generally beneficial to use a shock mount when recording vocals. This prevents low-frequency vibrations (from the floor and mic stand) from reaching the microphone.
Fig 3. High-pass filter in the Reason EQ section, set to take out any unwanted noises below 80Hz.
It is also a good idea to use a pop shield. When a singer stands too close to the microphone, sudden puffs of air from 'p' and 'f' sounds produce unwanted noises. Pop filters can be bought ready-made to prevent this, but the more resourceful amongst us have sometimes resorted to the DIY approach: stealing a pair of our wife's (or husband's!) stockings and stretching them over a wire coat hanger to achieve a similar result.
To help with both of the above problems we might also try a high-pass filter. If the microphone does not have a dedicated switch, we can use the filter in the EQ section, such as the one in the channel strip in Reason.
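Conceptually, that 80Hz low-cut can be sketched as a simple one-pole high-pass filter. The Python version below is illustrative only (channel-strip filters are usually steeper, 12dB/octave or more, and this is of course not Reason's actual implementation):

```python
import numpy as np

def highpass(signal, sample_rate, cutoff_hz=80.0):
    """One-pole (6 dB/octave) high-pass filter: removes rumble and DC
    below roughly cutoff_hz while letting the vocal range through."""
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = np.zeros_like(signal, dtype=float)
    prev_in = signal[0]
    for i in range(1, len(signal)):
        # Standard discrete RC high-pass: y[i] = a*(y[i-1] + x[i] - x[i-1])
        out[i] = alpha * (out[i - 1] + signal[i] - prev_in)
        prev_in = signal[i]
    return out
```

Run on a vocal, this passes everything well above the cutoff almost untouched while steadily attenuating stand rumble and plosive thumps below it.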
The Proximity Effect
Nothing to do with over-use of garlic in cooking and need for breath fresheners!
As we get nearer to or further from a microphone, the bass frequencies increase or decrease accordingly. Typically, a cardioid microphone will boost or cut frequencies around 100Hz by as much as 10-15dB as we move from 25cm to 5cm and back again. This phenomenon is known as the 'proximity effect'. We can use it to our advantage if the singer's mic technique is good, producing a richer, deeper and often more powerful sound that pop or rock singers really like! Radio DJs and announcers have been using this technique for years, particularly on the late night luuuurrrrv show!
Unfortunately the use of this effect requires the vocalist to maintain a fairly consistent distance from the mic and for this reason it is often more desirable to select a mic which has less of this proximity effect. As a generalisation condenser microphones are better at this than dynamic mics. Here are a few audio examples demonstrating this principle.
Vocal recorded using a Neumann U-87 recorded at a distance of 3cm:
At a distance of 18cm:
Finally at 60cm:
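The 10-15dB figure quoted above can be sanity-checked with the textbook first-order model of a pressure-gradient capsule near a point source. This is a simplified approximation, real microphones vary considerably, but it lands in the same range:

```python
import math

def proximity_boost_db(freq_hz, dist_m, c=343.0):
    """Low-frequency boost of an idealised pressure-gradient
    (cardioid-style) capsule near a point source:
    10*log10(1 + (c / (2*pi*f*r))**2), with c the speed of sound."""
    kr = 2 * math.pi * freq_hz * dist_m / c
    return 10 * math.log10(1 + 1 / (kr * kr))

near = proximity_boost_db(100, 0.05)   # 5 cm from the source
far = proximity_boost_db(100, 0.25)    # 25 cm from the source
print(f"100 Hz boost at 5 cm:  {near:.1f} dB")
print(f"100 Hz boost at 25 cm: {far:.1f} dB")
print(f"difference: {near - far:.1f} dB")  # in the 10-15 dB range
```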
The Tube Effect
A ‘tube’ or ‘valve’ microphone uses a valve rather than a conventional solid-state (usually FET) circuit as its preamplifier stage. Most early condenser microphones, such as the Neumann U-47 or the AKG C-12, employed this circuitry, at least until transistor designs took over. The tonal characteristic is often warmer and more pleasing to the ear; the sound is, however, coloured and not suitable for every singer. In reality the tube is adding a small amount of distortion, and if overused it can sound muddy and unfocused!
The sound of the classic AKG C-12 Valve microphone:
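That 'small level of distortion' can be mimicked crudely with a soft-clipping waveshaper. The tanh sketch below is only an illustration; a real valve circuit has a more complex, often asymmetric transfer curve:

```python
import math

def tube_saturate(samples, drive=2.0):
    """Waveshape each sample through tanh. Quiet material stays
    roughly linear (gain approximately equal to `drive`), while
    peaks are progressively flattened, keeping the output bounded
    within (-1, 1)."""
    return [math.tanh(drive * x) for x in samples]

# Peaks are rounded off rather than hard-clipped; zero stays zero.
print(tube_saturate([0.0, 0.1, 0.5, 1.0]))
```

Pushed gently this rounds transients pleasantly; pushed hard it piles up distortion, which is exactly the 'muddy and unfocused' trap described above.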
Something people often overlook and underestimate is the effect of the room or recording environment on the sound of the vocal. You can be using the best microphone in the world and still obtain an awful sound if the acoustic space is reflective or badly treated. To an extent our ears are able to block out or ignore deficient room acoustics, whereas the microphone only records what’s there! An omni-directional microphone will accentuate this, whereas a hyper-cardioid will, to an extent, minimise it. Unfortunately, not all of us are blessed with perfect recording environments all of the time and often have to adapt or improvise with the conditions we have.
Reflection filters have become very trendy these days with the advent of bedroom studios. The sE Reflexion Filter is one I use personally at home and highly recommend. Failing that, duvets, carpets or anything absorbent will help alleviate the situation. In summary, it is often not your mic but the environment in which it is being used that is the real problem. Save some money, get down to Ikea and buy a couple of new duvets before spending another $1000 on a bespoke microphone!
The headphone mix is, after microphone selection, probably the most critical part of recording a vocal. We can save ourselves hours, not to mention several tantrums, if the balance is good for our singer. Some vocalists like to sing with one side of their headphones off; in my experience this can be an indication that their headphone mix isn’t quite right. Singers also tend to have their ‘cans’ too loud because they believe they can’t hear themselves, when in reality they are not relaxed or are hearing themselves incorrectly. Flat pitching can be a telltale sign of headphones being too loud, the reverse being true when the singer is performing sharp! (I should point out at this stage that ‘cans’ is on the list of banned words in certain studios!)
It is often a good idea to set up a separate headphone mix, or ‘cue mix’ as it is sometimes called, for the singer. This is useful for a number of reasons. If we want to tweak the vocal whilst the singer is singing, but without them hearing us do so, we need a balance separate from the one we are listening to in the control room. For this we simply use one of the sends in the Reason mixer. We route the output from one of the sends in the Master section to the hardware interface, which in turn feeds the singer’s headphones via dedicated outputs on our soundcard. Provided we have the Pre button engaged, to the left of the Send knob, the vocalist will still hear only their individual cue mix even when we solo a channel in the mixer.
Fig 4. Send 8 being used as a cue send for the vocalist’s headphone mix. Note the send is pre-fade, so that whatever changes we make to the channel (level, solo, mute, etc.) do not affect what is heard in the cans. Also note that in the Master section we can monitor the FX send via the Control Room Out.
Fig 5. The back of the Master section, where Send 8 is routed to Outputs 3-4 of our soundcard via Reason’s hardware interface.
Time is of the essence
Capture that take before it’s too late! Singers have a tendency to over-analyse or be over-critical of their performance. Often the first things they sing, before the inhibitions set in, are the best things they sing. My strategy is to record everything. It is, after all, much easier to repair a less-than-perfect vocal sonically, even if the compressor and pre-amp weren’t set up perfectly, than it is to get the vocalist to repeat that amazing performance.
Singers often like to sing with reverb. This is fine in itself, but not at the expense of pitching and timing; it’s harder to find the pitch of a note if all you’re hearing is a wash of reflective sound. Isn’t that what we spent all that time trying to eliminate when we treated the recording environment? Not exactly; sometimes vibe is an important factor, but there is a happy medium here!
Selecting a preamplifier, and if necessary a compressor, is often almost as difficult as choosing the right mic. Whether you are using a Neve 1073 or an Apogee Duet as your preamplifier, the principle of setting up and recording a vocal is the same: increase the gain on the pre-amp until you start to hear a small amount of audible distortion or see slight clipping, then reduce the level by 5-10dB.
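That 5-10dB safety margin is easy to verify after a test take. This small helper (the function name is my own, not part of Reason or any interface's API) measures how far a recorded peak sits below digital full scale:

```python
import math

def headroom_db(samples, full_scale=1.0):
    """Distance in dB between the loudest recorded peak and clipping."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(full_scale / peak)

# A test take peaking at about 0.4 of full scale sits ~8dB below
# clipping -- comfortably inside the suggested margin.
take = [0.1, -0.25, 0.4, -0.32]
print(f"headroom: {headroom_db(take):.1f} dB")
```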
I have often heard it said that if we select a valve microphone for our vocal then the preamp and compressor might be better suited to being solid-state. Too many valves in the chain can often add too much colour! Personally, in my chain, I like a valve mic with solid-state preamp followed by a valve compressor.
We only really need a compressor (or more likely a limiter) if we have a particularly dynamic vocalist. An LA-2A opto or 1176 FET style is ideal if the budget allows! Be very sparing, though: compression set incorrectly cannot be undone once recorded! The M-Class compressor can be set up to behave subtly, controlling level fluctuations without squashing the sound.
Fig 6. A typical compressor set-up for recording vocals. Moderate attack and release settings ensure a relatively inaudible effect on the input when recording a vocal.
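What such a gentle setting does to level can be sketched as a static gain curve. The threshold and ratio below are illustrative values of my own, not a prescription for the M-Class:

```python
def compressed_db(level_db, threshold_db=-18.0, ratio=2.0):
    """Downward compressor gain computer: below the threshold the
    signal is untouched; above it, output rises only 1 dB for
    every `ratio` dB of input."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A 6 dB jump above threshold comes out as only 3 dB at 2:1 --
# fluctuations are controlled, not squashed.
print(compressed_db(-12.0))  # -15.0
```

Higher ratios flatten the curve further toward limiting, which is why a subtle ratio with moderate attack and release stays inaudible while still taming the loud phrases.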
The Hardness Factor!
An interesting exercise when evaluating different microphones for different vocalists is to rate them on a hardness scale of 1-10. A Shure SM-58 dynamic mic might get an 8, while a Rode NT2 condenser might get a 4. When selecting the microphone, we give our singer a rating as well: a hard-sounding voice gets a softer microphone, whilst a more subtle vocal may require a harder-sounding mic.
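The pairing rule can even be written down as a tiny sketch. The ratings below are the hypothetical 1-10 scores described above, and the 'aim for a balanced total' heuristic is my own reading of the rule; rate your own mic locker by ear:

```python
# Hypothetical hardness ratings (1 = softest, 10 = hardest) --
# illustrative numbers only, not measurements.
MICS = {"Shure SM-58": 8, "Rode NT2": 4, "Neumann U-87": 5}

def pick_mic(voice_hardness, mics=MICS):
    """Hard voice -> soft mic and vice versa: choose the mic whose
    rating best complements the voice toward a balanced total."""
    return min(mics, key=lambda name: abs(voice_hardness + mics[name] - 11))

print(pick_mic(8))  # a hard voice pairs with the softest mic here
print(pick_mic(3))  # a soft voice pairs with the hardest
```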
Whilst writing this article I have been conscious of not being too preachy! These are only guidelines to recording a vocal and often the great thing with recording is breaking rules. A basic understanding of how the microphone works is helpful but the single most important thing is getting the atmosphere right for the vocal to happen in the first place. Many great vocal performances have been captured with strange microphone selections and singers insisting on stripping off in the vocal booth to get the vibe! Don’t question it, just remember…record everything!
Gary Bromham is a writer/producer/engineer from the London area. He has worked with many well-known artists such as Sheryl Crow, Editors and Graham Coxon. His favorite Reason feature? “The emulation of the SSL 9000 K console in 'Reason' is simply amazing, the new benchmark against which all others will be judged!”