Recently upgraded to 3.0 and enjoying the "punch" I can add to my Reason tracks. I'm scratching my head, however, trying to figure out how to handle vocals now. Until now I have used Reason ReWired to Cubase: Reason for 95% of the instrumentation/music tracks and Cubase for vocals and additional mixing work. The problem is, if I fatten all my Reason audio tracks with MClass before sending them to Cubase, I won't have any sonic "space" left for vocals.
I'm trying to come up with an audio workflow that makes sense sonically, and the only thing I can come up with is:
1) Record my MIDI tracks as usual (I usually use the Cubase sequencer and Reason as a huge ReWired sound module). Using MClass at this point is fine because I'm not rendering to audio yet.
2) Use playback of the MIDI tracks as a reference to record vocals in Cubase as audio tracks. Clean them up, but otherwise leave them unprocessed.
3) Export the vocal audio tracks as .wav files, import them back into Reason, and load them up in NN-XTs.
4) In Reason, begin the process of "blending" the vocals into the mix with the other Reason tracks. Both instrument and vocal tracks will now be able to compete on equal footing for their own place in the mix.
Then, when I maximize and master the mix, I'm brick-walling everything, not just the instrumentation.
Is anybody else doing anything like this, or am I crazy? For music without vocals, none of this is an issue, but I'm wondering if anyone else finds it a challenge to use MClass in conjunction with ReWired audio tracks.
If anyone's figured out a decent way to do this I'd love to hear what works for you...