TL;DR: A Combinator-like device which turns a monophonic chain polyphonic.
EDIT: I made an illustration to... well, illustrate. I'll post it here first, and JP probably wants it in the MoM thread.
EDIT AGAIN: Moved the illustration to the top of the post so as to spare viewers the wall of text. The original, more detailed post is down below.
EDIT ONCE AGAIN (2013-03-10): Reuploaded the illustration for original size.
Here's my idea (or at least one of them - this is my most parsimonious one) for polyphonic modular synthesis in Reason. Well, it's not exactly modular synthesis - in fact it's pretty far from it - but it's modular in the sense that Reason itself is kind of modular. You'll get it. Much can be guessed from the name alone.
I call it the Polyphonizer, and it's a wrapper/container/nesting/whatever device like the Combinator, except you can put this one inside Combinators. To make it work visually, I propose that its device section have thin sheet edges with flaps that extend over the Combinator's sides.
Other than the pitch and mod wheels and the patch window, the device itself has only a voice count dial and a button for showing or hiding the device section. On the back are CV inputs for the wheels, a main output and a "from devices" input.
This is how the thing works: Everything inside the device section is "cloned" in real time a number of times matching the Polyphonizer's voice count. These "clones", or "voices" as one could call them, are identical in configuration and out of sight; what you edit in the rack is a "prototype voice". MIDI notes sent to the Polyphonizer are allocated to the appropriate clone, and presto - the whole chain becomes polyphonic. Any automation on any device in the prototype voice is automatically reflected in all the clone voices.
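To make the cloning-and-allocation idea concrete, here's a minimal sketch in Python. None of this is a real Reason API - `PrototypeVoice`, `Polyphonizer` and the voice-stealing policy are all illustrative assumptions about how such a device could allocate incoming notes to clones:

```python
# Hypothetical sketch of the Polyphonizer's voice allocation.
# All class and method names are made up for illustration.
import copy

class PrototypeVoice:
    """Stands in for the device chain built inside the Polyphonizer."""
    def __init__(self):
        self.note = None  # currently held MIDI note, or None if free

    def note_on(self, note):
        self.note = note

    def note_off(self):
        self.note = None

class Polyphonizer:
    def __init__(self, prototype, voice_count):
        # Each "clone" is an identical copy of the prototype chain,
        # created up front according to the voice count dial.
        self.voices = [copy.deepcopy(prototype) for _ in range(voice_count)]

    def note_on(self, note):
        # Allocate the incoming note to the first free clone.
        for v in self.voices:
            if v.note is None:
                v.note_on(note)
                return v
        # No free clone: steal the oldest voice (one possible policy).
        stolen = self.voices.pop(0)
        stolen.note_on(note)
        self.voices.append(stolen)
        return stolen

    def note_off(self, note):
        for v in self.voices:
            if v.note == note:
                v.note_off()
```

The key point the sketch shows is that there's only one prototype to edit; the clones are copies the user never touches directly, which is also why automation can simply be broadcast to all of them.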
All the instruments inside this wrapper should probably be monophonic by default. Audio routed into the prototype voice from outside the Polyphonizer is distributed to each voice; audio going from within to the outside is summed at that point. CV from outside is handled the same way as audio (external CV will probably be the only practical way of getting the same effect on all voices, since internal LFOs will tend to desynchronize, and random modulators definitely will). The tricky part is CV going from within to outside (I'd go with a latest-voice basis, but one could argue for other ways). I still haven't figured out what the meters and indicators of devices inside the prototype voice should show.
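The fan-out/sum routing above can be sketched in a few lines. Again, this is a hypothetical illustration, not anything from Reason - `process_voice` is just a placeholder (a plain gain stage) for whatever the cloned chain actually does to the signal:

```python
# Hypothetical sketch of the routing rules: external audio/CV fans out
# to every clone, and the clones' outputs are summed back to one stream.

def fan_out(external_sample, voices):
    # Audio or CV coming from outside the Polyphonizer is copied to each voice.
    return [external_sample for _ in voices]

def process_voice(voice_gain, sample):
    # Placeholder for the per-voice device chain (here: just a gain stage).
    return voice_gain * sample

def sum_voices(per_voice_samples):
    # Audio leaving the Polyphonizer is mixed down into one signal.
    return sum(per_voice_samples)
```

Internal-to-external CV doesn't fit this summing model, which is why something like a latest-voice rule (forward only the most recently triggered voice's CV) has to be picked instead.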
So, this thing would basically make everything that can already be done monophonically possible polyphonically. We could use distortion devices as shapers as well as using the shapers on Thor and Malström as stand-alones, we would get polyphonic chorus and unison, we could create voice-independent gated effects and keyboard-tracking EQ, we could pan voices individually, create endless chains of stand-alone filters with envelopes, keyboard tracking and filter modulation, and much more. And it would all be forward-compatible.
Wait, isn't this wasteful and inefficient??
I don't think so. Let's consider why Reason is so efficient in the first place: it has a limited selection of devices which you use over and over again. It's redundant and contained in its own environment - it couldn't be more obvious how this software is structured. One way to think of it is that there's basically only one of each instrument: every instance follows the same rules and uses the same functions and calculations. In other DAWs, most plugins are loaded independently, and each incarnation constitutes a full dose of memory and CPU usage, whereas with Reason, that kind of waste can be brought down to almost naught.
But it gets better - it's testable. I won't claim I tested thoroughly or accurately - limited sample size - but the fact that my system didn't go up in flames is good enough. I compared four 32-voice Thors to 128 monophonic ones - 128 voices of the same basic thing either way. Although the monophonic setup tacked on a conspicuous two percent of idle CPU usage, the increase in active CPU was only about a tenth, to a grand average of about 9% CPU usage. With the idle CPU taken into account, it didn't take any more whatsoever. Also, this raw configuration wasted CPU and memory for reasons that the Polyphonizer could optimize away. And for one thing, the UI got really sluggish... those cables...
Admittedly, it got a bit heavier when I started creating 128-piece chains of effect devices just to see how many could be handled - if three bars on the DSP meter can be considered heavy (and no, by the way, I don't have a Deep Thought space computer - I run Windows 7 on an i7 930 with six gigs of RAM).
So yeah, I think this is a great idea and that I'm a bloody genius, but tell me what you think.
In b4 "If they add this they will never add a proper modular synth"