Propellerhead Software
  #1  
Old 2013-02-16, 21:18
audax is offline
 
Join Date: Sep 2007
Posts: 44
Machine Learning done via Reason

Hi everyone,

This might sound like an odd topic for this forum, but consider this: we have a lot of "state"-saving modules via Rack Extensions on the CV and audio side. Do you think it might be possible to do machine learning inside Reason? It sounds like madness, but synthesizer parameter learning might be possible, even though some limitations would apply. Too bad there are still no real ML-based Rack Extensions. For everyone who remembers, the Hartmann Neuron was a synthesizer that used a (backprop) neural network to "learn" sounds passed in as WAV files. Even though backprop neural networks are pretty slow and old, it would be nice to have some time-series learning on a bunch of samples to extract synthesizer "features" (mentioned above). What do you think?
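To make "synthesizer parameter learning" concrete, here is a minimal sketch (hypothetical code, not a real Rack Extension API): a toy one-oscillator synth whose two parameters are recovered from a target sound by brute-force search over candidate settings.

```python
import numpy as np

def render(freq, amp, n=2048, sr=44100):
    """Toy 'synth': a single sine oscillator with two parameters."""
    t = np.arange(n) / sr
    return amp * np.sin(2 * np.pi * freq * t)

def learn_params(target, freqs, amps, sr=44100):
    """Brute-force 'parameter learning': pick the (freq, amp) pair
    whose rendered output is closest to the target in L2 distance."""
    best, best_err = None, np.inf
    for f in freqs:
        for a in amps:
            err = np.sum((render(f, a, len(target), sr) - target) ** 2)
            if err < best_err:
                best, best_err = (f, a), err
    return best

# Pretend the target came from a WAV file.
target = render(440.0, 0.8)
print(learn_params(target, freqs=[220.0, 440.0, 880.0], amps=[0.4, 0.8]))
# → (440.0, 0.8)
```

A real system would of course search a much larger parameter space with gradient-based or evolutionary methods rather than a grid, but the shape of the problem is the same.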

cheers,
audax
  #2  
Old 2013-02-17, 02:06
vanKloot is offline
 
Join Date: May 2003
Posts: 2,160
I have no idea what you just said, but for some reason it conjures up the word "skynet"...

Sounds pretty cool though!
__________________
I've decided to get serious about my music again at last.
So go ahead... ask me how my new album is coming!

  #3  
Old 2013-02-17, 02:25
budbjames is offline
 
Join Date: Feb 2013
Posts: 1
Quote:
Originally Posted by audax
Hi everyone,

This might sound like an odd topic for this forum, but consider this: we have a lot of "state"-saving modules via Rack Extensions on the CV and audio side. Do you think it might be possible to do machine learning inside Reason? It sounds like madness, but synthesizer parameter learning might be possible, even though some limitations would apply. Too bad there are still no real ML-based Rack Extensions. For everyone who remembers, the Hartmann Neuron was a synthesizer that used a (backprop) neural network to "learn" sounds passed in as WAV files. Even though backprop neural networks are pretty slow and old, it would be nice to have some time-series learning on a bunch of samples to extract synthesizer "features" (mentioned above). What do you think?

cheers,
audax
I don't think this idea is crazy at all. I believe the possibilities of machine learning are limited only by the data we can provide. I could see ML being used in Reason to learn parameters that are common to certain styles of music. You could then pick a style preset and perhaps set a randomization seed, and you would get a unique but relevant set of parameters for your chosen style. Machine learning is the future. Great question!
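The seeded-randomization idea could be sketched like this (hypothetical code; the per-parameter style statistics are made-up stand-ins for what an ML model might estimate from many real patches of one genre):

```python
import random

# Hypothetical learned "style" statistics: per-parameter (mean, spread),
# as a model might estimate from a corpus of patches in one genre.
STYLE_TECHNO = {
    "cutoff":    (0.35, 0.10),
    "resonance": (0.60, 0.15),
    "decay":     (0.20, 0.05),
}

def randomize_preset(style, seed):
    """Draw a unique-but-relevant patch around the learned style.
    The same seed always reproduces the same patch."""
    rng = random.Random(seed)
    return {name: min(1.0, max(0.0, rng.gauss(mu, sigma)))
            for name, (mu, sigma) in style.items()}

patch_a = randomize_preset(STYLE_TECHNO, seed=42)
patch_b = randomize_preset(STYLE_TECHNO, seed=42)
assert patch_a == patch_b   # deterministic per seed
```

Changing the seed gives a fresh patch that still sits inside the learned distribution, which is exactly the "unique but relevant" behaviour described above.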

Buddy James
http://www.refactorthis.net
http://www.twitter.com/budbjames
  #4  
Old 2013-02-17, 02:57
EpiGenetik is offline
 
Join Date: Feb 2011
Posts: 1,606
It would be interesting to see if anyone could do a higher-resolution version of Newscool; that could be fun.
__________________
"I decided to call my music 'organized sound' and myself, not a musician but 'a worker in rhythms, frequencies, and intensities'"
Edgard Varèse 1962
  #5  
Old 2013-02-17, 11:11
audax is offline
 
Join Date: Sep 2007
Posts: 44
Quote:
Originally Posted by budbjames
I don't think this idea is crazy at all. I believe the possibilities of machine learning are limited only by the data we can provide. I could see ML being used in Reason to learn parameters that are common to certain styles of music. You could then pick a style preset and perhaps set a randomization seed, and you would get a unique but relevant set of parameters for your chosen style. Machine learning is the future. Great question!

Buddy James
http://www.refactorthis.net
http://www.twitter.com/budbjames
Thanks for your answer!
It would be interesting to know a minimal set of parameters sufficient to represent the learned data (e.g. a sound). As a starting point, there are some time-dependent ML algorithms out there, such as http://citeseerx.ist.psu.edu/viewdoc...0.1.1.143.4232 or http://www.cs.toronto.edu/~hinton/absps/uai_crbms.pdf, that can model timed continuous data rather well. Hence we could go even further and extend the concept of learned parameters to a new kind of sound synthesis. As these two papers rely on a relatively simple energy-based ML algorithm, it would be possible to generate sounds from previously learned samples (sometimes called "daydreaming" in the context of visual computing), and it might also be possible to do some other forms of remixing.
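As a rough illustration of that "daydreaming" step (a sketch only, with untrained random weights rather than a model actually fitted to samples): a binary restricted Boltzmann machine generates a "fantasy" visible vector by running a free Gibbs chain, alternately sampling hidden units given visible and visible given hidden.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def daydream(W, b_vis, b_hid, steps=50):
    """Free-running Gibbs chain on a tiny binary RBM: start from a
    random visible vector, alternate h|v and v|h sampling, and
    return the final visible 'fantasy' vector."""
    v = rng.integers(0, 2, size=b_vis.shape).astype(float)
    for _ in range(steps):
        h = (rng.random(b_hid.shape) < sigmoid(v @ W + b_hid)).astype(float)
        v = (rng.random(b_vis.shape) < sigmoid(h @ W.T + b_vis)).astype(float)
    return v

# Untrained random weights, just to show the sampling loop itself.
W = rng.normal(0, 0.1, size=(8, 4))   # 8 visible units, 4 hidden units
v = daydream(W, b_vis=np.zeros(8), b_hid=np.zeros(4))
print(v.shape)  # (8,)
```

In a trained model the fantasies would resemble the training samples; the conditional variants in the papers above add extra visible units for past time steps so the chain produces sequences rather than single frames.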

Generally speaking, ML is the future (or always has been), but an ML-based synth wouldn't be easily accessible to everyone (learning couldn't be done online or live, generated samples would be non-deterministic, etc.). However, I would be happy to see an experimental implementation inside Reason.

PS: it would be nice to have the SDK. Unfortunately, there are some restrictions. :/
  #6  
Old 2013-02-17, 11:13
audax is offline
 
Join Date: Sep 2007
Posts: 44
Quote:
Originally Posted by vanKloot
I have no idea what you just said, but for some reason it conjures up the word "skynet"...

Sounds pretty cool though!
Terminators running Reason!
  #7  
Old 2013-02-17, 11:15
audax is offline
 
Join Date: Sep 2007
Posts: 44
Quote:
Originally Posted by EpiGenetik
It would be interesting to see if anyone could do a higher-resolution version of Newscool; that could be fun.
hmm, what do you mean?
  #8  
Old 2013-02-18, 07:41
EpiGenetik is offline
 
Join Date: Feb 2011
Posts: 1,606

Quote:
Originally Posted by audax
hmm, what do you mean?
Hi,

Newscool is a Reaktor Ensemble that contains a basic implementation of John Conway's "Game of Life". The idea is that if you start the "universe" in a predefined state and then subject it to particular laws, it will progress in a specific way. It's pretty much cause and effect, and the Reaktor Ensemble is basically just a little evolving rhythm machine with FX. This idea of course gets much more interesting if you push up the processing power, which allows for a larger "universe", i.e. a self-contained blank space with self-evolving matter, and also makes it possible to pre-program more complex laws.

The life model itself is a theoretical masterpiece of philosophy, and it's an interesting way to look at the capabilities of computers in general.

Newscool:
http://media.soundonsound.com/sos/se...meoflife.l.jpg


The Life Game:
http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
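For reference, the Life rule itself is tiny. A minimal sketch in Python/NumPy (not how Reaktor implements it, just the underlying cellular-automaton step):

```python
import numpy as np

def life_step(grid):
    """One Game of Life generation on a toroidal (wrap-around) grid."""
    # Count the 8 neighbours of every cell via shifted copies of the grid.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

# A "blinker": a 3-cell line that oscillates with period 2.
g = np.zeros((5, 5), int)
g[2, 1:4] = 1                 # horizontal line of three live cells
g = life_step(g)
print(g[1:4, 2])              # → [1 1 1]  (the line is now vertical)
```

Mapping each live cell to a triggered note or grain is essentially what Newscool does on its small grid; a "higher resolution" version is just a bigger `grid`.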


Cheers
__________________
"I decided to call my music 'organized sound' and myself, not a musician but 'a worker in rhythms, frequencies, and intensities'"
Edgard Varèse 1962
  #9  
Old 2013-02-18, 08:37
RXTX is offline
 
Join Date: May 2012
Posts: 562
Quote:
Originally Posted by audax
or http://www.cs.toronto.edu/~hinton/absps/uai_crbms.pdf, that can model timed continuous data rather well
Hah, Prof. Geoff Hinton taught our introductory course on neural networks and backpropagation back in college. Good times.