This keeps coming up on here, so let's dive into it...

Many people think Brian Eno came up with this type of music. Not true, however. To really look at the origins of generative composition, we need to go back quite a bit and look at "process composition" and "stochastic composition".

Process music is music in which a defined set of procedures is carried out, with the musical output as the result. Probably the simplest example that comes to mind is Steve Reich's "Pendulum Music", in which microphones swing back and forth over speakers and the variations in feedback response are the musical "output"; once all of the speakers are emitting a steady feedback tone, you cut the amps. Anything that works in this way...a series of given procedures (sometimes posing as the "score"), followed by the musical result of carrying them out...is referred to as process music.

Stochastic composition is also sometimes referred to as "chance-based". That's not exactly correct, as stochastics deals with the odds that ONLY certain specific outcomes will occur, with the composer setting those probabilities as part of the compositional process. Actual chance-based music is music assembled from random sources of audio as a result of totally random processes. Stochastic composition, on the other hand, only involves chance as far as choices among specific, composer-defined states and/or outcomes.

Now, generative music involves BOTH of these. The setup of a generative system is essentially identical to the "process" part of process music, and that process incorporates numerous chance operations whose end results are distinct and only slightly random.

If we go back and look at where generative music begins, we'll also be going back to the early days of synth technology...and also to Albany, New York.

Back circa 1970, the State University of New York at Albany premiered its newest piece of music tech, the Coordinated Electronic Music System (or CEMS for short). This huge modular system was co-designed by the composer Joel Chadabe and Robert Moog, and it included off-the-shelf Moog modules alongside a number of custom control-bus modules specific to the CEMS. One of the very first works realized on this monster modular was Chadabe's "Ideas of Movement at Bolton Landing" (and yes, that's part of the CEMS on the cover art), which can probably lay claim to being the first generative electronic composition. The CEMS, once programmed, was simply allowed to play itself...and the tape just ran and recorded the result. Other than the initial programming and starting the tape, there was no human input at all. And that definitely makes this work the start point for generative music.

But how did it work? OK...the CEMS was built around an array of Moog 960 sequencers, plus a bunch of logic functions to manipulate them. This array sent note info to the "voicing" racks while, at the same time, the "modulation" section could also play a part in some of the stochastic determinations. In essence, that architecture has changed very little in many present-day generative systems. What did change between then and now was the computer...

When small computer systems started to become commonplace, various types of "automata" applications appeared that involved aspects of generative processes...but very few of these generated sound directly. Instead, you saw MIDI applications such as "M", which applied stochastic principles to the generation of MIDI data to be sent to synthesizers. This situation would change rather quickly, though, mainly due to two developments.

The first was the development of sound hardware that could act directly under the computer's control. Prior to the first DAW systems, sound cards on computers were kinda "meh". But when Digidesign put out the SampleCell hardware and the Sound Tools software to work with it, this opened up a new vista on how the computer could address the synthesizer. Within a couple of years, Digidesign had more or less obsoleted that setup itself...but a new version of some IRCAM software stepped into the gap, namely Max/MSP. Max was, of course, the object-oriented music programming environment developed by Miller Puckette during his tenure at IRCAM, originally for use with the 4X machine; a further iteration was part of the ISPW (IRCAM Signal Processing Workstation). A couple of iterations later, Max/MSP was created to do its signal processing natively on the Macintosh, so that you had what was basically "ISPW, the Home Version". The doors had been flung open!

Around this same time, a startup called SSEYO started to figure out how to create a music composition "system". This eventually became Koan, which was more or less the first "all-in-one" generative package. It included a SoundFont-like system for its voicing, plus various stochastic methods for creating different types of musical fragments, which were then combined in a final patch to generate new pieces of music. It was around this time (the mid-1990s) that Brian Eno began experimenting extensively with Koan as part of his interest in "self-regulating systems", such as the tape-delay-plus-sequencer rig used for "Discreet Music". The best examples of this can probably be found on Eno's "The Drop", a collection of ambient works primarily programmed and realized in Koan.

BUT...

One of the things that Eno noticed (along with a lot of other people) is that computers don't necessarily deal as nicely with gradual changes as do analog systems. And this can be directly attributed to the fact that computers work in discrete steps, all the way back down to the basic 1 and 0 of their binary architecture. Now, there ARE analog computers...and, interestingly, it was discovered that these "obsolete devices" were capable of dealing with chaotic systems FAR better than their digital counterparts. A digital computer, for example, really, really, REALLY doesn't like dealing with such chaotic systems as Lorenz attractors...but to an analog computer, calculating that is a snap.
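
Just to make that "discrete steps" point concrete, here's a rough sketch in Python (my own illustration, nothing to do with any specific piece of gear): a naive Euler integration of the Lorenz system, where the end state depends heavily on the step size the digital machine is forced to pick. An analog computer never has to make that choice; it just integrates continuously.

```python
# Minimal sketch: Euler integration of the Lorenz system, standard parameters
# assumed (sigma=10, rho=28, beta=8/3). The point is how much the result
# depends on the discrete step size -- a purely digital artifact.

def lorenz_step(x, y, z, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one Euler step of size dt."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(dt, steps, state=(1.0, 1.0, 1.0)):
    x, y, z = state
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z, dt)
    return x, y, z

# Same equations, same start point, same 20 "seconds" of simulated time --
# only the step size differs, yet the end states land far apart.
print(run(dt=0.01, steps=2000))
print(run(dt=0.005, steps=4000))
```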

Now, synthesizers have an awful lot in common with analog computers. You have certain on-off digital functions...but the "heavy lifting" in an analog computer is done by linear interaction between a set of op-amps, as these can generate voltage curves and functions with NO stepping and NO binary logic. Similarly, synthesizers involve circuits that are generating sound based on linear, non-stepped voltages plus some digital functions...not much difference, really. And this is where we go back to devices such as the CEMS.

Back when the CEMS was built, there weren't a lot of possible choices regarding modules, circuit architecture, and the like. You had Moog and Buchla, ARP had just popped up on the scene, and EMS was in startup mode as well. And that was about it! Nowadays...well, just LOOK!!! Choices abound! And this simply makes generative even MORE attractive.

But the problem here is that many people think generative synth programming is pretty simple. You get one or two things, set up a patch that makes them boop and bleep in a relatively even manner, and that's it...right?

Uh-uh. REAL generative systems are capable of running for HOURS (or days, weeks, months, years...) on end, very rarely repeating anything, but still staying in a certain composer-defined "lane". This goes back to another thing that Eno explored while still working with tape loops: he found that you could create an essentially non-repeating result by mixing several unequally long tape loops of homogeneous musical material. The most famous use of this is, of course, Eno's "Music for Airports", which was actually deployed in several airports as a sonic installation piece. And you can see how well this can work if you just do a little math. Let's say we have three loops: one is 34 seconds long, the next is 55 seconds, and the third is 107 seconds. These will only line up again after 200,090 seconds, or 55.58-ish hours of continuous playback. For an installation in a place such as an airport, no one will likely perceive that the music contains these looped iterations, provided that the sonic material on the loops gives little to no indication of obvious cadential signals. The piece will simply seem to go on forever.
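
If you want to check that arithmetic yourself, here's a quick Python sketch. Loops of whole-second lengths line back up at the least common multiple of those lengths:

```python
# Quick check of the loop-realignment arithmetic from the example above.
from functools import reduce
from math import gcd

def realignment_time(loop_lengths):
    """Seconds until loops of the given whole-second lengths realign (their LCM)."""
    return reduce(lambda a, b: a * b // gcd(a, b), loop_lengths)

seconds = realignment_time([34, 55, 107])
print(seconds, "seconds =", round(seconds / 3600, 2), "hours")
# -> 200090 seconds = 55.58 hours
```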

And this is the sort of thing that should be aimed for in generative modular systems.

Now, there are numerous ways to get to this. But nearly all of them involve a larger number of modules than you'd find in your typical monosynth. And they also involve what's known as "orders of control".

Play a note on a synth. That's "1st order"...your keypress causes two control signals to be generated, one for pitch information, the other for time. Shorter press, shorter note. Higher keypress = higher note. And so on...

But with generative music, this concept basically goes batshit insane!

So...here's our synth again, waiting for notes. But THIS time, we're not using the keyboard. Instead, you've got three LFOs and three quantizers. Set each LFO to a rather long period, using a different waveform for each. Then feed each LFO to a quantizer, and set up each quantizer so that every "step" it makes also sends a trigger to the EG, etc. As long as you've set the same scale/mode on each quantizer, you'll have a three-part result that keeps generating notes based on the CVs coming out of the LFOs. Now we're starting into generative turf!
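
Here's a rough Python sketch of what that patch is doing; the periods, waveforms, and scale are made up for illustration, not a patch recipe. Each slow LFO gets read, quantized to the nearest note of the scale, and any change in the quantized value stands in for the trigger sent to the EG:

```python
# Sketch of the three-LFO / three-quantizer patch. All numbers are arbitrary.
import math

MINOR_PENT = [0, 3, 5, 7, 10]  # semitone offsets of an assumed scale

def lfo(t, period, shape="sine"):
    """A slow LFO scaled to 0..1, given a period in seconds."""
    phase = (t % period) / period
    if shape == "triangle":
        return 1.0 - abs(2.0 * phase - 1.0)
    if shape == "saw":
        return phase
    return 0.5 + 0.5 * math.sin(2.0 * math.pi * phase)

def quantize(cv, scale=MINOR_PENT, octaves=2):
    """Snap a 0..1 CV to the nearest scale note across a couple of octaves."""
    notes = [o * 12 + s for o in range(octaves) for s in scale]
    return min(notes, key=lambda n: abs(cv * octaves * 12 - n))

voices = [("sine", 47.0), ("triangle", 61.0), ("saw", 83.0)]  # long-ish periods
last = [None, None, None]
for t in range(120):  # scan two minutes at one-second resolution
    for i, (shape, period) in enumerate(voices):
        note = quantize(lfo(t, period, shape))
        if note != last[i]:  # the quantizer stepped: this is the trigger to the EG
            last[i] = note
            print(f"t={t:3d}s  voice {i + 1}  semitone {note}")
```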

Three lines = boring after a while, though. So let's now take a fourth LFO and use that to control the rates of the other three...but with a twist, in that we're going to use a multi-attenuator module (think Intellijel's Quadratt tile here) to change the voltage each of the three receives from the fourth. So...one gets the timing LFO's signal at 2/3 of its level, the next gets 1/2, and the last gets it at full level...but inverted! This has us at 3rd order, with the "main" LFO controlling the three others, which in turn control note generation via the quantizers. At THIS point you're getting into "nonrepeating" territory, and you have timing variation as well.
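
In sketch form, that chain of control looks something like this; the master period, the 50% modulation depth, and the exact rates are arbitrary stand-ins, and the phases here would drive the quantizer stage from the previous sketch:

```python
# Sketch of the 3rd-order chain: one master LFO, attenuated three different
# ways, pushing the rates of the three voice LFOs around. Numbers are arbitrary.
import math

def master(t, period=300.0):
    """Slow master LFO in the range -1..+1."""
    return math.sin(2.0 * math.pi * (t % period) / period)

ATTEN = [2.0 / 3.0, 0.5, -1.0]                     # 2/3, 1/2, and full-but-inverted
BASE_RATE = [1.0 / 47.0, 1.0 / 61.0, 1.0 / 83.0]   # cycles per second for each voice

phase = [0.0, 0.0, 0.0]
for t in range(600):                               # ten minutes, one-second steps
    m = master(t)
    for i in range(3):
        rate = BASE_RATE[i] * (1.0 + 0.5 * ATTEN[i] * m)  # 50% modulation depth
        phase[i] = (phase[i] + rate) % 1.0         # this phase feeds the quantizer
    if t % 120 == 0:
        print(f"t={t:3d}s  phases: " + "  ".join(f"{p:.2f}" for p in phase))
```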

Is that as far as you can go? HELL, no!!! So...let's toss a comparator into the mayhem, and set it up so that it reads the "main" LFO's output and emits a gate whenever its threshold voltage gets crossed. Then you take THAT gate, invert it, and send it to a VCA that's controlling the output level of the "third voice" in our theoretical system. Now, when the "main" LFO goes over that voltage threshold, the comparator turns "voice 3" off by dropping that VCA's CV to zero (via the inverter). 4th order, and we're just getting going here...
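
The comparator-inverter-VCA chain, sketched out (the threshold is an arbitrary pick):

```python
# Sketch of the "tutti rest" trick: a comparator on the master LFO, inverted,
# feeding the CV input of the VCA that carries voice 3. Threshold is arbitrary.
THRESHOLD = 0.6

def comparator(cv, threshold=THRESHOLD):
    """Gate goes high (1.0) when the input crosses above the threshold."""
    return 1.0 if cv > threshold else 0.0

def voice3_output(audio_sample, master_cv):
    """The VCA: voice 3 only passes while the inverted gate holds the CV up."""
    inverted_gate = 1.0 - comparator(master_cv)
    return audio_sample * inverted_gate

print(voice3_output(0.8, 0.3))  # master LFO below threshold: voice 3 passes
print(voice3_output(0.8, 0.9))  # above threshold: gate high, inverted to 0, silence
```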

So...a few MORE comparators and VCAs, and we're going to drop these into crossmodulation patches between our VCOs. Now, we not only have the system self-regulating on timing and note generation, plus the "tutti rest" on voice 3, we ALSO have a similar comparator-VCA setup controlling FM aspects between the VCOs. Even better, you could just as well use envelope generators before the VCAs, key THOSE, and get gradual changes of VCO timbre. You can even affect spatialization this way, by using LFOs and EGs to control panning circuits, and those can get as complex as you want (Ambisonics, anyone?).
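
And a sketch of the crossmod part, with every number made up: VCO 2 modulates VCO 1's phase, and the modulation depth sits behind a slow envelope (which would be keyed by one of those comparators) so the timbre change creeps in rather than snapping on.

```python
# Sketch of gradual crossmodulation: the FM depth between two VCOs is scaled
# by a slow AD envelope standing in for the EG-into-VCA arrangement above.
import math

def vco(freq, t, phase=0.0):
    return math.sin(2.0 * math.pi * freq * t + phase)

def slow_ad(time_since_gate, attack=2.0, decay=4.0):
    """A slow attack/decay envelope, 0..1, so timbre shifts are gradual."""
    if time_since_gate < attack:
        return time_since_gate / attack
    return max(0.0, 1.0 - (time_since_gate - attack) / decay)

def crossmod_sample(t, time_since_gate, f1=110.0, f2=0.7, max_depth=3.0):
    depth = max_depth * slow_ad(time_since_gate)   # the VCA on the mod signal
    return vco(f1, t, phase=depth * vco(f2, t))    # VCO 2 bends VCO 1's timbre

print(crossmod_sample(t=0.25, time_since_gate=1.0))
```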

Sequencers get fun like this, too. This is why you want 'em to have gate and/or trigger outs on each step, because you can use those to mess with things. F'rinstance, take a trigger out from step 11 on a 16-step sequencer. Then connect that sequencer's "reset" input to the output of a Boolean AND gate...and feed two of those comparators into the AND gate's inputs. And as for that step 11 output, send it to...oh...the "sync" input on one of the three LFOs. Now THAT gets wild...the LFO waveform will only reset if and when the sequencer gets to step 11, but with the sequencer's reset keyed to the coincidence of two comparator gates (via the AND gate), that might only happen every once in a while.
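
That logic, sketched out (the comparator gate patterns here are invented just to show the behavior):

```python
# Sketch of the sequencer logic: reset only fires when BOTH comparators are
# high at once (the AND gate), and step 11 re-syncs one of the voice LFOs.

def and_gate(a, b):
    """Boolean AND of two gates, expressed as 1.0 / 0.0."""
    return 1.0 if (a >= 1.0 and b >= 1.0) else 0.0

def run_sequencer(comparator_pairs):
    """Clock the sequencer once per (comp_a, comp_b) pair of comparator gates."""
    step, lfo_phase = 0, 0.123          # some arbitrary running LFO phase
    for comp_a, comp_b in comparator_pairs:
        if and_gate(comp_a, comp_b):
            step = 0                    # both comparators high: reset (rare)
        else:
            step = (step + 1) % 16
        if step == 10:                  # step 11, counting from 1
            lfo_phase = 0.0             # trigger out -> "sync" on one LFO
    return step, lfo_phase

# The comparators rarely line up high together, so resets are scarce, while
# step 11 comes around regularly and keeps re-syncing that LFO.
gates = [(1, 1)] + [(1, 0), (0, 1)] * 20
print(run_sequencer(gates))
```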

And on and on and ON...

So, if you think you can cram a "generative" system into an 84 hp skiff...forget it. To get a SERIOUS generative result requires quite a few modules, because you're creating a system that has to:

1) generate several musical parts
2) have extensive order-of-control capabilities so that you arrive at that nonrepetitive goal.

If the output sounds like something NOT made by a machine, then you win every Internet made since 1896. And you'll have succeeded at building a generative system that really, honestly, ACTUALLY operates as a self-regulating generator of music. But as noted, this takes space, it takes a lot of "not so typical" modules, and it takes some understanding of how chaos-based systems of this sort work, as well as knowledge of which modules can work in tandem toward this result. It ain't cheap. It IS a considerable hassle. But when it comes together and you get that stream of generative sound...yeah, it's worth it.