So I took a quick video of some jamming with this Game of Life generative sequencer on the opsix. Nothing special exactly, but it ended up pretty interesting with the right patch. This is a custom patch I designed some months ago, if not last year. A few more words on the page:
https://tsrono.music/opsix-game-of-life-generative-sequencer
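For anyone curious how a Game of Life sequencer can work in principle, here is a minimal Python sketch (my own illustration, not the actual opsix patch): each row of the grid is mapped to a scale degree, and the live cells in a "playhead" column are read off as MIDI notes each generation. The pentatonic `SCALE` and the note mapping are assumptions.

```python
# Illustrative sketch only -- not the actual opsix patch.

def step(grid):
    """One Game of Life generation on a toroidal (wrapping) grid."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # count the 8 neighbours, wrapping at the edges
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            nxt[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return nxt

# map grid rows to scale degrees (pentatonic here -- an assumption)
SCALE = [0, 2, 4, 7, 9]

def notes_at(grid, col, root=60):
    """MIDI note numbers for the live cells in one column."""
    return [root + SCALE[r % len(SCALE)] + 12 * (r // len(SCALE))
            for r, row in enumerate(grid) if row[col]]
```

Each sequencer step would call `step()` and play `notes_at()` for the current column; a simple blinker, for example, alternates between a three-note cluster and a single note.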
easter egg found via:
-
tsrono@mastodon.social shared this topic
-
@tsrono This is very beautiful! And it's always so nice to see/hear cellular automata being used this way! I think they have so much (untapped) potential for music/composition... Below is an old project of mine using 1D CAs in my custom generative composition system built for a 2-month long installation:
Karsten Schmidt (@toxi@mastodon.thi.ng)
[video attached] @DBG3D@masto.es @t36s@social.ordinal.garden Okay, I found another nice excerpt, a bit more minimal than the one above, but maybe also clearer for hearing the approach described earlier.

Just to explain once more: all the samples used are one-shot single notes (produced by Simon Pyke/Freefarm). All melodies, chords, chord progressions, rhythm and the overall arrangement are fully generated (mostly, but not exclusively) via cellular automata. The composition system also had other means of creation/control, e.g. probabilistically triggering the recording of notes/events from selected tracks/channels for a few bars and then replaying these phrases later, perhaps at a different time scale, transposed, mirrored and/or with different instruments... This proved highly effective (and musical) for longer progressions and for creating more interesting multilayered compositions. Some phrases were kept in a memory pool for up to 12 hours (the piece ran for 3 months)...

As you can hopefully tell, the visuals for that installation were audio-responsive (not audio per se, but responding to the events of the composer). Likewise, if the visuals became too agitated/intense, an event was sent to the composer to quickly dial down/thin out the musical intensity (e.g. trigger a tempo change, mute tracks, lower velocity etc.). This hybrid, coupled two-way feedback worked very well in practice, and there were so many moments I wish I had recordings of...

#GenerativeArt #GenerativeMusic #MusicComposition #CellularAutomata #AudioReactive #Installation #VictoriaAlbertMuseum #Video
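The phrase-recording idea described above can be sketched roughly like this (illustrative Python, not the installation's actual code): a phrase is captured as (time, pitch) events, kept in a memory pool, and replayed later with an optional transpose, mirror and/or time scale. All names and parameter choices here are hypothetical.

```python
# Illustrative sketch of phrase capture/replay -- hypothetical names,
# not the installation's real composition system.
import random

def replay(phrase, transpose=0, mirror=False, time_scale=1.0):
    """Return a transformed copy of a recorded (time, pitch) phrase."""
    events = [(t * time_scale, p + transpose) for t, p in phrase]
    if mirror:
        # invert pitches around the phrase's (transposed) first note
        axis = phrase[0][1] + transpose
        events = [(t, 2 * axis - p) for t, p in events]
    return events

# memory pool of previously recorded phrases (one toy phrase here)
pool = [[(0.0, 60), (0.5, 64), (1.0, 67)]]

def recall(rng=random):
    """Pick a stored phrase and replay it with random transformations."""
    phrase = rng.choice(pool)
    return replay(phrase,
                  transpose=rng.choice((-12, 0, 12)),
                  mirror=rng.random() < 0.5,
                  time_scale=rng.choice((0.5, 1.0, 2.0)))
```

Replaying old material under such transformations is one simple way to get the longer-range, multilayered progressions the post describes.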