Voice instancing is amazing

Captain's Log: Stardate 77973.4

HOLY SHIT I got the first early version of voice instancing working, and it far exceeds my expectations. It is so amazing to play with! There are still a ton of glitches to work out, but it's already usable in a basic way, and it makes Anukari 10,000x easier to use for making simple instruments.

I recorded a little demo of a super-simple instrument. In the past, to make a song with this kind of instrument, you'd have to copy and paste the mass/spring/exciter systems N times, once per note, and then manually tune the mass or spring stiffness so that each note produced the correct pitch. This is still a useful thing to do, and you can run Anukari in this mode, but it requires a lot of work.
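For intuition on why that tuning step is tedious (a toy formula, not Anukari's code): a single mass-spring oscillator rings at f = sqrt(k/m) / (2*pi), so retuning each pasted copy means rescaling the spring stiffness by the squared pitch ratio:

```cpp
#include <cmath>

// Hypothetical illustration: f scales with sqrt(k), so hitting a target
// note means scaling stiffness by the squared pitch ratio. baseNote is
// the note where the instrument sounds "as designed" (e.g. middle C = 60).
float stiffnessForNote(float kBase, int note, int baseNote = 60) {
    float ratio = std::pow(2.0f, (note - baseNote) / 12.0f);
    return kBase * ratio * ratio;  // one of these per pasted copy, by hand
}
```

And that's only exact for a single mass and spring; for a complex network you'd be tweaking by ear.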

Now you just create the instrument, whatever it is, enable voice instancing mode, and voila: you can play it with any MIDI note and it will automatically produce the expected pitch. This is done through the time dilation feature, so it's still doing a full physics simulation for each note, just running the simulation a little faster or slower to produce the right pitch. Note that this is NOT resampling; it's a full physics simulation. That means that things like envelope durations, LFO speeds, etc., can run in realtime (so if you say release is 1s, it is 1s), while the physics run in dilated time.
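To make that concrete, here's a minimal sketch (all names are mine, not Anukari's actual code) of how a per-voice dilation factor could be derived from the MIDI note and applied so the physics run in dilated time while envelopes stay in real time:

```cpp
#include <cmath>

// Hypothetical sketch: ratio by which to speed up or slow down the
// simulation so the instrument's base pitch lands on the requested note.
float timeDilation(int midiNote, int baseNote = 60) {
    return std::pow(2.0f, (midiNote - baseNote) / 12.0f);
}

struct Voice {
    float dilation;  // set from timeDilation() at note-on

    void step(float dtRealtime) {
        float dtPhysics = dtRealtime * dilation;
        stepPhysics(dtPhysics);     // simulation runs faster/slower
        stepEnvelopes(dtRealtime);  // a 1s release is still 1s
    }

    // Stubs standing in for the real machinery.
    void stepPhysics(float /*dt*/)   { /* mass/spring integration */ }
    void stepEnvelopes(float /*dt*/) { /* ADSR, LFO updates */ }
};
```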

Here's the example. Note that the visuals are still messed up; it's supposed to be showing the output from the most recently used voice instance, which it kind of does, but I think the sensor colors are wrong, etc. Eventually I might make it possible to show all the instances at the same time, but baby steps...

One super-awesome thing about this instancing is that the instances run on separate GPU work units, which means they run completely in parallel, so adding voices doesn't slow things down -- it runs at the exact same speed as a single voice. There may be limitations here: if you wanted to run, say, 500 voices, you might get to the point where the audio is competing with the 3D rendering or whatever, but at least with 8 voices there is literally no slowdown or stutter. It just magically works. This means that even an ultra-complex instrument, as long as it can run in "singleton" mode fast enough for playback, can run in voice-instanced mode.
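As a rough illustration of the parallelism (a hypothetical CUDA-style sketch, not Anukari's actual kernel), you can think of it as one GPU thread block per voice, so all voices advance simultaneously:

```cpp
struct VoiceState {
    float dilation;  // time dilation factor for this voice's pitch
    // ...mass positions, velocities, spring parameters...
};

// Threads within a block cooperate on one voice's masses/springs.
__device__ void stepPhysics(VoiceState& v, float dt) {
    // full mass/spring integration over the dilated timestep dt
}

__global__ void stepVoices(VoiceState* voices, float dtRealtime,
                           int numSamples) {
    VoiceState& voice = voices[blockIdx.x];  // one block per voice
    for (int s = 0; s < numSamples; ++s) {
        stepPhysics(voice, dtRealtime * voice.dilation);
        __syncthreads();  // the voice's masses advance in lockstep
    }
}

// Launch with one block per active voice, e.g.:
//   stepVoices<<<numVoices, threadsPerVoice>>>(d_voices, dt, bufferSize);
```

Since the blocks are independent work units, 8 voices take the same wall-clock time per audio buffer as 1, up to whatever the hardware can keep resident.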

The time dilation thing also means that ANY instrument can be mapped to all MIDI notes. So even weird sound effects and so on can be played with an 88-key keyboard with correct intervals between the notes, even if the sound is highly atonal or anharmonic.
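The reason intervals stay correct even for weird spectra is that time dilation multiplies every frequency in the sound by the same ratio. A tiny toy example (made-up partial frequencies):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    float partials[] = {220.0f, 517.3f, 941.8f};  // anharmonic spectrum
    float r = std::pow(2.0f, 7.0f / 12.0f);       // dilate up a fifth
    for (float f : partials)
        std::printf("%.1f Hz -> %.1f Hz\n", f, f * r);  // all scaled by r
    return 0;
}
```

The partials don't need any harmonic relationship to each other; they all move together, so the interval between two notes is always right.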

Here's a much less tonal example, showing that even a super-percussive anharmonic instrument works just fine and can be played melodically.

(Again, please ignore the buggy visuals)

And here's a more chord-based demo.

Now that I have the basic voice instancing setup working, I can see exactly what is needed to implement full MPE support, and at this point it's pretty much trivial -- I just need to route the various MIDI signals to the right instances based on the way the MPE spec works. It'll be a bit tricky to make sure it's all solid (especially given that there are already a number of bugs), but there's nothing new needed in the GPU infrastructure.
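For the curious, the routing boils down to something like this sketch (my names and structure, assuming an MPE lower zone where channel 1 is the global channel and channels 2-16 each carry at most one note):

```cpp
#include <array>
#include <cstdint>

// Hypothetical sketch: per-note MIDI messages route to whichever voice
// instance currently owns that member channel's note.
struct MpeRouter {
    static constexpr int kNoVoice = -1;
    std::array<int, 17> channelToVoice;  // indexed by 1-based MIDI channel

    MpeRouter() { channelToVoice.fill(kNoVoice); }

    void noteOn(int channel, int note, int velocity) {
        channelToVoice[channel] = allocateVoice(note, velocity);
    }
    void noteOff(int channel) {
        releaseVoice(channelToVoice[channel]);
        channelToVoice[channel] = kNoVoice;
    }
    void pitchBend(int channel, uint16_t bend14) {
        // Per-note expression only touches that channel's voice instance.
        if (channelToVoice[channel] != kNoVoice)
            setVoiceBend(channelToVoice[channel], bend14);
    }

    // Stubs standing in for the real voice-instance machinery.
    int  allocateVoice(int /*note*/, int /*velocity*/) { return 0; }
    void releaseVoice(int /*voice*/) {}
    void setVoiceBend(int /*voice*/, uint16_t /*bend14*/) {}
};
```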

A super exciting thing is that the MPE pitch bending support will work perfectly, based on the time dilation feature in the physics simulation. As an MPE note is bent, it will just continuously update the time dilation amount for the voice instance.
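In other words, each bend message just recomputes the voice's dilation. A minimal sketch, assuming the common +/-48 semitone MPE member-channel bend range (names are mine):

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical sketch: a 14-bit pitch bend (0..16383, center 8192)
// maps to a continuous dilation update for the bent voice.
float bendToDilation(int note, uint16_t bend14,
                     float bendRangeSemis = 48.0f, int baseNote = 60) {
    float semis = (static_cast<int>(bend14) - 8192) / 8192.0f * bendRangeSemis;
    return std::pow(2.0f, (note + semis - baseNote) / 12.0f);
}

// On each bend message: voice.dilation = bendToDilation(voice.note, bend14);
```

Because the physics keep simulating through the bend, the whole sound smoothly speeds up or slows down rather than being pitch-shifted after the fact.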

Today I ordered a used ROLI Seaboard 25 so that once I implement this MPE support I can actually test it. I know I'll be really bad at playing the ROLI but I'll do some demos for sure.

