Captain's Log: Stardate 77870.1

Today I broke ground on the "instanced voice mode" that will allow any instrument to be instantly turned into a polyphonic instrument with standard tuning. I think that this is the last "big feature" that I consider a must before I start working on making an early alpha test possible. There are still a ton of features I want to add for release, but once instanced voice mode works, I will be satisfied that it should be rich enough and fun for testing.

Right now my plan is to hack together the most basic version of the idea, taking lots of shortcuts, to make sure that the overall principle of using time dilation for the tuning works how I expect, that the implementation I plan for the GPU code works well, and so on. There are quite a few open design decisions, and to evaluate them I just need to get something working at a basic level. I've learned that my best guesses about which memory layouts will be most performant on the GPU are often wrong, so it's best to just experiment.
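To make the time-dilation idea a bit more concrete, here's a rough sketch with simplified, made-up names (not the real implementation): each polyphonic voice runs its own instance of the instrument's physics, but with a per-voice scaled time step, which transposes every resonant frequency of the model without editing any physical parameters.

```cpp
// Sketch only: hypothetical names, not the actual Anukari code.
#include <cmath>
#include <cstdio>

// Assume the instrument sounds at its edited ("natural") pitch when played
// at this reference MIDI note.
constexpr int kReferenceNote = 60;  // Middle C.

// Time dilation factor for a voice assigned the given MIDI note. One
// equal-tempered semitone is a frequency ratio of 2^(1/12), so scaling the
// simulation time step by that ratio shifts the pitch by one semitone.
float timeDilation(int midiNote) {
    return std::exp2(static_cast<float>(midiNote - kReferenceNote) / 12.0f);
}

int main() {
    const float baseDt = 1.0f / 48000.0f;  // Step size at the natural pitch.
    for (int note : {57, 60, 69}) {
        std::printf("note %d: dilation %.4f, effective dt %.9f\n",
                    note, timeDilation(note), baseDt * timeDilation(note));
    }
}
```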

The early hacky prototype of this will have only the most basic GUI support: the 3D view will simply display the instance of the instrument that was most recently used to play a polyphonic note. It's possible that this will be reasonably useful. I hope that's true, because all the other GUI options I've come up with carry significant performance trade-offs. The problem is that if you have, say, 16 voices, then 16x as much data needs to be sent from the audio thread to the GUI thread to display all the instances at once. This isn't a problem for the audio thread, since it only writes to a nonblocking SPSC queue. But it could be a problem for the GUI, because the audio thread has to drop 3D updates whenever the queue is full rather than block and risk dropping audio samples (stutter).
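For anyone curious what "drop instead of block" means here, this is a simplified sketch of the kind of nonblocking SPSC queue I have in mind (made-up types, not the actual code): the audio thread tries to push a visualization snapshot, and if the GUI hasn't drained the ring buffer fast enough, the update is simply discarded so the audio callback never waits.

```cpp
// Sketch only: hypothetical types, not the actual Anukari code.
#include <array>
#include <atomic>
#include <cstddef>

struct Snapshot3D { /* positions of the visible instance's objects, etc. */ };

template <std::size_t N>
class SpscQueue {
public:
    // Called from the audio thread. Never blocks; returns false (the update
    // is dropped) when the ring buffer is full.
    bool tryPush(const Snapshot3D& s) {
        const auto head = head_.load(std::memory_order_relaxed);
        const auto next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire)) return false;  // Full.
        buffer_[head] = s;
        head_.store(next, std::memory_order_release);
        return true;
    }

    // Called from the GUI thread. Returns false when there is nothing to show.
    bool tryPop(Snapshot3D& out) {
        const auto tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire)) return false;  // Empty.
        out = buffer_[tail];
        tail_.store((tail + 1) % N, std::memory_order_release);
        return true;
    }

private:
    std::array<Snapshot3D, N> buffer_{};
    std::atomic<std::size_t> head_{0};
    std::atomic<std::size_t> tail_{0};
};
```

With 16 voices the snapshots get 16x larger (or 16x more frequent), so the GUI is the side that suffers: it just sees fewer 3D frames when the queue backs up, while the audio keeps running glitch-free.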

