Captain's Log: Stardate 77971.1

Today I mostly worked on the changes required in Anukari's core data model to store per-instance data where applicable. This somewhat spiraled into a bigger project than I expected, but I feel like I shouldn't be surprised about that kind of thing at this point.

For the moment, the things that have to be stored per-instance in the core data model are all things MIDI-related, plus all modulation information (a lot of which is MIDI-related itself, or could be, depending on how modulators are linked together). The reason for the MIDI stuff is fairly obvious, since instancing happens per MIDI note ON event. It's perhaps a little less obvious that even the MIDI control data (including things like pitch wheel, etc.) has to be instanced. That isn't strictly necessary for basic voice instancing (which is what I'm working on at the moment), but it will become necessary when I get to implementing MPE support.
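To make the idea concrete, here's a minimal sketch of what per-instance MIDI state might look like. All names here (`VoiceMidiState`, `VoiceInstance`, the specific fields) are my own hypothetical illustrations, not Anukari's actual data model:

```cpp
#include <array>
#include <cstdint>

// Hypothetical sketch: MIDI state that has to live per voice instance
// rather than globally, so that each voice can eventually carry its own
// MPE expression values (pitch bend, pressure, per-voice CCs).
struct VoiceMidiState {
    uint8_t note = 0;                      // note number that spawned this voice
    uint8_t velocity = 0;                  // note-on velocity
    float pitchWheel = 0.0f;               // -1..+1, per-voice for MPE bend
    float channelPressure = 0.0f;          // per-voice aftertouch
    std::array<float, 128> controllers{};  // per-voice controller values
};

// One voice instance: its own MIDI state plus its own time dilation.
struct VoiceInstance {
    bool active = false;
    double startTime = 0.0;     // when the note ON arrived (useful for stealing)
    double timeDilation = 1.0;  // per-voice simulation rate scale
    VoiceMidiState midi;
};
```

The point is simply that fields which used to be single globals (pitch wheel, controller values) become members of each voice instance.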

Fortunately, pretty much all the infrastructural work I'm doing to support voice instancing is a direct prerequisite for MPE support. Both the instancing itself and the ability to dilate time are required: time dilation is needed so that it can be bound to the MPE pitch dimension for perfect pitch expression. Basically, once the current work is done, adding MPE support will be just a bit of GUI work, and then some fiddly bits around adding a third mode for how MIDI information gets routed to different voice instances. Some of that MIDI information will be global, and some will be instanced.
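As an aside on why time dilation gives pitch expression: my assumption (not stated in the post) is that if the whole physics simulation of a voice runs `d` times faster, every resonant frequency scales by `d`, so pitch shifts by `12 * log2(d)` semitones. Inverting that mapping gives the dilation factor for a desired bend. A sketch, with a hypothetical helper converting a standard 14-bit MIDI pitch bend value:

```cpp
#include <cmath>

// Time dilation factor that transposes a physical model by the given
// number of semitones, assuming pitch scales linearly with simulation
// rate: d = 2^(semitones/12).
double dilationForSemitones(double semitones) {
    return std::pow(2.0, semitones / 12.0);
}

// MIDI pitch bend arrives as a 14-bit value centered at 8192.
// `rangeSemitones` is the bend range; MPE commonly defaults to +/-48.
double dilationForBend(int bend14, double rangeSemitones) {
    double normalized = (bend14 - 8192) / 8192.0;  // map to -1..+1
    return dilationForSemitones(normalized * rangeSemitones);
}
```

So a bend of +12 semitones maps to running the voice's simulation at exactly twice the rate, which is why binding dilation to the MPE pitch dimension gives exact pitch tracking.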

Anyway, the data model changes for instancing seem to be pretty much done at this point. The next sub-project, which I've started on, is the code that actually does the routing from incoming MIDI note ON events to voice instances. Basically this code needs to allocate a voice (possibly stealing it from an old note), set it up with MIDI info, and give it the right time dilation setting.


© 2024 Anukari LLC, All Rights Reserved