First GPU implementation for LFOs
Captain's Log: Stardate 77393.2
Today I implemented the most basic LFO: a free-running (non-synced) LFO based on the existing oscillator code. This is wired up all the way through the GUI and totally works for what it is. Syncing and phase correction on frequency change are definitely needed, but already you can have a ton of fun with the LFOs. And of course the LFO parameters can be modulated by MIDI CC knobs (though that seems a bit glitchy, so there's more work to do on smoothing the parameter changes).
I made a super simple 3D gear model to represent the LFOs for now. That brings up a tough problem: the LFO visuals should obviously be animated in time with the LFO pulse, but right now the LFO calculations are all done on-GPU, and I don't yet have a way to get the LFO values back out of the GPU so they can e.g. be displayed in the GUI.
Like most of the other GPU code, the LFOs are stateless on the GPU, so their simulation uses zero GPU memory writes. The CPU gives them the right parameters at the beginning of a sample block, and the LFO is then a pure function of elapsed time. This is extremely performant, but it means that the LFO value is never stored anywhere that we can read it back!
My first thought was to just duplicate the LFO GPU code on the CPU and calculate the LFO state in the GUI. That would work, but it's a lot of duplicated code, and the code is not all that trivial. So now I'm leaning more towards having the GPU code write the LFO value to memory once per sample block. Then the CPU can read that data back from the GPU and use it for the GUI display. This is overall much simpler, but it makes the GPU code more complex, and that's the last place I want complexity. Still, I think it probably makes sense in the end.