Multichannel, ASIO, Radeon, and randomization
Captain's Log: Stardate 79000.1
Whoa, it's been way too long since I updated the devlog. Here goes!
2025 MIDI Innovation Awards
Really quickly: Anukari is an entry in the 2025 MIDI Innovation Awards, and I would really appreciate your vote. You can vote on this page by entering your email and then navigating to the Software Prototypes/Non-Commercial Products category and scrolling way down to find Anukari. You have to pick 3 products to vote in that category. (I wish I could link to the vote page directly, but alas, it's not built that way.)
The prize for winning would be a shared booth for Anukari at the NAMM trade show, which would be a big deal for getting the word out.
Multichannel I/O Support
A while back, Joe Williams from CoSTAR LiveLab reached out to me asking if Anukari had multichannel output support. Evidently the UK government is investing in the arts, which, to an American, is a pretty (literally) foreign concept. One of the labs working on promoting live performance is LiveLab, and they have a big 28-channel Ambisonic dome. Joe saw Anukari and thought it would be cool to create an instrument with 28 mics outputting to those 28 speaker channels.
I'd received several requests for multichannel I/O, but hadn't yet prioritized the work. The LiveLab use case is really cool, though, and Anukari will be featured in a public exhibit later this month, so I decided to prioritize the multichannel work.
Anukari now supports 50x50 input/output channels. In the standalone app, this is really simple: you just enable however many channels your interface supports, and then inside Anukari you assign each audio input exciter or mic to the channels you want.
It also works for the plugin, but how you utilize multichannel I/O is very DAW-dependent. Testing the new feature was kind of a pain in the butt, because I have about 15 DAWs for testing, and multichannel is a bit of an advanced feature, so I ended up watching a zillion tutorial videos. Every DAW approaches it a bit differently, and the UX is generally somewhat buried since it's a niche feature. But it works everywhere, and it is extremely cool to be able to map a bunch of mics to their own DAW tracks and give them independent effects chains and so on.
Behind the scenes, it was really important to me that the multichannel support did not impact performance, especially when it was not in use. I'm very happy to say I achieved this goal. When you're not using multichannel I/O, there is zero performance impact. And even in 50x50 mode the impact is very low. Anukari is well-suited for multichannel I/O since each mic is tapping into the same physics simulation at different points/angles, so none of the physics computations have to be repeated/duplicated. Really the only overhead is copying additional buffers into and out of the GPU. On the Windows CUDA backend, that's a single DMA memcpy, which is very fast. And on the macOS Metal backend, it's unified memory, so no overhead at all. All that remains is the CPU-CPU copy into the DAW audio buffers, which is very, very fast.
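To make the routing concrete, here's a minimal sketch of the kind of mic-to-channel mapping described above. The names (`route_block`, the argument shapes) are purely illustrative and not Anukari's actual API; this just shows the idea of each mic tapping the shared simulation and feeding its assigned output channels.

```python
# Hypothetical sketch of routing per-mic simulation output into a
# multichannel output buffer. Names are illustrative, not Anukari's API.

def route_block(mic_blocks, mic_channels, num_channels, block_size):
    """mic_blocks: list of per-mic sample lists (one audio block each).
    mic_channels: for each mic, the list of output channels it feeds.
    Returns num_channels lists of block_size samples."""
    out = [[0.0] * block_size for _ in range(num_channels)]
    for samples, channels in zip(mic_blocks, mic_channels):
        for ch in channels:
            dst = out[ch]
            for i, s in enumerate(samples):
                dst[i] += s  # mics assigned to the same channel are summed
    return out
```

Since every mic reads the same physics state, the only per-channel cost is this copy/sum, which matches the observation that the overhead is just moving buffers around.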
I look forward to posting about the LiveLab exhibit once it happens.
ASIO Support
It's a pretty big oversight that the Windows version of the Anukari Beta launched without ASIO support. I'm not quite sure how I missed this important feature, but I've added it now.
I think I always assumed it was there, but when using JUCE, ASIO support is not enabled by default, because you need to get a countersigned agreement from Steinberg to use their headers to integrate with ASIO. I already had a signed agreement with them for the VST3 support, but ASIO is a completely separate legal agreement, so I went through the steps to get that as well.
ASIO support makes the standalone app perform much better (in terms of latency) for people with ASIO-compatible audio interfaces.
AMD Radeon Crashes
Officially speaking, Anukari explicitly does not support AMD Radeon hardware. This is a bit of a long story, which at some point I will write about in more detail. But the short version is that the Radeon drivers are incredibly inconsistent across the Radeon hardware lineup, which makes it extremely difficult to offer full support. For some Radeon users, Anukari works perfectly, and for others it is unstable, glitchy, or crashes, in many different unique ways.
The story I'll write about for this devlog entry, though, is the extremely frustrating case that I solved for users that have both an AMD Radeon and an NVIDIA graphics card in the same machine. This is actually a common situation, because many (most? all?) AMD Ryzen CPUs include integrated Radeon graphics on the CPU die. So for example there are a lot of laptops that come with an NVIDIA graphics card, but also have a sort of "vestigial" Radeon in the CPU that is normally not used for anything.
In the past, Anukari just worked for users with this configuration: when it detected multiple possible GPUs to use for the simulation, it would automatically select the CUDA one as the default. However, in the 0.9.6 release, Anukari began crashing instantly at startup for these users.
This was pretty confusing, because I have comprehensive fuzz and golden tests that exercise all the physics backends (CUDA, OpenCL, Metal). These tests abuse the simulation to an extreme extent, and I run them under various debugging/lint tools to make sure that there are no GPU memory errors, etc. And on my NVIDIA, Apple, and Intel Iris hardware, they all pass perfectly.
Luckily I had a user who was extremely generous with their time in helping me debug the issue. I sent them instrumented Anukari binaries, and eventually I was able to pinpoint that it was crashing inside the clBuildProgram() call.
Now, you might think that what I mean is that clBuildProgram() was returning an error code and I was somehow not handling it. No, Anukari is extremely robust about error checking. I mean that the crash was happening inside the driver, and clBuildProgram() was not returning at all because the process aborted. This is with perfectly valid arguments to the function. So, obviously, this is a horrible bug in the AMD drivers. Even if the textual content of the kernel has e.g. a syntax error, clBuildProgram() should clearly return an error code rather than crash.
The really fun part is that I've only seen this crash on the hardware identifying as gfx90c. On other Radeons, this does not happen (though some of them fail in other ways). This is what I mean about the AMD drivers being extremely inconsistent.
Now, as to why this crash happened at startup: during device discovery, Anukari was compiling the physics kernel, and any device where compilation failed would be assumed incompatible and omitted from the list of possible backends. I added this feature after encountering other broken OpenCL implementations like the Microsoft OpenCL™, OpenGL®, and Vulkan® Compatibility Pack, which is an absolute disaster.
So the workaround for now is that Anukari no longer does a test compilation to detect bad backends. This resolves the issue, although if the user manually chooses the Radeon backend on gfx90c it will unrecoverably crash Anukari.
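The old discovery filter and the workaround can be sketched roughly like this. This is purely illustrative pseudologic, not Anukari's real code: in the sketch a failed compile raises a catchable exception, but the whole point of the gfx90c bug is that the driver aborts the entire process, which no try/except can catch.

```python
# Illustrative sketch (not Anukari's actual code) of device discovery with
# an optional test-compile filter. On broken drivers like gfx90c, the
# compile call aborts the process instead of failing cleanly, so the filter
# itself became the crash -- hence the workaround of skipping it.

def discover_backends(devices, try_compile, test_compile=True):
    """devices: list of device names. try_compile: callable returning
    True/False for a successful kernel compile on that device."""
    usable = []
    for dev in devices:
        if test_compile:
            try:
                if not try_compile(dev):
                    continue  # compile returned an error: skip device
            except RuntimeError:
                continue  # compile failed cleanly: skip device
            # NOTE: a hard abort inside the driver cannot be caught here,
            # which is why the test compile had to be removed entirely.
        usable.append(dev)
    return usable
```

With `test_compile=False` (the current workaround), every device is listed, at the cost that manually selecting a broken backend can still crash.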
Longer-term, given the Radeon driver bugs, I doubt I'll ever be able to fully support gfx90c, but I ordered a cheap used laptop off eBay with that chip in it so that I can at least narrow down what OpenCL code is causing the driver to crash. I know that it's something the driver doesn't like about the OpenCL code because it did not always crash, and the only difference in the meantime has been some improvements to that code. Hopefully I can find a workaround to avoid the driver bug, but if not I might add a rule in Anukari to ignore all gfx90c chips.
(Side-note: actually the first used laptop with a gfx90c chip that I bought off eBay was bluescreening at boot, so I had to buy a second one. These inexpensive Radeon laptops are really bad.)
Not all hope is lost for Radeon support. I recently upgraded my main development machine, and the Ryzen CPU I bought has an on-die Radeon, and it works flawlessly with Anukari. So maybe what I will be able to do one day is create an allow-list for Radeon devices that work correctly without driver issues. Sigh. It is so much easier with NVIDIA and Apple.
Parameter Randomization
Unlike the features above, this one hasn't been released yet, but I recently completed work to allow parameters to be randomized.
For Anukari this turned out to be a bit of a design challenge, since the sliders that are used to edit parameters are a bit complex already. The tricky bit is that if the user has a bunch of entities selected, the slider edits them all. And if the parameter values for each entity vary, the slider turns into a "range editor" which can stretch/squeeze/slide the range of values.
So the randomize button needs to handle both the "every selected object has the same parameter value" and "the parameter varies" scenarios. For the first scenario with a singleton value, it's simple: pressing the button just picks a random value across the full range of the parameter and assigns it to all the objects.
But for the "range editor" scenario, what you really want is for the randomize button to pick different random values for each entity, within the range that you have chosen. There's one tricky issue here, which is that it is very normal for the user to want to mash the randomize button repeatedly until they get a result they like. This will result in the range of values shrinking each time (since it's very unlikely that the new random values will have the same range as before, and the range can only be smaller)!
So the slider needs to remember the original range when the user started mashing the randomize button, and to reuse that original range for each randomization. This allows button mashing without having the range shrink to nothing. It's important, though, that this remembered range is forgotten when the user adjusts the slider manually, so that they can choose a new range to randomize within.
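The remembered-range behavior can be sketched as follows. The class and method names are hypothetical, not Anukari's actual code; the point is that the range is captured on the first randomize, reused on every subsequent press, and cleared by a manual edit.

```python
import random

# Hypothetical sketch of the range-randomize behavior: the slider remembers
# the range that was showing when the user first pressed randomize, and
# reuses it on every press so the range can't shrink under repeated mashing.
# A manual slider edit forgets the remembered range.

class RangeRandomizer:
    def __init__(self):
        self._remembered = None  # (lo, hi) captured on first randomize

    def randomize(self, values):
        if self._remembered is None:
            self._remembered = (min(values), max(values))
        lo, hi = self._remembered
        return [random.uniform(lo, hi) for _ in values]

    def manual_edit(self):
        self._remembered = None  # user moved the slider: forget the range
```

Without the remembered range, each press would draw from the previous press's (almost certainly narrower) min/max, and repeated mashing would collapse the spread toward a point.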
Another kind of weird case is when the slider is currently in singleton mode, meaning that all the entities have the same parameter value, and the user wants to spread them out randomly over a range. This could be done by deselecting the group of entities, selecting just one of them, changing its value, then reselecting the whole group, which would put the slider into range mode. But that's awfully annoying.
I ended up adding a feature where you can now right-click on a singleton slider, and it will automatically be split into a range slider. The lower/upper values for the range will be just slightly below/above the singleton value, and the values will be randomly distributed inside that range. So now you can just right-click to split, adjust the range, and mash the randomize button.
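A minimal sketch of that split action might look like this. The 2% spread and the function name are assumptions for illustration, not Anukari's actual values.

```python
import random

# Hypothetical sketch of the right-click "split" action: a singleton value
# becomes a narrow range around it, with each entity's value randomly
# distributed inside that range. The 2% spread is illustrative only.

def split_singleton(value, count, param_lo, param_hi, spread=0.02):
    half = (param_hi - param_lo) * spread / 2.0
    lo = max(param_lo, value - half)  # clamp to the parameter's bounds
    hi = min(param_hi, value + half)
    return [random.uniform(lo, hi) for _ in range(count)]
```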
CPack considered harmful
Captain's Log: Stardate 78553.4
Stability
Since my last devlog update, one big thing I did was to just run the chaos monkey 24/7, fixing the crashes it caused until it couldn't crash Anukari any longer. It found some highly interesting issues, including a divide-by-zero bug in the graphics code that has existed since probably the 3rd week of development on Anukari. In each case where it found a crash, I tried as much as possible to generalize my fixes to cover problems more broadly, and this strategy seems to have paid off: as of this writing, the chaos monkey hasn't crashed Anukari in about 48 hours of continuous running.
AnukariEffect
In between solving new crashes from the chaos monkey, I continued to work on launch-blockers, one of which was finally creating a second version of the plugin that allows it to be used as an effects module in a signal chain (rather than as an instrument). This is a bit annoying, because the VST3 plugin actually is perfectly capable of being used in either context, since it dynamically determines how many audio inputs it has, etc. But most DAWs simply don't support plugins that can be used either way.
So now there is a second AnukariEffect plugin. This works great, and it's really nice to be able to just drop AnukariEffect into a track as an effect without having to do complicated sidechain stuff to use it that way. I made a couple of initial effect presets, and already it's producing some extremely cool sounds. I'm very excited about this.
A bunch of small work remains to make AnukariEffect nice to use. For example, the GUI needs to have some subtle changes, like adding a wet/dry slider, hiding controls that have to do with irrelevant MIDI inputs, etc. Also, because it doesn't receive MIDI input, AnukariEffect will only do singleton instruments and not voice-instanced instruments, so I need to put some thought into how to handle edge cases like what to do when the user loads a voice-instanced instrument in AnukariEffect. I think it will likely get converted to a singleton with a warning message to the user. But I need to experiment a bit to find what feels right.
CPack
The introduction of a second VST3 (and AU) plugin necessitated some changes to the installers. Also, separately I am working on getting an AAX plugin up and running. So I realized that now is the time to really get the installers working correctly, allowing the user to e.g. install only VST3 and not AAX, etc.
I had originally used CMake's CPack for generating the installers, using the INNOSETUP generator for Windows and the productbuild generator for macOS. This seemed really convenient, because I didn't want to learn how to use Inno Setup / productbuild directly, and it looked like CPack could generate the installers without me having to get into the weeds.
In the end, I really wish I had never tried using CPack's generators for this. They are horrible. Basically the problem is that both Inno Setup and productbuild have fairly rich configuration languages, and CPack's generators expose perhaps 10% of the features of each one in a completely haphazard way. So it seems convenient, but then the second you need to configure something about the installer that the CPack authors did not expose as an option, you're completely hosed. Originally I tried to work around the CPack limitations with horrible hacks, such as a bash script that wrapped pkgbuild and took some special actions. But this was a complicated mess and didn't work well.
So for both Windows and macOS, I decided to just bite the bullet and learn how to use Inno Setup and productbuild/pkgbuild directly. And as it turns out, in both cases it is much simpler to go straight to the nuts and bolts without the CPack generators. It resulted in less config code overall, with less indirection and no hacks, and I was able to configure the installers exactly how I wanted.
Frankly, at this point I can't see any argument for why anyone would want to use CPack. It's substantially more complicated/obfuscated/indirect, it limits you to an eclectic subset of each installer's features, and it truly was harder to learn how to configure CPack than to just figure out Inno Setup and productbuild/pkgbuild. The documentation for the installer tools is way better than CPack's, and to use CPack you kind of have to understand the installer tools anyway.
So the end result of rewriting the Windows and macOS installers without CPack is that they both work how I want now, and they will be a lot easier to maintain as I continue to get closer to release, adding the AAX plugin and so on. I'm very happy that installers are now a "solved problem" -- one more box checked for the launch.
Getting into the usability weeds
Captain's Log: Stardate 78505.1
It's been a while since I posted! This is partly due to the holidays, but also partly due to my work recently being pretty scattered and piecemeal, so I haven't felt like there was anything super-cohesive to write about.
The main thing that I've been doing is working on Anukari's usability. Most of the big UX things I want are implemented, so now I am focusing on the little details.
My wife Meg was kind enough to do a UX study, which surfaced many small issues. I gave her only some written instructions on tasks to complete, and she was able to do nearly all of them without any help, which was pretty amazing to see, since even a few months ago I think that would have been unrealistic. I watched her carefully, and she also took notes, and this one 30-minute session resulted in a couple dozen UX improvement ideas.
With most of those improvements done, I decided that the next thing I should do to find UX problems is to start building the factory presets that will ship with Anukari. So far I've created about 50 presets, and it has definitely been a useful process. The high-order bit is that actually things are working pretty well. That's really nice to see. But there were a bunch of small things that, while they didn't prevent me from getting anything done, added enough friction to be mildly annoying. And when all those little papercuts are added up, they are still pretty problematic.
I won't list all the tiny UX issues here (the release notes will more-or-less cover them), but I'll talk about one of the most persistent long-term issues: physics explosions.
Physics Explosions
In any discrete physics simulation, there are going to be parameters where extreme values can cause problems. At a fundamental level, you can run into things like floating point error, and that is a real thing for Anukari, but it's mostly been easy to avoid by simply providing reasonable bounds for all the parameters (which requires thinking about how parameters interact, what kind of floating point operations are done on them, etc).
The bigger problem for Anukari has always been that there are situations where the fixed time step (one step per audio sample) is just too big for a given set of parameters, so the simulation can get into a state where the error grows with each step, which very quickly gets out of control and causes a complete explosion. The introduction last year of time dilation for voice-instanced polyphony made this situation much more apparent, because higher notes require exponentially larger time steps. Each octave doubles the amount of time the simulation needs to advance for each discrete step.
I had a few theories about where things were going wrong, and investigated them pretty deeply this last week. It turns out that there's nothing discontinuous happening; i.e. no float is going to infinity or NaN. There's no overflow or underflow. Nothing that subtle is required for the explosions. It really is just that the time steps are getting too big.
For Anukari this is not really a solvable problem. The simulation uses first-order integration, and that's already extremely computationally expensive. A while back, I experimented with implementing it as RK4 integration, which is a fourth-order method and thus substantially more accurate. And it was more accurate, but the work required to compute all the extra derivatives outweighed the accuracy benefits. It was just too slow to consider. The most reasonable option in Anukari is simply to use smaller time steps, which the user can do by setting the sample rate to something higher. This does what you might expect -- doubling the sample rate gives you an extra octave of usable simulation without explosions.
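The octave/sample-rate trade-off above can be written down as a tiny bit of arithmetic. The reference-note framing and exact formula are my assumptions for illustration, not Anukari's exact internals, but they capture the stated relationship: each octave above a reference doubles the simulated time per step, while raising the sample rate shrinks the base step.

```python
# Illustrative arithmetic for the time-dilation trade-off (the formula is
# an assumption, not Anukari's exact internals): one step normally covers
# 1/sample_rate of simulated time, scaled up by 2^(semitones/12) for
# higher notes under time dilation.

def effective_time_step(sample_rate_hz, semitones_above_ref):
    base_dt = 1.0 / sample_rate_hz
    return base_dt * 2.0 ** (semitones_above_ref / 12.0)
```

In this model, doubling the sample rate exactly cancels one octave (12 semitones) of dilation, matching the "extra octave of usable simulation" observation.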
Anyway, Anukari is plenty usable within this limitation. But there is a big usability issue where you are having a great time playing an instrument, and then you happen to hit a note that's just a little too high, and the physics explode. In voice-instanced mode, only one of the voice instances will explode, but it will stay broken until you do something about it. So the moment you hit a "bad" note all the fun stops. I want to make sure that you can't just hit a bad note and break things.
I did a bunch of experiments with adding various limits inside the simulation. For example, I added a terminal velocity -- a hard cap on how fast a mass could move. This did indeed solve the explosions, but at a cost: a few perfectly stable presets relied on very fast velocities for cool sounds. So the safety of terminal velocity would come at the cost of reduced flexibility -- the instrument would be less capable.
I tried all kinds of things to make the terminal velocity idea work. I used a smooth saturation limit instead of a hard cap, I tried terminal acceleration, terminal per-step dv/dt, etc. Most of these ideas had even bigger problems than straight velocity capping, and certainly they all reduced flexibility by imposing limits on the simulation.
There were other issues with velocity capping. When velocities were just under the cap, you could get really weird situations where a system would gain enough energy to hit the cap, get capped, slow down, build back up, and oscillate between hitting the cap and slowing down. In some cases this led to instruments that permanently rang out; damping became ineffective.
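For the curious, the "smooth saturation" variant mentioned above is typically something like a tanh soft clip. This is a sketch of the rejected idea, not what shipped; the function names are illustrative.

```python
import math

# Sketch of the rejected terminal-velocity ideas: a hard clamp, and a
# "smooth saturation" variant where velocity approaches v_max
# asymptotically via tanh instead of hitting a wall. (Illustrative only;
# these approaches were tried and rejected for reducing flexibility.)

def hard_cap(velocity, v_max):
    return max(-v_max, min(v_max, velocity))

def soft_cap(velocity, v_max):
    return v_max * math.tanh(velocity / v_max)
```

The soft version avoids the hard discontinuity, but it still distorts legitimate fast motion well below the cap, which is one reason capping hurt otherwise-stable presets.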
Automatic Physics Explosion Mitigation
Ultimately I decided that I didn't want a solution that limited the flexibility of the instrument. So I came up with a way to automatically mitigate explosions without capping velocity, acceleration, etc.
The solution is quite simple: over the course of simulating the physics for an audio block, the GPU simulator keeps track of the highest velocity observed for each mass. Then on the CPU side, the max velocity is compared to a very high threshold, and if any mass in a voice instance exceeds it, that voice instance is hard-reset to initial conditions and its audio is dropped for the last block. The user also gets a little toast notification about what happened.
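The per-block check might be sketched like this. The names and the threshold value are hypothetical, not Anukari's actual code; the structure mirrors the description: GPU reports per-mass max velocities, CPU compares against a very high threshold, and an offending voice is reset with its audio dropped.

```python
# Hypothetical sketch of the explosion mitigation. The simulator reports
# the max velocity seen for each mass during the block; if any mass in a
# voice exceeds a very high threshold, the voice is hard-reset and its
# audio for that block is muted. Threshold value is illustrative.

EXPLOSION_THRESHOLD = 1.0e6  # far above any realistic instrument velocity

def check_voice(max_velocities, audio_block, reset_voice, notify):
    if any(v > EXPLOSION_THRESHOLD for v in max_velocities):
        reset_voice()  # hard reset this voice to initial conditions
        notify("Physics explosion detected; voice was reset.")
        return [0.0] * len(audio_block)  # drop the exploded block's audio
    return audio_block
```

Because the threshold is so far above anything a stable instrument produces, false positives are effectively impossible, which matches the testing described below.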
What this means is that if you play too high of a note, from your perspective, nothing happens; it just doesn't work. The explosion gets detected immediately and the voice is reset without making any noise. This is much better than having the note explode, requiring you to do something to fix it. You just find that the range of your instrument is a bit limited.
This works extremely well, and the velocity threshold can be so high that there's no realistic instrument that would hit it. I tried some extreme examples and it does not seem possible to trigger a false positive for the explosion detection.
Now, I said that this mitigation happens without any noise. That's not entirely true. In some cases, you can hear a small click due to the discontinuous reset. This isn't a huge deal, but I want to make the factory presets work perfectly, so I also added a feature to set the MIDI note range that the instrument responds to. Thus for "perfect" presets, you build the instrument, figure out where its usable range is, and then limit it to only those notes. This isn't completely necessary, but it guarantees super stable instruments with absolutely no clicking or anything. Woohoo!