
Getting into the usability weeds

Captain's Log: Stardate 78505.1

It's been a while since I posted! This is partly due to the holidays, but also partly due to my work recently being pretty scattered and piecemeal, so I haven't felt like there was anything super-cohesive to write about.

The main thing that I've been doing is working on Anukari's usability. Most of the big UX things I want are implemented, so now I am focusing on the little details.

My wife Meg was kind enough to do a UX study, which surfaced many small issues. I gave her only written instructions for the tasks to complete, and she was able to do nearly all of them without any help, which was pretty amazing to see; even a few months ago I think that would have been unrealistic. I watched her carefully, she also took notes, and this one 30-minute session resulted in a couple dozen UX improvement ideas.

With most of those improvements done, I decided that the next way to find UX problems was to start building the factory presets that will ship with Anukari. So far I've created about 50 presets, and it has definitely been a useful process. The high-order bit is that things are actually working pretty well. That's really nice to see. But there were a bunch of small things that, while they didn't prevent me from getting anything done, added enough friction to be mildly annoying. When all those little papercuts add up, they're still pretty problematic.

I won't list all the tiny UX issues here (the release notes will more-or-less cover them), but I'll talk about one of the longest-standing ones: physics explosions.

Physics Explosions

In any discrete physics simulation, there are going to be parameters where extreme values can cause problems. At a fundamental level, you can run into things like floating point error, and that is a real thing for Anukari, but it's mostly been easy to avoid by simply providing reasonable bounds for all the parameters (which requires thinking about how parameters interact, what kind of floating point operations are done on them, etc).

The bigger problem for Anukari has always been that there are situations where the fixed time step (one step per audio sample) is just too big for a given set of parameters, so the error grows with each step, very quickly gets out of control, and causes a complete explosion. The introduction last year of time dilation for voice-instanced polyphony made this much more apparent, because higher notes require exponentially larger time steps: each octave doubles the amount of time the simulation needs to advance per discrete step.
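To make the octave-doubling concrete, here's a minimal sketch of how time dilation scales the per-sample step with pitch. The function names and reference note are my own illustration, not Anukari's actual internals:

```python
# Sketch of per-voice time dilation: one simulation step per audio sample,
# scaled so higher notes advance the physics proportionally faster.
# Hypothetical names; Anukari's real implementation differs.

def note_frequency_hz(midi_note: int) -> float:
    """Equal-temperament frequency for a MIDI note (69 = A4 = 440 Hz)."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

def dilated_time_step(midi_note: int, sample_rate_hz: float,
                      ref_note: int = 69) -> float:
    """Step size for one audio sample, dilated by pitch relative to ref_note."""
    base_dt = 1.0 / sample_rate_hz
    return base_dt * note_frequency_hz(midi_note) / note_frequency_hz(ref_note)
```

One octave up (12 semitones) doubles the dilated step, which is exactly why high notes push the simulation toward instability, and why doubling the sample rate buys back one octave.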

I had a few theories about where things were going wrong, and investigated them pretty deeply this last week. It turns out that there's nothing discontinuous happening; i.e. no float is going to infinity or NaN. There's no overflow or underflow. Nothing that subtle is required for the explosions. It really is just that the time steps are getting too big.

For Anukari this is not really a solvable problem. The simulation uses first-order integration, and that's already extremely computationally expensive. A while back, I experimented with implementing it as RK4 integration, which is fourth-order and thus substantially more accurate. It was more accurate, but the cost of evaluating all the extra derivatives outweighed the accuracy benefits. It was just too slow to consider. The most reasonable option in Anukari is simply to use smaller time steps, which the user can do by setting the sample rate higher. This does what you might expect -- doubling the sample rate gives you an extra octave of usable simulation without explosions.
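You can see the step-size failure mode in a toy first-order integrator. This is a generic explicit-Euler mass-spring, not Anukari's simulator, but it shows how a too-large dt makes the error compound each step until the system explodes:

```python
# Toy first-order (explicit Euler) mass-spring simulation. With a small
# time step the oscillation stays near its true amplitude of 1.0; with a
# large step, per-step error compounds and the state blows up.

def peak_displacement(dt: float, steps: int, k: float = 1.0,
                      m: float = 1.0) -> float:
    x, v = 1.0, 0.0  # initial displacement and velocity
    peak = 0.0
    for _ in range(steps):
        a = -(k / m) * x  # spring acceleration at the current position
        x += v * dt       # explicit Euler position update
        v += a * dt       # explicit Euler velocity update
        peak = max(peak, abs(x))
    return peak
```

With k = m = 1, `peak_displacement(0.001, 1000)` stays close to 1.0, while `peak_displacement(2.5, 200)` grows astronomically -- the same qualitative behavior as hitting a note whose dilated time step is too big.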

Anyway, Anukari is plenty usable within this limitation. But there is a big usability issue where you are having a great time playing an instrument, and then you happen to hit a note that's just a little too high, and the physics explode. In voice-instanced mode, only one of the voice instances will explode, but it will stay broken until you do something about it. So the moment you hit a "bad" note all the fun stops. I want to make sure that you can't just hit a bad note and break things.

I did a bunch of experiments with adding various limits inside the simulation. For example, I added a terminal velocity -- there was a hard cap on how fast a mass could move. This did indeed solve the issue of explosions, but there was a cost, which is that there were a few perfectly-stable presets that relied on very fast velocities for cool sounds. So the safety of terminal velocity would come at the cost of reduced flexibility -- the instrument would be less capable.

I tried all kinds of things to make the terminal velocity idea work. I used a smooth saturation limit instead of a hard cap, I tried terminal acceleration, terminal per-step dv/dt, etc. Most of these ideas had even bigger problems than straight velocity capping, and certainly they all reduced flexibility by imposing limits on the simulation.
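For illustration, a smooth saturation limit of the kind described might use tanh to squash velocity so it approaches, but never exceeds, a terminal value. This is my sketch of the general technique, not Anukari's code:

```python
import math

# Smooth velocity saturation: small velocities pass through almost
# unchanged (tanh(x) ~= x near zero), while large ones asymptotically
# approach v_max instead of being hard-clamped.

def soft_limit_velocity(v: float, v_max: float) -> float:
    return v_max * math.tanh(v / v_max)
```

The near-identity behavior at small velocities is why most stable presets are unaffected, but as described above, any such limit still distorts the dynamics of presets that legitimately rely on very fast motion.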

There were other issues with velocity capping. When velocities were just under the cap, you could get really weird situations where a system would gain enough energy to hit the cap, get capped, slow down, build back up, and oscillate between hitting the cap and slowing down. In some cases this led to instruments that permanently rang out; damping became ineffective.

Automatic Physics Explosion Mitigation

Ultimately I decided that I didn't want a solution that limited the flexibility of the instrument. So I came up with a way to automatically mitigate explosions without capping velocity, acceleration, etc.

The solution is quite simple: over the course of simulating the physics for an audio block, the GPU simulator keeps track of the highest velocity observed for each mass. Then on the CPU side, the max velocity is compared to a very high threshold, and if any mass in a voice instance exceeds it, that voice instance is hard-reset to initial conditions and its audio for that block is dropped. The user also gets a little toast notification about what happened.
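The end-of-block check might look something like this sketch. The structure, names, and threshold value are my own assumptions; in the real thing the per-mass maxima come back from the GPU:

```python
# Sketch of the per-block explosion check. The threshold is an assumed
# placeholder value, set far above anything a musical preset would reach.

EXPLOSION_THRESHOLD = 1.0e6

def voice_exploded(max_velocities, threshold=EXPLOSION_THRESHOLD):
    """True if any mass in the voice exceeded the velocity threshold."""
    return any(v > threshold for v in max_velocities)

def end_of_block(voice_max_velocities, reset_voice, drop_block_audio,
                 notify_user):
    """Check each voice's per-mass max velocities after an audio block."""
    for voice_id, max_vels in voice_max_velocities.items():
        if voice_exploded(max_vels):
            reset_voice(voice_id)       # hard-reset to initial conditions
            drop_block_audio(voice_id)  # discard this voice's block audio
            notify_user(voice_id)       # toast notification
```

Only the exploding voice instance is reset; the other voices keep playing untouched.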

What this means is that if you play too high a note, from your perspective nothing happens; the note just doesn't sound. The explosion is detected immediately and the voice is reset without making any noise. This is much better than having the note explode and requiring you to do something to fix it. You just find that the range of your instrument is a bit limited.

This works extremely well, and the velocity threshold can be so high that there's no realistic instrument that would hit it. I tried some extreme examples and it does not seem possible to trigger a false positive for the explosion detection.

Now, I said that this mitigation happens without any noise. That's not entirely true. In some cases, you can hear a small click due to the discontinuous reset. This isn't a huge deal, but I want to make the factory presets work perfectly, so I also added a feature to set the MIDI note range that the instrument responds to. Thus for "perfect" presets, you build the instrument, figure out where its usable range is, and then limit it to only those notes. This isn't completely necessary, but it guarantees super stable instruments with absolutely no clicking or anything. Woohoo!
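The note-range limit amounts to a simple filter on incoming MIDI notes. A minimal sketch, with hypothetical names:

```python
# Sketch of a per-preset MIDI note range filter: note-on events outside
# the instrument's usable range are simply ignored.

def note_in_range(note: int, low: int, high: int) -> bool:
    return low <= note <= high

def filter_note_ons(note_ons, low, high):
    """Keep only note-on events within the preset's configured range."""
    return [n for n in note_ons if note_in_range(n, low, high)]
```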

Digging into usability

Captain's Log: Stardate 78422.5

Recently I've started to feel pretty good about Anukari's reliability and compatibility with a broad cross-section of DAWs. Having 12+ DAWs set up to test against has helped, and the chaos monkey has also really started to give me confidence that it's getting harder to crash the plugin. Performance is also looking pretty good on many machines. So finally I'm feeling like I can spend some time making it more usable. This is something I've wanted to work on for quite a long time, but when a piece of software is crashing a lot, it's hard to argue that the crashes themselves aren't the biggest usability issue, so that came first.

The Entity Palette

The first big improvement I made this week was adding the "entity palette," a horizontal menu at the bottom of the 3D view that presents little thumbnails of all the different kinds of entities you can create, which can be dragged into the 3D world to place them. Previously you had to use either the right-click context menu or hotkeys to create entities, and that only allowed the creation of top-level entity types. For example, to create an Oscillator exciter, you had to create a Mallet exciter and then change its type to Oscillator. Now you can just scroll over to the little picture of an Oscillator and drag it into the world. Here's what it looks like:

The thumbnails are a huge help in terms of figuring out what you want. They're way easier to work with than just a textual list. They are automatically rendered based on the actual 3D skin that you've selected, so the visuals you see in the palette match what you'll actually get in the 3D world. This made it substantially more complex to build, but I feel that it was worth it so that the visuals are consistent. Also, I think it's just a nice touch.

Another really helpful part of the palette is that it provides a place to add tooltips for all of the entities. Anukari's interface has problems with discoverability; there is a lot of complex stuff you can do, but historically there hasn't been any way to learn about it, other than perhaps watching a tutorial video. Now you can go through the palette and mouse over any of the entities to get a quick idea of what it does and how it functions. In particular, the tooltips discuss how entities need to be connected to one another, a tricky issue that calls for another solution.

Connection Highlighting

In Anukari, entities that interact with one another have to be explicitly connected via springs or various kinds of "links." It's not immediately obvious how this works; for example, springs can only be connected to Bodies and Anchors. Exciters can be connected to induce vibration in free Bodies but not Anchors, but also they can be connected to Modulators to have their parameters modulated. Even more complex systems exist, such as Delay Lines, which connect to entities that operate on waveforms.

Previously, the only way to learn what kinds of connections were possible was to select an entity, choose the "connect" command (or hotkey), and when you dragged the resulting link around it would only snap to the entities that were valid connections.

The new system I've implemented is much better: whenever you are placing a link, all of the entities you can connect it to are automatically highlighted, and the highlight is color-coded by the kind of link you'd get if you made the connection. The really cool thing is that with the entity palette, you can drag in a specific kind of link and immediately see what you can do with it. This makes the link system dramatically more discoverable.

Here's an example of the highlighting in action. In this video I drag a spring from the palette into the world. Immediately it highlights all the entities that it could be attached to, which is all Bodies and Anchors. I drop it on an Anchor, and the set of highlighted entities changes: Anchors can't be connected to other Anchors, so only the Bodies are highlighted.

Discoverable Camera Controls

For a long time it's irked me that the mouse/keyboard controls used to move the 3D camera around are so obscure. It's kind of an unavoidable problem to some extent, because between zoom, x/y rotation, and x/y/z pan, the user has control over 6 degrees of freedom, and a touchpad only has 2 degrees of freedom. Gestures help with this a bit, for example two-finger scroll adds a degree of freedom. And on a mouse you can use the wheel for that. But ultimately it's difficult to avoid the use of modifier keys to access all the camera dimensions.

During the pre-alpha, I've had to point users to a document that lists all the camera controls. I'm pretty sure that this is super not-fun. So it's been a goal for a long time to make the controls discoverable in the plugin itself. But I wanted something better than just a help menu that parrots what's in the doc.

Finally, what I ended up with is a system loosely based on a feature in Blender (3D modeling software that has to solve this problem): there are now icons hovering over the 3D viewport that you can drag with the mouse (or touchpad) to adjust the camera. There's a zoom icon, a rotate icon, and a pan icon. Each of them highlights on mouseover and changes the cursor to an open hand, which becomes a grabbing hand when you click. So it's fairly obvious that you can interact with them by dragging, and once you try it, it's quite obvious what each one does. The 2D icon graphics need improvement, but they're pretty communicative at this point.

I think for a lot of users, dragging these icons will be a fine way to control the camera without any further learning. But for power users the icons serve an additional purpose: like the entity palette, they provide a place to put tooltips, and in the tooltips, the hotkeys are explained for advanced users:

There's still a lot of work to do to make Anukari easy to use. One thing that I've been experimenting with is how to show that snap-to-grid is enabled in the 3D view, since this has been a source of confusion in the past (a user created new entities and was confused by their placement -- it was because they forgot that snap-to-grid was on). I've tried a few kinds of visuals but so far haven't found anything that is helpful and also looks good. Another thing I want to do is to automatically highlight buttons in the GUI when certain events occur, to hint to the user what to do next. For example, the circuit breaker feature can optionally pause the simulation if it detects a physics explosion. It pops up a message explaining this, but I'd also like the button that resets the simulation to pulse/highlight in some way, so that the user will immediately see what to do.

But even with these few improvements, I'm starting to be quite optimistic that I'll be able to make the experience for new users pretty fun.

The chaos monkey lives

In the last couple of days I finally got around to building the "chaos monkey" that I've wanted to have for a long time. The chaos monkey is a script that randomly interacts with the Anukari GUI with mouse and keyboard events, sending them rapidly and with intent to cause crashes.

I first heard about the idea of a chaos monkey from Netflix, who have a system that randomly kills datacenter jobs. This is a really good idea, because you never actually know that you have N+1 redundancy until one of the N jobs/servers/datacenters actually goes down. Too many times I have seen systems that supposedly had N+1 redundancy die when just one cluster failed, because nobody had tested it, and surprise, the configuration somehow actually depended on all the clusters being up. Netflix has the chaos monkey, and at Google we had DiRT testing, where we simulated things like datacenter failures on a regular basis.

But the "monkey" concept goes back to 1983 with Apple testing MacPaint. Wikipedia claims that the Apple Macintosh didn't have enough resources to do much testing, so Steve Capps wrote the Monkey program which automatically generated random mouse and keyboard inputs. I read a little bit about the original Monkey and it's funny how little has changed since then. They had the problem that it only ran for around 20 minutes at first, because it would always end up finding the application quit menu. I had the same problem, and Anukari now has a "monkey mode" which disables a few things like the quit menu, but also dangerous things like saving files, etc.

The Anukari chaos monkey is decently sophisticated at this point. It generates all kinds of random mouse and keyboard inputs, including weird horrible stuff like randomly adding modifiers and pressing keys during a mouse drag. It knows how to move and resize the window (since resizing has been a source of crashes in the past). It knows about all the hotkeys that Anukari supports, and presses them all rapidly. I really hate watching it work because it's just torturing the software.

The chaos monkey has already found a couple of crashes and several less painful bugs, which I have fixed. One of the crashes was something I completely didn't expect and didn't think was possible, having to do with keyboard hotkey events deleting entities while a slider was being dragged to edit those entities' parameters. I never would have tested this manually because I didn't think it was possible.

The chaos monkey itself is pretty simple; the biggest challenge was keeping it from wreaking havoc on my workstation. I'm using pyautogui, which generates OS-level input events, meaning the events get sent to whatever window is active. So at the start, if Anukari crashed, the chaos monkey would start torturing e.g. VSCode or Chrome. It was horrible, and a couple of times it got loose and went crazy. It even figured out how to send OS-level hotkeys to open the task manager, etc.

Eventually, the main safety protection I implemented was that prior to each mouse or keyboard event, the script uses the win32 APIs to query the window under the mouse and verifies that it's Anukari. There's some fiddly stuff here, like checking whether a window has the same process ID as Anukari (some pop-up menus don't have Anukari as a parent window), and some special handling for file browser dialogs, which don't even share the process ID. But overall I've gotten it to the point where I have let it run for hours on my desktop without worry.
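The guard logic amounts to a check like the sketch below. I've injected the window lookup as a callable so the decision logic is shown on its own; on Windows the lookup itself would use pywin32 calls like win32gui.WindowFromPoint and win32process.GetWindowThreadProcessId. Names and the title-allowlist mechanism are my own illustration:

```python
# Sketch of the pre-event safety check: before sending each input event,
# verify that the window under the mouse belongs to the target process.
# pid_and_title_at is a callable (injected for testability) that returns
# (process_id, window_title) for the window at a screen position.

def safe_to_send(mouse_pos, target_pid, pid_and_title_at,
                 allowed_titles=()):
    """Return True if it's safe to send an input event at mouse_pos."""
    pid, title = pid_and_title_at(mouse_pos)
    if pid == target_pid:
        return True
    # Some windows (e.g. OS file dialogs) don't share the target's PID,
    # so they can be allowed explicitly by title.
    return title in allowed_titles
```

The chaos monkey would call this before every pyautogui click or keypress, and simply skip the event (or pause) when it returns False.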

The longest Anukari has now run with the chaos monkey is about 10 hours, with no crashes. Other things looked good too; for example, it doesn't leak memory. I have a few more ideas on how to make the chaos monkey even more likely to catch bugs, but for now I'm pretty satisfied.

Here's a quick video of the chaos monkey interacting with Anukari. Note that during the periods where the mouse isn't doing anything, it's mashing hotkeys like crazy. I'm starting to feel much more confident about Anukari's stability.
