
Captain's Log: Stardate 78206.5

Today I decided to focus on writing the code for the new renderer that loads assets, creates instances of them, and does all the transformations to draw them on the screen. Most of this was straightforward, since it was largely a matter of porting from glm:: vector/matrix operations to their filament::math:: equivalents. The one tricky bit was figuring out how to efficiently add and remove entity instances from the scene, but even that turned out to be simpler than I expected.
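
The transform side of that port can be sketched with a minimal, library-free column-major mat4 (the same memory layout glm and filament::math both use), just to illustrate the kind of translate/rotate composition applied per entity. To be clear, this is a stand-in, not the actual filament::math API:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Column-major 4x4 matrix: m[col * 4 + row], matching glm / filament::math.
using Mat4 = std::array<float, 16>;

Mat4 Identity() {
  Mat4 m{};
  m[0] = m[5] = m[10] = m[15] = 1.0f;
  return m;
}

Mat4 Translation(float x, float y, float z) {
  Mat4 m = Identity();
  m[12] = x; m[13] = y; m[14] = z;  // last column holds the translation
  return m;
}

Mat4 RotationZ(float radians) {
  Mat4 m = Identity();
  const float c = std::cos(radians), s = std::sin(radians);
  m[0] = c;  m[1] = s;   // first column:  ( cos, sin, 0, 0)
  m[4] = -s; m[5] = c;   // second column: (-sin, cos, 0, 0)
  return m;
}

Mat4 Mul(const Mat4& a, const Mat4& b) {
  Mat4 r{};
  for (int col = 0; col < 4; ++col)
    for (int row = 0; row < 4; ++row)
      for (int k = 0; k < 4; ++k)
        r[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
  return r;
}
```

Each entity's model matrix is then just a product like `Mul(Translation(...), RotationZ(...))`, handed to the renderer per instance.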

I started by re-exporting all of my Blender models as .glb. This gives me geometry, but because my old handmade OpenGL engine did textures separately, my Blender models don't have textures and thus the .glb files don't either. At some point I will go through them and add the textures, but I haven't done that yet. (This is slightly painful, because really I just want to pay someone to start over and redo all the models and give them really nice shapes and textures, which means that my texturing work will be wasted. But I'll probably do it anyway to get textures in the meantime.)

So everything is just a flat gray color for now. But one cool aspect is that Filament does shadows and ambient occlusion by default, so I magically get those, which you can kind of see below.

Another thing I'm quite excited about is adding a proper skybox, with indirect lighting and reflection maps computed from its irradiance. The "metal" masses have never looked metallic, and the lack of reflection maps and environmental lighting is the main reason. So this should help a lot in making metallic things actually look metallic.

At this point all the entities are displayed correctly (in terms of position/rotation/etc). However there are still missing renderer features, such as the 2D overlay for the selection box (and eventually hover-over GUI elements). And I don't yet know how I'm going to highlight the selected entities. I think I'm going to have to compile my own Filament material to give it a selection color uniform.
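
A custom material along those lines might look something like this in Filament's material definition format. The material and parameter names here are hypothetical, just a sketch of the idea, not anything from the actual codebase:

```
material {
    name : selectable_lit,
    shadingModel : lit,
    parameters : [
        {
            type : float4,
            name : baseColor
        },
        {
            type : float3,
            name : selectionTint
        }
    ]
}

fragment {
    void material(inout MaterialInputs material) {
        prepareMaterial(material);
        material.baseColor = materialParams.baseColor;
        // Additively tint selected entities; a zero tint leaves the
        // entity looking normal.
        material.baseColor.rgb += materialParams.selectionTint;
    }
}
```

The tint would then presumably be set per-instance at runtime with something like `materialInstance->setParameter("selectionTint", ...)` when an entity becomes selected.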

Another thing I am realizing I need now is asset caching. I had this working really well with my hand-written renderer, and now that's blown away. It really matters because when the app runs as a plugin in a DAW, the GUI is completely created and destroyed every time it's opened and closed. And if there are multiple instances of the plugin, each one loads its own copy of the assets. It's all just too slow and annoying not to cache assets across instances and GUI loads.
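
The shape of that cache could be something like the sketch below: a process-wide singleton keyed by asset path, holding weak references so assets stay shared across plugin instances while anyone is using them, but get freed once all instances are closed. This is my own illustration under those assumptions, not the actual Anukari code:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <mutex>
#include <string>

// Hypothetical stand-in for a parsed glTF asset.
struct Asset { std::string path; };

// Process-wide cache shared across plugin instances / GUI loads.
class AssetCache {
 public:
  static AssetCache& Instance() {
    static AssetCache cache;  // one per process, shared by all instances
    return cache;
  }

  std::shared_ptr<Asset> Load(const std::string& path) {
    std::lock_guard<std::mutex> lock(mutex_);
    if (auto it = cache_.find(path); it != cache_.end()) {
      if (auto asset = it->second.lock()) return asset;  // still alive: reuse
    }
    // Expensive parse would happen here, exactly once per path.
    auto asset = std::make_shared<Asset>(Asset{path});
    cache_[path] = asset;  // weak_ptr: cache doesn't keep assets alive itself
    return asset;
  }

 private:
  std::mutex mutex_;
  std::map<std::string, std::weak_ptr<Asset>> cache_;
};
```

With this shape, a second plugin instance (or a reopened GUI, as long as some instance still holds the asset) gets the already-loaded copy instead of re-parsing the file.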

Captain's Log: Stardate 78203.4

Today I got the new rendering approach working on macOS, using an NSView that hovers over the main editor window. It works just like on Windows, with the right-click pop-up menu correctly displaying on top (via another NSView). I have fully weeded OpenGL out of the app, using Vulkan on Windows and Metal on macOS. That means I'm no longer using any APIs that Apple has deprecated, which was a big blocker for releasing the production version of the app.

The renderer currently does all the camera operations correctly, so you can zoom, rotate, use orthographic views, etc, just like with the old renderer, and it all works. However none of the entities are displayed -- it just loads the "broken helmet" glTF demo model and displays it. The fact that I can now load and render arbitrary glTF models is wonderful, because it means that I can now hire an artist for the 3D assets and get them exactly how I want them. With my custom renderer this would have been a lot trickier, since the artist would have to understand my formats.

Next I need to convert all my existing .obj models to glTF, load them in Filament, instance them, and translate/rotate them into their correct positions for display. The other thing I need to do is rework the parts of the GUI that hover over the 3D window, since that's no longer possible (except via native windows, which have to be rectangular). Both of these things are fairly straightforward, but may take a bit of time to get right.

Now the bad news: running the renderer in Metal did not fix the macOS audio performance issues. This means that there's something really funny happening, because when I run the app in headless mode for golden tests, it performs much better. And it still performs poorly in GUI mode even if I disable the 3D renderer entirely, so it's not the 3D graphics interfering with the audio. I'm thinking the OS may have some weird heuristics about what kinds of processes to prioritize for GPU compute. So this is still an open area of investigation.

Captain's Log: Stardate 78201

Finally I've made some progress on the 3D renderer replacement.

As noted in yesterday's update, finding a way to integrate a fully custom 3D renderer that wants to own its own swap chain with the JUCE GUI has not been totally obvious. Today I spent a lot of time attempting to get the JUCE GUI to render in a Windows HWND with the WS_EX_LAYERED extended style, so that it could be partially transparent, with the 3D graphics rendered to an HWND beneath it. This would have had the advantage that all of the overlaid controls (the toggle buttons for camera mode, the reset button, and so on) would keep working. However, WS_EX_LAYERED has some bad performance consequences, because Windows needs the pixel data on the CPU side to do hit tests -- that's how it lets mouse clicks on transparent pixels fall through to the window beneath.

So that approach turned out to be a no-go. But eventually I realized that there's a much simpler solution, which is to just draw the 3D graphics in a window that's on top of the JUCE GUI. Originally I didn't think this would work, because the 3D graphics would overwrite the GUI elements that pop up, such as the right click context menu. But I was being stupid: those pop-up menus are in actual OS pop-up windows. This is why when you record the Anukari window in OBS (and not the full desktop), you don't see the pop-up menus in the recording.

So actually everything works how I want if I just create an HWND and assign it to the 3D graphics. The pop-up menus appear on top, because they are native OS windows. Now, the one drawback is that the JUCE GUI elements that hover over the 3D graphics will have to be redone in terms of the 3D renderer, but that's not a huge issue, it will be easy to do. I've already figured out the weird HWND setup to get mouse clicks and keyboard events to pass through to the JUCE GUI. So basically on Windows I see a clear path to this all working. I have a glTF model displaying using Google Filament.

I am pretty sure that this same approach will work fine for macOS as well, but obviously that's the next thing I have to test before I start rewriting the renderer using this approach. Fingers crossed...

