Goodbye SDL
As announced last month, we've spent a lot of time removing SDL from the codebase and replacing it with Qt. Before, we used both SDL and Qt in tandem. This generally worked okay, but was always a bit weird, since both frameworks provide essentially the same features. Mixing them in our codebase also required a few workarounds, like converting SDL window events to Qt events (and vice versa), wrapping the Qt window inside SDL for GUI drawing (which also caused problems with some operating systems' display servers), and numerous smaller forms of jank.
As of now, everything related to SDL graphics has been ported to Qt6. Among other things, this includes:

- Window Creation
- Input Events (mouse clicks, key presses)
- OpenGL Context Management
- Texture Loading
- GUI Management and Drawing

A lot of the glue code connecting SDL and Qt could also be removed, which reduces the code complexity quite a bit. While SDL is no longer part of the graphics pipeline, it still remains in other parts of the code, e.g. audio management, so it is not completely gone yet. However, we will probably replace that with a Qt equivalent in the near future.
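To give a rough idea of what the Qt side takes over here, the following is a minimal sketch (not code from the openage repository) of a Qt6 window that creates its own OpenGL context and receives input events directly, without any SDL conversion layer in between:

```cpp
#include <QGuiApplication>
#include <QKeyEvent>
#include <QOpenGLFunctions>
#include <QOpenGLWindow>

// A Qt6 window that owns its OpenGL context, covering what an
// SDL window + GL context combination used to provide.
class RenderWindow : public QOpenGLWindow, protected QOpenGLFunctions {
protected:
	void initializeGL() override {
		// Qt has already created the context and made it current here.
		initializeOpenGLFunctions();
		glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
	}

	void paintGL() override {
		glClear(GL_COLOR_BUFFER_BIT);
		// ... draw textures / GUI here ...
	}

	void keyPressEvent(QKeyEvent *event) override {
		// Input arrives as native Qt events; no SDL -> Qt conversion needed.
		QOpenGLWindow::keyPressEvent(event);
	}
};

int main(int argc, char **argv) {
	QGuiApplication app(argc, argv);

	RenderWindow window;
	window.resize(800, 600);
	window.show();

	return app.exec();
}
```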
Decoupling Renderer and Engine
Another side effect of the graphics code rework is that the remaining rendering code is now separated from the main engine where the gameplay calculations happen. Before, graphics and gameplay were rather closely coupled, with direct communication between GUI, renderer and gameplay. There were also no clearly defined communication paths between the components, so keeping the complex gamestate intact when the code was changed was not easy.
We've now reworked this wild-west approach into something which is hopefully more manageable in the future. Basically, the new engine workflow considers the renderer and GUI components optional, i.e. everything in the engine concerning gameplay should work on its own, without access to a display. Whenever the components have to communicate, there are now clearly defined paths that operate in one direction. For example, user input events are funnelled into the engine by pushing them into the engine's event queue, where they are delegated to the correct place. Similarly, the engine can push animation requests into the rendering queue, where the renderer decides what to do with them.
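To illustrate this one-directional flow, here is a small sketch; the type names (`EventQueue`, `InputEvent`, `AnimationRequest`) are made up for this example and are not the engine's actual interfaces:

```cpp
#include <deque>
#include <mutex>
#include <optional>
#include <string>

// Simple thread-safe queue used as the only communication channel
// between two components.
template <typename T>
class EventQueue {
public:
	void push(T item) {
		std::lock_guard<std::mutex> lock{this->mutex};
		this->items.push_back(std::move(item));
	}

	std::optional<T> pop() {
		std::lock_guard<std::mutex> lock{this->mutex};
		if (this->items.empty()) {
			return std::nullopt;
		}
		T item = std::move(this->items.front());
		this->items.pop_front();
		return item;
	}

private:
	std::deque<T> items;
	std::mutex mutex;
};

struct InputEvent { int key; };                  // produced by the GUI/window layer
struct AnimationRequest { std::string sprite; }; // produced by the engine

struct Engine {
	EventQueue<InputEvent> input_queue;        // GUI -> engine
	EventQueue<AnimationRequest> render_queue; // engine -> renderer

	void tick() {
		// Drain user input; gameplay logic never calls into the renderer directly.
		while (auto event = this->input_queue.pop()) {
			// ... update the gamestate from *event ...
			this->render_queue.push(AnimationRequest{"move_unit"});
		}
	}
};

int main() {
	Engine engine;
	engine.input_queue.push(InputEvent{42}); // e.g. pushed from a Qt key event
	engine.tick();
	return engine.render_queue.pop().has_value() ? 0 : 1;
}
```

The point of the queues is that neither side calls into the other directly: the GUI only produces events, the engine only consumes them, and rendering requests travel the other way through their own queue.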
What's next?
There are still some parts of the renderer which need improvement, so work on that will continue over the next month. To support the new rendering workflow, the renderer needs to provide connector objects, so that the engine can make rendering requests. For these requests, the renderer then has to decide where the objects should be drawn on screen, which texture to use, and potentially handle animation states.
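As a very rough sketch of what such a connector object could look like (the names `RenderConnector` and `WorldRenderer` are purely illustrative and not the actual renderer interfaces):

```cpp
#include <memory>
#include <mutex>
#include <vector>

// Shared object between engine and renderer: the engine only fills in
// abstract gameplay state, the renderer later decides how to draw it.
struct RenderConnector {
	// Written by the engine (gameplay coordinates, logical animation id).
	float world_x = 0.0f;
	float world_y = 0.0f;
	int animation_id = 0;
	// Screen position, texture and current frame are derived by the renderer.
};

class WorldRenderer {
public:
	// The engine requests a connector it can update every simulation tick.
	std::shared_ptr<RenderConnector> create_connector() {
		auto connector = std::make_shared<RenderConnector>();
		std::lock_guard<std::mutex> lock{this->mutex};
		this->connectors.push_back(connector);
		return connector;
	}

	void render_frame() {
		std::lock_guard<std::mutex> lock{this->mutex};
		for (auto &connector : this->connectors) {
			// Map world coordinates to screen space, pick the texture for
			// animation_id, advance the animation state, then draw.
			(void)connector;
		}
	}

private:
	std::vector<std::shared_ptr<RenderConnector>> connectors;
	std::mutex mutex;
};
```

The important part is that the engine never touches textures or screen coordinates itself; it only updates the connector, and the renderer picks up the changes when it draws the next frame.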
Since the engine is the main user of the renderer, we will also have to work on more basic engine stuff. This will probably involve a few rendering tests before we actually implement "real" gameplay.
Questions?
Any more questions? Let us know and discuss those ideas by visiting our subreddit /r/openage!
As always, if you want to reach us directly in the dev chatroom:
- Matrix: #sfttech:matrix.org