Toward the end of our last game project, I decided to start a hobby project: a new game engine written from scratch, to recap everything we had learned during the course. So, I started working on my engine, named Dynamo. I worked on it mainly during the winter break and got far enough that we decided to use it for our 7th game project.
Meshes & Animations
During the first weeks of this project, I worked on implementing animations. Our last project didn't require any advanced animations, and simple blending did just fine, but in this project the camera is much closer to the player, so we needed both Additive and Override animations.
Once we started getting more meshes into the scene, it became apparent that loading a mesh took quite a long time, so I implemented binary formats for our meshes and animations.
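The speedup from a binary format comes from dumping the vertex and index blobs directly instead of parsing text. A sketch of the idea, with a hypothetical vertex layout and header (not the engine's actual format):

```cpp
#include <cassert>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical vertex layout: position, normal, UV.
struct Vertex { float px, py, pz; float nx, ny, nz; float u, v; };

// Write a tiny header (counts) followed by the raw vertex/index blobs.
void SaveMesh(const std::string& path, const std::vector<Vertex>& verts,
              const std::vector<uint32_t>& indices) {
    std::ofstream out(path, std::ios::binary);
    uint32_t counts[2] = { (uint32_t)verts.size(), (uint32_t)indices.size() };
    out.write(reinterpret_cast<const char*>(counts), sizeof(counts));
    out.write(reinterpret_cast<const char*>(verts.data()), verts.size() * sizeof(Vertex));
    out.write(reinterpret_cast<const char*>(indices.data()), indices.size() * sizeof(uint32_t));
}

// Loading is two bulk reads straight into the destination vectors.
bool LoadMesh(const std::string& path, std::vector<Vertex>& verts,
              std::vector<uint32_t>& indices) {
    std::ifstream in(path, std::ios::binary);
    if (!in) return false;
    uint32_t counts[2];
    in.read(reinterpret_cast<char*>(counts), sizeof(counts));
    verts.resize(counts[0]);
    indices.resize(counts[1]);
    in.read(reinterpret_cast<char*>(verts.data()), verts.size() * sizeof(Vertex));
    in.read(reinterpret_cast<char*>(indices.data()), indices.size() * sizeof(uint32_t));
    return static_cast<bool>(in);
}
```

A format like this is fast precisely because the on-disk layout matches the in-memory layout, at the cost of being tied to one platform's endianness and struct packing.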
Instanced rendering
Instanced rendering is a technique where you send a mesh's data to the GPU once and then have the GPU draw multiple instances of it. This removes much of the per-draw overhead of submitting data from the CPU to the GPU. It lowered our draw calls from about 15,000 to under 100, which was a massive performance win and allowed us to have bigger levels at a higher framerate.
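The draw-call reduction comes from bucketing every visible object by mesh and then issuing one instanced draw per mesh with all the transforms in an instance buffer. A CPU-side sketch of that batching step (names hypothetical; the actual draw would be e.g. D3D11's DrawIndexedInstanced):

```cpp
#include <array>
#include <cassert>
#include <unordered_map>
#include <utility>
#include <vector>

using Matrix = std::array<float, 16>; // row-major 4x4 world transform
using MeshId = int;

// One instanced draw call per batch: a mesh plus all of its world transforms.
struct DrawBatch { MeshId mesh; std::vector<Matrix> instances; };

std::vector<DrawBatch> BuildBatches(const std::vector<std::pair<MeshId, Matrix>>& visible) {
    // Bucket every visible object by the mesh it uses.
    std::unordered_map<MeshId, std::vector<Matrix>> buckets;
    for (const auto& [mesh, world] : visible)
        buckets[mesh].push_back(world);

    // Each bucket becomes a single instanced draw.
    std::vector<DrawBatch> batches;
    for (auto& [mesh, instances] : buckets)
        batches.push_back({ mesh, std::move(instances) });
    return batches;
}
```

With this, 15,000 objects sharing a handful of meshes collapse into one draw call per unique mesh.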
Point lights & Shadow mapping
Due to our game being set indoors, we knew our scenes would require a lot of point lights, most of them casting shadows. So I implemented shadow map generation using a TextureCube. A texture cube is a texture type that shaders can sample using a three-dimensional direction vector. The cube itself consists of six Texture2Ds, and the problem is that when you bind it as a render target, DirectX only draws to the first texture in the array. To work around this, I implemented a geometry shader that duplicates every triangle onto each of the six textures in the array.
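Each of the six faces is rendered with its own 90° view from the light's position. The conventional D3D face order and orientation can be sketched like this (a sketch, not the engine's actual camera code):

```cpp
#include <array>
#include <cassert>

struct Vec3 { float x, y, z; };

// Forward/up pair used to build the view matrix for one cube face.
struct CubeFaceView { Vec3 forward; Vec3 up; };

// The six face views in the usual +X, -X, +Y, -Y, +Z, -Z order.
// Each face is rendered with a 90-degree field of view.
std::array<CubeFaceView, 6> PointLightFaceViews() {
    return {{
        { { 1, 0, 0 }, { 0, 1, 0 } },  // +X
        { {-1, 0, 0 }, { 0, 1, 0 } },  // -X
        { { 0, 1, 0 }, { 0, 0,-1 } },  // +Y looks up, so "up" tilts back
        { { 0,-1, 0 }, { 0, 0, 1 } },  // -Y
        { { 0, 0, 1 }, { 0, 1, 0 } },  // +Z
        { { 0, 0,-1 }, { 0, 1, 0 } },  // -Z
    }};
}
```

The geometry shader then emits each triangle once per face, transformed by that face's view-projection, and routes it to the right texture in the array.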
In our last project, we didn't require point light shadows and could batch render all our point lights in a single fullscreen pass. When implementing shadows, however, I realized that rendering 30+ fullscreen passes just for point lights took far too long, and our FPS dropped below 20.
My first attempt was implementing a shadow map array such as this:
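The original snippet isn't shown here, but a simplified sketch of such a first attempt might look like this (MAX_POINT_LIGHTS and the register assignments are hypothetical):

```hlsl
// First attempt (sketch): a fixed-size array of cube maps,
// indexed per light inside the lighting loop.
TextureCube shadowMaps[MAX_POINT_LIGHTS] : register(t10);
SamplerState shadowSampler : register(s1);

float SampleShadow(uint lightIndex, float3 lightToPixel)
{
    // Dynamic indexing into a resource array: the compiler must be able
    // to prove the index stays in bounds, which forces loop unrolling.
    return shadowMaps[lightIndex].Sample(shadowSampler, lightToPixel).r;
}
```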
But I quickly ran into issues. The main one was that when you index into a texture array with a dynamic index in HLSL, the compiler is forced to unroll the loop, to make sure the index can't go out of bounds. So I came up with a workaround:
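A sketch of the kind of workaround described, replacing the dynamic index with explicit branches so every texture access uses a compile-time constant index (names hypothetical):

```hlsl
float SampleShadow(uint lightIndex, float3 lightToPixel)
{
    // One branch per supported light: each access now has a constant index,
    // so the surrounding loop can be unrolled safely.
    if (lightIndex == 0) return shadowMaps[0].Sample(shadowSampler, lightToPixel).r;
    if (lightIndex == 1) return shadowMaps[1].Sample(shadowSampler, lightToPixel).r;
    if (lightIndex == 2) return shadowMaps[2].Sample(shadowSampler, lightToPixel).r;
    // ... and so on for the remaining lights ...
    return 1.0; // fully lit if no shadow map matches
}
```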
This loop could then be unrolled, since the compiler now knew the index could not go out of bounds. The problem with this solution is that branching in HLSL is slow, and it gave me worse performance than before I batched the lights. So I had to find another solution. After some research, I found the TextureCubeArray. The TextureCubeArray has barely any documentation at all, and I had no idea how I was supposed to map my shadow maps to it; I managed to map one TextureCube to the first array index, but no others.
The solution I came up with was to create a single ShaderResourceView (SRV) holding an array of texture cubes. I did this by creating a normal texture cube but increasing the array size to 6 * numberOfPointlights. Now I had a resource I could bind as a TextureCubeArray. The next problem was generating the shadow maps into this array. I created a shadow manager that hands out an index to each point light, which I then use in my geometry shader to generate the shadow maps, as seen below.
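The index hand-out can be sketched like this: each shadow-casting light acquires one cube, which corresponds to six consecutive slices in the Texture2D array backing the TextureCubeArray. The geometry shader writes a face to slice cube * 6 + face (via SV_RenderTargetArrayIndex). A hypothetical sketch, not the engine's actual manager:

```cpp
#include <cassert>
#include <cstdint>

// Hands out cube indices to shadow-casting point lights. Each cube owns
// six consecutive slices in the texture array backing the TextureCubeArray.
class ShadowManager {
public:
    explicit ShadowManager(uint32_t maxLights) : m_maxLights(maxLights) {}

    // Returns the cube index for a new shadow-casting light, or -1 if full.
    int Acquire() {
        if (m_nextCube >= m_maxLights) return -1;
        return static_cast<int>(m_nextCube++);
    }

    // The array slice the geometry shader targets for a given face (0..5).
    static uint32_t SliceIndex(uint32_t cubeIndex, uint32_t face) {
        return cubeIndex * 6 + face;
    }

private:
    uint32_t m_maxLights;
    uint32_t m_nextCube = 0;
};
```

The lighting shader then samples the TextureCubeArray with the light's direction vector plus the same cube index.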
This was a good optimization, but I still wasn't satisfied: I was rendering all lights visible in our frustum, even those hidden behind meshes or too far away to be noticeable. So I implemented a new GBuffer texture that holds information about which lights affect each pixel. I could then sample this texture to retrieve the indices of the four point lights to render for each specific pixel.
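Four light indices fit naturally into one RGBA8 texel, one 8-bit index per channel. A sketch of that packing, assuming a hypothetical convention where 255 marks an unused slot:

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Hypothetical convention: index 255 means "no light in this slot".
constexpr uint8_t kNoLight = 255;

// Pack four 8-bit point-light indices into one RGBA8 texel.
uint32_t PackLightIndices(const std::array<uint8_t, 4>& lights) {
    return  (uint32_t)lights[0]
          | ((uint32_t)lights[1] << 8)
          | ((uint32_t)lights[2] << 16)
          | ((uint32_t)lights[3] << 24);
}

// The shader-side sampling does the inverse: one index per channel.
std::array<uint8_t, 4> UnpackLightIndices(uint32_t texel) {
    return { (uint8_t)(texel & 0xFF),         (uint8_t)((texel >> 8) & 0xFF),
             (uint8_t)((texel >> 16) & 0xFF), (uint8_t)(texel >> 24) };
}
```

The lighting pass then only evaluates (and shadow-samples) the up-to-four lights stored for that pixel, instead of every light in the frustum.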
Unreal exporter
I expanded on the Unreal exporter I created in the last project. It was tedious for the level designers to iterate on scenes, export them into a folder, and then manually move the files to our engine, so I wanted to rework the tool entirely.
The first change was moving the exporter code from the previous actor component into a plugin. This removed the need for an object with the component attached in every scene, while also making the exporter easier to distribute to our level designers: installing it is now just a matter of adding a folder, instead of creating actor components and copy-pasting project-name-dependent code.
The second change I wanted was a connection between Unreal and Dynamo: with the click of a button in Unreal, the scene should automatically be exported and loaded into our engine. I did this by first exporting the scene to a JSON file saved in AppData. I then search for a Windows process named Dynamo and, if found, send it a window message via its window procedure. When Dynamo receives this message, it loads the scene from AppData into the engine. The result can be seen below.
Threaded rendering
I implemented threaded rendering using the command pattern. The render thread handles all culling and rendering.
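The core of a command-pattern renderer is a queue that the game thread records into while the render thread executes the previous frame's list. A minimal double-buffered sketch (hypothetical names, not the engine's actual classes):

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <vector>

// Double-buffered command queue: the game thread records commands for
// frame N while the render thread executes the list for frame N-1.
class RenderCommandQueue {
public:
    // Called on the game thread.
    void Record(std::function<void()> cmd) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_write.push_back(std::move(cmd));
    }

    // Called at the frame boundary: hand the recorded list to the renderer.
    void Swap() {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_read.swap(m_write);
        m_write.clear();
    }

    // Called on the render thread: run and discard the swapped-in commands.
    void Execute() {
        for (auto& cmd : m_read) cmd();
        m_read.clear();
    }

private:
    std::mutex m_mutex;
    std::vector<std::function<void()>> m_read, m_write;
};
```

Real commands would be small POD structs (draw mesh, set camera, etc.) rather than std::function, but the handoff pattern is the same.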
Emissive maps
Emissive maps are extra textures that graphics artists use to make parts of a material emit light in different colors.
Shader compilation in-engine
To further develop our technical artists' pipeline, I implemented a button that recompiles all shaders, so they don't have to restart the entire engine after every shader change.
Editor play mode
I implemented a play mode where you can run the game through the editor. I did this by saving a copy of the scene before playing the current scene; when the user stops play mode, I simply load the saved scene back in.
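The snapshot-and-restore approach can be sketched as follows, with a stand-in Scene type in place of the engine's real serialized scene data (names hypothetical):

```cpp
#include <cassert>
#include <string>

// Stand-in for the engine's serialized scene data.
struct Scene { std::string state; };

class Editor {
public:
    // Entering play mode snapshots the scene, then the game runs on m_scene.
    void EnterPlayMode() { m_snapshot = m_scene; m_playing = true; }

    // Stopping play mode discards runtime changes by restoring the snapshot.
    void StopPlayMode()  { m_scene = m_snapshot; m_playing = false; }

    Scene& ActiveScene() { return m_scene; }
    bool IsPlaying() const { return m_playing; }

private:
    Scene m_scene;
    Scene m_snapshot;
    bool m_playing = false;
};
```

The nice property of this approach is that gameplay code can mutate the live scene freely; nothing it does survives past StopPlayMode.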
Physics
I integrated PhysX as our physics engine, adding support for Box, Sphere, Capsule, Mesh, and Convex colliders, as well as a character controller and collision & trigger filtering.