Creating a Platform-Agnostic Renderer (BEAR)



BEAR - back-end agnostic renderer.


Creating your first cross-platform wrapper for graphics.

Entry level:

Knowledge of what a graphics API is.

Read Time:

15 minutes.

Project Link:


GitHub Link:


Why would you spend multiple weeks or months wrapping graphics APIs for just 2 platforms? Isn't it easier to just work with 2 APIs directly, especially in a team with specialists in both? I believe it depends heavily on the time frame of the project. If the game or other application will be in development for more than half a year (in our case, 1 year), I strongly recommend investing your time in wrapping the APIs. My reasoning is simple: 1 month of work on wrapping graphics (both creation and maintenance) can speed up your graphics work N times, where N is the number of platforms. So, from my amateur perspective, start implementing if

i + g <= g * N, where i is the expected number of hours to create the API wrapper, g is the expected number of hours for graphics feature implementation, and N is the number of platforms.

Of course, this is a very rough estimation. Moreover, if you work in the custom engine of a AAA company, the team will most probably already have some kind of graphics wrapper, so trying to implement one yourself is a great learning opportunity.


  • Wrap PS5 AGC and PC DX12 APIs.
  • Wrap PSR and DXR functionality for ray tracing.
  • Wrap Compute functionality of both platforms.
  • Exception: we will not wrap rasterization, as our project relies fully on ray tracing.


This article and BEAR wouldn't have been possible without close collaboration with Angel Angelov, a graphics programmer specialized in PS5 AGC.

Moreover, I want to thank Jesper Enschede, who worked on BSL, the cross-compiling shading language that made unifying the shaders much easier.

Steps Overview

  • Angel and I gathered and went through 2 solutions (with model loading and rendering via ray tracing) and listed all the functions, structures, and classes we might need.
  • We started looking for similarities, based on which we could unify the APIs.
  • We began thinking about wrappers for each class and the way we can find compromises between APIs on a whiteboard.
  • Upon finding mismatching functionality, we researched further to find alternatives. For instance, we got rid of function shaders and shader tables in favor of inline ray tracing.
  • We created a UML diagram.
  • We made headers for every planned class.
  • We created .cpp files for each of the platform-specific implementations.
  • The first proof of concept: a simple compute clear-screen shader pipeline with BEAR.
  • The first ray tracing shader with BEAR.
  • Model loading and rendering with BEAR.
  • Wavefront path tracing with BEAR.


1. Before beginning the implementation, Angel and I needed a solid base and a clear prototype to achieve in-engine. Both of us already had a year of DX12 and AGC programming experience. However, this mostly involved rasterization and only a brief exposure to compute.

2. Therefore, we began our journey by researching the ways DXR and PSR changed the game. The outcome of this preparation was a lot of summary documentation and 2 example projects with simple bindless (resources aren't bound to a pipeline; instead, they are accessed through a resource heap in the shader) glTF model loading and rendering via ray tracing.

3. With these 2 projects in hand, we began to list every class, function, shader specific, etc. that each of us had used in these test projects.

4. Going through each other's lists together, we tried to connect similar bullet points.

5. After writing the similar concepts on the board, we started to list the functions each of these concepts would have if it were a separate class.

6. Of course, in this process, multiple classes from the original APIs are merged and simplified. Try to think about the MVP: the functions that would definitely be used in your prototype. Do not wrap every single format and flag. This simplifies debugging a lot, as well as code review and overall readability.

6.5. Most graphics APIs are not fully feature-compatible. For example, Root Signatures are unique to DX12, so try to simplify them as much as you can: the more complex you make an abstraction, the more the other API's implementation has to improvise. Moreover, do not choose the "hard way": for instance, DXR has the concepts of Shader Tables and function shaders, which can be useful for your ray tracing applications. However, that is one more complex concept to implement. Always check the alternatives first - in our case, inline ray tracing.

7. After everything feels set in place and rather robust, go through your test project and try to replace, on paper, the whole code with your planned functions. This helps ensure nothing is left out (even though with such a complex task, something always is).

8. This is a good moment to move from the whiteboard to a UML diagram. A more structured overview will help a lot with the creation of headers.

Implementation Details

0. Project setup: the solution has a 4-project structure: engine, game, a Windows platform project, and a PS5 platform project.

1. Headers: the implementation began with creating headers. We ended up with 10 headers: resources (Buffer, Texture, Sampler), the compute pipeline (Command List, Compute Pipeline Description, Resource Descriptor Heap, Sampler Descriptor Heap, Shader Layout), and ray tracing specifics (BLAS and TLAS).

This might seem like too little for an API, but for student-level applications focused on path/ray tracing, it is more than enough (for instance, samplers can be hardcoded).

2. Implementations: in parallel, .cpp files with the platform-specific functionality for each class are created in the 2 platform projects.

As you can see in the example images, it is somewhat similar to the DX12 API, yet simplified. For Root Signatures (in our case called Shader Layout), we limited functionality to just CBVs, UAVs, SRVs, and 32-bit constants, and made PSO creation a one-liner.

Working with resources is also simpler: buffers and textures are created with 1 line of code and can be pushed into and replaced in the resource heap, or bound through command list functions. All resource states are switched automatically, so there is no need to track them.

Another important change was shaders. At first, we used shader "translators", which mapped the corresponding functions from .hlsl to .pssl, so that just 1 shader source was used. However, as shader complexity increased, we started searching for a better solution. We were very lucky to get Jesper on board: during the development of BEAR, he managed to create BSL, a shading language that compiles on both platforms, so now we write both code and shaders just once for the 2 platforms.

Want to learn more about the implementation? Visit the project's Wiki and Source Code.

Proof Of Concept

1. The prototype we strove to achieve was a simple compute pipeline with a shader that reads the input color buffer and writes it to the output texture; after execution, the texture is copied to the render target.

2. The second prototype was "going bindless". Same as the first one, but instead of binding the buffer directly, we used a resource heap and a sampler heap and output a loaded texture.

3. The next step was adding a ray-traced model. So, the prototype expanded on the previous one by introducing BLAS, TLAS, and the ability to bind and use TLAS in shaders.

4. As a logical improvement of the previous prototype, we moved the glTF model loading and rendering fully to BEAR.

5. After this, we were pretty sure the base functionality was there. Of course, some complex features required small additions (for example, update functionality for textures was added to support Ultralight).

The final step was recreating a complex rendering algorithm: wavefront path tracing with BEAR and BSL. In this step, most changes and fixes were made to BSL. In the end, all of it functioned like magic.


Drawbacks

1. It slows down the production process at the start - make a regular basic renderer before spending your time on the global wrapper! This keeps the whole team from being blocked on you. We didn't account for this issue early enough, and the mistake cost a lot of work hours in the physics and gameplay strike teams.

2. Platform-specific optimizations are lost. A lot of DX12 and PS5 AGC functionality goes unused just to marry the APIs.

3. Fillers. Some of the mismatching functionality had to be "simulated" on one platform or the other, which keeps the code from behaving optimally. So again, we sacrifice top performance for development time.


Conclusion

Developing BEAR was a tough yet great experience for me. I know that I've learned a lot about cross-platform development, PS5 AGC, and even DX12.

I believe the initial idea succeeded and proved to be the correct decision for the project. Wavefront path tracing would have made us suffer if not for BEAR, as graphics programmers on both platforms would have had to know the algorithm in depth. And newer additions to the project, like ReSTIR and SVGF, will prove the correctness of our decision again.

Even though I am not sure how many practical steps you can take away from this article, I hope it was at least an interesting read!

About this Portfolio

Welcome to my portfolio website! I'm Andrei Bazzaev, a game developer and graphics programmer. Check out my projects and other parts of my portfolio to see my skills and experience.

I'm currently seeking an internship, so feel free to contact me with any opportunities. Thank you for visiting!







