Porting from DirectX11 to OpenGL 4.2: API mapping

October 13, 2013
  • Direct3d
  • Graphics
  • Opengl
approximately 9 minutes to read

Welcome to my Direct3D to OpenGL mapping cheat-sheet, which will hopefully help you get started with adding OpenGL support to your renderer. The hardest part for me during porting was finding out which OpenGL API call corresponds to a specific Direct3D API call, so here is a write-down of what I found out & implemented in my rendering engine. If you find a mistake, please drop me a line so I can fix it!

Device creation & rendering contexts

In OpenGL, I go through the usual hoops: I create an invisible window, query the extension functions on it, and then finally create an OpenGL context that suits me. For extensions, I use glLoadGen, which is by far the easiest and safest way to load OpenGL extensions I have found.
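
As an illustration, here is roughly what the final context-creation step looks like on Windows. This is only a minimal sketch: it assumes a dummy window and context are already current (so wglGetProcAddress works), that wglext.h from the Khronos registry is available, and the function name is made up.

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/wglext.h>

    // Assumes a dummy context is current; error handling omitted.
    HGLRC CreateGL42Context(HDC dc)
    {
        const auto wglCreateContextAttribsARB =
            reinterpret_cast<PFNWGLCREATECONTEXTATTRIBSARBPROC>(
                wglGetProcAddress("wglCreateContextAttribsARB"));

        const int attributes[] = {
            WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
            WGL_CONTEXT_MINOR_VERSION_ARB, 2,
            WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
            0
        };

        return wglCreateContextAttribsARB(dc, nullptr, attributes);
    }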

I also follow the Direct3D split of a device and a device context. The device handles all resource creation, and the device context handles all state changes. As using multiple device contexts is not beneficial for performance, my devices only expose the “immediate” context. That is, in OpenGL, my context class is just used to bundle the state-changing functions, while in Direct3D it wraps the immediate device context.

Object creation

In OpenGL, every object is just an unsigned integer handle. I wrap every object type into a class, just like in Direct3D.
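
For illustration, a minimal sketch of such a wrapper; the class name and interface here are my own, not from the engine:

    class VertexBuffer final
    {
    public:
        VertexBuffer()  { glGenBuffers(1, &handle_); }
        ~VertexBuffer() { glDeleteBuffers(1, &handle_); }

        // Non-copyable, as the class owns the GL handle
        VertexBuffer(const VertexBuffer&) = delete;
        VertexBuffer& operator=(const VertexBuffer&) = delete;

        GLuint GetHandle() const { return handle_; }

    private:
        GLuint handle_ = 0;
    };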

Vertex and index buffers

These work similarly to Direct3D: create a new buffer using glGenBuffers, bind it to either vertex storage (GL_ARRAY_BUFFER) or index storage (GL_ELEMENT_ARRAY_BUFFER), and populate it using glBufferData.
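
A minimal sketch of both paths; vertexData, indexData and the size variables are placeholders:

    GLuint vertexBuffer = 0, indexBuffer = 0;

    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, vertexDataSize, vertexData, GL_STATIC_DRAW);

    glGenBuffers(1, &indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexDataSize, indexData, GL_STATIC_DRAW);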

Buffer mapping

This works basically the same in OpenGL as in Direct3D; just make sure to use glMapBufferRange and not glMapBuffer. glMapBufferRange gives you better control over how the data is mapped and makes it easy to guarantee that no synchronization happens, so you can mimic the Direct3D behaviour perfectly and with the same performance.
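
For instance, the two common Direct3D mapping modes can be approximated like this (a sketch; buffer sizes and data pointers are placeholders):

    // Roughly D3D11_MAP_WRITE_DISCARD: orphan the old contents, no synchronization
    void* discardPtr = glMapBufferRange(GL_ARRAY_BUFFER, 0, bufferSize,
        GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    memcpy(discardPtr, newData, bufferSize);
    glUnmapBuffer(GL_ARRAY_BUFFER);

    // Roughly D3D11_MAP_WRITE_NO_OVERWRITE: the caller promises not to touch
    // data that is still in flight
    void* appendPtr = glMapBufferRange(GL_ARRAY_BUFFER, writeOffset, writeSize,
        GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
    memcpy(appendPtr, newData, writeSize);
    glUnmapBuffer(GL_ARRAY_BUFFER);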

Rasterizer state

This maps directly to OpenGL, but it’s split across several functions: glPolygonMode, glEnable/glDisable for things like culling, glCullFace, and so on.
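
A sketch of what applying a typical rasterizer state looks like, with the rough D3D11_RASTERIZER_DESC field noted in the comments:

    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);   // FillMode
    glEnable(GL_CULL_FACE);                      // CullMode != D3D11_CULL_NONE
    glCullFace(GL_BACK);                         // CullMode
    glFrontFace(GL_CW);                          // FrontCounterClockwise = FALSE
    glEnable(GL_SCISSOR_TEST);                   // ScissorEnable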

Depth/Stencil state

As with the rasterizer state, you need to use glEnable/glDisable to turn on things like the depth test, and then glDepthMask, glDepthFunc, etc.
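
Again as a sketch, with the corresponding D3D11_DEPTH_STENCIL_DESC fields in the comments (stencilRef is a placeholder):

    glEnable(GL_DEPTH_TEST);                                         // DepthEnable
    glDepthMask(GL_TRUE);                                            // DepthWriteMask
    glDepthFunc(GL_LEQUAL);                                          // DepthFunc
    glEnable(GL_STENCIL_TEST);                                       // StencilEnable
    glStencilFuncSeparate(GL_FRONT, GL_ALWAYS, stencilRef, 0xFF);    // FrontFace + ref
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_REPLACE);     // FrontFace ops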

Blend state

This is another state that is split across several functions: glEnable/glDisable for blending in general, then glBlendEquationi to set the blend equations, glColorMaski, glBlendFunci, and glBlendColor. The functions with the i suffix allow you to set the blend state for each “blend unit” individually, just as in Direct3D.
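
A sketch for one render target (index 0); glBlendColor is the one piece that remains global:

    glEnablei(GL_BLEND, 0);                                   // BlendEnable for RT 0
    glBlendEquationi(0, GL_FUNC_ADD);                         // BlendOp
    glBlendFunci(0, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);    // SrcBlend / DestBlend
    glColorMaski(0, GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);      // RenderTargetWriteMask
    glBlendColor(0.0f, 0.0f, 0.0f, 0.0f);                     // blend factor (global)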

Vertex layouts

I require a similar approach to Direct3D here. First of all, one vertex layout is created per vertex shader program. This allows me to query the location of all attributes using glGetAttribLocation and store them for the actual binding later.

At binding time, I bind the vertex buffer first and then set the layout for it: I call glVertexAttribPointer (or glVertexAttribIPointer for integer types), followed by glEnableVertexAttribArray and glVertexAttribDivisor to handle per-instance data. Setting the layout after the vertex buffer is bound allows me to handle draw-call-specific strides as well. For example, I sometimes render with a stride that is a multiple of the vertex size to skip data, which has to be specified using glVertexAttribPointer (unlike in Direct3D, where this is part of the actual draw call).
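
A sketch of that binding sequence for one attribute; location comes from glGetAttribLocation at layout-creation time, and stride/offset are whatever this particular draw call needs:

    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glVertexAttribPointer(location, 3, GL_FLOAT, GL_FALSE, stride,
        reinterpret_cast<const void*>(offset));
    glEnableVertexAttribArray(location);
    glVertexAttribDivisor(location, 0);   // 1 instead of 0 for per-instance data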

The better solution here is to use ARB_vertex_attrib_binding, which would map directly to a vertex layout in Direct3D parlance and which does not require lots of function calls per buffer. I’m not sure how this interacts with custom vertex strides, though.
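
For what it’s worth, with that extension the stride becomes a parameter of glBindVertexBuffer, so supplying a per-draw stride still looks possible; a minimal sketch, with all names as placeholders:

    // Declared once per layout:
    glVertexAttribFormat(location, 3, GL_FLOAT, GL_FALSE, relativeOffset);
    glVertexAttribBinding(location, bindingIndex);
    glEnableVertexAttribArray(location);

    // Per buffer / per draw call, including the stride:
    glBindVertexBuffer(bindingIndex, vertexBuffer, baseOffset, stride);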

Draw calls

Draw calls are pretty simple once the layouts are bound, as the stride handling already happens there. Once that is resolved, just pick the function that maps to the Direct3D equivalent (see the sketch after this list):

  • Draw: glDrawArrays
  • DrawInstanced: glDrawArraysInstancedBaseInstance
  • DrawIndexed: glDrawElementsBaseVertex
  • DrawIndexedInstanced: glDrawElementsInstancedBaseVertex
  • DrawAuto: glDrawTransformFeedback
  • DrawIndirect: OpenGL is much more powerful in this area, providing not only the basic glDrawArraysIndirect and glDrawElementsIndirect, but also multiple indirect draw calls using the ARB_multi_draw_indirect extension (core in 4.3)
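
For example, a DrawIndexedInstanced call with 32-bit indices and a start-instance location of 0 maps to something like this (a sketch; the variable names are placeholders):

    // DrawIndexedInstanced(indexCount, instanceCount, startIndex, baseVertex, 0)
    glDrawElementsInstancedBaseVertex(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
        reinterpret_cast<const void*>(startIndex * sizeof(GLuint)),
        instanceCount, baseVertex);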

Textures & samplers

First, storing texture data. Currently I use glTexImage2D and glCompressedTexImage2D for each mip-map individually. The only problem here is handling the internal format, format, and type for OpenGL – I store them along with the texture, as they are all needed at some point. Using glTexImage2D is, however, not the best way to define texture storage: it allows you to resize a texture later on, which is something Direct3D doesn’t allow. The Direct3D behaviour can be obtained in OpenGL using the glTexStorage2D function, which allocates and fixes the texture storage and only allows you to upload new data.
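
A sketch of the immutable-storage path; texture, mipLevels, width and height are placeholders:

    glBindTexture(GL_TEXTURE_2D, texture);
    // Allocates all mip levels up front; the size can never change afterwards,
    // which matches the Direct3D behaviour.
    glTexStorage2D(GL_TEXTURE_2D, mipLevels, GL_RGBA8, width, height);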

Uploading and downloading data is the next part. For a simple update (where I would use UpdateSubresource in Direct3D), I simply replace all image data using glTexSubImage2D. For mapping, I allocate a temporary buffer and, on unmap, call glTexImage2D to replace the storage. I’m not sure whether this is the recommended solution, but it works and allows for the same host code as Direct3D.
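
The UpdateSubresource-style path then looks roughly like this, using the format and type stored along with the texture (all names are placeholders):

    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, mipLevel,
        0, 0, levelWidth, levelHeight,     // offset and size of the updated region
        GL_RGBA, GL_UNSIGNED_BYTE,         // format and type stored with the texture
        pixelData);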

Binding textures and samplers is a more involved topic that I have previously blogged about in more detail. It boils down to statically assigning texture slots to shaders and manually binding textures and samplers to those slots. I simply chose to add a new #pragma to the shader source code, which my shader preprocessor uses to figure out which texture to bind to which slot and which sampler to bind. On the Direct3D side, this requires me to use numbered samplers so that the host & shader code can be as similar as possible.
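
The host side of that static slot assignment looks roughly like this; slot stands for the value produced by the #pragma, and the uniform name is just an example:

    // Once after linking: point the sampler uniform at its fixed slot
    glProgramUniform1i(program,
        glGetUniformLocation(program, "DiffuseTexture"), slot);

    // At draw time: bind texture and sampler state to the same slot
    glActiveTexture(GL_TEXTURE0 + slot);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBindSampler(slot, sampler);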

Texture buffers work just like normal buffers in OpenGL, but you have to associate a texture with your texture buffer. That is, you first create a normal buffer using glBindBuffer with GL_TEXTURE_BUFFER as the target, and then you bind a texture to the same target and attach the buffer’s storage to it using glTexBuffer.
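
A sketch of the whole sequence; the buffer contents and the internal format are placeholders:

    GLuint buffer = 0, texture = 0;

    glGenBuffers(1, &buffer);
    glBindBuffer(GL_TEXTURE_BUFFER, buffer);
    glBufferData(GL_TEXTURE_BUFFER, dataSize, data, GL_STATIC_DRAW);

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_BUFFER, texture);
    glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, buffer);   // attach the buffer storage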

Constant buffers

This maps to uniform buffers in OpenGL. One major difference is where global variables end up: in Direct3D, they are put into a special constant buffer called $Globals, while in OpenGL they have to be set directly. I added special-case handling for global variables to my shader programs; in OpenGL, they set the variables directly, and in Direct3D, globals go through a “hidden” constant buffer which is only uploaded when the shader is actually bound.

The nice thing about OpenGL is that it gives you binding of sub-parts of a buffer for free. Instead of using glBindBufferBase to bind the complete constant buffer, you simply use glBindBufferRange; there is no need to fiddle around with different device context versions as in Direct3D.
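
A sketch of binding a sub-range; the block name and the variables are placeholders, and the offset has to respect GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT:

    // Once per program: associate the named block with a binding point
    const GLuint blockIndex = glGetUniformBlockIndex(program, "PerFrameConstants");
    glUniformBlockBinding(program, blockIndex, bindingPoint);

    // Per draw/frame: bind only the range that is currently needed
    glBindBufferRange(GL_UNIFORM_BUFFER, bindingPoint, buffer,
        offsetInBytes, sizeInBytes);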

Shaders

I use the separate shader programs extension to handle this. Basically, I have a pipeline object bound with all stages set, and when a shader program is bound, I use glUseProgramStages to set it to its correct slot. The only minor difference here is that I don’t use glCreateShaderProgramv; instead, I do the steps manually. This allows me to set the binary shader program hint (GL_PROGRAM_BINARY_RETRIEVABLE_HINT), which cannot be done otherwise. I also grab the shader info log manually, as there is no way from client code to append the shader info log to the program info log.
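
A sketch of that manual path for a vertex shader; source and pipeline are placeholders, and error checks are omitted:

    const GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shader, 1, &source, nullptr);
    glCompileShader(shader);
    // Grab the shader info log here with glGetShaderInfoLog; it does not end
    // up in the program info log when linking manually.

    const GLuint program = glCreateProgram();
    glProgramParameteri(program, GL_PROGRAM_SEPARABLE, GL_TRUE);
    glProgramParameteri(program, GL_PROGRAM_BINARY_RETRIEVABLE_HINT, GL_TRUE);
    glAttachShader(program, shader);
    glLinkProgram(program);

    glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, program);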

For shader reflection, the API is very similar. First, you query how many constant buffers and uniforms a program has using glGetProgramiv. Then you can use glGetActiveUniform to query a global variable, and glGetActiveUniformBlockiv and glGetActiveUniformBlockName to query everything about a buffer.
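
A sketch of enumerating the uniform blocks of a program:

    GLint blockCount = 0;
    glGetProgramiv(program, GL_ACTIVE_UNIFORM_BLOCKS, &blockCount);

    for (GLint i = 0; i < blockCount; ++i) {
        char name[256] = {};
        GLint dataSize = 0;
        glGetActiveUniformBlockName(program, i, sizeof(name), nullptr, name);
        glGetActiveUniformBlockiv(program, i, GL_UNIFORM_BLOCK_DATA_SIZE, &dataSize);
        // name and dataSize now describe one constant buffer
    }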

Unordered access views

These are called image load/store in OpenGL. You can take a normal texture and bind it to an image unit using glBindImageTexture. In the shader, you get a new data type called image2D or imageBuffer, which is the equivalent of an unordered access view.
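
Binding the host side is a single call; a sketch binding mip level 0 of a texture for read/write access:

    glBindImageTexture(0,            // image unit, matches layout(binding = 0) in GLSL
        texture, 0,                  // texture and mip level
        GL_FALSE, 0,                 // not layered, layer ignored
        GL_READ_WRITE, GL_RGBA32F);  // access and format as seen by the shader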

Acknowledgements

That’s it. What I found super-helpful during porting were the OpenGL wiki and the 8th edition of the OpenGL Programming Guide. Moreover, thanks to the following people (in no particular order): Johan Andersson of DICE fame who knows the performance of every Direct3D API call, Aras Pranckevičius, graphics guru at Unity, Christophe Riccio, who has used every OpenGL API call, and Graham Sellers, who has probably implemented every OpenGL API call.
