# Using right/left-handed viewing systems with both DirectX & OpenGL

One problem many 3D graphics programmers constantly run into is left/right-handed view matrices. In many cases, beginners get stuck with left-handed coordinate systems because they start with DirectX. Worst of all, some sources on the web claim that DirectX somehow mandates a left-handed coordinate system, which leaves beginners even more puzzled.

So let’s take a look at how much truth there is in this claim by trying to derive how to use a right-handed coordinate system with DirectX. In particular, we want to be able to use exactly the same matrices as for OpenGL, i.e. a view matrix which looks down the negative z-axis and a projection matrix which works with this view.

Before we start, keep in mind that the graphics hardware and APIs don’t care at all what chirality your coordinate systems have. All they expect is that the depth values are in the correct range (for OpenGL, -1..1; for DirectX, 0..1) and some vertex order to determine whether a face of a triangle is back-facing. That’s all: as long as depth values in the correct range and a consistent winding order are produced, you can use anything you want.

So let’s try to use a right-handed view and projection with DirectX. We have to make sure that both are indeed right-handed (i.e. not mixing the “handedness”; there’s a good post describing the possible issues in that case.) In the simplest case, we can directly use the matrices generated by `gluLookAt` and `gluPerspective` (there’s lots of source code around showing how those are implemented.) Using those, we now have to resolve two problems:

- DirectX uses a 0..1 z-range, while OpenGL uses -1..1
- The default DirectX triangle winding is "left-handed"
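Before tackling those, here is what a `gluLookAt`-style right-handed view matrix looks like. This is a minimal sketch under assumptions not stated in the post: row-major storage with column vectors multiplied on the right, and hypothetical helper names.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;
using Mat4 = std::array<float, 16>; // row-major

static Vec3 sub(Vec3 a, Vec3 b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
}
static float dot(Vec3 a, Vec3 b) { return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]; }
static Vec3 normalize(Vec3 v) {
    float l = std::sqrt(dot(v, v));
    return {v[0]/l, v[1]/l, v[2]/l};
}

// Right-handed view matrix: the camera looks down its negative z-axis,
// just like the matrix produced by gluLookAt.
Mat4 lookAtRH(Vec3 eye, Vec3 target, Vec3 up) {
    Vec3 f = normalize(sub(target, eye)); // forward
    Vec3 s = normalize(cross(f, up));     // right
    Vec3 u = cross(s, f);                 // corrected up
    return {
         s[0],  s[1],  s[2], -dot(s, eye),
         u[0],  u[1],  u[2], -dot(u, eye),
        -f[0], -f[1], -f[2],  dot(f, eye),
         0.0f,  0.0f,  0.0f,  1.0f
    };
}
```

A point in front of the camera ends up with a negative z in view space, which is exactly the convention the rest of this post relies on.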

Let’s tackle the problems one by one. We can solve the first problem easily with a scale matrix \(S\) which scales the depth range by 0.5 and a bias matrix \(B\) which translates depth by 1 after the projection (as we are working with homogeneous coordinates, the bias adds \(w\) to \(z\)):

\[
S = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \tfrac{1}{2} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},\qquad
B = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\]

All we need to do now is apply the projection matrix \(P\) first, and then \(S\times B\), i.e. the total projection matrix is \(S\times B\times P\). We can now use an OpenGL-style projection matrix \(P\) with DirectX, as the depth range is correctly mapped. However, if we use back-face culling, we will notice that we cull exactly the opposite faces, which brings us to our second problem.

The graphics APIs define the front face by the vertex order. For DirectX, if you have a triangle with three vertices \(a, b, c\), then the triangle is facing towards you if the normal (computed by the cross product of the edges \(b-a, c-a\)) points towards you. However, DirectX assumes a left-handed coordinate system by default, so you must use the “left-hand” rule for the normal. This is of course the opposite of the right-handed view we’re using, but it can be trivially fixed: when creating a rasterizer state, set `FrontCounterClockwise` to true, and everything behaves consistently.
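In Direct3D 11, for instance, this is a single flag on the rasterizer description. A sketch, with device creation and error handling omitted (`device` is assumed to be an existing `ID3D11Device*`):

```cpp
// Treat counter-clockwise triangles as front-facing, so that back-face
// culling matches the right-handed, OpenGL-style winding.
D3D11_RASTERIZER_DESC rasterDesc = {};
rasterDesc.FillMode = D3D11_FILL_SOLID;
rasterDesc.CullMode = D3D11_CULL_BACK;
rasterDesc.FrontCounterClockwise = TRUE; // the one line that matters here

ID3D11RasterizerState* rasterState = nullptr;
device->CreateRasterizerState(&rasterDesc, &rasterState);
```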

There’s one problem with the scale/bias matrix approach though, which is numerical precision. Projections already have notorious precision problems, and if we work on the coordinates after the projection, precision is going to get even worse. However, we can factor the output depth range directly into the projection matrix (here \(n, f\) are the near and far plane distances):

\[
P = \begin{pmatrix}
s_x & 0 & 0 & 0 \\
0 & s_y & 0 & 0 \\
0 & 0 & \dfrac{d_n\,n - d_f\,f}{f - n} & \dfrac{(d_n - d_f)\,n f}{f - n} \\
0 & 0 & -1 & 0
\end{pmatrix}
\]
Here, \(s_x, s_y\) are the aspect ratio dependent scale factors (\(s_y = \cot(\text{fov}/2), s_x = s_y / r_a\)), \(n, f\) are the distances of the near and far planes, and \(d_n, d_f\) are the depth values of the near and far plane after the projection (for DirectX, use \(d_n = 0, d_f = 1\); for OpenGL, use \(d_n = -1, d_f = 1\).) This is exactly the same matrix I use for both DirectX and OpenGL without any modifications whatsoever.
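Such a projection can be sketched as a small helper. The function names are made up for illustration; the sketch assumes row-major storage and a right-handed view looking down the negative z-axis:

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<float, 16>; // row-major

// Right-handed perspective projection with a configurable output depth
// range [dn, df]: pass dn = -1, df = 1 for OpenGL, dn = 0, df = 1 for DirectX.
Mat4 perspectiveRH(float fovY, float aspect, float n, float f,
                   float dn, float df) {
    float sy = 1.0f / std::tan(fovY * 0.5f); // cot(fov/2)
    float sx = sy / aspect;
    return {
        sx,   0.0f, 0.0f,                             0.0f,
        0.0f, sy,   0.0f,                             0.0f,
        0.0f, 0.0f, (dn * n - df * f) / (f - n),      (dn - df) * n * f / (f - n),
        0.0f, 0.0f, -1.0f,                            0.0f
    };
}

// Projects an eye-space z value through P and returns the post-divide depth.
float projectDepth(const Mat4& P, float zEye) {
    float zClip = P[10] * zEye + P[11];
    float wClip = -zEye; // from the (0, 0, -1, 0) row
    return zClip / wClip;
}
```

With \(d_n = -1, d_f = 1\) the z-row reduces to the familiar \(-(f+n)/(f-n)\) and \(-2fn/(f-n)\) of the standard OpenGL projection, so the only thing that changes between the two APIs is the pair of depth-range constants.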

That’s all, there isn’t any more magic involved!

**Update**: Fixed the combined depth-range projection matrix, thanks Marc!