This post is very old. Please bear in mind that information here might be incorrect or obsolete, and links can be broken. If something seems wrong, please feel free to comment or contact me and I'll update the post.
A premiere on this blog – a look at a few interesting things going on in the graphics scene. If I get positive feedback, I’ll try to repeat this; for now, it’s just a test. So let’s start this walk through random graphics topics by looking at a few recent papers.
- “RenderAnts: Interactive REYES Rendering on GPUs”: A very interesting paper, using the BSGP framework by Kun Zhou. This paper describes a full REYES renderer running completely on the GPU. That is, from bounding through dicing to shading and sampling, everything is done on the GPU. The interesting part is how they do the scheduling across 3 GPUs with very little overhead. For rendering the micropolygons, they use a simple scanline-based rasterizer. I think this is also quite interesting, as the rasterization of micropolygons is still an open problem, especially once effects like depth of field or motion blur are added.
- “Data-Parallel Rasterization of Micropolygons with Defocus and Motion Blur” is a paper which solely investigates how to rasterize micropolygons. It contains a description of how Pixar’s rasterizer (used in PRMan) works, along with efficiency measurements. The paper also proposes a new rasterization algorithm which outperforms Pixar’s – making it well worth a look.
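To give a feeling for why motion blur makes micropolygon rasterization hard, here is a minimal Python sketch of the standard stochastic approach: each visibility sample picks a random shutter time, moves the primitive to that time, and tests the pixel against it. This is only an illustration of the problem, not the algorithm from either paper, and all function names are mine.

```python
import random

def point_in_triangle(px, py, tri):
    # Edge-function test: the point is inside if it lies on the same
    # side of all three edges (or exactly on an edge).
    def edge(ax, ay, bx, by):
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    (x0, y0), (x1, y1), (x2, y2) = tri
    d0 = edge(x0, y0, x1, y1)
    d1 = edge(x1, y1, x2, y2)
    d2 = edge(x2, y2, x0, y0)
    has_neg = d0 < 0 or d1 < 0 or d2 < 0
    has_pos = d0 > 0 or d1 > 0 or d2 > 0
    return not (has_neg and has_pos)

def lerp_tri(tri_t0, tri_t1, t):
    # Linearly interpolate vertex positions between shutter open (t=0)
    # and shutter close (t=1).
    return [(a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)
            for a, b in zip(tri_t0, tri_t1)]

def coverage(tri_t0, tri_t1, px, py, samples=64, rng=None):
    # Estimate motion-blurred coverage of pixel (px, py): every sample
    # draws a random shutter time, moves the triangle there, and tests
    # the pixel center against the moved triangle.
    rng = rng or random.Random(42)
    hits = 0
    for _ in range(samples):
        t = rng.random()
        if point_in_triangle(px + 0.5, py + 0.5,
                             lerp_tri(tri_t0, tri_t1, t)):
            hits += 1
    return hits / samples
```

A micropolygon sweeping across the pixel during the shutter interval yields a fractional coverage between 0 and 1, and getting a noise-free estimate per tiny primitive is exactly where the efficiency battle in these papers is fought.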
- “A bit more deferred”, a presentation by Crytek’s Martin Mittring. It describes how the CryEngine 3 approaches lighting. The basic idea is to decouple lighting from surface shading – just as in deferred shading – but still execute the actual shading while forward rendering the surfaces. Sounds a bit weird at first, but it’s similar to what “light shaders” in RenderMan do. If you look at the RenderAnts paper, you will see that they group together lighting calculations (illuminance in RSL) – with the same idea in mind as here. This approach is also described by Wolfgang Engel in his “Light pre-pass renderer”, and has been picked up by the Realtime Rendering blog as well. I guess that in the future we will see even more approaches which compute the lighting information independently of surface shading, and I’m curious to see how far this can be pushed.

There are a few more interesting papers out there. So much for the first time; if you are interested in reading more like this, feel free to comment!
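As a small appendix, the light pre-pass idea from the last bullet can be sketched in a few lines: pass 1 writes out per-pixel normals, pass 2 accumulates lighting into a light buffer with no knowledge of materials, and pass 3 forward-renders the surfaces, fetching the prelit value. This is a CPU-side toy with a 2x2 buffer and diffuse-only lighting, purely for illustration; the names and simplifications are mine, not CryEngine 3’s or Engel’s.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Pass 1 (geometry pass): a tiny 2x2 "G-buffer" holding only the
# per-pixel surface normal -- the minimal data the lighting pass needs.
normal_buffer = [
    [(0.0, 0.0, 1.0), (0.0, 1.0, 0.0)],
    [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)],
]

def lighting_pass(normals, lights):
    # Pass 2: accumulate N.L diffuse lighting per pixel, completely
    # independent of any surface material.
    buf = []
    for row in normals:
        out = []
        for n in row:
            out.append(sum(max(0.0, dot(n, normalize(l_dir))) * l_int
                           for l_dir, l_int in lights))
        buf.append(out)
    return buf

def forward_pass(light_buffer, albedos):
    # Pass 3: forward-render the surfaces; each material just multiplies
    # its albedo by the prelit value fetched from the light buffer.
    return [[a * l for a, l in zip(arow, lrow)]
            for arow, lrow in zip(albedos, light_buffer)]

lights = [((0.0, 0.0, 1.0), 1.0)]   # one directional light, toward +z
light_buffer = lighting_pass(normal_buffer, lights)
albedos = [[0.5, 0.5], [0.5, 0.5]]
final = forward_pass(light_buffer, albedos)
# final -> [[0.5, 0.0], [0.0, 0.5]]
```

The point of the split is visible even in the toy: `lighting_pass` scales with lights times pixels and never touches materials, while `forward_pass` keeps full material flexibility, which is exactly the trade-off the deferred-lighting approaches above are after.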