Virtual texture mapping, part 2
Over the last few months, I’ve written a virtual texture mapping implementation as part of my student research work. Some people have already received a copy to read (you know who you are ;) ); rest assured that I’ll continue to work on this. I’m going to post about it on this blog as soon as the work matures a bit: the framework is currently in an early alpha stage, and we are working on a better content creation pipeline. Our artist, although very talented, had a hard time producing demo content, so we (that is, a co-developer and I) have to write some tools to help him.
My solution is basically a reimplementation of Sean Barrett’s “Sparse Virtual Textures” (which I have already blogged about), this time with DX10, though I didn’t use anything DX10-specific. However, I measured lots of things and tweaked based on the results, and I still have lots and lots of things left to try and measure. The implementation supports 4:1 anisotropic hardware filtering and requires roughly 5x more texture space than the framebuffer has pixels (for a framebuffer with 400k pixels, you would need a 2M-texel cache). No special shader tricks are needed; the lookup costs fewer than 10 cycles, most of which is fixed overhead, so it becomes cheaper per lookup as the number of lookups grows.
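To give a feel for why the lookup is so cheap: the core of the Barrett-style approach is just one indirection. A small page-table texture maps a virtual page coordinate to the physical page slot in the cache, plus the mip level at which that page is resident; the shader then rescales the UV into that cache slot. Here is a CPU-side sketch of that address translation. All names and sizes (`PAGE_TABLE_SIZE`, `CACHE_PAGES`, the `PageEntry` layout) are my own illustrative assumptions, not the post’s actual code, and in the real shader this is a texture fetch plus a few multiply-adds rather than a C function.

```c
#include <assert.h>

/* Illustrative sketch of a Sparse-Virtual-Textures style lookup,
   done on the CPU. Sizes are assumed for the example. */
#define PAGE_TABLE_SIZE 64   /* virtual pages per side (assumed)  */
#define CACHE_PAGES     16   /* physical cache pages per side     */

typedef struct {
    int phys_x, phys_y;  /* physical page slot in the cache       */
    int mip;             /* mip level at which the page resides   */
} PageEntry;

/* Translate a virtual UV in [0,1) into a physical UV in [0,1)
   inside the page cache, using one page-table entry. */
static void translate(PageEntry table[PAGE_TABLE_SIZE][PAGE_TABLE_SIZE],
                      float u, float v, float *pu, float *pv)
{
    int px = (int)(u * PAGE_TABLE_SIZE);
    int py = (int)(v * PAGE_TABLE_SIZE);
    PageEntry e = table[py][px];

    /* At mip level m, one resident page covers 2^m virtual pages,
       so rescale before taking the fractional part. */
    float pages_per_entry = (float)(1 << e.mip);
    float fu = u * PAGE_TABLE_SIZE / pages_per_entry;
    fu -= (float)(int)fu;   /* fraction within the resident page */
    float fv = v * PAGE_TABLE_SIZE / pages_per_entry;
    fv -= (float)(int)fv;

    *pu = (e.phys_x + fu) / CACHE_PAGES;
    *pv = (e.phys_y + fv) / CACHE_PAGES;
}
```

In the shader version, the page-table fetch and the final cache fetch are the only texture reads; everything in between is the handful of arithmetic instructions you see above, which is where the “fewer than 10 cycles” figure comes from.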
Not all is lost, though: you can use the comments to ask specific questions, and I’ll try to answer them.