Visual Studio 2008 SP1 / .NET 3.5 SP1 out

Well, as expected, Visual Studio 2008 SP1 was released yesterday, together with an updated TR1 implementation.

It contains some important fixes. Quoted right from the VC++ blog:

The three most significant fixes are:
  • A massive performance improvement in regex matching.
  • An across-the-board performance improvement in STL containers of TR1 objects (e.g. vector<shared_ptr<T>>).
  • A fix allowing tr1::function to store function objects with non-const function call operators.

#3 was reported by me (found it while working on a thread-pool and migrating some stuff from Boost over to TR1), and unlike most other bugs that I reported, it was not postponed until VC10 or 11, but really fixed :) Well done, Microsoft. (Side note: Those other bugs are mostly in their compiler, and as they are rewriting it, I do understand that it might take a bit longer).

What are you waiting for?

If you are serious about programming, you know this feeling: You start working with something, and after a while, you hit some kind of road-block. Looking around for an easy fix, you find out that version N+1 is going to solve this problem. So you postpone that nasty feature and start waiting for N+1 ... like, for example:

  • .NET 3.5 SP1: If you are doing WPF now, SP1 is the next big release of it. Vastly improved designer support in VS2008, better performance, easier usage, new GUI widgets.
  • VS2008 SP1: Finally TR1 support and more ... (note: supposed to be released today!)
  • DX11: Built-in tessellation support, no need to implement it on your own! Shader linkage, etc.
  • Mono 2.0: Finally most of C# 3.0 on Linux, so you can eventually start developing for Linux
  • Firefox 3.1: You want to use videos on your homepage? Just wait for 3.1, which will make everything simpler by supporting videos out-of-the-box
  • C++0x: Will make the world a better place to be (seriously ;) ), at least as long as you are a C++ developer.
  • A compiler for C++0x
  • OOo 3.0: Ahh, finally notes/comments, how can I live without them currently?
  • Larrabee: A truly programmable GPU

And the list goes on. Looking at it, I wonder how it is possible to get anything done right now -- surely, our tools must be extremely bad if each new revision brings lots of revolutionary improvements. I find myself quite often thinking "it's not worth spending time on this now, the next gen is around the corner", but if you look back at the last 10 years or so you'll notice a few things:

  1. Updates cause problems. While ten things work, one will be broken, only to be fixed again in the next release. This is actually getting better, but it'll come and get you anyway.
  2. The updates bring nothing really new to the table. Sure, some things might get slightly simpler or more comfortable with each new release, but they are not going to improve your productivity by 100%. For sure, in the time you wait for them, you'll be able to implement that feature, even if it takes 10% longer with the current version.

It's not like C++0x will allow us to rewrite whole applications to be vastly simpler. The same goes, for example, for C# 3.0: While it brings really nice additions to the table (lambda functions, LINQ), it is not going to help you write your next game engine twice as fast. If anything, the increments are getting smaller.

Don't get me wrong. I'm not advocating never upgrading and always working with old stuff; I'm usually on the bleeding edge. Just don't wait for the new stuff, but take a look at it when it comes out -- it might indeed make your life simpler, but waiting for it is not worth it. If you really have to do something, you can do it right now -- just shut up and do it.

SQL Server Compact 3.5 SP1 released (x64 inside)

Microsoft finally released an x64-enabled version of their SQL Server Compact Edition, as part of its SP1 release. Previously, SQL CE was 32-bit only, meaning that if you were working on an x64 OS, you had to set the target platform to 32-bit. This was especially inconvenient for managed code, as it would default to x64 ... well, finally no problem any more :) It will also be part of Visual Studio 2008 SP1, so if you can wait for that (rumours say it'll come out on Monday) you'll get it there eventually.

SQL CE is a pretty simple SQL database provider with very nice integration into Visual Studio, allowing you to easily access the underlying database (using SQLMetal and LINQ).

Code quality notes

During the last term, I've been coding quite a bit again. Read on for some things to keep in mind when writing code, and for some things you'd better avoid.

War story one: I was debugging some math class which was using -- somewhere deep within -- Matrix::trace, which computes the trace of a (square) matrix. Well, all good; some unit tests showed that the class seemed to work, but in one special case, the result was wrong. I immediately thought that I had hit some corner case of that particular algorithm and would have to add loads of unit tests to get close to it, but after half an hour of debugging it turned out that:

  • The class was actually totally broken, and the reason it appeared to work was ...
  • that the trace function computed the trace, but instead of returning it, it contained a return true -- which returned 1.

By chance, the matrices I had used for the first unit tests indeed had a trace of one. Be aware of seemingly trivial functions, as errors in them are extremely hard to spot. They are easy to test, so just do it!
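To illustrate, here is a minimal sketch of the kind of bug described above -- the Matrix3 class and its layout are made up for this example, they are not the actual code from the war story:

#include <cassert>

// Hypothetical 3x3 matrix, stored in row-major order.
struct Matrix3
{
    float m [9];

    float Trace () const
    {
        // The buggy version ended in "return true;", which implicitly
        // converts to 1.0f -- and thus passed any test with a trace of one.
        return m [0] + m [4] + m [8];
    }
};

int main ()
{
    Matrix3 a = { { 1, 0, 0,
                    0, 2, 0,
                    0, 0, 3 } };
    // Make sure at least one test case has a trace != 1,
    // otherwise the bug described above slips through.
    assert (a.Trace () == 6.0f);
    return 0;
}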

Copy & paste

Just as in this case: While refactoring some code at work, I came to a function which was processing a vector. The vector was implemented using four named floats, so you had to write vec.x to access the first element. The code in question was just copying from one data set to this vector. The source data was a float array though, and you can probably guess what happened. I wrote something like:

vec.x = source [0];
vec.y = source [0];
vec.z = source [0];

Well, just a simple copy & paste error, but still, this could be easily avoided by providing:

  • Index-based access; in that case, a loop would have been the right thing to do.
  • An assign function which takes a float array (a sketch of both follows below).
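A minimal sketch of what such helpers might look like -- the Vector3 type and its member names are made up for this example, they are not the actual code from work:

#include <cstddef>

// Hypothetical vector type with named floats, as described above.
struct Vector3
{
    float x, y, z;

    // Index-based access: a copy loop over indices cannot suffer from
    // the copy & paste problem shown above.
    float& operator[] (std::size_t i)
    {
        switch (i) {
        case 0:  return x;
        case 1:  return y;
        default: return z;
        }
    }

    // Assign from a float array: the element-wise copy is written exactly once.
    void Assign (const float* source)
    {
        x = source [0];
        y = source [1];
        z = source [2];
    }
};

With that, the code in question becomes a single call to vec.Assign (source), and there is simply no place left for the copy & paste error to happen.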

Copy & paste errors can be easily avoided by interface design, as they are usually a sign of a missing helper function. For the record, I made a similar mistake just a few lines below, so this is really something that you should guard against ;)

Header mayhem

Headers are dangerous, especially if they come from third-party libraries. I can only recommend wrapping each such header in a custom header of your own, so you can easily

  • turn warnings on/off for this header. Some headers are pretty badly written and tend to emit lots of warnings when compiling at high warning levels. This is no problem if you turn off the warnings before including the header and turn them back on afterwards.
  • #undef stuff. Library headers tend to #define many things. Get a chance to clean up directly afterwards! (A sketch of such a wrapper header follows below.)
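For example, a wrapper header might look like the following sketch -- the pragmas are MSVC-specific, and the library name, warning numbers and macro names are made up for illustration:

// Wrap3rdPartyLib.h -- include this instead of the library header directly.
#pragma once

#ifdef _MSC_VER
    // Silence warnings this particular header triggers at high warning levels
    // (the warning numbers here are just examples).
    #pragma warning (push)
    #pragma warning (disable: 4100 4512)
#endif

#include <3rdPartyLib/3rdPartyLib.h>

#ifdef _MSC_VER
    #pragma warning (pop)
#endif

// Clean up macros the library header leaks into every translation unit.
#ifdef ERROR
    #undef ERROR
#endif
#ifdef min
    #undef min
#endif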

When including, include the header for the class you are implementing first, then C/C++ library headers, then your external headers, and then your project headers. Using this order, you immediately spot problems caused by external headers, which will save you quite some time hunting for those mysterious "it-breaks-when-windows-h-is-included-somewhere-first" problems where, for example, the token ERROR gets defined.
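A sketch of that ordering at the top of a hypothetical Renderer.cpp -- all file names are made up:

// 1. The header of the class implemented in this file. Putting it first
//    ensures it compiles on its own, without relying on earlier includes.
#include "Renderer.h"

// 2. C/C++ library headers.
#include <cstddef>
#include <vector>

// 3. External headers (ideally wrapped as described above).
#include "Wrap3rdPartyLib.h"

// 4. Project headers.
#include "Texture.h"
#include "VertexBuffer.h"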

NVI, interface checks

Prefer to use non-virtual interface classes, that is, interfaces where the implementation is virtual and private, but the client-facing functions are public and non-virtual. This allows you to add checks for every function to the interface itself, saving you from checking again in each implementation. Less duplicated code, and every derived class immediately benefits from the base class checks. For debug mode, enable extended checks that also catch invariant violations etc. Additionally, you can easily provide helper functions without touching the classes implementing your interface.
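A minimal sketch of the pattern -- the stream interface and its names are made up, and the checks use plain assertions for brevity:

#include <cassert>

// Hypothetical stream interface using the non-virtual interface idiom.
class IStream
{
public:
    virtual ~IStream () {}

    // Public and non-virtual: parameters and results are validated here,
    // once, for every implementation of the interface.
    int Read (void* buffer, int size)
    {
        assert (buffer != 0);
        assert (size >= 0);
        const int bytesRead = ReadImpl (buffer, size);
        assert (bytesRead <= size);     // post-condition, also checked centrally
        return bytesRead;
    }

private:
    // Private and virtual: implementations only provide the actual work.
    virtual int ReadImpl (void* buffer, int size) = 0;
};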

For free functions and other classes, provide runtime checks to validate the parameters, together with a #define to turn them off globally (you might need this switch for release builds). Basically, if a function takes a pointer, check whether it is 0; if you call something, verify the result is not broken, etc. I tend to have two macros, CHECK and ASSERT, with CHECK also being enabled in release builds. As soon as a profiler tells me (a profiler! Not that colleague who says "this is going to be slow, I can feel it") that it causes a problem, I change it to ASSERT. This catches API abuse immediately, and by issuing a __debugbreak();, your debugger will show you directly where it happened.
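A sketch of how such macros might look -- __debugbreak makes this MSVC-specific, and apart from the CHECK/ASSERT names everything here is made up:

#include <cstdio>
#include <intrin.h>     // MSVC: __debugbreak

// ASSERT: only active in debug builds.
// CHECK:  also active in release builds, until a profiler proves it too costly.
#ifndef NDEBUG
    #define ASSERT(expr) \
        do { if (!(expr)) { std::printf ("Assertion failed: %s\n", #expr); __debugbreak (); } } while (0)
#else
    #define ASSERT(expr) do { } while (0)
#endif

#ifndef DISABLE_CHECKS
    #define CHECK(expr) \
        do { if (!(expr)) { std::printf ("Check failed: %s\n", #expr); __debugbreak (); } } while (0)
#else
    #define CHECK(expr) do { } while (0)
#endif

// Usage: API abuse is caught right at the call site.
float FirstComponent (const float* vector)
{
    CHECK (vector != 0);
    return vector [0];
}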

Test

The single most effective thing to get your code bug-free is of course testing it. I cannot stress this enough: by testing your code extensively, you get better API design, more robust code and improved flexibility. If you have to change something later, and your code has good test coverage, you can be confident that you didn't break things. Just remember to run both unit tests and larger ones (integration, functional, you name it).

Voxels, sparse octrees, virtualization

Recently, I was blogging about virtual texture mapping, which is a method to solve texture budgeting problems. Back then, John Carmack was also talking about raycasting into sparse voxel octrees to render an infinite amount of geometric detail. Let's take a look at what this is about, complete with an overview of current research papers on the topic.

Voxels

Voxel-based graphics are by no means a new idea. For those of you who remember games like Outcast or Comanche: well, these were voxel-based. Basically, a voxel is a small block, similar to a Lego block. A pretty good explanation is available on Wikipedia.

Usage

Well, voxels alone are not that interesting, because for any reasonably detailed model, you would need an extremely huge number of voxels (if using a uniform grid). So a hierarchical system is needed, which brings us to octrees. An octree is a very simple spatial data structure which subdivides each node into 8 equally large subnodes. A sparse octree is an octree where most of the nodes are empty, similar to the sparse matrices that you get when discretizing differential equations. One problem I see is that no DCC tool is built for voxel modeling, so triangle meshes (including their textures) have to be "baked" into a voxel structure. This does have an advantage though, as the geometry and texture information is stored in exactly the same data structure.
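A minimal sketch of such a node, just to make the idea concrete -- real implementations typically use compact child masks and relative offsets instead of plain pointers, and the payload here is made up:

// One node of a sparse octree: most child pointers stay null ("sparse"),
// and the payload (here simply a colour baked from the source mesh)
// carries both geometry and texture information in the same structure.
struct OctreeNode
{
    OctreeNode* children [8];   // null = that subspace is empty
    unsigned char colour [4];   // example payload: RGBA

    OctreeNode ()
    {
        for (int i = 0; i < 8; ++i) {
            children [i] = 0;
        }
    }

    bool IsLeaf () const
    {
        for (int i = 0; i < 8; ++i) {
            if (children [i] != 0) {
                return false;
            }
        }
        return true;
    }
};

Stopping the traversal at a fixed depth already gives the level-of-detail behaviour mentioned below.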

For rendering, a sparse octree does have a few advantages: it is quite easy to handle level of detail (just stop at a higher level of the hierarchy), streaming in data is pretty simple (the structure is totally uniform, so you just stream in new blocks), and it allows for a very high level of geometric detail. Jon Olick -- from id Software -- is going to give a presentation of their approach at this year's SIGGRAPH. Basically, they are going to use the voxel structure for all static world geometry and polygons for dynamic objects and characters; however, they are also evaluating doing dynamic characters with voxels (for sources, see the announcement on OMPF and some discussion about it on Beyond3D). There is apparently even some company in Australia working on a voxel-based renderer now, called Unlimited Detail, which is not yet public. For information about this, check out this article about their renderer.

There has also been quite a bit of research into this. The following papers seem relevant for the topic (updated as soon as a new one crops up):

An interesting tidbit is that ATI has been using voxel-based rendering for their HD4870 demo; at least, they used technology from a company called LightStage. Check out this video on YouTube for a look at their stuff.

All in all, pretty impressive stuff. However, there is currently no DCC tool which can work with voxels, meaning every engine that wants to use voxels has to bring its own tools to convert triangle-based geometry into it. There is also little support for this from the API side (DX10 and OpenGL are primarily built around rasterization; they are not designed to support mixing different rendering algorithms). And finally, I'm still not convinced that voxels are the way to go. Tessellation of higher-order primitives, together with displacement mapping, seems at least as promising to me, and it has the advantage of being production-proven (just go to the cinema to see the results).

[Update:] Added link to the SIGGRAPH presentation from Jon Olick about voxel rendering.

[Update 2:] Added link to a paper about perfect hashing.

[Update 3:] Added an article about the renderer used in the Ruby demo.

[Update 4:] Added link to the 2010 efficient sparse voxel octree paper & source code.