
GDC Europe 2011: Day 3, with a tiny amount of Gamescom

Last day of the GDC, and three talks for me. The first one was "The Three Eras of Gaming" by Richard Garriott. His three eras were: single-player games, massively multiplayer games and finally social games, with the last category being rather diffuse. The talk provided a nice overview of game history, but I would have wished for some more actual analysis. In particular, I believe there is going to be more diversity in the future beyond social games like Farmville, and the talk was quite vague at this point, just stating that games will somehow evolve, but with no particular details.

The next talk I attended was "Lighting in Crysis 2", which was quite close to the Gamefest 2011 talk from a few weeks ago. The main difference is that this talk started with a short overview of the various lighting solutions used before Crysis 2. For some reason or another, this talk also had fewer technical details than the one from Gamefest, making it not too interesting for a graphics developer. I wish they had described only one or two techniques in depth, even scrapped ones, so we could understand which issues they care about and how they approach problems.

The last talk for me was "The Triforce of Courage" by Epic Games about the production of the game "Bulletstorm". A fun talk on distributed development, and in particular how an American manager met Polish developers, including all the cultural problems one would expect. The talk was presented in a refreshingly honest manner and also has some relevance for researchers, as we're often forced to work in distributed teams.

So much for the GDC, and I went straight to the Gamescom afterwards. Luckily, I managed to get a fast-access pass from Bethesda and go straight into the Rage presentation. I could play the game on PC, while a friend of mine tried the Xbox 360 version. We have both implemented virtual texture mapping on our own, so we were of course quite curious to see how it's implemented in Rage. First of all, even though the machines were running a 560 Ti, GPU transcoding was disabled in the options. The texture resolution is also low overall; in particular, areas in shadow are extremely compressed, up to the point where block artefacts become visible. Note: this is the graphics researcher's point of view; the graphics look great, it's just that I took special care to look at details. Another interesting tidbit is that the engine always loads through the whole mip-map chain (i.e. from low to high resolution.) I thought it would be able to stream in the finest level immediately if the data was present in the host cache. We managed to find a bunch of locations where a 180° turn would always fill the cache (so turning back and forth would result in cache thrashing.) In the worst case we managed to produce, the loading would take <1 sec, so I believe 99% of the users won't notice it.
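To illustrate why a 180° turn can defeat the page cache, here is a toy sketch (my own illustration, not Rage's actual implementation): an LRU cache of texture pages thrashes as soon as the combined working set of two opposing views exceeds its capacity, so every turn triggers streaming.

```python
from collections import OrderedDict

class PageCache:
    """Toy LRU cache for virtual-texture pages (illustration only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()
        self.misses = 0

    def request(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)     # mark as recently used
        else:
            self.misses += 1                    # page must be streamed in
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict least recently used
            self.pages[page_id] = True

# Two opposing views, each needing 60 pages, with room for only 100:
# the combined working set (120) never fits, so turning back and forth
# evicts exactly the pages the next view is about to request.
cache = PageCache(capacity=100)
view_a = [f"a{i}" for i in range(60)]
view_b = [f"b{i}" for i in range(60)]
for _ in range(3):                              # turn around three times
    for page in view_a + view_b:
        cache.request(page)
print(cache.misses)  # 360: every single page request misses
```

With a capacity of 130 instead, only the first pass would miss; this is the cache-thrashing effect we could provoke in a few spots.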

Shadow resolution was quite low in many places with next to zero blur, but this might be due to the prerelease version. I'm pretty sure that hacking in some PCF or increasing the shadow map resolution would be no problem (maybe it's already exposed through a config option.) So much for Rage; I'm definitely looking forward to it, and thanks again to Bethesda for the fast-access pass.

Time for a last rant before I finish my GDC & Gamescom coverage: somehow the GDC organisers totally messed up the coordination with the Gamescom. Wednesday was the "trade visitor" day, which in practice included basically everyone who wanted in (I've seen loads of people under 16 going around as "trade visitors".) This means ridiculous crowds everywhere, blocking the queue to every major game. Having a GDC pass does not get you faster access, as the Gamescom people are not aware of the GDC at all. I wanted to take a look at Battlefield 3 with a friend, and we would have had to stand in line for roughly one hour straight. That's totally unacceptable, as we do have appointments and in general more interesting things to do than queue up. Especially as the share of GDC people among the trade visitors was surely <1%; I guess there might have been maybe 50 GDC people total who were interested in Battlefield 3, for instance. Giving them slightly faster access would be great; heck, I would have had no problem checking out the games the evening before, or getting shorter play sessions. Alternatively, the Gamescom should limit the "trade visitors" to actual trade visitors (i.e. 18+, from press, game vendors or GDC attendees) before opening it up for everyone. If this doesn't change in the future, then this was the first and last GDC Europe for me.

That's point one; the second point is that lots of the parties I went to were not suited at all for socialising. In general, the parties had extremely loud music and not enough space to sit down. If you want to meet with people, make sure to check where the local restaurants and bars are.

So much for this year's GDC Europe and Gamescom, thanks for reading!

GDC Europe 2011: Day 2

As expected, the GDC night was a long one, so I decided to skip the first slot of the second day and went straight for IllFonic's CryEngine 3 talk. I was hoping for some interesting war stories, and at least the first part didn't disappoint me. IllFonic set out to develop an updated version of Nexuiz, a first-person shooter originally based on the GPL-ed Quake 1 source code. For their first version, they decided to take the updated Quake 1 renderer and beef it up even more to make their game. Soon they found out that the radiosity process in Quake 1 takes a lot of time and that QRadiant is not that user-friendly any more. They managed to get a single level running, but the iteration time for light changes was somewhere around 14 hours. As we learned yesterday, iteration time is important, so it's no surprise they got frustrated quickly with the Q1 engine and started looking for something different.

Short rant: At this point, I wonder what kind of R&D game developers do before deciding on an engine or other piece of technology. If iteration times are important, using a 10-year-old radiosity solver is of course not the right choice, and my feeling is that any friendly graphics researcher could have told them that right away.

Anyway, they somehow got a CryEngine 3 license and are pretty happy with it, showcasing some of the nice editing features of CE3. I wonder what the licensing deal looks like, as IllFonic is a 6-man studio. I assume Crytek went for a royalty-based licensing deal to get those guys started quickly, similar to the UDK license. The presentation went deep into editor features and stuff at this point.

Time for another rant: being a researcher, I'm mostly interested in technical details and "hard facts", yet for some reason or another lots of talks start with "I'm not going too much into technical details ..." even though this is supposed to be the game developers conference. The Slant Six Games talk was a welcome exception, which even mentioned which particular tools they used and some alternatives they evaluated. Of course I don't expect every talk to describe the shader optimisations they did or other low-level stuff, but I'm still curious to hear what implementation hurdles people are encountering (is your bake time too long? Do you have problems validating your art? Configuration space computation taking days?) and not just extremely general high-level "art integration is a problem" statements.

After lunch, I went to Epic Games' Mike Capps' keynote "Size doesn't matter". A very refreshing and entertaining look at the history of Epic Games and their game development attitude. I especially liked how they have identified key events in the company history ("Epic Events") and their influence on various strategic decisions. Among other things, they also mentioned that Unreal Engine 4 is well under development. A friend of mine (graphics researcher turned game dev pro) and I concluded that this is a strong sign that UE4 is going to be similar to Frostbite 2; what leaves me curious is which compute API they are betting on. Compute-based graphics is clearly the future, but the compute part is still very much in flux and unstable. They also mentioned that Epic Games has 5 titles in development, so I guess there is a bunch of mobile titles coming up as well.

Just after Mike's talk, Mark Cerny presented a "long view" on game history. Truth be told, I hadn't heard of Mark before this talk, but his list of credits is surely impressive and he can only be described as a real industry veteran. His talk started at arcade machines with some cool stories (no credits due to memory constraints, for instance) and continued all the way to the present day. An interesting analogy he mentioned is how arcade games went past the point where producing them was economical, showcasing similar developments in the current movie and game industry.

Three slots were still left, and I've picked "More firefighting troubled projects". An interesting overview of various project management and publisher relationship issues and solutions.

Next up was "Crysis 2 Multiplayer: A Programmer's Postmortem", for which I had some hope that it would finally be another technical talk with at least some nitty-gritty details. And I wasn't disappointed. They described their automated testing procedures and lots of internal debugging tools. Among other things, they have manually instrumented the code to get nice hierarchical profiling together with extensive logging, which allowed them to quickly fix spikes in the frame rate. The other cool tool instrumented all writes in the code using GCC magic so they could track all memory overwrites. Basically, they have a large buffer for each thread and store the program counter, the address and the memory contents for each write (I assume for each write through a pointer.) During the game, such writes should be rather infrequent, so a relatively short list of the last million writes turned out to be helpful for debugging. Cool stuff for sure. I asked about visual testing to identify cases where the graphics just got plain broken, but that hasn't been implemented yet. A good talk overall; make sure to grab the slides from the Crytek page once they are live.
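To make the write-tracking idea concrete, here is a rough sketch (all names are my own invention; the real tool works via GCC-level instrumentation in C++, not in Python): the core data structure is just a bounded ring buffer of (program counter, address, value) triples which you query once a corruption is discovered.

```python
from collections import deque

class WriteLog:
    """Toy sketch of the write-tracking idea: every instrumented store
    appends (program counter, address, value) to a bounded ring buffer,
    so the oldest entries simply fall off the end."""
    def __init__(self, capacity=1_000_000):
        self.entries = deque(maxlen=capacity)  # ring-buffer semantics

    def record(self, pc, address, value):
        self.entries.append((pc, address, value))

    def writers_of(self, address):
        """When the memory at 'address' turns out corrupted, list who
        wrote there recently; the culprit's program counter is in the log."""
        return [(pc, value) for pc, addr, value in self.entries
                if addr == address]

log = WriteLog(capacity=4)
log.record(pc=0x401000, address=0xBEEF, value=1)
log.record(pc=0x402000, address=0xCAFE, value=2)
log.record(pc=0x403000, address=0xBEEF, value=3)  # the overwrite we hunt
print(log.writers_of(0xBEEF))  # both writes to 0xBEEF, newest last
```

The appeal of the scheme is that per-write recording is append-only and per-thread, so there is no synchronisation on the hot path; the expensive search happens only after a corruption is detected.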

The last talk I briefly attended before the party night was Cevat Yerli's keynote on AAA online game development. The key takeaway for me was the cultural differences he mentioned and how games are consumed in eastern countries, for instance in Korea. Sounds like a very challenging environment for games. He also said that he assumes free-to-play is the future for all games. I sure hope that there is still some market for large single-player titles with a good story, as I'm not looking forward to a future where each game is consumed piecewise. Free-to-play makes a lot of people in the game industry really nervous right now, and I'm convinced that there must be other solutions between retail and free-to-play which haven't been explored yet.

Tomorrow, there's much less going on, so I'm going to spend some quality time at the Gamescom and check out a bunch of games. If anyone is interested in meeting, DM me on Twitter.

GDC Europe 2011: Day 1

The first day at the GDC Europe, and time for a live-reporting experiment. I'll try to blog a daily summary of the talks I've seen at the GDC; let's see how this works out.

The first talk I went to was by Geomerics about their Enlighten/Unreal integration. Their solution has already been shown at SIGGRAPH a bunch of times, and there wasn't too much new in the talk. The key takeaway is that they managed to integrate their solution into Unreal, which will certainly increase the user base. I guess many more engine developers will have to look into either licensing their solution or coming up with in-house solutions for GI, so interesting times ahead for sure.

On the technical side, the talk fell a bit short. They mentioned that there is a precomputation step which is light-independent and rather quick. However, while the lighting solution is applied to dynamic objects, they showed an interesting problem case. In the demo, the player entered a large warehouse and started shooting off the roof. The more the roof got uncovered, the more the warehouse got lit by the sky light. This was implemented by lighting the warehouse without the roof geometry and tweaking a "light exchange factor" between the outside and the warehouse based on how much of the roof has already been destroyed. My conclusion from this is that they have some cool way to precompute visibility between patches, which allows them to run their radiosity transfer very quickly, but rebuilding the visibility solution is still a problem (and they probably have to compress it a bit.)
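My guess about how this could work, sketched below with entirely made-up numbers: the form factors between patches are precomputed once (the expensive, geometry-dependent step), the runtime bounce is then a cheap matrix-vector iteration, and the destroyed roof is modelled by scaling a single exchange factor on the sky contribution rather than recomputing visibility.

```python
# Hypothetical illustration, not Enlighten's actual algorithm.
FORM = [  # FORM[i][j]: precomputed fraction of patch j's light reaching patch i
    [0.0, 0.3, 0.2],
    [0.3, 0.0, 0.2],
    [0.2, 0.2, 0.0],
]
ALBEDO = [0.7, 0.7, 0.5]
SKY = [1.0, 1.0, 0.0]  # direct sky light per patch with the roof fully open

def bounce(radiosity, exchange):
    """One gathering iteration; 'exchange' in [0, 1] tracks roof destruction
    (0 = roof intact, no sky light enters; 1 = roof fully destroyed)."""
    n = len(radiosity)
    return [
        ALBEDO[i] * (exchange * SKY[i]
                     + sum(FORM[i][j] * radiosity[j] for j in range(n)))
        for i in range(n)
    ]

r = [0.0, 0.0, 0.0]
for _ in range(16):              # iterate the cheap bounce to convergence
    r = bounce(r, exchange=1.0)  # roof gone: warehouse fully lit by the sky
# With exchange=0.0 the interior stays dark; raising it lights the room
# without ever touching the precomputed FORM matrix.
```

This would explain both the fast runtime updates and why dynamically rebuilt geometry is a problem: destroying the roof for real would invalidate the precomputed form factors.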

The second talk I went to was "Supersize your production pipeline" by Slant Six Games, which was an excellent talk about how to automate your production pipeline and how to write good tools. I can only recommend getting the slides from their website and spending some quality time with your favourite scripting language to get the glue in place.

After the lunch break I went to two talks: "An open world game in 60 fps?" and "From boxes to life". The first presentation showed a few of the challenges encountered when making the game "Driver: San Francisco". My key takeaway from the presentation was that making an open-world game is horribly work-intensive: 250 people for several years! Clearly, the content creation side of things requires some serious research.

The second talk, on block-based prototyping in Mass Effect 3, was very fun and entertaining, showing how they moved from a classic waterfall process to an agile, highly iterative one. In particular, they use small groups of people from multiple disciplines (gameplay, concept art and animation) to prototype creatures very early in the process. They focus on getting the behaviour elements right first by using box-based models. Interestingly, the speaker mentioned that the concept artists have an easier job designing the creatures, as the behaviour serves as a rough guide. Similar to the earlier talk by Slant Six Games, having a process with fast iteration times was key to reaching the required quality.

After a break, I went to "Making MMOGs more storylike", which was about how to include a story in an MMO. After a short introduction which covered the basic points -- like an overall story arc -- the speaker went on to describe a story-focused MMO: "The Blitz Online". The game is supposed to take place in London during the second world war, during a time of nightly air raids. The player takes a support role and tries to improve the situation in the city by raising a global "Spirit of the Blitz" value until it's high enough or the air raids end, at which point the game ends. While the basic idea is interesting, the game design very much looked like a clean-room design. Some of the concepts are interesting on paper, but I have some doubts that the game would be fun as designed, and some of the core concepts, like nearly no user-visible progression or penalising people who play irregularly, definitely need some testing first. Unfortunately, there is no game prototype and there has been no user testing so far; at the time of the presentation, it was more of a thought/social experiment than an actual game, so we have to wait and see whether it ever gets implemented.

The last presentation I attended was "Implementing Robust and Scalable Art Integration" by EA Games. They described their in-house process for asset imports. While interesting, I was surprised that the process they described is not standard. Basically, they move assets through multiple review stages and use latest/stable code branches. Again, this talk wasn't as technical as I would have wished for. I also asked about long-term art planning at EA, and it seems that there is no real asset reuse ... I wonder at which point the cost of re-creating assets over and over becomes so high that studios start looking into stable authoring and storage formats.

Last point on my schedule is the GDC night. Overall, the day was interesting, mostly because of discussions with other people, and a few good talks. But I definitely would like to see more technical content. Maybe tomorrow; there's a bunch of interesting talks and keynotes coming up.

Review: PVS Studio

Disclaimer: The friendly folks at Viva64 have kindly provided me with a review version of PVS Studio. I could test it completely on my own, on my machine, just as if I had bought it.

As I have been curious for a while how PVS Studio stacks up against solutions I have already tried (Cppcheck, Visual Studio Code Analysis, Intel C++ Code Analysis), I gladly took the opportunity to test-drive it on two of my projects: VPlan4, a small Qt-based task tracker, and niven, my research engine. VPlan4 consists of ~20 source files comprising ~4 kLoC, while niven consists of ~550 kLoC, with 70 kLoC in the main library and the rest being external dependencies (400 kLoC), test code, infrastructure and scaffolding.

I tested PVS Studio on a quad-core Core i7 at 2.8 GHz with 12 GiB of memory, all data coming from a 7200 rpm HDD RAID-0. The interesting timings are for the larger project: for one library (~17 kLoC) it takes several minutes, and the whole project requires ~45 minutes. PVS Studio compiles each file and stores the preprocessed output, so using precompiled headers actually hurts in this case. On the upside, you can easily restrict PVS Studio to run on a single project or file only. For comparison, processing VPlan4 takes only a few seconds.


PVS Studio directly integrates itself into Visual Studio 2010. It also features an auto-update tool which is run at each startup, so there is no annoying resident update application. Painless procedure, just as it should be. After the installation, you can immediately run it on any C++ project (no need to modify the project files!)


After the checking is done, PVS Studio groups the warnings into 3 categories sorted by severity. For the niven libraries, this results in ~10k warnings, the majority in the lowest-impact category. PVS Studio usually provides distinct warnings for similar-but-not-exactly-the-same issues, which enables precise disabling of warnings. That, and the fact that you only need two clicks to disable a warning, makes the warning list manageable.

Now, the most important part: in the niven code base, it did find three real bugs -- which hadn't been found by Cppcheck or the Visual Studio Code Analysis. Three bugs might not sound like a lot, but many code analysis tools are not even able to find a single real bug, and the niven code base is pretty mature and has been run through lots of static code analysis tools already, so three bugs are an excellent result here. On VPlan4, I got very few warnings and nothing serious, so I guess the code was written ok'ish :)

To give you an idea of what kind of bugs PVS Studio captures, here is one from niven:

tc.cbSize = sizeof (tc);
tc.pszWindowTitle = L"Title";

if (ok) {
    tc.pszMainInstruction = L"Done";
    tc.pszMainIcon = TD_INFORMATION_ICON;
} else {
    tc.pszMainInstruction = L"Error";
    tc.pszMainIcon = TD_ERROR_ICON;

    // Bug: errorString is a local array, so the pointer stored here
    // dangles once this block ends -- caught by PVS Studio!
    wchar_t errorString [256] = {0};
    wsprintf (errorString, L"Error code: %d", GetLastError ());
    tc.pszExpandedInformation = errorString;
}

On the other hand, using PVS Studio does not mean you can throw away all the other analysis tools you are using, as some bugs reported by other tools are not found by it. For instance, Cppcheck finds this one but PVS Studio doesn't:

char s [5];
strcpy (s, "Sixchr"); // 7 bytes (including the terminator) written into a 5-byte buffer

There's also a bunch of things that get reported which shouldn't be, but the folks at Viva64 told me that future releases will fix those. Overall, as far as I can tell, the analysis quality is pretty good (make sure you take a look at the list of issues which PVS Studio can find), and apart from those spurious warnings I couldn't spot any plainly incorrect ones.

Short rant: Having used lots of static code analysis tools so far, I'm still puzzled why nobody bothers to check for portability issues. niven compiles with both GCC and MSVC, and I'd be very happy to have a tool which warns me if I'm using some construct that GCC doesn't understand. That would definitely save me a lot of porting pain.



Pros:

  • Works directly on Visual Studio solutions
  • Finds a lot of issues
  • Good, precise warning messages
  • Good support


Cons:

  • Slow
  • Too many spurious warnings
  • No porting/non-standard C++ warnings

In practice, I found that I check my code base only once every 2-3 months. As it's pretty infrequent, I don't worry too much if the tool produces a few spurious warnings, but I do worry if something is missed. In that regard, PVS Studio fits the bill very well.

Final remarks

If you are new to static analysis, I can recommend giving PVS Studio a shot (there is a trial version.) Static code analysis for C++ is still at an early stage, but even now, a tool like PVS Studio can already help you discover lurking bugs. Especially if your code base is not already covered by unit tests, a static code analysis tool can quickly give you a hint which parts of your code base are ripe for review.

Oh, and before I forget: It also gets regular updates and the support is good -- in fact, I reported a bug at the beginning of my review which was fixed in just a few days.

Thanks again to Viva64 for the review version, and keep up the good work!

Poor man's animation framework revisited

Recently, I described the simple animation system I used for videos while at NVIDIA. After coming back, I've finally had some time to fix the remaining issues and get a useful tool out of it. The core requirements remained the same:

  • Frame-by-frame processing: no .avi or other intermediate files, only plain images
  • Leverage ImageMagick for the actual processing
  • Parallel: efficiently scale with core count

The ImageMagick part was working reasonably well, so I decided to stick with it. However, the original framework required me to manually compute the frame offsets, which made stitching cumbersome and blending nearly impossible. Looking closer at what was happening, it quickly became obvious that the framework was stream-oriented, but this was not explicit anywhere.


Enter Ava, the graph-based video processor. Right from the start, Ava is designed around the processing graph. For example, here is the graph for the SRAA video, which is already pretty big:

The key property is that everything is designed around an image stream. Each node transforms one or more inputs into a single output and can be evaluated independently of all other nodes. All frame indices are relative, that is, graphs can be easily composed as there is no "global" frame count. And finally, the processing order is bottom-up, or pull, instead of push. This means that each node evaluates its own inputs first before applying its transformation, minimising wasted work.
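A minimal sketch of such a pull-based graph (the class names and API are my own invention here, not Ava's actual code): each node pulls its inputs first, and composition nodes re-index frames relatively, so no global frame count is needed anywhere.

```python
# Toy pull-based processing graph; frames are plain ints standing in for images.
class Node:
    def __init__(self, *inputs):
        self.inputs = inputs

    def pull(self, frame):
        """Evaluate inputs first (pull), then apply this node's transform."""
        return self.transform(frame, [n.pull(frame) for n in self.inputs])

class Source(Node):
    def __init__(self, frames):
        super().__init__()
        self.frames = frames

    def transform(self, frame, _):
        return self.frames[frame]

class Concat(Node):
    """Plays its first input, then its second; frame indices stay relative."""
    def __init__(self, a, a_length, b):
        super().__init__()
        self.a, self.a_length, self.b = a, a_length, b

    def pull(self, frame):
        if frame < self.a_length:
            return self.a.pull(frame)
        return self.b.pull(frame - self.a_length)  # relative re-indexing

class Invert(Node):
    def transform(self, frame, inputs):
        return 255 - inputs[0]  # stand-in for an ImageMagick invocation

clip = Concat(Source([10, 20]), 2, Invert(Source([0, 100])))
print([clip.pull(f) for f in range(4)])  # [10, 20, 255, 155]
```

Because `Concat` subtracts its own offset before pulling from the second input, the inverted sub-graph never needs to know where it sits in the final video; that is exactly what makes composition painless.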

An interesting part is how the inputs are handled. As I use ImageMagick for the actual processing, the inputs have to be present as files. While Ava walks the graph (where each node must have a unique name), it also generates unique file names for each node's inputs and outputs. Generating them from the input side is necessary so that, for instance, a single node which gets piped into several other nodes does not continuously overwrite its own input. Of course, this means some duplicated work, but the net win is that there is no synchronisation whatsoever. Reusing the same file for every frame also has the added advantage that fewer file system flushes are necessary. Originally, I would delete each file and recreate it for the next frame, but overwriting directly seems to be a tad faster on Windows.

All of the stuff used to generate the SRAA video is also really compact. There are 11 different node types in total (roughly 200 lines of Python), plus 300 lines for the graph and finally 100 lines or so of scaffolding. Individual frames can be easily piped into FFmpeg, so there is no loss of comfort anywhere. The amount of code is smaller than in the original framework, which had fewer features -- mostly because the declarative part (the graph) now lives in JSON instead of being intermingled with the processing code.

So much for the definitive version of the animation framework. I completely scrapped the old one in favour of Ava, and when reading the old blog post you should be aware that it was only a proof of concept, while this is the real McCoy.