
Book review: OpenCL Programming by Example

I just finished reading through "OpenCL Programming by Example", a new book on OpenCL programming. Before we start though, I'm going into full disclosure mode: I didn't buy the book, I got it from Packt Publishing for review. All they did was send me the book for free -- thanks! Seems it does pay off that I write regularly about OpenCL :)


Without further ado, let's get started. The book is available as PDF, EPUB, MOBI and in the Packt Publishing online reader. I've looked at all of them: the PDF version is the best one, closely followed by the online reader. The EPUB is good but has occasional formatting problems (for example, math formulas are rasterized), and the MOBI didn't format correctly on my Kindle out of the box (the fonts were too small.) You get all versions if you buy it, so you can pick whichever works best for you.

Target audience

This book is aimed at experienced C programmers who haven't used OpenCL before. It doesn't hurt to have some parallel programming experience before buying this book, but the authors do a decent job of explaining the programming model.

The content

I've been reading through this book over the last two weeks, and it's definitely not the kind of book you can just read through in one go. It contains a lot of reference documentation, explaining the OpenCL API functions in similar scope and detail as the official manual pages. There's also no clear path through the book motivating the individual chapters. Ideally, you pick it up when you are interested in a particular topic, read through the chapter, and go straight to implementation.

Another difficulty is that this book has quite a few spelling mistakes and odd sentence structures. If you expect at least some "reading flow" from your book, you'll be disappointed. It's not too horrible, but it does make reading hard at times, and more than once I had the urge to edit and fix a paragraph.

That said, the content itself is solid and obviously written by someone who knows his way around OpenCL. It is written first and foremost for OpenCL 1.2, but it explains the 1.1 APIs that have to be used if your platform doesn't support 1.2. API-wise, the book covers all OpenCL 1.2 core APIs and the OpenGL/OpenCL interop extension. It also mentions SPIR, without going into any detail.

The book itself starts by explaining parallel programming architectures, which are probably the weakest chapters in the whole book. It then continues with the OpenCL API, explaining buffers, images, kernels and events. This is a part where I would have swapped the order: buffers and kernels first, then the OpenCL C language, and then images and events. It finishes off with an optimization chapter, image processing algorithms, OpenCL/OpenGL interop and finally a "case-study" chapter. What's missing are a few performance numbers. Especially when it comes to optimizing code, a result like "this change makes it 50% faster" would be appreciated.

Even though the book is authored by AMD people, they did a fairly good job of keeping everything vendor-neutral. They present different hardware architectures and don't focus on AMD-specific optimizations. That's even a bit unfortunate: I would have liked to see one algorithm picked and explained, showing for instance which optimizations benefit which platform. Just to give the reader an idea of what works where: while OpenCL code is portable, some optimizations are still better suited for GPUs and others for CPUs. If you expect really useful advice on optimizing OpenCL, this book won't be enough for you.

The code

There's also example code included. It compiles out-of-the-box on Linux and all samples work fine. On Mac OS X, I didn't manage to build it, and I didn't try on Windows. There is no sample code for the OpenCL/OpenGL interop chapter. The code uses OpenCL 1.1 functions, so it should work on all platforms.

The verdict

So let's sum this up: If you haven't been using OpenCL yet, and you want a comprehensive introduction which gets you up to speed quickly, this book is for you. It gives you a good idea of the OpenCL API and the programming model, and has a few good hands-on examples. You should have some idea of parallel programming though. What you can't hope for is writing a serious OpenCL application right after reading this book, as you'll need a deeper understanding of the execution model as well as more optimization techniques than this book provides.

That should also make it clear that this book is really focused on beginners. If you already know some OpenCL and hope for new insights, you can most likely skip this book, as it doesn't contain any advanced OpenCL programming techniques.

Porting from Windows to Linux, part 2

Welcome back to the second part of the Windows to Linux porting blog series. Today, we're going to look at how to port the graphics part which we omitted last time, and we're also going to actually use Linux this time!


Here comes the ugly part of the porting story: There is no Direct3D on Linux. Ideally, if your application can work just as well using OpenGL, you can ditch Direct3D completely; otherwise -- and that's what I do in my framework -- you'll have to port your graphics system to OpenGL while keeping the Direct3D part intact. The way I solved this is by providing an abstract interface and two implementations of it, one using OpenGL, the other using Direct3D. The APIs are actually close enough that this is feasible. Clients then choose at run-time which one to use, and the implementations are fully self-contained in a DLL/shared object. Unfortunately, OpenGL and Direct3D don't use the same shading language, so I had to rewrite all shaders in GLSL. This is not that hard (the languages are quite similar), but it's still quite a bit of typing. Before you ask: no, I'm not using the Direct3D effects framework, so I just had to rewrite the actual shader source.
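The abstract-interface approach described above can be sketched roughly like this. All names here (IRenderer, CreateRenderer and so on) are illustrative placeholders, not my framework's actual API:

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// Common interface both backends implement. In the real setup, each
// implementation lives in its own DLL/shared object.
class IRenderer {
public:
    virtual ~IRenderer() = default;
    virtual std::string Name() const = 0;
    virtual void DrawFrame() = 0;
};

class OpenGLRenderer final : public IRenderer {
public:
    std::string Name() const override { return "OpenGL"; }
    void DrawFrame() override { /* gl* calls would go here */ }
};

class Direct3DRenderer final : public IRenderer {
public:
    std::string Name() const override { return "Direct3D"; }
    void DrawFrame() override { /* ID3D11DeviceContext calls would go here */ }
};

// Clients pick the backend at run-time, e.g. from a config setting.
std::unique_ptr<IRenderer> CreateRenderer(const std::string& backend) {
    if (backend == "OpenGL")   return std::make_unique<OpenGLRenderer>();
    if (backend == "Direct3D") return std::make_unique<Direct3DRenderer>();
    throw std::runtime_error("Unknown backend: " + backend);
}
```

The key design point is that client code only ever sees IRenderer, so the Direct3D path keeps working untouched while the OpenGL one is brought up.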

For compute on graphics, I'm using OpenCL, so there's nothing to do, porting-wise. It works exactly the same with Direct3D as with OpenGL. The only minor nuisance is that you need to implement the OpenGL interop twice, once for platforms supporting OpenCL 1.1 and once for 1.2, as they use a different function for texture sharing, but that's it.

For the actual work, I suggest starting with a cross-platform library to create the OpenGL window, and then start (re-)implementing your existing Direct3D code in OpenGL. This is going to be rather straightforward for most of the code (see my porting guide for hints.) One caveat about window handling: In my framework, I made the mistake of not using a cross-platform library for window creation. Instead, I manually created a window on Windows using the WinAPI, and on Linux I did the same by using libX11 directly. This led to differences between the Windows OpenGL path and the Linux OpenGL path which I'm still ironing out -- learn from my mistakes and use either GLFW or SDL2.

There isn't much more to say about the actual porting from Direct3D to OpenGL. It is a lot of work, and there is little to speed this process up. One thing that helped me was to use a right-handed coordinate system in both DirectX and OpenGL, which was one less source of error during porting. Other than that, the only help you'll get here are the various visual debugger tools for OpenGL -- apitrace is probably the best tool currently, as you can trace on Windows and Linux -- and of course the OpenGL wiki, which provides lots of good documentation.


Before we start running the code on Linux, you'll have to pick a distribution. This is mostly a religious choice, really. My current favourite is Kubuntu, which is the Ubuntu flavour using KDE as its window manager. The nice thing about (K)ubuntu is that it's really popular, so you can find help easily if needed. If you are less adventurous, you might also enjoy one of the enterprise Linux distributions like RHEL or CentOS. Keep in mind though that those distributions tend to ship pretty old compilers. While compiling your own GCC on them is certainly possible, I wouldn't recommend it if you are just getting started with Linux.

On Linux, you'll also want to use an IDE. Qt Creator, KDevelop and Eclipse are all fine -- I'm using Qt Creator right now, as is Valve. Just make sure to spend a day or two getting used to your IDE. If possible, try to debug a small application first to get a grip on how GDB works. There was a great presentation on getting started on Linux at the 2014 Steam Dev Days, make sure to check out the videos before proceeding.

First compile

So, everything compiles cleanly on Windows, you're only using portable dependencies, you've replaced Direct3D with OpenGL and you're eager to start working on Linux? Let's not lose time then. Check out your code on your Linux machine and run CMake. As the target generator, use either Makefiles or Ninja files -- Ninja is typically way faster and more likely to choke on incorrectly specified dependencies.

Once the build files are generated, run your compilation from the command line and prepare yourself for extremely quick compile times, as your application will most likely fail immediately with a compile error. That's fine, and this is the part where you're going to have a lot of fun trying to figure out what exactly went wrong. The reason why you should be doing this from the command line is that you get the full error messages and you can quickly abort the build. The fastest and most effective way here is to try to fix each compile error individually on Linux and quickly check the fix on Windows. This will save you from long cycles where you fix lots of bugs on Linux, go back to Windows, fix problems there again, go back, just to see you have broken the Linux build, etc. Avoid this and try to get both platforms into a stable state. If you hit a function which you haven't implemented yet, take a note, write a stub implementation which just exits and go on. This shouldn't happen too often though if you have identified the platform dependencies correctly. If it happens all the time, I would rather go back, fix the Windows version first and continue porting after that.

Even though it might seem at first that Visual C++ and GCC speak two different languages, the changes required to get the code to work with both should make it cleaner and better. One minor source of frustration you might hit is templates, as GCC performs two-phase lookup, while Visual C++ doesn't (see this question on Stack Overflow for details.) One note on GCC: If you come from Visual Studio 2010, 2012 or 2013, you'll want to compile with -std=c++11. Unfortunately (well, depends on how you put it), GCC supports way more of C++11 than Visual Studio, so you have to take special care before using any new C++11 feature.
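To illustrate the two-phase lookup difference: GCC requires members of a dependent base class to be qualified, while old Visual C++ versions accepted the unqualified form. A minimal illustrative example:

```cpp
// GCC performs two-phase name lookup: names in a template that don't depend
// on a template parameter are looked up at definition time, so a member of a
// dependent base class must be reached via this-> (or Base<T>::). Visual C++
// historically skipped the first phase and accepted the unqualified form.
template <typename T>
struct Base {
    T value = T{};
};

template <typename T>
struct Derived : Base<T> {
    T Get() const {
        // return value;     // accepted by old Visual C++, rejected by GCC
        return this->value;  // portable: defers the lookup to instantiation
    }
};
```

Qualifying with this-> compiles on both compilers, so it's the safe spelling to standardize on while porting.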

There's also one problem with GCC's regular expression library (or more precisely, with libstdc++'s, the default C++ standard library on Linux): it's simply non-existent. I do a configure-time check using CMake to see if the standard library provides a working regular expression implementation, and if it doesn't, I transparently fall back to Boost's regular expression library. With GCC 4.9, this will be fixed and you'll be able to just use the standard library on all platforms.
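The transparent fallback can be sketched like this; USE_BOOST_REGEX stands in for whatever macro the configure-time check defines (the name is made up for illustration):

```cpp
#include <string>

// Sketch of the fallback described above: a configure-time CMake check
// (e.g. via try_compile) would define USE_BOOST_REGEX when the standard
// library's <regex> is broken. The macro name is illustrative.
#if defined(USE_BOOST_REGEX)
#include <boost/regex.hpp>
namespace re = boost;
#else
#include <regex>
namespace re = std;
#endif

// Boost mirrors the std names (regex, regex_match, smatch, ...), so client
// code is written once against the alias and never changes:
bool IsIdentifier(const std::string& s) {
    static const re::regex pattern("[A-Za-z_][A-Za-z0-9_]*");
    return re::regex_match(s, pattern);
}
```

Since Boost.Regex deliberately mirrors the standard interface, the namespace alias is all the indirection that's needed.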

I know you were all hoping for some magic trick which allows you to skip the "lots of work" part of actually porting source code, but I haven't found one. That said, it's usually pretty obvious which code is calling platform APIs and which isn't. It's also really helpful that GCC has better and more extensive support for C++ (11) than Visual C++, as you typically won't have to work around bugs or missing features. So much for today -- next week, we'll take a look at how to keep it working, and a few other porting tips & tricks.

Porting from Windows to Linux, part 1

Hi and welcome to a blog series about how to port graphics applications from Windows to Linux. The series will have three parts: Today, in the first part, we'll be looking at prerequisites for porting. These are things you can do any time to facilitate porting later on, while still working on Windows exclusively. In the second part, the actual porting work will be done, and in the last part, I'll talk a bit about the finishing touches, rough edges, and how to keep everything working. All of this is based on my experience with porting my research framework, which is a medium-sized project (~ 180 kLoC) that supports Linux, Windows and Mac OS X.

However, before we start, let's assess the state of the project before the porting begins. For this series, I assume you have a Visual Studio based solution written in C++, with Direct3D being used for graphics. Your primary development environment is Visual Studio, and you haven't developed for Linux before. You're now at the point where you want to add Linux support to your application while keeping Windows intact -- so we're not talking about a rushed conversion from Windows to Linux, but of a new port of your application which will be maintained and supported alongside the Windows version.


Let's start by sorting out the obvious stuff: You need a source control solution which works on Linux. If your project is stored in TFS, now is the time to export everything to your favourite portable source control. If you are not sure what to choose, take Mercurial, which comes with a nice UI for all platforms.

Next, check all your dependencies. If you rely on WIC for image loading, you'll have to find a portable solution first. In my experience, it's usually easier to have the same code running on Windows and Linux later on than having a dedicated path for each OS. In my project, I wrapped the low-level libraries like libpng or libjpg directly instead of using a larger image library.

Now is also the time to write tests. You'll need to be able to quickly verify that everything is working again. If you haven't written any automated tests yet, this is the moment to start. You'll mostly need functional tests, for instance, for disk I/O, so focus on those first. I say mostly functional tests, as unit tests tend to be OS agnostic. In my framework, unit tests cover low-level OS facilities like threads and memory allocators, while everything else, including graphics, is covered by functional tests.
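To give an idea of what such a functional test looks like, here is a standalone disk I/O round-trip check in plain C++. It's only a sketch of the idea (my actual tests run under Google Test, and a real test would exercise the framework's own file classes, not raw fstreams):

```cpp
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>

// Minimal functional test: exercise the OS-facing file path end-to-end by
// writing a file, reading it back, and verifying the contents survive.
bool RoundTripFile(const std::string& path, const std::string& contents) {
    {
        std::ofstream out(path, std::ios::binary);
        if (!out) return false;
        out << contents;
    }   // scope closes (and flushes) the file before we read it again
    std::ifstream in(path, std::ios::binary);
    std::string readBack((std::istreambuf_iterator<char>(in)),
                         std::istreambuf_iterator<char>());
    std::remove(path.c_str());  // clean up the test artifact
    return readBack == contents;
}
```

A test like this is exactly what catches path-separator, line-ending and encoding surprises once the code first runs on Linux.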

For testing, I can highly recommend Google Test. It's not designed for functional tests right away, but it's very easy to write a wrapper around a Google Test enabled project for functional testing. My wrapper is written in Python and sets up a new folder for each functional test, executes each test in a new process and gathers all results.

Finally, if you have any build tools, make sure that those are portable now. I used to write them in C# when it was brand new, but for a few years now I've used only Python for build tools. Python code tends to be easy to maintain and it requires no build process whatsoever, making it ideally suited for build system infrastructure. Which brings us to the most important issue, the build system.

Build system

If you are using Visual Studio (or MSBuild from the command line), stop right now and start porting to a portable build system. While in theory MSBuild is portable to Linux using xbuild, in practice you'll still want a build system which is developed on all three platforms and used for large code bases. I have tried a bunch of them and finally settled on CMake. It uses an arcane scripting language, but it works, and it works reliably on Windows, Linux, and Mac OS X.

Porting from Visual Studio to CMake might seem like a huge effort at first, but it'll make the transition to Linux much easier later on. The good thing about CMake is that it works perfectly on Windows and it produces Visual Studio project files, so your existing Windows developer experience remains the same. The only difference is that adding new source files now requires you to edit a text file instead of using the IDE directly, but that's about it.

While writing your CMake files, here's a few things you should double-check:

  • Are your path names case-sensitive? Windows doesn't care, but on Linux, your include directory won't be found if you mess up paths.
  • Are you setting compiler flags directly? Check if CMake already sets them for you before adding a huge list of compiler flags manually.
  • Are your dependencies correctly set up? With Visual Studio, it's possible to not define all dependencies correctly and still get a correct build, while other build tools will choke on it. Use the graph output of CMake to visualize the dependencies and double-check both the build order and the individual project dependencies.

With CMake, you should also take advantage of the "Find" mechanism for dependencies. On Linux, nearly all dependencies are available as system libraries, serviced by the package manager, so it definitely makes sense to link against the system version of a dependency if it is recent enough.

The end result of this step should be exactly the same binaries as before, but using CMake as the build system instead of storing the solutions directly in source control. Once this is done, we can start looking at the code.

Clean code

Did you ever #include system headers like <windows.h> in your code? Use system types like DWORD? Now is the time to clean up and to isolate these things. You want to achieve two goals here:

  • Remove system includes from headers as much as possible.
  • Remove any Visual C++ specific code.

System headers should only be included in source files, if possible. If not, you should isolate the classes/functions and provide generic wrappers around them. For instance, if you have a class for handling files, you can either use the PIMPL idiom or just derive a Windows-specific class from it. The second solution is usually simpler if your file class is already derived from somewhere (a generic stream interface, for instance.) Even if not, we're wrapping an extremely slow operating system function here (file reads will typically hit the disk), so the cost of a virtual function call won't matter in practice.

To get rid of Visual C++ specific code, turn on all warnings and treat them as errors. There are a bunch of bogus warnings you can disable (I've blogged about them previously), but everything else should get fixed now. In particular, you don't want any Visual C++ specific extensions enabled in headers. The reason why you want all warnings to be fixed is that on Linux, you'll be getting hundreds of compile errors and warnings at first, and the less these are swamped by issues that are also present on Windows, the better.

While cleaning up, you should pay special attention to integer sizes. Windows uses 32-bit longs in 64-bit mode, Linux defaults to 64-bit longs. To avoid any confusion, I simply use 64-bit integers when it comes to memory sizes.
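A short illustration of the LLP64/LP64 difference and the fixed-width workaround (ByteCount is a made-up alias for illustration):

```cpp
#include <cstdint>

// On 64-bit Windows (LLP64), long is 32 bits; on 64-bit Linux (LP64), it is
// 64 bits. Code that stores a memory size in a raw long silently truncates
// on one platform but not the other. Fixed-width types remove the ambiguity:
static_assert(sizeof(std::int64_t) == 8, "need a 64-bit integer type");

// A memory size is always 64 bits, regardless of what long happens to be:
using ByteCount = std::int64_t;

ByteCount TotalSize(ByteCount a, ByteCount b) {
    return a + b;  // no overflow at the 4 GiB boundary on either platform
}
```

Grepping for bare long (and DWORD) during the cleanup phase is a cheap way to find these spots before the Linux compiler does.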

The better you clean up your code, the less work you'll have to spend later during porting. The goal here should be to get everything to build on Windows, with platform specific files identified and isolated.

So much for today! Next week, we'll look at how to get rid of Direct3D and how to start bringing up the code base on Linux. Stay tuned!

My wish list for 2014

I'm sometimes a hopeless optimist, especially when it comes to hardware and software releases. But sometimes I wish for something, like a JavaScript-based, declarative UI in 2008, and it becomes reality in 2010, so let's try again this year ;)

AMD driver fixes

AMD's Direct3D & OpenGL driver works just as expected, but sadly, everything else could use some love. For example:

  • OpenGL/OpenCL interop has been broken for several months on Windows and Linux.
  • Using Direct3D, I get a "hitch" every 3-4 seconds in a few applications (the hitch is a frame which takes nearly 1 second; AMD is aware of the issue and told me it's a driver bug ... months ago.)
  • On Windows, my machine still "loses" a DisplayPort-connected screen occasionally when waking up from stand-by.
  • On Linux, after enabling a tear-free desktop, I get machine lock-ups during booting when I don't switch fast enough to a text console.
  • Power usage is still high when two or three different monitors are hooked up.
  • The latest Linux driver refuses to install due to a missing patch in the kernel module.

I assume that the AMD driver team is busy working on the PS4, Xbox One and Mantle, but I do hope that once they get those things shipped, they'll go back and fix all the small things which make the difference between a working driver and a good driver. I also hope that AMD sets up a proper continuous integration solution to avoid regressions like breaking OpenCL/OpenGL interop or releasing drivers which don't install. This kind of thing seems to be completely avoidable with automation. NVIDIA made this investment quite some time ago (see this blog post about their Moscow-based testing team.)

Wish for 2014: AMD moves focus back to their Linux/Windows drivers.

Micro-server CPUs

For a home server, AMD's 4-core Opteron X2150 or Intel's 4-core Avoton Atoms are the perfect CPUs. With ECC support and low power usage, they allow you to build a real home server which can work as a NAS and DLNA server. Unfortunately, even though AMD announced the X2150 in May 2013, as of December there is still not a single retail board available with it. In the meantime, AMD stopped shipping low-power Athlons with ECC support. Intel is a bit faster, but currently there is only one 4-core Atom board available in Europe. I know a few people waiting to build their own home server who don't want a closed NAS black box, and we're all twiddling our thumbs while we wait for AMD and Intel to ship their stuff.

Wish for 2014: AMD and Intel ship affordable micro-server CPUs in retail!

Widespread OpenCL 1.2 and SPIR support

OpenCL 1.2 provides a few nice additions over 1.1, in particular:

  • SPIR, allowing you to pre-compile kernels
  • Easier image creation, both for interop and in normal API usage
  • Buffer & image filling without having to run a kernel

SPIR is the biggest and most important addition enabled by OpenCL 1.2. Strictly speaking, you could do SPIR with OpenCL 1.1, but the official SPIR specification is written against OpenCL 1.2. With SPIR, it becomes easier to ship large OpenCL applications, as we no longer have to pay with long start-up times on the client. It also becomes possible to compile other languages down to SPIR. Finally, SPIR frees us from problems with compilers; it's not uncommon that one driver version chokes on a particular kernel. With SPIR, we can compile once in advance and rest assured that it'll work on all clients supporting SPIR, as loading SPIR has a lower chance of failing than compiling OpenCL C. There's not much you can do wrong when starting from SPIR :)

The other two changes are nice API improvements. In particular, buffer & image filling is something which can be implemented extremely efficiently in the driver but is tricky to emulate (for example, instead of zeroing out a buffer, the driver can simply give you "fresh" memory.)

The only vendor not implementing OpenCL 1.2 is NVIDIA. Even though implementing the OpenCL 1.2 API is a rather small task, I have doubts that NVIDIA will do it unless Adobe or Autodesk require it. On the SPIR side, things are looking even worse. As far as I know, only Intel ships SPIR support.

Wish for 2014: AMD ships SPIR, NVIDIA updates to OpenCL 1.2 & SPIR.

24", 4k monitors

High-resolution displays are great, not because you can suddenly run games at insane resolutions, but because font rendering improves dramatically. Everyone who has a mobile phone knows how good fonts look on really high-resolution screens. While anti-aliasing can help, higher resolution is the "correct" solution. You can try it for yourself if you have a Retina MacBook Pro -- just turn off font anti-aliasing, and what happens? You still get gorgeous looking text. As a programmer, reading and writing text is my daily job. Unfortunately, while mobile, and to some degree notebooks, went to 150 dpi and beyond, we're stuck with at most 2560x1600 on 27" in the desktop market, with a pretty embarrassing 110 dpi or so.

Just recently, however, Dell announced a 24", 3840x2160 screen, and I can't wait for it to appear in retail. The reason I'm very interested in a 4k screen is that I believe it's close to being good enough that further improvements won't matter. While I can clearly see the difference between 100 and 200 dpi text, 200 versus 400 is much harder to spot, and anything beyond is probably no longer useful. What this means is that once we get 8k 24" screens, we'll be practically done unless we get better eyes :)

Wish for 2014: EIZO ships an affordable 4k, 24" screen.

Games 2013

Just as I did last year, here's a look at the games I played this year. For single-player games, "played" means that I have finished the complete game at least once.

Battlefield 4

This game was the biggest disappointment for me this year. It sports a 6-hour single-player campaign which is just as bad as the previous one, though it looks quite a bit nicer in many places. That wouldn't be too bad, as Battlefield 3 didn't have a good single-player either. However, on the multi-player front, it doesn't deliver this time. I personally had so many problems with bugs & hackers on servers that I paused playing after reaching level 10, waiting for them to start fixing stuff. Overall, I'm really surprised that this game has so many bugs, as Frostbite 3 doesn't look that much different from Frostbite 2, and with a mature engine, you would expect a game to ship with more polish, not less. The post-release patches also didn't help. For instance, the fix for "rubber-banding" actually made the game rubber-band for me.

From EA, the communication about these issues was weird at best. I don't buy that they stopped development on the DLCs to fix the issues with the game, as it's highly unlikely that all the artists crunching out content for the DLCs are now twiddling their thumbs waiting for some game-play fixes. At the same time, the engine team doing the graphics, net-code, audio and other things surely continues working on Frostbite 3 for other licensees, as EA can't afford to delay all other games just because of some Battlefield 4 specific bugs. Realistically, they might change some internal priorities to get bugs affecting Battlefield 4 resolved a bit faster, but I believe what we see here is simply the effect of having separate game & engine teams.

Given a few more weeks or months of fixing, and more content, I do expect this game to be as good as Battlefield 3 again. I'm also curiously awaiting the Mantle backend, but it seems it has slipped from November over December into early 2014 at least. Graphics-wise, it's the new reference for me, even though they still haven't resolved the z-fighting problems (far-away objects tend to mush together with the background and flicker.)

Chaos on Deponia

The second part of the Deponia series. Compared to the first one, there's not much difference; it's basically more of the same. The humor is great and the puzzles are usually well designed, but I wish there were a few more clues for some of them. Without a walkthrough guide, you have to fall back to trial & error, and that shouldn't be necessary. One of the best point & click adventures for sure, and I'm really looking forward to playing the final part of the trilogy in 2014!

Crysis 3

Oh well. I somehow get a crisis when playing Crysis, and this time was no better. Graphics-wise, it's very solid and sometimes fantastic; in particular, the first location outside in the rain and then the grass fields were great. But that's it; the rest was very good, but unfortunately not that memorable. The problem I see here is two-fold. First of all, a detailed world doesn't mean it looks interesting; the dam is a good example. It has high visual complexity and features a lot of props, but it doesn't convey the sheer size of a dam at all. The second problem can be seen towards the end of the game. What happened is that game-play asked for three anti-aircraft locations, and voilà, the level features three isolated pockets instead of a single, continuous world. The contrast in detail makes it obvious that this world only exists to serve the game-play, and that doesn't help make it feel real.

Story-wise, they tried really hard this time to add a character you can relate to by introducing Psycho. Unfortunately, he becomes less and less important over the course of the game. Second, the setting itself is quite confusing and never properly explained. It seems as if the game was started from the graphics side (we want to showcase these three environments) and some story was then built around it. What made me really sad is that they have very nice character rendering technology (some of the faces are astonishing!) and yet fail at story-telling completely. I also have my doubts that telling a good story can work at all in this nano-suit-wearing super-hero scenario. Crysis 1 didn't have this problem, as it was mostly a sandbox/exploration-style game, but in Crysis 2 & 3, they try to make a terminator with emotions, and that is doomed to fail.

The final nail in the coffin was the short play time. I think it took me slightly less than six hours this time, even less than Crysis 2, and that's just too short. Especially as a lot of that time is spent in repetitive combat due to the low variety of different weapons & enemies.

Far Cry 3

I really enjoyed the first Far Cry, but I never managed to finish the second one. Third time's a charm, and indeed, this Far Cry is the best of the three so far. That said, it suffers from a similar problem as Tomb Raider: a guy who "can't kill" at first winds up dispatching enemies a dozen at a time. At least on the content side, this game delivers -- including most side missions (i.e. all camps cleared and all unique items crafted), it took me 18 hours to finish, which I consider reasonable for a blockbuster game. Some of that time is pretty boring, however; in particular, climbing the antenna towers is repetitive, as is clearing out camps later in the game.

Story wise, the beginning is actually good, but it becomes worse over time. In particular, the double-ending is just a waste of time and it would have been better to just do a single ending and somehow involve all characters there (one last attack on Hoyt together with the Rakyat, maybe?) On the graphics side, the thing I enjoyed most about this game is that it is actually bright enough to enjoy the graphics, unlike for example Tomb Raider. The only problematic area I found were caves, where the lighting was looking really weird (only SH/IBL based specular?)

Hitman: Absolution

It's the basic Hitman formula, well executed and with nice graphics. If you never played Hitman before, it might be a bit boring, but if you enjoy the mix of puzzle and shooter, I can highly recommend it. The story is confusing at best, but the levels themselves are very well executed and each level is really distinct. What I also liked is the crowd rendering, in particular, the Chinese market is large and has enough people to resemble a real market.

Max Payne 3

There's one thing I remember about this game, and that is how ridiculously different the game-play is from the cutscenes. From beginning to end, Max is portrayed as a broken loser, and when it comes to rescuing someone, you get one of two scenarios:

  • Three enemies and a hostage wait in a room, a cutscene starts, and Max fails to rescue the hostage and gets caught. It continues with "another day, another woman I couldn't save".
  • Twenty enemies and a hostage wait in a room, no cutscene starts, Max jumps in in bullet time and easily dispatches everyone.

Unfortunately, it doesn't get better until the end. Moreover, Max's bitterness is interesting at the beginning, but towards the end of the game, he just repeats the same things again and again, making the game rather dull. At least it has a proper ending, but otherwise, I can't really recommend it.

Metro: Last Light

The successor to Metro 2033. It's basically a more polished version of Metro 2033 where everything is done right, and you get to see the sun more often. The only drawback is that the exploration part is even weaker; in particular, on the surface, I would have expected hidden demon lairs, abandoned bunkers or something like that. What you get instead is a super-linear path you have to follow, even though you'll find enough filters and equipment in the game to survive on the surface for half an hour or so.

Graphics-wise, the game is really great except for the characters. Faces are detailed enough, but the skin rendering is not that great, and the animations sure need some improvement. On the other hand, some locations do look photo-realistic, and scenes with crowds are well done, as they typically consist of more than 10 people. The level design can't always keep up with the graphics, though. The biggest gripe I had with this game is how the stations were depicted: they are extremely small, and the layout is just plain awful.

Anyway, if you enjoyed Metro 2033, you'll like Metro: Last Light. Story-wise, it's pretty solid up to and including the end. One of the better games of 2013.

Papers, Please

This is surely among the best games I played in 2013, if not the best. Graphics-wise, it loses against triple-A titles like Crysis 3 (though on the art direction side, not by much), but the story is much more engaging and interesting. The comparison with Crysis 3 is not as silly as you might think, as both games took me roughly the same time to finish. Interestingly, Papers, Please has the better character development (remember Jorji?) as well as the more engaging story. It's an interesting game and definitely worth a try if you haven't played it already. As a bonus, it comes with the best save-game system I've ever seen. Mass Effect 4, please follow suit!

Skyrim DLCs (Dawnguard & Dragonborn)

Skyrim is one of the best RPGs I've ever played, together with Fallout 3 (and New Vegas) and Baldur's Gate 2. Unfortunately, both DLCs fall a bit short of the original game. Dawnguard could have been great if the war between vampires and werewolves had been included properly. It wasn't, so if you are a werewolf, you can't tell the (human) Dawnguard that you're especially well suited for fighting vampires, and worse, if you transform into a werewolf, they will start attacking you. The same applies to your companion (!), unless your companion is a werewolf. This just doesn't make any sense and was really something I missed from the Dawnguard DLC.

Dragonborn was even worse in my opinion. While it does introduce a new area to the game (a separate island), this new area is rather small by Skyrim standards, and the quest line is not that interesting. Riding dragons is fun, of course, but towards the end, the really ugly graphics start to hurt the game a bit. The graphics engine behind Skyrim was made for large landscapes, forests and clouds, but at the end of the Dragonborn DLC, you fight over endless oceans and in abstract fantasy worlds, and these just look terrible.

If you haven't played Skyrim, sure, get the Legendary Edition and have fun, but you're not going to miss anything if you skip the DLCs.

Tomb Raider

This game is again a mixed bag. On the one hand, it has very nice graphics, an excellent setting and a good atmosphere; on the other hand, it fails in parts of the plot and the character development. Similar to Far Cry 3, the problem here is how Lara evolves during the game. At first, she is a scared teenager stranded alone on an island, but towards the end, she has killed more people than the plague, some of them in extremely brutal ways (seriously, way more brutal than necessary, and it doesn't help the game at all. Stop doing this, people!)

When making such a game, you have to decide as a developer what you want and stick with it; having a girl kill people by setting them on fire or driving an ice axe into their necks just doesn't work when she apologized to a deer for killing it earlier. What's more, the game could have worked just as well with only non-human enemies and more exploration.

The TressFX hair was also an interesting addition to the game, and it does indeed improve the hair. Unfortunately, it only got one patch to fix the collision geometry. I wish they had updated to TressFX 2.0, as the original hair still has lots of small problems which break the illusion.