Building your own home server, part #2

Today, I'll describe how to assemble the server. It's not going to be a full step-by-step guide with pictures, but rather a read-along guide to help you assemble the server quickly. If you have ever swapped a hard drive or a graphics card, you should be able to follow along; otherwise, ask someone who has already assembled a PC to help you out. On the equipment side, you'll only need a screwdriver (seriously).

Preparation

Before you get started, make sure you have all the parts ready and some free space. I prefer to put a big piece of cardboard on the floor, just in case. You should also ground yourself to avoid static discharge. Use a wristband if you are unsure about this.

Ready, go!

There are different ways to assemble a computer. The key questions are how easily you can reach the cables and connectors and how much space you have in your case. If we look at our motherboard, there's only one tricky part, and that is the power connector at the top edge, which might be a bit hard to reach. The rest looks super-easy. We'll thus start by placing the memory on the motherboard -- there are two things to take into account here. First, there's a notch in the memory module which must match the motherboard slot. If the notch is elsewhere, return the memory, because you got the wrong one. Second, you need to identify the "primary" slots. In the case of our motherboard, these are the blue slots, which should be populated first to get full dual-channel speed. If you stick your RAM into other slots, it all ends up on the same channel, and you only get half the memory bandwidth.

The next step is to fix the motherboard in the case, but before you do that, grab that back plate connector panel. This thing:

The back connector panel.
Make sure to punch out the opening for the management network port, and insert the panel into the case before anything else.

In my case, the opening for the network management port had not been punched out yet (it's the upper-left square box to the left of the circular opening). Just remove it now, as you won't get a second chance. Next, put the back panel into the case and snap it in. This is the only time you should go all Jedi and use the force when assembling a computer. If you need to use force in the subsequent steps, stop, because you're doing it wrong!

Assuming you have fixed the motherboard in place (it's just a bunch of screws, you can't do much wrong here -- just check the manual to see which screws are for which part), it's time to insert the disk drives. In our case, we have a nice hard drive bay which can be easily removed using the thumb screws. Next, check the case manual to see where and how to fasten the 2.5" SSD. Most cases come with 3.5" drive bays only, so 2.5" SSDs either need an adapter or some "trick" like being mounted sideways, on the floor of a cage, or something similar. In our case, it goes in at the bottom of the topmost drive bay. Once you have removed the middle drive bay, it's time to mount the hard drives.

A hard disk cage with two HDDs.
The hard drives have been mounted in their cage. I left space between them to improve cooling and airflow.

Leave one empty slot between hard drives to improve cooling, then put the drive bays back in. There's only one more component left for us to install, which is the PSU. Unlike in many modern cases, where it's mounted "upside-down", here it's just placed the usual way -- which is also why there's a ventilation opening in the floor of the case. Again, make sure you use the PSU screws. That's it, we're nearly done!

A PC case with various components and loose cables.
Everything is in, but the cables are not connected yet.

Cable guy

All that's left is to connect the cables. You should start with the ones that are hard to reach, which in our case is the main power connector. Not only is it a split cable (20+4), but you also need to reach the very top of the case. However, as there's nothing in our way yet, it's still easy enough to do now -- otherwise, we would have connected the PSU to the motherboard before inserting it. The next tricky part is the fan connector, which I connected to the FAN1 header in the top-right corner. Before we connect the disk drives, it's time to take care of the front panel cables. Every case has a few LEDs and switches (at least the power LED and the power switch) which have to be connected to the motherboard. These are usually tiny connectors. Double-check your motherboard documentation to see which cable goes where, but don't worry too much if you get plus and minus mixed up -- nothing bad will happen; stuff simply won't work, but it won't fry your LEDs (at least, it never did for me).

Finally, you plug in the SATA cables and the power cables. Make sure to stow away the remaining cables somewhere where they don't impede the airflow.

An open PC case, all cables connected.
The disk drives have been hooked up, as are the power and all other cables. The cables could be routed a bit more neatly, but as it is, it's not too crowded, so I didn't bother with serious cable management.

I simply stuffed them below the hard disk drives. Now we're ready to go. Connect a power cable to the PSU and turn it on. Nothing should happen yet. If you press the power button now, the machine should start up (the fans will start spinning, lights on the motherboard will turn green) and then simply stop somewhere in the BIOS, asking for a disk to boot from -- which we can't see, as there's no screen attached. If it doesn't start, double-check the main power connector, check the power switch on the PSU and try flipping the power LED connector.

Ok, we're ready to go now. Plug in two (!) network cables, grab a seat in front of your main machine and read the next blog post to get the software set up!

Building your own home server, part #1

Recently, I've been talking with a friend about home servers. After some discussion, he was convinced that he should get one, and asked me whether I could help assemble it. I took the opportunity to document the process. In today's blog post, we'll be looking at the hardware side and how to actually assemble a home server, but before we get started, let's first make clear why you would want one and what it actually is.

Why? What?

So, let's assume you have multiple computers and possibly tablets/mobile phones at home and you want to share data between them. The easiest way is of course to upload everything to the "cloud", which is nice, but uploading and synchronizing all your RAW images to the cloud is not the best use of your bandwidth. Additionally, your trusty Playstation 3 can use DLNA, but no cloud services. The next best alternative is a NAS device. There are lots of those on the market; they are typically a super-low-power PC with a few hard disk drives in a black box which you can't fiddle around with. In general, consumer NAS devices won't provide server-grade hardware or reliability -- for instance, ECC memory and reliable file storage.

The most flexible alternative is to go ahead and build a home server on your own. All the hardware you need is readily available, and by building it yourself, you can get real server-grade equipment for an acceptable price tag. You also control the software side, so you can install reliable, modern file systems like ZFS or BTRFS without having to wait for your NAS vendor to support them. Finally, if you need special services like virtual machines, complex rights management or print services, nothing beats a full-blown server at home.

The hardware

Before we can pick the hardware, we need a list of requirements. Here are mine:

  • The server must support ECC memory, which is strongly recommended for ZFS, for instance.
  • It should use as little power as possible during idle.
  • Disk storage must be mirrored across at least two hard drives.
  • It must support gigabit ethernet.

With these requirements in mind, let's get started. First, we need a mainboard with ECC memory support. This rules out nearly all consumer hardware, and we're left with a few AMD Opterons, Intel Xeons and server Atoms. As we want to go low-power, a passively cooled Atom mainboard with ECC seems to fit the bill pretty well. For this home server, I've used a Supermicro board. One interesting feature of this board is the server management hardware, which we'll cover later when it comes to installing the software.

A mainboard with a passively cooled CPU
The Supermicro A1SAM-2550F mainboard, with a passively cooled Atom C2550 (below the big silver fins)

We also need memory, and due to price/availability I've chosen the cheapest 4 GiB ECC memory sticks I could get. ECC memory uses the same memory chips as normal consumer memory, but adds one more bit per byte to store parity information. Basically, you can think of ECC memory as follows: For every byte stored -- which consists of 8 bits, each either 1 or 0 -- the parity bit stores one more bit (for instance, the bit will be set such that the number of bits that are 1 is always even). This allows ECC memory to detect when a bit has been flipped due to cosmic rays or other influences. In reality, things are a bit more complicated, and ECC memory can even correct single-bit errors.

We'll be using two sticks, not because we have to, but because the Intel Atom has a dual-channel memory controller. What this means is that it only reaches its full performance when you use two, four, or another multiple of two RAM sticks. We could use four, but that's way overkill, so we're taking two 4 GiB sticks. We're also using DDR3L memory modules -- the L stands for low voltage.

Two ECC memory sticks.
4 GiB ECC memory sticks, single-sided. Notice the 9 memory chips instead of the 8 on non-ECC memory. The 9th chip stores the parity information.

For storage, we need at least two identical hard disk drives. The capacity depends entirely on your own usage patterns. For my friend, a few quick back-of-the-envelope calculations gave us a few hundred GiB of required disk storage over the next five years. I've decided to buy two 2 TiB hard drives designed for NAS usage.

Two red WD hard disk drives.
The data storage disks are 2 TiB WD RED drives, which are designed for NAS usage.

The main difference between such drives and normal consumer drives is that their firmware will report errors "right away" so the RAID controller can handle them, instead of trying extra hard to read the data. Additionally, they are more resistant to vibrations, which is important when you put lots of them into the same enclosure.

While it is certainly possible to place the operating system onto those drives as well, it doesn't make too much sense. First of all, mirroring the operating system is a waste of disk storage, and second, you want the drives to go into stand-by mode overnight, which won't happen if the OS needs to install updates. The easiest solution is to use a really low-power, super-reliable disk drive for the operating system. I've selected a tiny Intel SSD. SSDs are great for this, as they use little power, and their reliability is mostly affected by writes -- but as no data will be stored on it except for the OS, the SSD will effectively be in read-only mode for most of its life. We also don't need a lot of storage, so a small SSD with 40 GiB will do just fine -- and those are super-cheap.

A 40 GiB Intel SSD.
The main system disk, a small and cheap 40 GiB SSD from Intel.

Finally, we need a case and a power supply unit. For the case, you want something with good ventilation and air filters. Air filters are critical; otherwise, your server will accumulate huge amounts of dust in no time. The case should also be big enough to allow for some air circulation.

A silver case with a large ventilation grid on the front.
The case features two fans on the front, and one on the top.

The Lian-Li case fits the bill perfectly. It comes with two fans at the front, which will keep the disk drives cool, and a fan at the top to remove excess heat from the case. It's also very light and has little insulation, which is good, as we want it to dissipate as much heat as possible -- a silent case with thick metal and insulation does exactly the opposite.

The PSU is difficult to choose, as our home server will need only very little power and most PSUs are way oversized for the task. I've picked a 300W PSU, which is still complete overkill, but at least reasonably efficient -- the main reason I selected this one is that the company has quite a good support department, which should help in case of trouble.

A black, 300W power supply.
The 300W, silent PSU. A bit overpowered for our needs, but still relatively efficient even at low loads.

There's one final component on the power side which we can add to make the server a bit more reliable. The power grid is not perfect, and while power failures are rather uncommon, one-second outages at night do happen, as do thunderstorms. You don't want to run to your server in the basement to turn it off when a thunderstorm comes. The easiest and most reliable solution is to add a UPS to the server -- an uninterruptible power supply. This is just a fancy name for a big battery which will keep your machine running even when nothing else in your home is working because of a power failure. I've picked one from APC, mostly because I have the same model running at home, and because it can talk to your server via USB and initiate a normal shutdown before it runs out of power.

An uninterruptible power supply
The uninterruptible power supply -- basically, a large battery.

The total bill of materials is:

  • 1x Intel SSD 320 Series 40 GiB, 30€
  • 2x Western Digital RED 2 TiB, 90€ each
  • 2x Kingston ValueRAM 4 GiB DDR3L-1600, ECC, Intel certified, 40€ each
  • 1x Supermicro A1SAM-2550F, 250€
  • 1x Lian Li PC-A04A case, 90€
  • 1x be quiet! Pure Power L8 300W, 40€
  • 1x APC Back-UPS C650, 100€

That's 770€, plus a few bucks for delivery. Fortunately, we won't need anything more, as the software will be all open-source and free. The price is comparable to high-end NAS devices, but you get much more flexibility in your configuration with this setup, and, except for the case and the power supply, it's all server-grade hardware with long warranty times.

In the next post, we'll assemble this thing!

Games 2014

Just like in previous years, here's my personal review of the games I played this year. This may include "duplicates" in case a game kept me interested for more than a year, as well as older games. For single-player games, as usual, I only list them here if I finished the main story line at least once.

Battlefield 4

A winter map from Battlefield 4
One of the new winter maps in the BF4 DLC packages

After it turned out to be a major disappointment last year, EA/DICE did a lot to improve the situation and turn the ship around. Thanks to the DLC, a healthy dose of bug-fixing and a stable player base, I really enjoyed playing Battlefield 4 this year. Additionally, the Mantle patch did improve the performance on my machine a bit, so it's been a great year -- next time, I hope they'll take extra time to make sure everyone gets the great game from day one, and not roughly one year later.

Bioshock: Infinite

Bioshock: Infinite is hard to judge. On the one hand, it is beautifully executed and comes with an intriguing story and excellent characters; on the other hand, it is in places unnecessarily brutal, too linear, and has plot holes which make me cringe. My gut feeling is that it struggles a bit between telling a great story and being a first-person shooter, and sometimes it compromises and provides action where a different approach might have been better.

What I didn't like about the story is the alternate-reality business, which seems like a rather simple plot device that can be used to explain basically everything. Don't get me wrong, it's a nice twist and a creative idea, but it also allows the developers to do anything they want without having to explain it properly. I just dislike these "wildcard" options in storytelling which only trigger when required by the author.

It took me 6 hours to complete with a fair amount of exploration, and the fights are rather repetitive. While they were 6 entertaining hours, that's still really short to tell a good story and allow for proper character development. I hope that a future Bioshock title will be longer, take the time to tell a deeper story, and provide more game content.

Graphics-wise, Columbia is just great. Except for the occasional super-low-resolution light map, the art direction and the rendering come together perfectly. What's also great about the art direction is that the characters fit in seamlessly. It reminds me a bit of Dishonored, Brink or Rage, which also have this perfectly designed, somewhat crazy world.

Borderlands 2

Fun, crazy, entertaining, but not enough long-term motivation for me. I played through the complete story, occasionally in co-op, and it was fun, but that's it for me. I guess if you go co-op only, it can get really crazy. Still, it's a well-executed game with some fun and insane moments. One downside is that the DLC situation is also crazy: everything in this game costs you some real-world money -- even in the Game of the Year, Steam-Super-Sale edition. Seriously, is this really needed?

Broforce

8-bit game with explosions
Broforce in action

Mind off, controller on, and some quick action for short breaks. It's a fun game, no more, no less. It does not make any sense whatsoever, but it's true to itself, and if you like action flicks from the 80s, this is your game!

Deponia 3

A worthy final chapter, and easily the best Deponia so far, including an absolutely fantastic soundtrack (make sure to listen to the hymn of the Organon!) If you liked Deponia 1 & 2, part three is everything you liked, just more refined, with a worthy ending and everything you could want. Great work, guys!

Dota 2

Scoring a point in DOTA 2
Another one bites the dust

I've been playing Dota 2 quite a bit again this year -- it's one of the two games I play in multiplayer. The game didn't change much since last year, despite large balancing patches. My main gripe with Dota 2 remains the bad matchmaking; being thrown into a team which doesn't speak English at all makes it hard to win. I wish there were a "report incorrect language setting" option, which would help a lot. Also, muting as punishment doesn't make too much sense in a team game. Otherwise, it's still a fantastic MOBA, with incredible gameplay depth and complexity.

Dragon Age: Inquisition

Unfortunately, this game gets this year's disappointment award (which somehow seems to be related to Frostbite 3 ...). This game could have been so great, but Bioware went way too far in the sandbox direction. It's quite obvious they were "inspired" a lot by Skyrim, but unfortunately, copying Skyrim is more complicated than just providing an open world.

Image from Dragon Age: Inquisition showing a villa in the background
Dragon Age: Inquisition's graphics are fantastic, there is no doubt about that

Skyrim lives off the fact that there is one world, which is more or less logically tied together. If you go into someone's house and steal an item, he'll be angry at you; if you wear a unique armor, guards will mention it; but more importantly: the world seems to keep going even when you are not there. In Dragon Age, it's absolutely clear that the world is only a set created for you, exclusively. The NPCs are all carefully placed, but there is no chance of encountering hunters or travelers while you're moving through the world. The world is also extremely static. There is no day/night cycle, in some areas large trees do not move in the wind, and there is no changing weather -- each "map" has a fixed light & weather setting, and that's it. The final nail in the coffin is the travel system, which requires you to move from one region to another through the world map. There's no such thing as walking over a hill and seeing Dragonkeep on the horizon. This makes the world feel like a big theme park: carefully crafted, but static and lifeless if you look just a tad behind the facade.

Image from Dragon Age: Inquisition showing the party in a forest
Missing details like non-swaying trees do affect the immersion

In the quest for an open world, Bioware also lost focus on their core competence: writing a gripping story and great characters. In this game, your companions remain extras in a movie. During a mission, at best, you'll get a text message that they approve of one of your decisions or not, but you'll never have to discuss anything with them. In between, you can talk to them, but that's it, more or less. There's also surprisingly little group dynamic within your party. Long gone are the days when Minsc would marry Jaheira and pledge to protect her. The romance is also not well integrated, as it does not progress with the story. This was done much better in Mass Effect, let alone in Baldur's Gate II. Here's an experiment you should try: after you finish Dragon Age, wait a month and try to write down what you remember of your companions.

On the story side, the most emotional and epic moment comes roughly a third of the way into the game, when you move from (minor spoiler ahead!) your first to your second base. The intro is really bad and illogical, to the point that I get the feeling it was either cut down or quickly integrated into the game -- no comparison to the fantastic intro in Mass Effect 2. Logic holes continue throughout the game, including things like being asked by people to deliver letters or search for various mineral samples (in every map!) to obtain more power for the inquisition. This is a problem of scale which many games fall into. On the one hand, you are leading the inquisition, a powerful force; on the other hand, you're solving everyday problems to obtain some dubious rewards (+60 influence, anyone?) I understand that this is not easy to solve, but tasks like resource gathering should be completely automatic; if you need crafting materials, the basic ones should be fetched without intervention, and only rare or unique ingredients in hard-to-reach places should be left to the inquisitor. I'm fine with doing "special forces" jobs, as it's clear that the inquisitor and his party are a force to be reckoned with, but they should make sense in the larger picture. And picking herbs from the garden in my own fortress to solve a quest where someone in another part of my fortress needs herbs definitely doesn't make much sense.

It's also really sad that you never lead your forces into battle. Why not have the inquisitor spearhead the fight against the demons, fighting the biggest and most fearsome ones, while your troops rally behind you and clean up the rest? There's an attempt at something like this roughly halfway through the game, but after the intro to the quest, it's the typical run-around, loot and open-doors stuff as always.

And finally, there is no quest where you move out with all of your companions, nor is there an "all in" quest. Bioware, a party size of four is already very small, and you repeated the mistake from Mass Effect, which also didn't have a mission with all your people together (something a DLC fixed later on).

Injustice: Gods Among Us

A classic fighting game, no more, no less -- I played through the "storyline". It has a few interesting ideas; in particular, the stage transitions make the fights a bit more dynamic. And finally, it's a fighting game where totally insane super-powers actually fit the scenario :)

Kerbal Space Program

Spaceship reaching orbit while still flying on boosters
There's nothing boosters can't do!

This is not really a game, but a real space flight simulation. Don't be fooled by the funny scenario; at its core, it's a serious simulation game which requires a lot of tutorial reading to get any kind of success. Anyway, the first time I got Jebediah Kerman into orbit and safely back must have been one of the most emotional moments in a game this year :)

Partial space station in orbit around Kerbin, against the sun
Building the Kerbal Space Station

Prison Architect

This is a neat indie simulation game in the spirit of classic management games like Sim City -- in this case, it's Sim Prison. The actual game is not that remarkable, but what I found very interesting about it is the intro mission and its moral implications. I would be really interested to know how many prisons wind up having the same ... err ... equipment you are required to build in the tutorial.

Ryse

I admit it: I liked the game. Sure, the gameplay is a bit repetitive, the execution moves are unnecessarily brutal and the ending is not that amazing, but otherwise, it's a solid game. The pacing is good, and the story is interesting enough to keep you engaged, even though it doesn't have amazing plot twists. It's a bit short -- it took me roughly 5 hours -- but those 5 hours were quite a bit better than the 6 hours it took me to finish Crysis 3.

Graphics-wise, it's absolutely fantastic, and for everyone who is remotely interested in computer graphics, it is a must-have. In particular, the facial animations and the skin rendering are phenomenal and definitely cross the uncanny valley. This game really sets the bar for how cutscenes can look today. I wish more games achieved this quality level. It's also interesting how consistent the quality is throughout the game; you never have the feeling that some parts got "more love" than others. Everything is modeled in amazing detail, and together with the solid anti-aliasing, it feels more like walking around in a CGI film than most games.

Spec Ops: The Line

It's presented as an "anti-war" game, and it does pretty well at that -- until you play something like "This War of Mine". Story-wise, it's an interesting twist on the shooter genre, but the ending was not as good as I was expecting. Apart from that, it's a solid but unremarkable shooter.

The Stanley Parable

An experimental game which tries to answer the question of whether we actually have a choice or not. It's perfectly executed, funny, and yet has more depth than you might believe. When you play it, make sure to play a longer session -- for its gameplay to work, it's really important that you remember all the steps you have taken so far. Playing one run every evening, for instance, is going to rob you of most of the experience.

This War of Mine

This easily wins the "surprise of the year" award for me. It's a gruesome, scary, and sad game, but it's good that someone finally made it. Depicting war from the point of view of a civilian, it offers a completely different experience. My only complaint is that emotional stress is not depicted clearly enough; if you don't check the biographies of your survivors, important clues about their feelings are easily missed.

Transistor

A close second contender for disappointment of the year. After the great Bastion, Transistor fails at basically everything that made Bastion great. The soundtrack is much worse, the fights are worse, and the story is worse. It's really sad, as the premise and the scenario are very interesting.

X-Com: Enemy Within

Image showing MEC troopers in X-Com: Enemy Within
Getting ready for the last fights in X-Com: Enemy Within with the new MEC troopers

This is basically X-Com, the definitive edition. The MECs add an interesting component to the game, both tactically as well as from a human perspective. Unfortunately, the turn-based combat is still as weird as it used to be, with elite soldiers missing enemies the size of a truck at point-blank range -- sometimes even in mêlée. Seriously? If you haven't played X-Com yet, Enemy Within is really worth a try, as long as you can live with its gameplay mechanics.

Closing remarks

There are quite a few games I bought but didn't have time to play properly yet (including "The Vanishing of Ethan Carter", "Dungeon of the Endless", "Endless Legend", "Sim City", "Banished") -- 2014 has been an interesting year. I've also replayed Skyrim a bit, this time with a boatload of mods applied, which does improve the immersion a lot. Let's see what 2015 brings. I have Witcher 3 on pre-order, which will hopefully be at least as good as the second part, and I still have hope for another surprise RPG in the spirit of Fallout 3 or Skyrim coming next year.

Developing on Linux: A look back

For nearly a year now, I've been using Linux as my main development environment. Time to look back at the core development experience, which is quite a bit different from Windows.

Build systems

I use CMake exclusively for all my projects, and this works equally well on Windows and Linux. The build itself is done using ninja, which is quite a bit faster than Visual Studio and has fewer "dependency hiccups" -- that is, situations where an updated file would not result in a correct rebuild. Nothing really interesting on this side; build systems on Linux are just as sophisticated and robust as on Windows. A minor advantage for Linux is that CMake finds most libraries automatically, as they are placed in standard directories.
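
To give an idea of what that looks like in practice, here's a minimal sketch of the setup I mean, assuming a top-level CMakeLists.txt and an out-of-source build directory (the names are placeholders):

# one-time configure with the Ninja generator, in an out-of-source build directory
mkdir build && cd build
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release ..
# after that, a single command rebuilds exactly what changed
ninja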

[Update] Examples of why I hate MSBuild:

  • A "null" build which just invokes my unit tests takes 4-6 seconds on Windows and 0.5 seconds on Linux (on Linux, all unit tests execute in parallel, plus 100 ms of overhead for the build system).
  • If you have a library everything depends on, MSBuild will build this library with a single thread and not continue building the rest until this one library is built. In general, MSBuild cannot invoke the compiler on C++ files unless the dependencies have been linked.
  • Parallelization is split between /MP (within a project, the compiler will parallelize on its own) and projects (what the build system sees and uses). You can never get 100% perfect core utilization, only severe under- or oversubscription. On Linux, the build system "sees" all individual actions (compile file, link file) and can schedule them perfectly.

Compilers

I use two compilers on Linux: GCC and Clang. Both are really robust, produce efficient code and have reasonably good error reporting. Clang still wins by a small margin on error reporting. Unfortunately, I can't use Clang as the production compiler, as it still lacks OpenMP support and several of my tools rely on it. However, I keep my projects building with both Clang and GCC to benefit from better error coverage and tooling -- more on this below.
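
Keeping both compilers in play is cheap with CMake; a sketch, assuming a top-level CMakeLists.txt and two hypothetical build directories (build-gcc and build-clang):

# configure one build directory per compiler; CMake picks up CC/CXX on the first run
mkdir build-gcc build-clang
( cd build-gcc && CC=gcc CXX=g++ cmake -G Ninja .. )
( cd build-clang && CC=clang CXX=clang++ cmake -G Ninja .. )
# building both catches more warnings and errors than either compiler alone
ninja -C build-gcc
ninja -C build-clang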

Stability-wise, I didn't have a single internal compiler error on Linux in the last year, nor any problems with unsupported C++. In comparison, I did have several issues with Visual C++ not accepting correct C++11 in the same time frame (mostly related to initializer lists). Right now, I'm using Visual Studio 2013 as the base feature set, as I have to keep a Windows version working for the graphics stuff, but even the base feature set in Visual Studio 2013 is not as solid as I would hope. At least Microsoft seems to be improving the situation quickly, and with Visual Studio now free on Windows, I'll keep it as a target platform for the foreseeable future.

Performance-wise, Linux is a clear winner. This may be related to the kernel, the scheduler, the memory manager or the compiler, I don't really care, but the end result is that my code generally runs quite a bit faster than on Windows. And by quite a bit I mean differences of 30% on exactly the same hardware, which is more than enough reason to port over to Linux. I don't know exactly where Visual Studio is losing, but even for simple code with a few structures, Clang and GCC seem to regularly produce much more efficient binaries than Visual Studio.

One major difference is that the compilers on Linux embed debug information and symbols directly into the binaries and executables by default. This is something I don't like too much. PDBs might not be the best solution, but they do work nicely and make it unnecessary to strip binaries before handing them over. Valve described their approach to setting up a symbol server on Linux at the Steam Dev Days, which I haven't implemented yet -- definitely something I want to check out soon. While we're on debug information ...
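
For comparison, getting a PDB-like separation on Linux usually means splitting the debug info into a separate file after the build; a rough sketch with objcopy (myapp is a placeholder binary name):

# split the debug info out into its own file
objcopy --only-keep-debug myapp myapp.debug
# strip the binary you hand over, but leave a pointer to the debug file behind
strip --strip-debug --strip-unneeded myapp
objcopy --add-gnu-debuglink=myapp.debug myapp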

Debuggers

This is the really sad part of this blog post. In fact, right now, I have no working debugging environment on Ubuntu 14.10. How did that happen? Well, GDB chokes while demangling symbols in my framework (see for example here and here), while LLDB is plain broken on Ubuntu 14.10 (see here and here).

Demangling seems to be a really common issue for both debuggers if you look at the recent bug entries. This is really surprising to me, as the tools are developed in conjunction with the compiler, and you would expect them to be written in sync and tested together (i.e. when a new kind of symbol appears with a weird mangling, you would expect a test to be written to verify that the demangler can handle it). This, together with the fact that the debuggers are often plain unstable (i.e. they simply stop debugging), makes debugging on Linux a real nightmare compared to Windows.

There's one more nail in the coffin, which is the barely functional stepping and variable inspection. GCC in particular seems to aggressively remove variables even in debug builds (or it does not emit the correct debug information to allow GDB to find them), which makes debugging hard; together with instruction reordering, debugging becomes next to impossible. I had multiple occasions where I would step through a program only to jump back and forth through it, as instructions were executed in a different order than written in the source code.
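
For reference, a "debug build" here means roughly the following flags (a sketch; myfile.cpp is a placeholder source file):

# no optimization at all, full debug info, keep frame pointers for backtraces
g++ -O0 -g3 -fno-omit-frame-pointer -c myfile.cpp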

Compared to Visual Studio or even WinDBG, Linux debugging significantly lags behind in two areas: robustness first and foremost, variable inspection second. Robustness for debuggers is absolutely critical. I had very few occasions on Windows where Visual Studio would not debug correctly, and in those cases, WinDBG would do the job. On Linux, I had countless issues with debuggers, ranging from variables not being updated to outright crashes, often forcing me to resort to "printf"-style debugging to get any work done.

Second, inspecting program state with the current crop of Linux debuggers is really cumbersome. There should be a standard way to format the display of variables. Visual Studio finally got it right with the native visualizers; whoever has tried to write a visualizer for GDB or LLDB will have noticed that it is way harder than it should be. Documentation is less than sparse, and it requires far too much script code to get anything useful. Plus, developing and testing visualizers is very time-consuming.

For me, it's pretty clear why the debuggers on Linux are still that bad: writing a debugger is among the least sexy tasks in compiler development. A good debugger requires good compiler support, but nobody goes ahead and brags about how correct their DWARF info generation is; people are much more likely to write about new optimizations or new features (in the last three LLVM developer meetings, there has been only one full talk on debugging, and only very few BoF/lightning talks). Second, the compiler must be written with the debugger in mind, which means lots of boring work on the compiler side. Microsoft can just pay developers to buckle down and do it anyway, but getting people in the open-source community motivated for this is much more difficult.

IDEs

I use Qt Creator on Linux, which is good enough for most work. On Windows, Visual Studio sets the bar for IDEs, especially for C++ -- there's simply nothing that comes close. While Qt Creator offers a decent programming experience, the weird window layout (no toolbar, no docking windows, etc.), the bad debugger integration (together with the debugger problems mentioned above) and the substandard C++ coding support (IntelliSense is faster and provides help in more areas) need a lot of work to be on par with Windows. KDevelop shows that the C++ integration can be much better when Clang is used, but Qt Creator and Clang are not there yet.

I also wish Qt Creator would remove the stupid left-hand toolbar and replace it with a "normal" toolbar, plus add some docking windows so I can take advantage of two screens while debugging. That said, I'm still productive in Qt Creator, and probably not slower than I am in Visual Studio. The difference while writing code is really minuscule, and with faster rebuilds, it's probably even a tiny win for Qt Creator. All of this breaks down of course when it comes to debugging, where I'm easily only half as efficient on Linux as on Windows.

Profilers, validators, etc.

Back to better news: profiling and validation. For profiling, there is 'perf', which is decent for getting a quick idea of what is going wrong. For serious profiling, Intel's tools are just as good on Linux as they are on Windows. What's lacking on Linux are good system monitoring tools -- these are scattered around in various places. Windows has the Windows Performance Analysis tools, which group everything under a common UI. On Linux, it's pretty clear that everything has been developed in isolation and there hasn't been a guiding hand behind it all to enforce a common design. This is improving with 'perf', which will hopefully become for Linux what the WPA toolkit is for Windows. My major gripe with perf is the lack of visualization, which quickly forces you to resort to scripts to get some semi-interactive SVG output.
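
A minimal perf session looks roughly like this (myapp is a placeholder binary); the last step is where the external SVG/flame-graph scripts come in:

# sample one run of the program, including call graphs
perf record -g ./myapp
# browse the hottest functions in an interactive text UI
perf report
# dump the raw samples, e.g. to feed them into external visualization scripts
perf script > samples.txt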

Validation-wise though, Linux is a clear winner, with the holy trinity of tools: Valgrind, Address Sanitizer and the Clang Static Analyzer. Valgrind is basically reason enough to port your work to Linux, and combined with the Address Sanitizer, memory corruption simply becomes a non-issue. There is one small problem with ASAN and Valgrind, which is proprietary, closed-source libraries -- for instance, your good old vendor-provided graphics driver. These can seriously mess up the analysis (in the case of ASAN, it will abort early -- usually long before anything interesting happens in your own code). The only way to solve this is to write long ignore files for Valgrind; for ASAN, you can hope to stub out the offending library or convince the vendor to use ASAN themselves, but that's it.
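
For reference, the two invocations look roughly like this (myapp, main.cpp and driver.supp are placeholders):

# ASAN is compile-time instrumentation: rebuild with the sanitizer, then just run
clang++ -fsanitize=address -fno-omit-frame-pointer -g -o myapp main.cpp
./myapp
# Valgrind needs no rebuild, but driver noise goes into a hand-written suppression file
valgrind --leak-check=full --suppressions=driver.supp ./myapp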

The Clang Static Analyzer is similar to the Visual Studio analyzer, but at least in my experience, it finds more issues, at the expense of more false positives.

Closing remarks

So, looking back on my development experience on Linux, there are two areas where Linux is a clear winner for me: the build system is just so much better than MSBuild that it's not even funny, and the tool support is just great. I found numerous weird issues on Linux in seconds which would have taken me hours to track down on Windows. I used to fear uninitialized memory issues, but now, with ASAN and Valgrind, they have become a non-issue.

Debugging-wise, the situation is really bad though, and while most debugging can be done on the command line or with printf, I sometimes have to resort to Windows and Visual Studio to get stuff done in a reasonable time. This is really not a good situation.

Compilers on Linux are better, but Visual Studio 2013 is sufficiently close that I don't feel limited on this front. With Clang gaining native Windows support, the advantage for Linux here is going to shrink to zero. Performance is a different story though, with my Linux build regularly outperforming the Windows version by 30% or more.

Overall, the thing I like most on Linux is the instantaneous turnaround with ninja builds, which is a real productivity boost. Together with the better performance I usually get, it's enough to keep me on Linux, but the bad debugging experience and the lack of a polished IDE are getting increasingly frustrating. The IDE is not the main problem here, but the debuggers on Linux are at a point where some bugs are nearly impossible to track down in a reasonable time -- a situation which clearly has to change.

Quick guide to autofs for SMB and NFS shares on Ubuntu

If you have to work with network shares, you're probably familiar with fstab and auto-mounting them at boot. This is great if your server is always reachable, but it becomes cumbersome if the connection is lost. If the target server goes down, or is not yet up when your machine boots, you have to mount the directory manually as root. This is particularly bad if your network takes a long time to get online, for example, if you are on a wireless connection.

There's a much better and more robust way, which is to use autofs. With autofs, shares are only mounted when required and get unmounted again after some idle time, and disconnects are handled much more gracefully. The setup is pretty simple, but there aren't too many guides around to help you get started. Here's a quick-start guide for Ubuntu; other Linux distributions will be similar. In this guide, I'll be setting up autofs to connect to a Linux NFS share and to a password-protected Windows SMB share.

Preparation

You need to install autofs and cifs-utils. On Ubuntu, you can use apt-get to get both. You'll also need folders where you want to mount your shares; in my case, this will be /mnt/net/smb for SMB shares and /mnt/net/nfs for NFS shares.
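
For reference, the preparation boils down to something like this:

sudo apt-get install autofs cifs-utils
sudo mkdir -p /mnt/net/smb /mnt/net/nfs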

Configuration

Every time you change something in the config, use sudo service autofs restart to restart the autofs service.

The main configuration file is /etc/auto.master. Here, we define the mapping of folders to shares. Let's add the two directories from above:

/mnt/net/smb /etc/auto.cifs-shares
/mnt/net/nfs /etc/auto.network

This tells autofs to check auto.cifs-shares when you try to access a share in /mnt/net/smb, and similarly for NFS. The configuration files contain one share per line. The first part of the line is the directory name that will be used for the share; after that, you specify how the connection should be made, and finally the target. Let's take a look at auto.network and add an NFS share there. In my case, the contents are:

Shared -fstype=nfs,rw,soft,tcp,nolock homeserver:/tank/Shared

If I go into the /mnt/net/nfs/Shared directory now, it will automatically connect to the /tank/Shared NFS share on the machine called homeserver. That was easy, wasn't it? Next up, SMB shares, in auto.cifs-shares:

home -fstype=cifs,rw,credentials=/home/anteru/.smbcredentials,file_mode=0777,dir_mode=0777 ://homeserver/anteru

This is slightly more complicated. You don't want to save your SMB credentials in the auto.cifs-shares file (which may be readable by every user), so I've stored them in /home/anteru/.smbcredentials. This is a very simple file which looks like this:

username=anteru
password=swordfish
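
Since this file contains your password in plain text, it's a good idea to make sure only your own user can read it, for instance:

chmod 600 /home/anteru/.smbcredentials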

When connecting to an SMB share, you'll likely want to add file_mode=0777 and dir_mode=0777 so everything created there is readable and writable by default for all users. This is needed because the share is mounted as root. You can also force it to be mounted as a specific user: remove the file_mode and dir_mode options and replace them with uid=1000 (1000 is my user id). To find your user id, run:

id -u username
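
With that uid in hand, the share line from above would then look something like this instead:

home -fstype=cifs,rw,credentials=/home/anteru/.smbcredentials,uid=1000 ://homeserver/anteru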

Finally, there's one slightly tricky bit, which is shares containing $ characters. You need to escape them in the auto.cifs-shares file using \$, otherwise you'll get errors.
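
For example, a hypothetical hidden share named backup$ would look like this:

backup -fstype=cifs,rw,credentials=/home/anteru/.smbcredentials ://homeserver/backup\$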

That's it, I hope this quick guide helps you get started with autofs!