Welcome to the second part of the post-mortem analysis. In the first part, I covered the things that went right; now it’s time to look at what could have been better:
- Direct use of std::iostreams everywhere: The main problem is that I assumed an iostream would throw an exception when used incorrectly (as I was used to from .NET and my own projects), but that is of course not true: an iostream just sets the fail bit, and you have to check for it yourself. Unfortunately, this only became a problem very late, when certain files were missing and things broke silently. In the future, I’ll wrap stream creation in a function that throws if the file cannot be opened.
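  A minimal sketch of such a wrapper (the function name is my own invention; the `exceptions()` call additionally makes later hard I/O errors throw):

  ```cpp
  #include <fstream>
  #include <stdexcept>
  #include <string>

  // Open a file for reading; throw instead of silently setting the
  // fail bit when the file cannot be opened.
  std::ifstream OpenFileOrThrow(const std::string& path)
  {
      std::ifstream stream(path);
      if (!stream.is_open())
          throw std::runtime_error("Cannot open file: " + path);
      // Opt in to exceptions for hard read errors on this stream.
      stream.exceptions(std::ifstream::badbit);
      return stream;  // std::ifstream is movable since C++11
  }
  ```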
- File I/O, part two: Even though I used text files, I missed two key points: versioning and type information. Each file should start with a header like “Scene 4”, which is then checked by the corresponding reader. Again, this wasn’t a problem during development, as I always passed the right files around.
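  A sketch of what the reader-side check could look like; the header format and all names here are hypothetical:

  ```cpp
  #include <istream>
  #include <stdexcept>
  #include <string>

  // Verify the type tag and version at the start of a file before
  // parsing the rest, so a wrong or outdated file fails loudly.
  void CheckHeader(std::istream& stream,
                   const std::string& expectedType, int expectedVersion)
  {
      std::string type;
      int version = 0;
      if (!(stream >> type >> version))
          throw std::runtime_error("Missing or malformed file header");
      if (type != expectedType)
          throw std::runtime_error("Wrong file type: " + type);
      if (version != expectedVersion)
          throw std::runtime_error("Unsupported version of " + type);
  }
  ```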
- You probably see a pattern already, but there were a few more instances of “silent” failures. Next time I’ll have to take special care to rule out loading wrong or partial data entirely. I’m also thinking about adding checksums to each file, even though that makes quick manual editing a bit more complicated.
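  For the checksum itself, something tiny like a 32-bit FNV-1a hash over the file contents would already catch truncated or corrupted data; this is just one possible choice, not what the project used:

  ```cpp
  #include <cstdint>
  #include <string>

  // FNV-1a, 32-bit: a lightweight checksum. The trade-off mentioned
  // above applies: any manual edit invalidates the stored value.
  std::uint32_t Fnv1a(const std::string& data)
  {
      std::uint32_t hash = 2166136261u;   // FNV offset basis
      for (unsigned char c : data) {
          hash ^= c;
          hash *= 16777619u;              // FNV prime
      }
      return hash;
  }
  ```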
- Manual threading: All threading should have used OpenMP from the beginning, which would have saved quite some time. Note that with OpenMP, static scheduling can behave badly if the individual items take vastly different amounts of time, so make sure to profile CPU usage; I had to switch to dynamic scheduling to get maximum efficiency.
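  The fix is essentially a one-word scheduling clause; the inner loop below is a stand-in for the real per-item work:

  ```cpp
  #include <cmath>
  #include <vector>

  // With schedule(dynamic), iterations are handed out on demand, so
  // threads that draw cheap items simply grab the next one instead of
  // idling while a statically assigned chunk of expensive items runs.
  std::vector<double> ProcessItems(const std::vector<int>& work)
  {
      std::vector<double> results(work.size());
      #pragma omp parallel for schedule(dynamic)
      for (int i = 0; i < static_cast<int>(work.size()); ++i) {
          double sum = 0.0;
          for (int j = 0; j < work[i]; ++j)   // cost varies per item
              sum += std::sqrt(static_cast<double>(j));
          results[i] = sum;
      }
      return results;
  }
  ```

  Without `-fopenmp` the pragma is ignored and the loop simply runs serially, which makes it easy to debug single-threaded.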
- Unit testing: I started too late and wrote too few unit tests overall, which led to some hard-to-track-down bugs.
- Functional testing: I had no functional testing in place. In retrospect, there is a really easy solution: add a script layer (for instance, using Lua) that issues all commands to the app and can also query its state. This makes it trivial to record all UI actions and replay them, which in turn makes automated testing extremely simple. Another huge advantage is that an “undo” system comes almost for free, as one can replay everything up to the last command. Definitely something I’ll try next time.
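  A rough sketch of the command-layer idea in plain C++ (in the real thing the handlers would be driven from Lua; all names here are hypothetical):

  ```cpp
  #include <functional>
  #include <map>
  #include <string>
  #include <utility>
  #include <vector>

  // Every UI action goes through Execute(), so the app can log a
  // session, replay it for automated tests, and implement undo by
  // replaying the history minus the last command.
  class CommandLayer {
  public:
      void Register(const std::string& name,
                    std::function<void(const std::string&)> handler)
      { handlers_[name] = std::move(handler); }

      void Execute(const std::string& name, const std::string& arg)
      {
          history_.push_back({name, arg});
          handlers_.at(name)(arg);
      }

      // Undo = reset to the initial state, then replay everything
      // except the last command. 'reset' is a hypothetical hook that
      // restores the initial application state.
      void Undo(const std::function<void()>& reset)
      {
          if (history_.empty()) return;
          history_.pop_back();
          reset();
          std::vector<std::pair<std::string, std::string>> replay = history_;
          history_.clear();
          for (const auto& cmd : replay)
              Execute(cmd.first, cmd.second);
      }

  private:
      std::map<std::string, std::function<void(const std::string&)>> handlers_;
      std::vector<std::pair<std::string, std::string>> history_;
  };
  ```

  Replay-based undo is simple but gets slow for long sessions; a real implementation would probably snapshot the state every N commands and replay only from the nearest snapshot.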
So much for the downsides; it doesn’t look too bad after all. I guess I’ll read through the code once more in a month or so, but so far I’m halfway pleased with the results, especially as the code was under constant change most of the time :)