Postmortem: Diploma Thesis, 1
As I’m in the finishing stages (writing ;) ) of my diploma thesis, it’s time for a post-mortem analysis of the source code. This week, I’ll look at the things that went right and that I’ll try to use again in the future.
- CMake and the subsequent port to Linux: Making the application portable allowed me to clean up the source code a bit and eventually get better performance. In retrospect, I should have used CMake earlier in the process to save time on porting later. For the next project, I’ll definitely start using CMake as early as possible.
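For illustration, a build for one of the viewers could start from a minimal CMakeLists.txt along these lines – the project, target, and source file names here are made up, not the actual thesis build files:

```cmake
# Hypothetical minimal CMakeLists.txt for a portable OpenGL viewer.
cmake_minimum_required(VERSION 2.8)
project(ThesisViewer)

# OpenGL is available on both Windows and Linux, which is what made
# the port straightforward.
find_package(OpenGL REQUIRED)

add_executable(viewer viewer.cpp mesh_loader.cpp)
target_link_libraries(viewer ${OPENGL_LIBRARIES})
```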
- OpenGL for all viewers and GUI applications: Even though I’m strongly biased towards DirectX due to its superior debugging capabilities and cleaner API, OpenGL proved reasonably easy to use and made the porting very easy. Using GLSL was not too painful either, though I was running on nVidia hardware only.
- Libraries: SQLite turned out to be very easy to integrate as an intermediate format, and allowed me to store processing data without having to deal with file I/O issues. Definitely a good choice. For reports, I used CTemplate, which was also very easy to use and allowed me to generate nice HTML reports – much better than purely textual output, as I could easily integrate images into them.
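To give an idea of why SQLite works so well as an intermediate format: every tool in the pipeline can read and write the same file without any custom I/O code. This is a sketch with a made-up schema (the table and column names are illustrative, not the thesis database), using an in-memory database where the real pipeline would use a file:

```python
import sqlite3

# Hypothetical per-mesh table; a real pipeline would open a file
# like "intermediate.db" instead of ":memory:".
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE mesh_info (name TEXT PRIMARY KEY, "
    "triangle_count INTEGER, processing_time_ms REAL)"
)
conn.executemany(
    "INSERT INTO mesh_info VALUES (?, ?, ?)",
    [("bunny", 69451, 120.5), ("dragon", 871414, 1532.0)],
)
conn.commit()

# Any other tool (C++ or Python) can query the same file back.
rows = conn.execute(
    "SELECT name, triangle_count FROM mesh_info ORDER BY name"
).fetchall()
print(rows)  # [('bunny', 69451), ('dragon', 871414)]
```

The nice part is that “file I/O” reduces to SQL: no custom parsers, no versioned binary headers, and you can inspect intermediate state with any SQLite shell.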
- Text formats: All input/output data would eventually end up in text files. I didn’t use any binary formats, so I could easily debug all data. This turned out to be a major time-saver, as I could also parse the text files with scripts. Note that the text files contained bulk data like meshes, while SQLite was used for intermediate data like per-mesh information.
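The scripting advantage is easy to demonstrate: a line-based text mesh format can be parsed in a few lines of Python. The format below is a minimal OBJ-like subset chosen for illustration; the actual thesis formats may have looked different:

```python
# Hypothetical line-based mesh format: "v x y z" for vertices,
# "f i j k" for triangle faces (1-based indices, as in OBJ).
mesh_text = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""

vertices, faces = [], []
for line in mesh_text.splitlines():
    tag, *rest = line.split()
    if tag == "v":
        vertices.append(tuple(float(x) for x in rest))
    elif tag == "f":
        faces.append(tuple(int(i) for i in rest))

print(len(vertices), len(faces))  # 3 1
```

A throwaway script like this is all it takes to sanity-check, filter, or convert data between pipeline stages – exactly the kind of debugging a binary format would have made painful.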
- Modular design/Python: The whole processing was designed as a pipeline of several libraries and executables which would transform the data. Some of these modules were written in Python, while most were in C++. Especially towards the end, I could still easily add new algorithms without breaking stuff. This became important when I had to extend one processing module: I could simply write a new module and call the old module where necessary – reusing a lot of code while still being able to quickly add the new functionality. All I had to do was replace the calls to the old module, which were very easy to identify thanks to the separation. Python turned out to be a very good choice for quickly cleaning up data, and together with SQLite I could easily exchange data with the other C++ tools in the pipeline – much easier than doing that in C++.
- UI: The applications usually had something like 20 parameters, which became a problem towards the end when I had to run them very often while tweaking just a few of them. The solution was to write small UIs wrapping the parameters; with PyQt4 + QtDesigner I could get GUI front-ends running in half a day, saving me lots of time later on. In future projects, I’ll try to write such small GUI runners again; another nice point of having a GUI runner is that you can give a demo more quickly ;)
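One way to keep such GUI runners cheap to write – and this is a sketch of the idea rather than the actual thesis code, with made-up parameter names and without the PyQt4 widget code itself – is to describe the parameters declaratively in one table, from which both the GUI widgets and the final argument set can be derived:

```python
# Hypothetical declarative parameter table for a processing tool:
# (name, type, default). A PyQt4 runner could generate one widget per
# row; here we only show the merge of GUI overrides with defaults.
PARAMS = [
    ("smoothing_iterations", int, 5),
    ("edge_threshold", float, 0.25),
    ("output_prefix", str, "run"),
]

def build_args(overrides):
    """Merge user overrides (e.g. text-field values) with defaults."""
    args = {name: default for name, _type, default in PARAMS}
    for name, _type, _default in PARAMS:
        if name in overrides:
            args[name] = _type(overrides[name])  # coerce widget strings
    return args

# Typical late-stage run: tweak one parameter, keep the rest.
args = build_args({"edge_threshold": "0.1"})
print(args)
```

With a table like this, adding the 21st parameter is one line, and the GUI runner never gets out of sync with the tool’s argument list.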
All in all, the programming went OK, and I’ll try to do some things the same way in future projects. Next week (hopefully), I’ll take a look at the stuff that didn’t work out quite as expected.