The quest for the right UI toolkit
Graphical user interfaces are hardly a new invention, and yet I don’t see that any vendor has come up with a really good toolkit for creating GUIs. What’s taking them so long, and where is current user interface development actually heading?
State of the nation
Currently, there are two major kinds of approaches: one is declarative user interfaces, the other is what I call procedural interfaces. Procedural interfaces are what Qt, GTK, FLTK, wxWidgets, WinForms and quite a few other toolkits give you. On startup, code like “create a button, add it to a layout” is executed. For creation, you usually use a designer tool which generates this procedural code and hides it from the developer, as it tends to be extremely ugly.
The nice thing about fully procedural interfaces is that they are easy to create on the fly. The not-so-funny part is that the UI creation code is often interwoven with the application logic (event handlers, anyone?). On the desktop, these kinds of toolkits are very popular, as they don’t require many resources during creation and easily support custom widgets (the “direct rendering” mode is usually quite sophisticated).
The other kind of toolkit uses a more or less declarative approach. The most successful UI toolkit out there – HTML (just think about how many web sites there are!) – is fully declarative, unless you start using JavaScript, which adds procedural creation as well. The advantage is that the UI itself is totally decoupled from the logic. The only desktop solution that uses this approach is – as far as I know – WPF. This is actually quite a shame, because I consider the declarative approach much more suitable for UI development; after all, the interface is usually static. Moreover, with the code totally separated from the UI, you can easily move UI elements around without having to update the code. A good example is a context menu: in WPF, you can describe it with a small XAML snippet and load it, giving the UI designer a chance to rearrange the items. With other toolkits, you would create it at runtime, forcing the designer to rearrange it directly in your code.
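Such a context-menu snippet might look like this (a minimal sketch; the menu entries are placeholders):

```xml
<ContextMenu xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
  <MenuItem Header="Cut"/>
  <MenuItem Header="Copy"/>
  <MenuItem Header="Paste"/>
  <Separator/>
  <MenuItem Header="Select All"/>
</ContextMenu>
```

A designer can reorder or regroup these items without ever opening the code that wires up the commands.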
Of course, this only covers the cases where you can live with the stock widgets. For custom widgets, there is basically no real choice besides a fully procedural painting engine. Here, Qt’s QPainter is actually a very nice example: it is a really clean, low-level drawing interface. I’m not so sure that a more scene-graph-oriented approach is really suited for low-level drawing, especially performance-wise – think about creating a plot from line-segment objects vs. drawing the lines directly.
Wish-list
For the future, I wish that the various vendors would agree on some standard – similar to HTML – for declarative UI development. The display engine for it could be different on each platform (I have no problem with widgets looking different, as long as the layout engine can cope with it). For low-level drawing, some kind of minimal drawing toolkit should be provided (similar to QPainter) with a specified minimum set of operations.
The major problem will be the programming language for all of this; something like JavaScript (with C interop) should do the trick, though. Until then, I think I’ll try to use WPF for most of my future projects on Windows, as I’m really pleased with the results so far. I haven’t written custom widgets with it yet, though; that might very well change my opinion on the subject :)