
Inside niven: Dynamic class loading

Today, we take a look at niven's class loader, which is responsible for dynamic instancing of classes from plugins and packages, and is likely to become a core component of the serialization system.

The class loader

The class loader is tightly coupled with the RTTI system. One thing I forgot to mention last time: the RTTI system's class instancing has a drawback - you need a default constructor for each class (i.e. className ()). This just occurred to me when a class that used to work refused to compile due to a missing default constructor. The limitation is not as bad as it might sound at first, as most "interesting" classes don't take constructor parameters anyway, and classes that are instanced during serialization have to initialize from the source archive rather than the constructor. The class loader supports two ways of loading classes:

  • Loading from Packages: A package is a simple DLL file that exports the getClass functions, as described last time. Packages don't need to be initialized before use. A good example of a package is the application itself; in this case, the package contains the classes implementing the application logic. Other packages might contain different resource loaders, which could be loaded at runtime (think of shader generators, for example). The package loader isn't working yet, but that is only a matter of time, since the second loader already works and supports a superset of the features needed for package loading.
  • Loading from Plugins: A plugin is a DLL file that exports all the same functions as a package, plus special plugin-handling functions (init, shutdown and a plugin-info function). Special care has to be taken that these functions are always called before the plugin is used, and that they are called only once. For the class loader itself, it makes no difference whether it is invoked with a plugin or a package, as it assumes that a plugin has already been properly initialized; the load call is the same as in the package case.
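The "called once, before use" guarantee for plugin initialization can be sketched as follows. This is a minimal illustration, not niven's actual code: the Plugin class and the function-pointer setup are assumptions (a real plugin would obtain its init/shutdown functions via GetProcAddress or dlsym from the loaded DLL).

```cpp
#include <cassert>

// Sketch of an init-once plugin guard. A real plugin would get these
// function pointers from the DLL; here they are plain pointers.
typedef bool (*PluginFunction)();

class Plugin {
public:
    Plugin(PluginFunction init, PluginFunction shutdown)
        : init_(init), shutdown_(shutdown), initialized_(false) {}

    ~Plugin() {
        if (initialized_ && shutdown_)
            shutdown_();
    }

    // Runs init() before first use; once it has succeeded, it is
    // never called again.
    bool ensureInitialized() {
        if (!initialized_)
            initialized_ = init_ ? init_() : true;
        return initialized_;
    }

    bool isInitialized() const { return initialized_; }

private:
    PluginFunction init_;
    PluginFunction shutdown_;
    bool initialized_;
};

// Demo init function that counts how often it is invoked.
static int g_initCalls = 0;
static bool demoInit() { ++g_initCalls; return true; }
static bool demoShutdown() { return true; }
```

The class loader can then call ensureInitialized() unconditionally before every load and still be certain the plugin's init runs exactly once.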

More loaders might be added at some point - for example, custom script-driven objects (objects containing code that is executed at run-time) could be loaded through the class loader, and it might even be extended to load a special kind of class into a VM. At the moment, package and plugin loading are most important, and other loaders are not planned for the near future.

Type safety

The class loader is completely type-safe: when loading a class, the loader checks whether the class is derived from the one it is supposed to be assigned to. Usually, you don't want to create an instance of a class you already know, but rather of unknown classes. Example: the render driver is located in a plugin, and the name of the class varies - the only thing that stays the same is the interface the class implements, namely iRenderer. To load such a class, you now use loadClass <iRenderer> ("name of the class", the_plugin). This can load any renderer, as long as it is derived from iRenderer - you just specify the name and the plugin/package where to find it. The engine takes care of finding the class, checking whether it is a valid derivative of iRenderer and, if so, instancing it. The check turns out to be rather simple, as the loader works on the RTTI classes anyway.

So much for today. When I get to serialization, I'll revisit the current class instancing mechanism, as it will probably need some tweaking. At the moment, things are looking really good - with those core services running, I have good hope the serialization work won't block me for too long. I'll also take another look at the window handler then and split it up (window creation is currently bound to the input system, which in turn is bound to the renderer - something I want to break up).
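The type check described above can be sketched roughly like this. All names here (Class, iObject, loadClass, and the Package map standing in for a DLL's exported getClass functions) are illustrations of the idea, not niven's actual implementation.

```cpp
#include <cassert>
#include <map>
#include <string>

struct iObject;

// Run-time class descriptor: name, parent link, construction hook.
struct Class {
    const char* name;
    const Class* parent;          // 0 at the root of the hierarchy
    iObject* (*construct)();      // 0 for abstract classes

    // Walk the parent chain to answer "is-a" at run time.
    bool isDerivedFrom(const Class* base) const {
        for (const Class* c = this; c != 0; c = c->parent)
            if (c == base) return true;
        return false;
    }
};

struct iObject {
    virtual ~iObject() {}
    virtual const Class* type() const = 0;
};

// Demo interfaces and one concrete implementation.
struct iRenderer : iObject {
    static const Class* staticType() {
        static const Class c = { "iRenderer", 0, 0 };
        return &c;
    }
};

struct iAudio : iObject {
    static const Class* staticType() {
        static const Class c = { "iAudio", 0, 0 };
        return &c;
    }
};

struct GLRenderer : iRenderer {
    static iObject* create() { return new GLRenderer; }
    static const Class* staticType() {
        static const Class c = { "GLRenderer", iRenderer::staticType(), &GLRenderer::create };
        return &c;
    }
    virtual const Class* type() const { return staticType(); }
};

// A "package" reduced to a name -> Class map, standing in for the
// getClass functions a real package DLL would export.
typedef std::map<std::string, const Class*> Package;

// The type-safe loader: find the class by name, verify the inheritance
// relationship against T, and only then construct an instance.
template <typename T>
T* loadClass(const std::string& name, const Package& pkg) {
    Package::const_iterator it = pkg.find(name);
    if (it == pkg.end()) return 0;                             // unknown class
    if (!it->second->isDerivedFrom(T::staticType())) return 0; // type mismatch
    if (!it->second->construct) return 0;                      // abstract class
    return static_cast<T*>(it->second->construct());
}
```

Because the derivation check walks the RTTI parent chain before anything is constructed, asking for the wrong interface simply yields a null pointer instead of an unsafe cast.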

Inside niven, Part I

This is the first of a series of articles that describe the inner workings of niven. In the first article, I'm gonna tell you something about niven's RTTI system, object handling and other core stuff.


Niven has a custom-tailored RTTI system, which I covered partially in a previous post. But let's take a closer look at it. All classes that want to use the RTTI system have to be derived from iObject - an abstract base class. Depending on what type of class you have, you need to add something like DEFINE_CLASS(myClassName,myParent) to your header file and IMPLEMENT_CLASS(myClassName, Class::Interface, myPackage) to your source file. This adds a static member of type Class to your class - Class being a class that holds information about your class. Now you can easily query the type of an object by calling its type () function, which returns the run-time type even when called through a base class interface. If you need the type of a class itself (not of an instance - for example, when you want to check whether an instance is of a certain type), you use the staticType () function, which returns the static type of the class.
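A minimal sketch of how such macros can work follows. The real niven macros take additional parameters (Class::Interface, the package) that are omitted here, so treat everything below as an assumption about the mechanics, not the actual code.

```cpp
#include <cassert>

// Per-class run-time type information: name plus a parent link.
struct Class {
    const char* name;
    const Class* parent;
};

struct iObject {
    virtual ~iObject() {}
    virtual const Class* type() const = 0;
    static const Class* staticType() {
        static const Class c = { "iObject", 0 };
        return &c;
    }
};

// DEFINE_CLASS goes into the header: it declares the static type
// information plus the type()/staticType() pair described above.
#define DEFINE_CLASS(className, parentName)                        \
    public:                                                        \
        static const Class* staticType() { return &s_class; }      \
        virtual const Class* type() const { return &s_class; }     \
    private:                                                       \
        static const Class s_class;

// IMPLEMENT_CLASS goes into the source file: it defines the static
// member, linking the class to its parent's type information.
#define IMPLEMENT_CLASS(className, parentName)                     \
    const Class className::s_class = { #className, parentName::staticType() };

struct Base : iObject { DEFINE_CLASS(Base, iObject) };
struct Derived : Base { DEFINE_CLASS(Derived, Base) };

IMPLEMENT_CLASS(Base, iObject)
IMPLEMENT_CLASS(Derived, Base)
```

The key property is that type () is virtual, so querying through an iObject* still yields the most derived class's descriptor, while staticType () is resolved at compile time per class.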

Object handling

For this one, I have to tell you a bit about how the engine used to work, and why I changed it. Previously, it had a global SingletonManager which kept track of all singletons (classes like PluginManager, ResourceManager etc.). For example, it was used in the OpenGLDrv implementation: the driver was loaded via the PluginManager, which called the Plugin->init () function. That function would then create an instance of OpenGLRenderer and register it at the SingletonManager using SingletonManager::register ("Renderer", new OpenGLRenderer);.

This looks all right, but the problems cropped up step by step. First of all, the SingletonManager was actually abused: it was used to keep track of about 10 singletons that always had to be loaded anyway, i.e. it was just a global variable that kept pointers to other global variables (and you couldn't even rely on the SingletonManager itself, as the getSingletonManager call could fail ...). Second, you couldn't control what happened when you loaded a plugin - and as users are also supposed to provide plugins, this was clearly the wrong way (users could do things they were not supposed to do, and there was no easy way to protect the core classes).

So I decided to abandon the SingletonManager completely and put the core classes into a global Engine class. This worked fine for classes like the PluginManager, but when I came to the renderer, there was a problem. Previously, the renderer registered itself at the engine by calling a function on the engine - this kind of self-registration was no longer available for the reasons above. Instead, the engine would have to ask the plugin to create a class or interface instance, and then register that instance itself.
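The resulting shape of the design can be sketched as follows; the member and class names are stand-ins, and the point is simply that typed accessors on one Engine object replace the failable string lookup of a singleton registry.

```cpp
#include <cassert>

// Stand-in core services (not niven's actual interfaces).
struct PluginManager   { int pluginCount;   PluginManager()   : pluginCount(0) {} };
struct ResourceManager { int resourceCount; ResourceManager() : resourceCount(0) {} };

class Engine {
public:
    // Typed accessors: no string lookup that can fail at run time,
    // and plugins cannot overwrite or remove core services, because
    // there is no public registration interface to abuse.
    PluginManager&   pluginManager()   { return pluginManager_; }
    ResourceManager& resourceManager() { return resourceManager_; }

private:
    PluginManager   pluginManager_;
    ResourceManager resourceManager_;
};
```

Compared to SingletonManager::register ("Renderer", ...), a misspelled key or a missing registration becomes impossible by construction: the compiler rejects any access to a service the Engine doesn't own.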
I tried several approaches to accomplish that, including my own COM-like system (which would add a function extern "C" API_EXPORT bool CreateInterface(const GUID& guid, iObject** object); to each plugin, so you would load objects using those awful GUIDs), a string-based factory (extern "C" API_EXPORT bool CreateInstance (const std::string& name, const VersionInfo& minimalVersion, iObject** object)) and a few more approaches like that. The problem all of these have in common: your create function needs to know about all the objects it can create, and it is time-consuming and error-prone to keep it in sync.

Here is the solution I finally came up with: when you add IMPLEMENT_CLASS to your source file for a non-abstract class, it adds an exported function getClassMyClass, which returns a pointer to the RTTI class for this class. And now the real trick: for non-abstract classes, the RTTI Class stores a function pointer to a function that returns an iObject*. DEFINE_CLASS adds a new function to the class, static iObject* staticConstruct();, which returns a pointer to a new instance of the class. Example: you have a class MyClass, and you call iObject* myClassInstance = MyClass::staticConstruct ();. When you query the run-time type of myClassInstance, you'll see it really is an instance of MyClass.

Looks good so far? Well, there is something to keep in mind. Let's assume you have a class with a private constructor - what happens if you call staticConstruct ()? It happily returns an instance, because staticConstruct is a class member and may access private members too. Surely not what you wanted. The trick I used to overcome this is a special template functor class that takes care of the instancing - this template class is never a friend of your class, so a private constructor becomes a compile-time error, and so does adding DEFINE_CLASS to an abstract class.
The error messages are even useful: they tell you that an instance of an abstract class cannot be created, which should point you to the documentation and the DEFINE_CLASS_ABSTRACT macro instead. So much for now - got some serious work to do. The next part will hopefully be about object serialization, package/plugin management, the event queue or static linkage of packages/plugins.
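Here is a compilable sketch of the functor trick, with the caveat that the exact shape of niven's functor and Class layout is my assumption based on the description above.

```cpp
#include <cassert>

struct iObject {
    virtual ~iObject() {}
};

// The RTTI Class stores a plain function pointer for instancing;
// it stays 0 for abstract classes.
struct Class {
    const char* name;
    iObject* (*construct)();
};

// The functor that actually calls "new T". Because it is deliberately
// NOT a friend of T, "new T" only compiles when T has an accessible
// (public) default constructor and is not abstract - so both mistakes
// become compile-time errors instead of surprising run-time behavior.
template <typename T>
struct Constructor {
    static iObject* construct() { return new T; }
};

// What DEFINE_CLASS/IMPLEMENT_CLASS would generate for a class MyClass:
struct MyClass : iObject {
    static iObject* staticConstruct() {
        return Constructor<MyClass>::construct();
    }
    static const Class* staticType() {
        static const Class c = { "MyClass", &MyClass::staticConstruct };
        return &c;
    }
};
```

If MyClass had a private constructor, Constructor<MyClass>::construct would fail to compile at the "new T" line - exactly the behavior a member-function staticConstruct could not provide.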

DirectX 10 - The beginning of the end?

Microsoft recently released the DirectX December 2005 SDK, which contains pre-release DirectX 10 information. Looking closer at DirectX 10, it might, in my opinion, be the best reason yet to switch over to OpenGL and support that API on a broader base.

What's coming?

DirectX 10 (DX10 from here on) is a big step for DX. I won't give a detailed overview of the changes here; if you're interested in one, read GameDev's DX10 overview. Probably the biggest change is that DirectX splits the core functionality from the rest of the API by moving the low-level logic into the DirectX Graphics Infrastructure (DXGI), something like a graphics "kernel". The DirectX API (10, 11, ...) will sit on top of that "kernel", which should make it easier for Microsoft to update DirectX. Another striking feature is the introduction of the geometry shader, which is basically an extended vertex shader that allows not only manipulating vertices but also creating new ones. This is a "must-have" on the track to subdivision surfaces, NURBS and other high-level geometry.

Ok, what's the deal

First of all, there are some statements from Microsoft that make me wonder. On the one hand, they say (according to the PDC 2005 presentations) that graphics are all driven by a kernel-mode DirectX-Krnl, which is probably the old name for DXGI. This means the graphics kernel really is part of the kernel, tightly coupled with it. On the other hand, Microsoft announced that graphics are no longer part of the kernel - they separated the two because a frozen GUI was one of the most common reasons for lockups.

It gets even more confusing when you consider the following: according to the PDC 2005 presentation, Windows Vista's GUI is based completely on DirectX. The whole desktop is rendered using DirectX, i.e. the desktop itself is a DirectX surface. It is also impossible (at least from Microsoft's side) to support OpenGL in windowed mode, as the desktop is locked by DirectX (although IHVs say it would be possible - more on that in a moment). Microsoft plans to wrap OpenGL through DirectX, translating OpenGL calls into DirectX calls. As there is no way - and no interest - to support OpenGL extensions, these aren't wrapped, meaning you get at most OpenGL 1.4 under Windows Vista. Summing this up, it seems that the GUI and other graphics-related parts really depend on DirectX, and that it is all extremely tightly coupled.

But to come back to the OpenGL issue: Microsoft is very proud that under Vista the graphics card is managed by the OS (yet another statement that makes me wonder - didn't they say graphics and OS are separated?); for example, the graphics card's memory can be virtualized. The current documentation states that resources, for example surfaces, can be shared between several DirectX applications. This means it should be easily possible for an OpenGL process to write to its own part of memory and pass that information to the DirectX-based desktop compositing engine.
Obviously, it's all not that simple, as both nVidia and 3dlabs have a hard time getting enough information to run OpenGL under Vista without turning off the desktop compositing (see their forums). At the moment, it is only possible to run full-screen apps with full OpenGL support. No chance to see your fancy DCC tools work with the new GUI - same for all your game editors etc. that come with OpenGL windows.

Developers, developers, developers

Another fact about DirectX: this year, they released at least six DX9 SDKs (February, April, June, August, October, December - at least looking here), each around 300 MB. I hope I didn't miss one :). What does this mean for you? Well, every other month you have to download and install a new SDK just to keep up with your DirectX development. Plus, each SDK comes with higher requirements - the December SDK would love you to have Vista installed to see the DirectX 10 examples. Even with the latest released final Windows version (that should be Windows XP x64 Edition) you are out of luck. And honestly, how many people run that version, let alone a Release Candidate of Windows Vista with Release Candidate developer tools? Except for studios looking to be sponsored by Microsoft, no serious developer is going to do mission-critical work on such a toolset. This smells like vendor lock-in to me: it forces you not only to use DirectX on Windows, but also to upgrade to Vista (in case you want to use the geometry shader, for example - no way to use that under Windows XP), and probably to use Managed DirectX soon too, as that technology is also being pushed. Which means you have no chance of getting your code running on other OSes at all (and don't forget that there are other platforms than PCs where you want 3D graphics, like the PS3). It will be interesting when hardware with a geometry shader comes out: you'll have only two choices - either use OpenGL (where the geometry shader will surely be available as an extension) or force an upgrade to Windows Vista. Let's see which choice people are more willing to take.

So why OpenGL

OpenGL went a completely different way. There is core functionality that was, is, and always will be supported, no matter which extensions are available. It's C-based, so you can access it from virtually any programming language ever invented; there are wrappers for dozens of scripting languages and even .NET (in case you really want to go managed), and many high-level C++ wrappers. You can basically take your ten-year-old copy of GLQuake, add a fancy shader call, compile it today, and it will still work without further modifications - try that with a DirectX 3 app ;). And you can develop for OpenGL freely under any OS you want, with any tools you want, and without the fear that the stuff you implemented today may be obsolete or deprecated tomorrow. Plus, you get immediate access to all new hardware features through OpenGL extensions - no need to wait until a new version of DX is released that supports the fancy new feature you'd like to take advantage of. And it's not a single company that dictates what a GFX card has to be able to do, but the IHVs themselves - and they surely know best how to use their own hardware.

However, there are also downsides on the OpenGL side: no company produces an OpenGL SDK, so you have to gather your information from various sources. The quality of support varies from IHV to IHV, and sometimes from OS to OS. And sometimes the IHVs are slow to agree on an extension - although that seems to be getting better. But considering that OpenGL doesn't force you into a particular programming model (Managed Code, anyone? Using the Microsoft compiler only? Vista?), and lets you develop freely, promoting the use of OpenGL is probably the best thing you can do. After all, having a choice is what developing is all about, isn't it?

Boost::Signal inside a std::map

Recently, I came across a problem with Boost::Signal. I wanted to use signals inside a std::map so I could map them to categories, and this turned out not to be as simple as one might think.


The main problem is that boost::signal is noncopyable, meaning you can't use the assignment operator "=" at all. This prevents the straightforward use in any STL container, as those depend on the assignment operator. You can of course have explicit instances of boost::signal, but you can't put them directly into a container, let alone add and remove them dynamically.


The first solution I tried was to store pointers (in my case, boost::shared_ptr<>) to the signals and fill the map with those. This works, but it is a bit cumbersome to maintain all the pointers and have new calls all over the place. Fortunately, another boost library comes to the rescue: boost::ptr_map from the Boost Pointer Container library. It has two peculiarities I came across (using the boost 1.33.1 beta from 8th November).

  • Its insert () function takes a non-const reference as key, unlike the std::map. Be aware of that, as it can lead to hard-to-find compiler errors.
  • Dereferencing is non-standard: in my case, I had to use (iterator_to_ptr_map)(params_for_signal). I expected I would have to add a "*" somewhere, but somehow the map didn't need it.
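To make the pointer-based workaround concrete, here is a self-contained sketch. It uses std::shared_ptr where the post used boost::shared_ptr, and a hand-rolled noncopyable Signal stand-in instead of boost::signal so it compiles without any dependencies; signalFor is a hypothetical helper, not part of any library.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <vector>

// A minimal noncopyable "signal" stand-in. It shares the relevant
// property of boost::signal: copying is forbidden, which is exactly
// what breaks direct storage in std::map.
class Signal {
public:
    Signal() {}
    void connect(std::function<void(int)> slot) { slots_.push_back(slot); }
    void operator()(int value) {
        for (std::size_t i = 0; i < slots_.size(); ++i)
            slots_[i](value);
    }
private:
    Signal(const Signal&);             // noncopyable, like boost::signal
    Signal& operator=(const Signal&);
    std::vector<std::function<void(int)> > slots_;
};

// The workaround: the map stores shared_ptrs, so only the pointer is
// ever copied by container operations, never the noncopyable signal.
typedef std::map<std::string, std::shared_ptr<Signal> > SignalMap;

// Hypothetical helper: fetch the signal for a category, creating it
// on first use.
Signal& signalFor(SignalMap& signals, const std::string& category) {
    std::shared_ptr<Signal>& entry = signals[category];
    if (!entry)
        entry.reset(new Signal);
    return *entry;
}
```

With the helper, the new calls are confined to one place instead of being scattered around the code, which addresses the "cumbersome to maintain" part even without boost::ptr_map.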

I will try Boost 1.33.1 final and report whether there is any difference from the beta.