File system thoughts

I've been working this morning on the file system (this afternoon I'm going to install my new Asus K8N-SLI Premium ;) mobo so my desktop PC will be running again). I've added some generic stream classes: IStream, MemoryStream, TextStream and IFile. Some of them depend on each other: MemoryStreams can be streamed into IFiles, and the same is true for TextStreams. The log now uses the TextStream, which means that it basically stores a TextStream and the output file and redirects all incoming data to the TextStream with a bit of formatting (the log prepends the current timestamp). This approach has some nice advantages: for example, MemoryArchive can wrap around a MemoryStream and does not need to fall back to the C++ I/O streams internally. MemoryArchive provides some serialization functions for primitive data types and hides a bit of MemoryStream's functionality (namely, the raw read/write functions and the seek function). Another nice thing is that MemoryArchive can now stream into any kind of stream, including MemoryStream - which means you can do deep copies "for free" without additional work. Code like this is enough:

MemoryArchive& MemoryArchive::operator= (MemoryArchive& rhs) {
    MemoryStream stream;
    rhs.streamTo (&stream);  // serialize rhs into a temporary stream
    streamFrom (&stream);    // read it back into this archive
    return *this;
}
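
Going back to the log described above, the TextStream redirection could look roughly like this. This is a minimal sketch only: everything beyond the TextStream idea itself (the Log class, its method names, the timestamp format) is an assumption, not niven's actual code.

```cpp
#include <cassert>
#include <ctime>
#include <string>

// Minimal stand-in for the TextStream described above: it just
// accumulates text in an internal buffer.
class TextStream {
public:
    void write (const std::string& text) { buffer_ += text; }
    const std::string& contents () const { return buffer_; }
private:
    std::string buffer_;
};

// Hypothetical log that owns a TextStream and prepends a timestamp
// to every incoming message before forwarding it to the stream.
class Log {
public:
    void write (const std::string& message) {
        stream_.write ("[" + timestamp () + "] " + message + "\n");
    }
    const TextStream& stream () const { return stream_; }
private:
    static std::string timestamp () {
        std::time_t now = std::time (0);
        char buf[32];
        std::strftime (buf, sizeof (buf), "%H:%M:%S", std::localtime (&now));
        return buf;
    }
    TextStream stream_;
};
```

The nice part is that the log itself never touches the output file format; flushing the TextStream to an IFile (or anywhere else) stays a separate concern.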

So much for now, I'll post a few more details when the refactoring/rewrite is over.

Rewriting

I'm rewriting niven's FileSystem. The FileSystem originally supported different backends transparently to the user (local filesystem, web-based files, etc.), but I'm removing all this stuff in favour of a more streamlined, easy-to-use file system which supports local files only. I'm not entirely sure whether I should keep a backdoor for archive or web-based file systems in it, as they don't map very well to a general file system - archives and the web are read-only filesystems, and it's probably better to put them into a separate class, in order to make it clear to the user that he can't use them as general-purpose file systems. One idea that floats around my mind is special calls in the file system that allow the user to request a file using a URL, which would in turn invoke the appropriate file system.

The problem with a file system supporting both archives and normal files is: if you load a file, it might happen that the file is loaded from an archive, but if you modify it, the modifications happen to a local copy of the file, as you can't work inside the archive. Now imagine the archive loader has a higher priority than the local loader, and you've got your dilemma. You can't really figure out what went wrong because you have to wait until all backends finish, and then it's usually too late to find the error - I don't like this behaviour. I think I'll enable different backends only for the openFile call, as this is the only safe one - all other calls, like createFile etc., will probably go into the local file system. This needs some thought; I'm gonna write a bit more about it tomorrow. I hope to finish the rewrite and the package export/import during the next week.
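
The "backends only for openFile" idea could be sketched like this. To be clear, this is just how I imagine it could work; all class and function names here (IFileBackend, LocalBackend, ArchiveBackend, the scheme table) are made up for the sketch and not actual niven code.

```cpp
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical file handles: archive-backed files are read-only.
struct IFile {
    virtual ~IFile () {}
    virtual bool isWritable () const = 0;
};
struct LocalFile : IFile {
    bool isWritable () const { return true; }
};
struct ReadOnlyFile : IFile {
    bool isWritable () const { return false; }
};

// A backend can only open files; whether it can create them is not
// part of the interface at all.
struct IFileBackend {
    virtual ~IFileBackend () {}
    virtual std::unique_ptr<IFile> open (const std::string& url) = 0;
};
struct LocalBackend : IFileBackend {
    std::unique_ptr<IFile> open (const std::string&) {
        return std::unique_ptr<IFile> (new LocalFile);
    }
};
struct ArchiveBackend : IFileBackend {
    std::unique_ptr<IFile> open (const std::string&) {
        return std::unique_ptr<IFile> (new ReadOnlyFile);
    }
};

class FileSystem {
public:
    FileSystem () {
        backends_["file"].reset (new LocalBackend);
        backends_["zip"].reset (new ArchiveBackend);
    }

    // openFile is the only call that consults the scheme -> backend
    // table; a URL without a scheme falls through to the local backend.
    std::unique_ptr<IFile> openFile (const std::string& url) {
        const std::string::size_type sep = url.find ("://");
        const std::string scheme =
            (sep == std::string::npos) ? "file" : url.substr (0, sep);
        std::map<std::string, std::unique_ptr<IFileBackend> >::iterator it =
            backends_.find (scheme);
        if (it == backends_.end ())
            throw std::runtime_error ("no backend for scheme: " + scheme);
        return it->second->open (url);
    }

    // createFile deliberately bypasses the backends and always goes
    // to the local file system - no priority ambiguity possible.
    std::unique_ptr<IFile> createFile (const std::string&) {
        return std::unique_ptr<IFile> (new LocalFile);
    }

private:
    std::map<std::string, std::unique_ptr<IFileBackend> > backends_;
};
```

This keeps the read-only nature of archives visible at the type level (the returned handle says it isn't writable) instead of failing late after all backends have run.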

First object serialized

Finally I managed to get enough of the serialization working to store and load my first object - a Tuple3-derived class. Now I "just" need to support generic handles. Here you can see a quick example: I create a Vector3f which stores/reads its three member variables in the archive. After that, I reset the archive by closing it and opening it for reading. Then I serialize the contents of the archive back into a fresh new vector.

Vector3f testv(1,2,3), testr;
MemoryArchive b;
b.setMode (IArchive::WRITE);
b & testv; // from testv into b
b.setMode (IArchive::CLOSED);
b.setMode (IArchive::READ);
b & testr; // from b to testr
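
To show how a single operator& can go both directions depending on the archive mode, here's a self-contained sketch. It folds IArchive's mode enum into MemoryArchive for brevity, and all internals (the byte buffer, the read cursor) are assumptions about how such an archive could be built, not the actual implementation.

```cpp
#include <cstring>
#include <vector>

// Sketch of a mode-dependent archive: the same "b & x" expression
// writes x in WRITE mode and reads x in READ mode.
class MemoryArchive {
public:
    enum Mode { CLOSED, READ, WRITE };

    MemoryArchive () : mode_ (CLOSED), pos_ (0) {}

    void setMode (Mode mode) {
        mode_ = mode;
        pos_ = 0;  // changing the mode rewinds the archive
    }

    MemoryArchive& operator& (float& value) {
        if (mode_ == WRITE) {
            // append the raw bytes of the value to the buffer
            const unsigned char* p =
                reinterpret_cast<const unsigned char*> (&value);
            buffer_.insert (buffer_.end (), p, p + sizeof (float));
        } else if (mode_ == READ) {
            // copy the next bytes out of the buffer into the value
            std::memcpy (&value, &buffer_[pos_], sizeof (float));
            pos_ += sizeof (float);
        }
        return *this;
    }

private:
    Mode mode_;
    std::vector<unsigned char> buffer_;
    std::size_t pos_;
};

struct Vector3f {
    float x, y, z;
    Vector3f (float x_ = 0, float y_ = 0, float z_ = 0)
        : x (x_), y (y_), z (z_) {}
};

// The vector just forwards its three members; because operator& is
// symmetric, this one function covers both storing and loading.
MemoryArchive& operator& (MemoryArchive& archive, Vector3f& v) {
    return archive & v.x & v.y & v.z;
}
```

The payoff is that a class writes its serialization code exactly once and gets both directions, which is what makes the example above so short.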