Network & Math rambling
Some rambling about network stuff and math libraries …
Networks
At home, I’m running 4-5 PCs (one is not always here) with a wide range of operating systems - all Windows based, but covering XP, XP Pro, Vista Ultimate x86, Vista Ultimate x64, and Server 2003 EE - which allows me to test stuff easily. Two machines are connected via 100 Mbit Ethernet, the other three use 54 Mbit WLAN. The machines are all pretty fast (the slowest is a 2.2 GHz Athlon XP, the fastest a Core 2 Duo), so installing stuff does not take too long. However, I’m spending an increasing amount of time managing all these machines and keeping them at least remotely in sync.
For example, I need OpenOffice on all of them. With each new version, I gotta boot all 5 machines up and RDP into them or sit in front of them just to upgrade OpenOffice. I serve the installation file from my desktop, so I don’t have to download it 5 times, but this is still not as fast as it could be. It would be nice to “push” the installation files onto the machines whenever a machine and my desktop happen to be online at the same time, so I can install the next time I get access to the machine. This is something I think I’m gonna write soon: a simple data distribution system for home networks where you basically select a folder for “sharing” and it gets pushed to the clients as soon as they notice that the server is running.
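The client side wouldn’t have to be much more than a polling loop. Here is a minimal sketch of the idea, assuming C++17’s std::filesystem; the share name, target folder, and poll interval are all made up:

    // Client side of the hypothetical "push" system: poll until the
    // server's shared folder is reachable, then pull anything new.
    #include <chrono>
    #include <filesystem>
    #include <iostream>
    #include <thread>

    namespace fs = std::filesystem;

    void pullUpdates (const fs::path& source, const fs::path& target)
    {
        for (const auto& entry : fs::recursive_directory_iterator (source)) {
            if (!entry.is_regular_file ())
                continue;
            const fs::path relative = fs::relative (entry.path (), source);
            const fs::path local    = target / relative;
            // Copy only files that are missing locally or newer on the server
            if (!fs::exists (local)
                || fs::last_write_time (entry.path ()) > fs::last_write_time (local)) {
                fs::create_directories (local.parent_path ());
                fs::copy_file (entry.path (), local, fs::copy_options::overwrite_existing);
                std::cout << "Pulled " << relative << '\n';
            }
        }
    }

    int main ()
    {
        const fs::path share = "//desktop/installers"; // made-up share name
        const fs::path local = "C:/Installers";        // made-up target folder

        for (;;) {
            try {
                if (fs::exists (share)) // crude "server is running" check
                    pullUpdates (share, local);
            } catch (const fs::filesystem_error& e) {
                std::cerr << e.what () << '\n'; // WLAN dropped mid-copy etc.
            }
            std::this_thread::sleep_for (std::chrono::minutes (5));
        }
    }

A real version would want change notifications instead of polling and better error handling around flaky WLAN, but the core really is that small.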
What’s already slow with OpenOffice (100 MiB) gets rather frustrating with stuff like DirectX SDKs, MSDN updates etc., which are in the range of several GiB and take a rather long time to install (plus, you’ve gotta mount those ISOs!). I’ve got no satisfying solution for that … what I did, however, was to consolidate all the DVDs that were flying around onto my desktop machine, so they get properly backed up weekly and I don’t have to search for them. I’m using NTFS file compression, which gives great results on some stuff, but it’s still a lot of data. Just to give you an idea how much: my main tool install folder is 9.41 GiB with 3.5k files, my developer libs folder is 4.42 GiB with 4.3k files (compresses down to 1.5 GiB), and my reference folder (containing a few open source programs) is 1.19 GiB with 89k files.
Most sync tools I’m aware of are not suited to syncing such folders across different machines, and I get the feeling NTFS is not meant for this either (just getting the size and number of files in my reference folder takes like a minute). On the other hand, I do use syncing very successfully on my papers/presentations folder, which has an average file size of 5-10 MiB and where there isn’t much going on (only adds, no deletes, occasional moves). I’d also like to see a tool which can take advantage of the fact that both the source and the target folder are NTFS compressed, and does not decompress/recompress the data just for the transfer.
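That one-minute wait is simply the cost of the full tree walk every tool (and Explorer’s properties dialog) has to do before it can even start comparing anything. A quick sketch of that enumeration, again with std::filesystem:

    // Count files and total size of a tree - on 89k small files this
    // enumeration alone is what takes around a minute on NTFS.
    #include <cstdint>
    #include <filesystem>
    #include <iostream>

    namespace fs = std::filesystem;

    int main (int argc, char* argv [])
    {
        const fs::path root = (argc > 1) ? argv [1] : ".";
        std::uintmax_t files = 0, bytes = 0;

        for (const auto& entry : fs::recursive_directory_iterator (root))
            if (entry.is_regular_file ()) {
                ++files;
                bytes += entry.file_size ();
            }

        std::cout << files << " files, "
                  << bytes / double (1ull << 30) << " GiB\n";
    }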
For the future, I hope to get a small server with Linux running at home. I’m looking at some low-frequency, dual-core solution with a good amount of RAM, a RAID5 with ~600 GiB of capacity, and wake-on-LAN. The problem with all these NAS boxes you can get is that the CPU is usually pretty bad - I don’t want to spend minutes waiting for a large SVN check-in to finish, and I don’t want to see the transfer speed crippled by the CPU. This server would also solve some other issues; for example, I could then set up a VPN and remotely sync or check in to the SVN while I’m at university.
The server has its own set of issues, though. First, without wake-on-LAN it’s gotta run 24/7, which is basically a no-go - no matter how silent it is, I want nothing running at night while I’m sleeping in my room. Second, it’s going to eat up a lot of space unless I opt for a blade-style chassis, which brings me to another problem: this stuff is not cheap. Even a basic server with commodity parts (no APS, no hot-swap PSU etc.) is going to cost around 600€ in the most basic config, and this goes up very quickly to 700-800€ if you want it a bit future-proof (i.e., 1 TiB of usable disk space). I’m not yet willing to spend that much money on something that should be rather cheap (come on guys, an Xbox 360 with 3 disks won’t cost 700€ and it has a graphics chip on board - why the heck can’t I get an embedded board with a 1.6 GHz Core 2 Duo with stripped cache, two DDR2 slots, plus 4 SATA ports for 250€?).
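At least waking the box up on demand is trivial from the desktop side: the wake-on-LAN “magic packet” is nothing but six 0xFF bytes followed by the target’s MAC address repeated 16 times, sent as a UDP broadcast. A sketch using POSIX sockets, with a made-up MAC (on Windows the same code just needs the usual Winsock setup):

    // Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times,
    // broadcast over UDP (port 9, the discard port, by convention).
    #include <array>
    #include <cstdint>
    #include <cstring>

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main ()
    {
        const std::array<std::uint8_t, 6> mac =
            { 0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E }; // made-up MAC

        std::array<std::uint8_t, 102> packet; // 6 + 16 * 6 bytes
        std::memset (packet.data (), 0xFF, 6);
        for (int i = 0; i < 16; ++i)
            std::memcpy (packet.data () + 6 + i * 6, mac.data (), mac.size ());

        const int sock = socket (AF_INET, SOCK_DGRAM, 0);
        const int on = 1;
        setsockopt (sock, SOL_SOCKET, SO_BROADCAST, &on, sizeof (on));

        sockaddr_in addr = {};
        addr.sin_family      = AF_INET;
        addr.sin_port        = htons (9);
        addr.sin_addr.s_addr = htonl (INADDR_BROADCAST);

        sendto (sock, packet.data (), packet.size (), 0,
                reinterpret_cast<const sockaddr*> (&addr), sizeof (addr));
        close (sock);
    }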
Math
The other area is math libraries. For some reason or another, there is no good, standardized C++ math solution out there which spans both small and large matrices. Yeah, there is BLAS, with all its low-level stuff, but why the heck is there no math library which has SSE/SSE2/AltiVec optimizations for dense matrices up to, let’s say, 8x8, switches over to BLAS above that, and offers “normal” implementations (i.e. portable C++ code) all the way up as a safe reference?
I mean, either you do the low-level fixed-size stuff with 4x4 matrices, with hand-optimized loops and aligned vectors and god knows what, to squeeze the most out of your machine - but then you’ve gotta implement all the stuff like LU decomposition on your own - or you use something like boost::ublas, which is going to kill your performance at small sizes (at least compared to your ASM-tuned magic).
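The dispatch part of this is not even hard. Here’s a rough sketch of what I mean, with made-up names, where the stubs just fall back to the portable loop - a real build would route the large case to something like cblas_sgemm and the 4x4 case to a hand-tuned SSE kernel:

    // Sketch: pick a backend from the compile-time matrix size.
    #include <cstddef>

    template <std::size_t N>
    struct Matrix { float m [N][N]; };

    // Portable reference implementation - the safe default
    template <std::size_t N>
    void multiplyReference (const Matrix<N>& a, const Matrix<N>& b, Matrix<N>& out)
    {
        for (std::size_t i = 0; i < N; ++i)
            for (std::size_t j = 0; j < N; ++j) {
                float sum = 0.0f;
                for (std::size_t k = 0; k < N; ++k)
                    sum += a.m [i][k] * b.m [k][j];
                out.m [i][j] = sum;
            }
    }

    // Stand-ins: a real library would put the SSE kernel and the
    // BLAS call (cblas_sgemm) here instead of the reference loop
    void multiplySse (const Matrix<4>& a, const Matrix<4>& b, Matrix<4>& out)
    {
        multiplyReference (a, b, out);
    }

    template <std::size_t N>
    void multiplyBlas (const Matrix<N>& a, const Matrix<N>& b, Matrix<N>& out)
    {
        multiplyReference (a, b, out);
    }

    // The one entry point users see; the backend choice is compile-time
    template <std::size_t N>
    void multiply (const Matrix<N>& a, const Matrix<N>& b, Matrix<N>& out)
    {
        if constexpr (N == 4)
            multiplySse (a, b, out);       // hand-optimized hot path
        else if constexpr (N <= 8)
            multiplyReference (a, b, out); // small: plain loops are fine
        else
            multiplyBlas (a, b, out);      // large: let BLAS do the work
    }

    int main ()
    {
        Matrix<4>  a = {}, b = {}, c = {};
        Matrix<16> x = {}, y = {}, z = {};
        multiply (a, b, c); // SSE path
        multiply (x, y, z); // BLAS path
    }

The nice part is that the reference version is always there, so every backend can be tested against it.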
What I’d like to see is a consistent math library offering independence of storage, object size, and algorithms. It should allow me to specialize certain operations for certain types (for example, if I want to specialize the matrix inverse for 4x4 float matrices, I should be able to do that), it should allow me to plug in a BLAS backend, and it should have reasonable, portable default implementations of all operations.
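That specialization hook is plain template machinery. A sketch with made-up names of how the 4x4 float inverse could be overridden without touching the generic code:

    // Sketch: a generic operation with a user-replaceable special case.
    #include <cstddef>
    #include <iostream>

    template <std::size_t N, typename T>
    struct Matrix { T m [N][N]; };

    // Generic fallback the library ships (think Gaussian elimination)
    template <std::size_t N, typename T>
    Matrix<N, T> inverse (const Matrix<N, T>& in)
    {
        std::cout << "generic inverse\n";
        return in; // placeholder for the actual algorithm
    }

    // Full specialization, provided by the user or an SSE backend;
    // picked automatically for every 4x4 float inverse
    template <>
    Matrix<4, float> inverse (const Matrix<4, float>& in)
    {
        std::cout << "hand-optimized 4x4 inverse\n";
        return in; // placeholder for the tuned kernel
    }

    int main ()
    {
        Matrix<3, double> g = {};
        Matrix<4, float>  s = {};
        inverse (g); // generic path
        inverse (s); // specialized path
    }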
This is a lot of work (you’ve gotta switch your storage automatically between fixed-size arrays and heap-based arrays depending on the size, for example), it has a few things in it that are rather cumbersome (like: you want to construct a Vector<2, fixed_storage, float> a with a (2, 3), which is a constructor that you can put into the main class, but then you’ve gotta compile-time-check that the class actually has length 2 when constructing it like that, making the stuff ugly - trust me, I did it), and it needs some industry backing (a standardized C++/BLAS interface, expression template conventions, porting). Hopefully someone will do this, as it really gets frustrating that every game developer has to write their own math library just because there is nothing out there which you can build on. Even a standard interface to storage, basic supported operations and such stuff would be helpful already, as you could expect to be able to use algorithms written against that interface, but even that seems to be out of reach currently.
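To make the cumbersome part concrete: with today’s static_assert the check is at least readable (back then you needed BOOST_STATIC_ASSERT-style tricks, hence the ugliness). A sketch, with the storage policy reduced to a placeholder:

    // Sketch: a convenience constructor that must only compile for N == 2.
    #include <cstddef>

    struct fixed_storage {}; // placeholder for a real storage policy

    template <std::size_t N, typename Storage, typename T>
    class Vector
    {
    public:
        Vector () = default;

        // The constructor everybody wants on small vectors ...
        Vector (T x, T y)
        {
            // ... but it only makes sense for length 2; the assert fires
            // only if this constructor is actually instantiated
            static_assert (N == 2, "two-argument constructor requires length 2");
            data [0] = x;
            data [1] = y;
        }

    private:
        T data [N];
    };

    int main ()
    {
        Vector<2, fixed_storage, float> a (2, 3);    // fine
        // Vector<3, fixed_storage, float> b (2, 3); // would not compile
    }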
At some point there was some work on a game math library, but it disappeared quite quickly after nobody showed up who was willing to push it. It seems most people can live with the situation as it is now, but I’d really like to see a solution as outlined above appear some day.