Before you get the wrong impression: I'm also writing actual code besides test code, but it's still very much a work in progress, so I don't want to talk too much about it yet. So, back to testing, which I'm now practicing much more extensively than ever before. What follows are some notes, best practices and assorted observations related to testing.
Unit tests are the test workhorses: they make sure your refactoring doesn't introduce new bugs, that your interfaces are sane, and that the code actually works. For every project now, I write the tests either along with the code or immediately afterwards. I'm not a big fan of the TDD approach, where you are supposed to write the test code first: you can easily wind up modeling the interface according to your assumptions about the implementation, which may or may not be valid. I much prefer writing the interface first, then the implementation, and then the tests. That way, I readily see whether the interface is easy to implement, and while writing the tests, I see how difficult it is to use. Remember to refactor after a week or so, because one week later you'll see things in a different light again.
Another thing that you should absolutely avoid is doing sort-of-functional testing in your unit tests. If you have binary data you need for a test, put it right into the executable and compile it in. It's no use getting an error while parsing a file on disk, because that doesn't tell you whether the function you are testing is wrong or whether it's a random IO problem. Actually, a single function shouldn't both parse and access the disk anyway. Done right, you should be able to run all tests as a post-build step, and they should take on the order of one second to finish (try achieving that with IO access).
A side note: if you discover a bug in your code, no matter how tiny, write a test to verify that you actually fixed it. Sometimes trying to reproduce a bug highlights another bug somewhere else, and you won't find those otherwise.
Functional tests are also important, and yet projects often don't have an automated setup for them. Once all your unit tests are in place, you have to continue and test things like the IO layer, reproduce user-reported bugs and similar stuff. For this, you really need a testing framework that sets up a clean initial state on each run, executes your application and observes its behaviour. Currently, I'm using a bunch of .NET helper applications and a C++ test runner for this, which works reasonably well. Just make sure that the tests continue in case of a crash and that no "Assertion failure" box pops up.
In my case, I pack the test cases into a single DLL and run them from a test runner, which keeps the number of projects to build small. Alternatively, you can of course have one executable per test, but this can quickly kill your project build times.
Remember to document this system, so other developers can easily get started with it. I prefer plain text files for documentation, as you can be sure that everyone can read them, but you might also want to use something like Sphinx, which works on more-or-less plain text. Just make sure reading the docs does not require leaving your favorite IDE!
So far, this is working out pretty well for me. Over the last few weeks, I've implemented a new library for my project, covered by 50+ unit tests, and I'm quite happy with the resulting code. Moreover, I was able to safely refactor some functionality today, confident that I wouldn't break anything. That alone is worth a lot.