When I write code that really matters, I prefer to design in the test-driven style:
- Figure out the next behavior that I need to implement
- Answer the question "What single test can I write that demonstrates I'm one step closer to implementing that behavior?"
- Write it and watch it fail, which shows that I'm testing something meaningful (if it passes right away, I've already finished that step)
- Implement that part of a feature
- Clean up the tests and the code
- Repeat until done
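One turn of that cycle can be sketched with a tiny, hypothetical example; the `slugify` name and its behavior are my invention for illustration, not code from any real project:

```python
import re

# Steps 1-3: the next behavior I need is "lowercase the title and join
# its words with hyphens". This test fails until slugify() exists and
# does exactly that, which tells me I'm testing something meaningful.
def test_slugify_joins_words_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# Step 4: the simplest implementation that makes the test pass.
def slugify(title):
    return re.sub(r"\s+", "-", title.strip()).lower()

# Steps 5-6: clean up, run the test, pick the next behavior, repeat.
test_slugify_joins_words_with_hyphens()
```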
Not everyone on my team does this, and that's fine—it's more important that we deliver working, well-tested, robust features than that we all use the same style. Sometimes, however, the other developers check in features and ask me to help them write tests.
You can take this idea too far, but I'm starting to believe that the ratio of the difficulty of testing a feature to the difficulty of implementing it correlates with the quality of the design.
In other words, when Allison said to me the other day, "This code was really easy to write!" and I replied, "It was more difficult to test than to write (though not difficult to test)", that was a good sign that we had found great abstractions and discovered effective APIs in our code.
The difficulty of the tests is in building and selecting the right test data to expose all of the branches in the code.
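As a hypothetical illustration of that point (the pricing rules here are invented, not from the code Allison and I were discussing), the effort goes into choosing one input per branch, not into the test mechanics:

```python
def shipping_cost(weight_kg, express):
    """Illustrative pricing rules with several branches to cover."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    # Flat rate under 2 kg, then a per-kilogram surcharge above it.
    base = 5.0 if weight_kg < 2 else 5.0 + (weight_kg - 2) * 1.5
    return base * 2 if express else base

# The real work is selecting this data: below the 2 kg boundary, on it,
# above it, and the express branch. Each case exposes one branch.
cases = [
    ((1.0, False), 5.0),   # flat-rate branch
    ((2.0, False), 5.0),   # boundary: surcharge branch with zero surcharge
    ((4.0, False), 8.0),   # surcharge branch
    ((1.0, True), 10.0),   # express doubling
]

for args, expected in cases:
    assert shipping_cost(*args) == expected
```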
You can obviously take this rule of thumb too far: some code is difficult to write and tedious to test because of low quality. Some code is easy to write and difficult to test because it does too much in a very obvious and straightforward way that merits some serious refactoring.
Still, when the balance of work in my programming goes toward crafting effective and useful and correct tests, I start to believe that I'm on the right track to crafting great code.