As I mentioned earlier in a post about TDD with Visual Studio, I recently watched some smart guys at MS preach TDD tips and techniques. They had some great points so I thought I'd distill them into bullets for future reference.
1) Tests serve as examples of how to use a class or method. Once developers are used to having tests that show how things work (and that they work), they start asking the key question, "Do you have a test for that?"
2) Developer tests (they call them "coder tests") are distinctly different from QA tests and should be kept separate. QA tests target features and treat the system as a black box. Unit tests created by the developer operate at a lower level and test different things.
3) Name your tests carefully. I'll expand on that and give some of my own guidelines here.
- Name test packages like the package being tested with a suffix. For example, the "DataAccess" project/package is tested by "DataAccess.Test".
- Name test classes the same as the class under test with the suffix "Test". For example, the class "PrintManager" is tested by the test class "PrintManagerTest". This convention makes it easy to find the related class and keeps the class name a noun.
- Name test methods the same as the method being tested with the prefix "Test". For example, the method "PrintProductOrder()" is tested by the method "TestPrintProductOrder()". This convention keeps the method name a verb that reads as an English phrase. It's also compatible with NUnit's feature that assumes any method of a test class starting with the word Test is a test method.
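The examples above are .NET/NUnit-flavored, but the conventions translate to other stacks. Here's a rough sketch in Python's unittest, reusing the hypothetical PrintManager names from the bullets (the class body is invented purely for illustration). Note that unittest, like NUnit, discovers test methods by a "test" name prefix:

```python
import unittest

# Hypothetical class under test; in a real project it would live in its
# own module, mirrored by a test module with a "test" prefix or suffix.
class PrintManager:
    def print_product_order(self, order_id):
        if order_id <= 0:
            raise ValueError("order_id must be positive")
        return f"printed order {order_id}"

# Test class named after the class under test, with a "Test" suffix.
class PrintManagerTest(unittest.TestCase):
    # Test method named after the method under test, with a "test" prefix,
    # which is also what unittest's discovery mechanism requires.
    def test_print_product_order(self):
        manager = PrintManager()
        result = manager.print_product_order(42)
        self.assertEqual(result, "printed order 42")
```

With this naming, a failure report reads almost like a sentence: "PrintManagerTest.test_print_product_order failed" tells you exactly which class and which behavior broke.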
4) Write each test before writing the method under test. This ensures that you don't waste time writing code that isn't going to be tested and used. It also encourages the developer to think as a user of the target method before thinking about implementation, which usually results in a cleaner, easier-to-use interface.
5) Follow the "3-As" pattern for test methods: Arrange, Act, Assert. Specifically, use separate code paragraphs (groups of lines of code separated by a blank line) for each of the As. Arrange is variable declaration and initialization. Act is invoking the code under test. Assert is using the Assert.* methods to verify that expectations were met. Following this pattern consistently makes it easy to revisit test code.
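The 3-As layout looks like this in practice. A minimal unittest sketch, with a hypothetical OrderCalculator standing in for real code under test; note the blank line separating each code paragraph:

```python
import unittest

# Hypothetical class under test, used only to illustrate the test layout.
class OrderCalculator:
    def total(self, prices, tax_rate):
        return round(sum(prices) * (1 + tax_rate), 2)

class OrderCalculatorTest(unittest.TestCase):
    def test_total(self):
        # Arrange: declare and initialize everything the test needs.
        calculator = OrderCalculator()
        prices = [10.00, 5.50]
        tax_rate = 0.10

        # Act: invoke the code under test.
        result = calculator.total(prices, tax_rate)

        # Assert: verify that expectations were met.
        self.assertEqual(result, 17.05)
```

Because every test has the same three paragraphs in the same order, a reader revisiting the suite months later can find the setup, the call, and the expectation at a glance.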
6) When writing application code, only write enough code to make a test work. If you know there should be more code to handle other logic cases, go write the tests for those cases. This technique prevents gold-plating and ensures that you always have a test for the code you write.
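To make that concrete, here is a small Python sketch with hypothetical names. The happy-path test came first and drove a bare-bones implementation; when we knew another logic case existed (negative input), we wrote its test before adding the guard clause, so the function contains exactly the branches the tests demand:

```python
import unittest

def parse_quantity(text):
    value = int(text)
    # This guard exists only because a test demanded it; without
    # test_parse_quantity_rejects_negative it would be gold-plating.
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

class ParseQuantityTest(unittest.TestCase):
    def test_parse_quantity(self):
        # Written first; drove the minimal happy-path implementation.
        self.assertEqual(parse_quantity("3"), 3)

    def test_parse_quantity_rejects_negative(self):
        # Written before the guard clause was added.
        with self.assertRaises(ValueError):
            parse_quantity("-1")
```

Anything the function does beyond what these two tests require is speculative code with no test covering it.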
7) When you find you need to refactor working code, refactor and re-test prior to writing new code. This technique ensures your refactoring is correct prior to adding new functionality and applies to creating new methods, introducing inheritance, everything. I equate this principle to the practice of running all tests in a solution after getting the latest code from the repository and prior to writing anything new. I don't want to spend time debugging my newly written code under the false assumption that the system wasn't already broken.
I feel the urge to point out that in the webcast they are preaching test-driven development to the point of test-driven analysis & design. While I know it's the XP way, I disagree with the idea that you shouldn't implement the correct long-term design when you know it's appropriate. In other words, analyzing your problem domain and producing a conceptual model prior to writing code is a good thing. And the major benefit of OOP is that your code should reflect the concepts in the problem domain. So, when your TDD causes you to think, "I need a new class here," refer to your conceptual model and add the right class in the right structure. Improper inheritance and other kludges along the way just increase the number of iterations and amount of refactoring required to reach your end goal. Iteration is good. Hyper-iteration is a waste.