Does YAGNI ever apply to tests?
I’ve been writing a small utility to help us do some configuration setup for testing. It needs to walk a directory structure, find all instances of a specific XML file, and then make some modifications to each one.
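Roughly this shape, sketched in Python purely for illustration (the file name, the attribute being modified, and the function names are all placeholders, not the real details):

```python
import os
import xml.etree.ElementTree as ET

CONFIG_NAME = "app.config.xml"  # placeholder for the specific XML file name

def find_config_files(root):
    """Walk the tree under `root`, yielding the path of every CONFIG_NAME."""
    for dirpath, _dirnames, filenames in os.walk(root):
        if CONFIG_NAME in filenames:
            yield os.path.join(dirpath, CONFIG_NAME)

def patch_config(path):
    """Stand-in for the XML-modifying piece: tweak one setting in place."""
    tree = ET.parse(path)
    tree.getroot().set("environment", "test")  # illustrative modification
    tree.write(path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    for path in find_config_files("."):
        patch_config(path)
```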
I TDD’d the class that does the XML file stuff, and I’m confident that it’s working well. Now I’m going to do the class that walks the directory structure and finds the files.
And there is my dilemma: I don’t know if I’m going to do TDD on that.
I know exactly how to write it, I’ve written it before, and in my experience that kind of code never changes or breaks. And figuring out how to write tests for it is going to be somewhat complex, because I’ll have to isolate out the live file system.
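(The isolation I mean looks roughly like this; a Python sketch with invented names, just to show where the seam would have to go:)

```python
import os

def find_files(root, filename, walker=os.walk):
    """Return the path of every `filename` under `root`.

    `walker` defaults to the live file system, but any callable with
    os.walk's signature will do, so a test can pass in a canned tree.
    """
    return [
        os.path.join(dirpath, filename)
        for dirpath, _dirnames, filenames in walker(root)
        if filename in filenames
    ]

def test_find_files_matches_only_the_named_file():
    def fake_tree(root):
        # Canned directory tree in os.walk's tuple shape: no disk involved.
        return [
            ("/proj", ["sub"], ["app.config.xml"]),
            ("/proj/sub", [], ["other.xml"]),
        ]
    assert find_files("/proj", "app.config.xml", walker=fake_tree) == [
        "/proj/app.config.xml"
    ]
```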
So, I’ve already decided what I’m going to do, but I’m curious what you think. Does YAGNI apply to test code, or is that the first step to the dark side?
There is some validity to that logic. If one of my developers asked me that, my response would be "write the tests"; I’ll give you the benefit of the doubt, trust you on "never changes or breaks", and suggest that in your case it’s not needed. (If it’s that common, though, why hasn’t it already been componentized for reuse, with this test question already answered?)
Unit tests are another such area: do you unit test your unit tests? (Ad infinitum…)
Michael Feathers wrote this a long time ago, and I came across it via Scott Bellware’s blog. He talks about what is not a unit test, and I tend to agree with his list:
http://www.artima.com/weblogs/viewpost.jsp?thread=126923
A test is not a unit test if:
# It talks to the database
# It communicates across the network
# It touches the file system
# It can’t run at the same time as any of your other unit tests
# You have to do special things to your environment (such as editing config files) to run it.
I’d say the file system stuff you’re doing falls under the 3rd bullet point.
I don’t think Michael’s list is suggesting that code that accesses a database, the file system, or the network shouldn’t be tested. He’s merely suggesting that such tests shouldn’t be called "unit tests".
Indeed, the very next sentence in the article confirms Peter’s belief:
"Tests that do these things aren’t bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that we can run fast whenever we make our changes."
Ideally, I’d agree. But in my experience the separation can be costlier than writing the tests and living with the smell. Not always, but sometimes.
Jon
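To make that separation concrete: one low-cost way to keep the slow tests out of the fast suite is a test category marker. A sketch assuming pytest (other frameworks have equivalent category attributes):

```python
import os
import pytest

def matches_config_name(name):
    # Pure logic, no I/O: a legitimate target for a true unit test.
    return name == "app.config.xml"

def test_matches_config_name():
    assert matches_config_name("app.config.xml")
    assert not matches_config_name("other.xml")

@pytest.mark.integration  # register the marker in pytest.ini to silence warnings
def test_walk_finds_config_in_real_directory(tmp_path):
    # Touches the file system (Feathers' third point), so it is tagged and
    # excluded from the fast run with: pytest -m "not integration"
    (tmp_path / "app.config.xml").write_text("<config/>")
    found = [f for _, _, files in os.walk(tmp_path) for f in files]
    assert "app.config.xml" in found
```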
Yes, YAGNI does apply to tests.
You should test what can break. If you’re a new dev, you don’t have a good sense of what can break, so test everything. If you’re constantly refactoring an area, that is again a high-risk area for bugs, so test it. But as you gain experience, you get a better feel for the potential problem areas in your code, and it becomes more common to write code once and never have to change it afterward.
(For this reason, I believe TDD is most useful for students and new hires; old hands tend to benefit less from it.)
On another note, one of my pet peeves is when developers call their tests "unit tests" when there is nothing "unit" about them. Not to say that unit tests are the only tests worth writing (they’re not), but rather that "unit test" is a specific type of test, and knowing what it means distinguishes a bandwagon-jumper from someone who knows what they’re doing.
"I know exactly how to write it, I’ve written it before, and my experience is that that is code that never changes nor breaks."
Then why are you writing it again?
It sounds like the problem is that the test is too obvious and hard to write without basically rewriting the code, like writing a unit test to prove that a method that adds 2 and 2 returns 4. It’s hard to write a test for an "embarrassingly obvious" function.
Since you’re confident the code is working right, maybe the thing to do at this point is refactor a bit so the code can go into a library, and next time you won’t have to write it again.
And as you refactor, the code may get less obvious, to the point where you need to write tests for it anyway. Two birds with one stone.
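As a sketch of how that split might fall out (Python for illustration; the shape is the point, not the names), pull the decision logic into a pure function and leave only a thin I/O shell:

```python
import fnmatch
import os

def matching_paths(tree, pattern):
    """Pure core: `tree` is an iterable of (dirpath, dirnames, filenames)
    tuples, exactly the shape os.walk yields. No I/O, so it is trivially
    unit-testable, and taking a glob pattern makes it general enough to
    live in a library."""
    for dirpath, _dirnames, filenames in tree:
        for name in fnmatch.filter(filenames, pattern):
            yield os.path.join(dirpath, name)

def find_files(root, pattern):
    """Thin, 'embarrassingly obvious' shell binding the core to the live
    file system; there is very little left here to get wrong."""
    return list(matching_paths(os.walk(root), pattern))
```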
There is a development and maintenance cost for each test, just as there is a cost for each feature. You have to balance the expense against the return on investment. The return is future bug detection without integration testing.
Ideally you assign your limited budget of time, including your test-writing budget, to the highest-priority areas. If you don’t have much time to maintain the tests as the tested code changes, or if the tested code will never change, you shouldn’t write the tests. But then you can’t use the TDD buzzword either; you have sinned inexcusably!
I have seen a lot of redundant testing done with TDD, where unit tests re-test the same code already covered by integration tests. I believe the philosophy is that too much testing is better than not enough. It is a blunt tool.