You suck at TDD #1: Rewrite the steps
I’ve been paying attention to TDD for the past few years – doing it myself, watching others doing it, reading about it, etc. – and I’ve been seeing a lot of variation in the level of success people are having with it. As is my usual approach, I wrote a long and tedious post about it, which I have mercifully decided not to inflict on you.
Instead, I’m going to do a series of posts about the things I’ve seen getting in the way of TDD success. And, in case it isn’t obvious, I’ve engaged in the majority of the things that I’m going to be writing about; so, in the past, I sucked at TDD, and since I’m sure I haven’t totally fixed things, I still suck at it now.
Welcome to "You suck at TDD"…
Rewrite the steps
The whole point of TDD is that following the process exerts design pressure on your code so that you will refactor to make it better. More specifically, it uses the difficulty of writing simple test code as a proxy for the design quality of the code being tested.
Let’s walk through the TDD steps:
- Write a test that fails
- Make the test pass
- Refactor
How does this usually play out? Typically, we dive directly into writing the test, partly because we want to skip the silly test part and get on to the real work of writing the product code, and partly because TDD tells us to do the simplest thing that could possibly work. Writing the test is a formality, and we don’t put a lot of thought into it.
The only time this is not true is when it’s not apparent how we can actually write the test. If, for example, a dependency is created inside a class, we need to do something to be able to inject that dependency, and that usually means some refactoring in the product code.
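To make that concrete, here’s a minimal sketch (all of the names are invented for illustration): the first version of the class creates its own dependency, so a test has no way to substitute a fake; the refactored version takes the dependency through its constructor.

```typescript
interface OrderRepository {
  save(order: string): void;
}

class SqlOrderRepository implements OrderRepository {
  save(order: string): void {
    // Talks to a real database - exactly what a test doesn't want.
  }
}

// Before: the dependency is created inside the class, so a test
// can't substitute a fake.
class OrderProcessorBefore {
  private repository = new SqlOrderRepository();
  process(order: string): void {
    this.repository.save(order);
  }
}

// After: the dependency is injected, so a test can pass in anything
// that implements OrderRepository.
class OrderProcessorAfter {
  constructor(private repository: OrderRepository) {}
  process(order: string): void {
    this.repository.save(order);
  }
}
```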
Now that we have the test written, we make the test pass, and then it’s time to refactor, so we look at the code, make some improvements, and then repeat the process.
And we’re doing TDD, right?
Well…. Not really. As I said, you suck at TDD…
Let’s go back to what I wrote at the beginning of the section. I said that the point of TDD was that the state of our test code (difficult to write, ugly, etc.) forces us to improve our product code. For that to succeed, our test code either has to be drop-dead simple (setup/test/assert in three lines) or it needs to be getting simpler as we go. With the exception of the cases where we can’t write a test at all, our tests are typically static. I see this all the time.
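For what it’s worth, here’s the kind of drop-dead-simple test I have in mind, using a made-up Stack class:

```typescript
class Stack<T> {
  private items: T[] = [];
  push(item: T): void {
    this.items.push(item);
  }
  pop(): T | undefined {
    return this.items.pop();
  }
}

// Setup, action, and assertion: one line each.
const stack = new Stack<number>();
stack.push(42);
console.assert(stack.pop() === 42);
```

When the setup takes more than a line or two, that’s the design pressure talking.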
Let’s try a thought experiment. I want you to channel your mindset when you are doing TDD. You have just finished making the test pass, and you are starting the refactor set. What are you thinking about? What are you looking at?
Well, for me, I am focused on the product code that I just wrote, and I have the source staring me in the face. So, when I think of refactoring, I think about things that I might do to the product code. But that doesn’t help my goal, which is to focus on what the test code is telling me, because it is the proxy for whether my product code is any good.
This is where the three-step process of TDD falls down; it’s really easy to miss the fact that you should be focusing on the test code and looking for refactorings *there*. I’m not going to say that you should ignore product code refactorings, but I am saying that the test ones are much more important.
How can we change things? Well, I tried a couple of rewrites of the steps. The first is:
- Write a test that fails
- Make the test pass
- Refactor code
- Refactor test
Making the code/test split explicit is a good thing, as it can remind us to focus on the tests. You can also rotate this around so that "refactor tests" is step #1 if you like. This was an improvement for me, but I was still in "product mindset" for step 4, and it didn’t work all that well. So, I tried something different:
- Write a test that fails
- Refactor tests
- Make the test pass
- Refactor code
Now, we’re writing the test that fails, and then immediately stopping to evaluate what that test is telling us. We are looking at the test code and explicitly thinking about whether it needs to improve. That is a "good thing".
But… There’s a problem with this flow. The problem is that we’re going to be doing our test refactoring while we have a failing test in our test suite, which makes the refactoring a bit harder as the endpoint isn’t "all green", it’s "all green except for the new test".
How about this:
- Write a test that fails
- Disable the newly failing assertion
- Refactor tests
- Re-enable the previously failing assertion
- Make the test pass
- Refactor code
That is better, as we now know when we finish our test refactoring that we didn’t break any existing tests.
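If you’re using a Jest-style test framework (just one way to do this; the function and test here are invented for illustration), the "disable" step can be as simple as a .skip:

```typescript
// Hypothetical function under test (assumed, not from the article).
declare function sum(numbers: number[]): number;

// Steps 1-2: write the failing test, then disable it with .skip so
// the refactoring endpoint is "all green".
it.skip("sums an empty list to zero", () => {
  expect(sum([])).toBe(0);
});

// Steps 3-6: refactor the existing tests against a green bar, remove
// the .skip, make the test pass, and then refactor the product code.
```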
My experience is that if you think of TDD in terms of these steps, it will help put the focus where it belongs – on the tests. Though I will admit that for simple refactorings, I often skip disabling the failing test, since it’s a bit quicker and it’s a tiny bit easier to remember where I was after the refactoring.
Not sure that's refactoring. When writing tests you are expressing your intent; you are trying to assume a well-designed piece of code exists that does what you want. I don't so much refactor my tests as keep molding the test to better represent my intent, evaluating whether I've got something that really makes my life easier or not. Often, as I play with the test code, I work out what the abstraction I want is.
Refactoring of test code does happen… it's to make sure the test is expressed in its simplest form, and that common setup is nicely captured, etc. Also, any tests that are mostly useless get deleted. This is one of the main things people don't do that makes them suck at TDD. Tests are liabilities that need to be maintained. Kent Beck emphasized this in his counter-intuitive turn of phrase: test as little as possible. Meaning you need to test to make sure your system works, but do it with as few tests as possible.
Keith,
I'm not quite sure what you mean by "trying to assume a well designed piece of code exists…" I'm trying to use my test code to guide me towards that well-designed piece of code.
On your second point, I have traditionally done a fair bit of refactoring of my test code and thought it was a good thing. Recently, I've come to question that perspective; for example, as you describe, I used to refactor out common setup code. But now I'm trying to ask myself why the setup code is complex enough to need refactoring. I'm not sure yet where to draw the line, but I do think that test refactoring can hide complexity that is telling you that your design could use some work.
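For instance (a contrived example), if my tests all need setup like this, pulling it into a shared helper makes the tests prettier, but it hides what the setup is telling me:

```typescript
// All of these names are invented for illustration.
class FakeCustomerStore {}
class FakeTaxTable {}
class FakePricingRules {}
class FakeAuditLog {}

class InvoiceGenerator {
  constructor(
    private customers: FakeCustomerStore,
    private taxes: FakeTaxTable,
    private pricing: FakePricingRules,
    private audit: FakeAuditLog
  ) {}
}

// The class needs four collaborators just to be constructed, which
// suggests it is doing too much; a shared setup helper would hide that.
const generator = new InvoiceGenerator(
  new FakeCustomerStore(),
  new FakeTaxTable(),
  new FakePricingRules(),
  new FakeAuditLog()
);
```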
Hope that makes sense.
On the first point, what I mean is that by imagining the most ideal object/function and assuming it exists, you write a test; there's no code that exists for it, you just assume it exists, and then you can ponder whether it really is good or not. By "assuming" I'm just getting at the idea that you don't worry about the implementation. I've noticed a pattern where people are often biased upfront about what the implementation is going to be, so they dive in and test an implementation into existence, where TDD is designing an implementation into existence via tests.
Contriving an example…
"I want to add some numbers"… hmmm… ideally I need an add function…
assert(add(3, 4) === 7)
hmmm… that's OK… but actually it's more that I want multiple numbers…
assert(add(3, 4, 5) === 12)
hmmm… need a better name…
assert(sum(3, 4, 5) === 12)
and really it's a list…
assert(sum([3, 4, 5]) === 12)
I'm not sure that's the best example, but at each stage you may implement the code or you may just keep working the test code till it expresses what you want to say. You keep working your tests till you have the right names, and you can express what you want pretty concisely in an appropriately generic way. That process of changing your tests to better reflect your intent I wouldn't call refactoring so much.
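In runnable form, the endpoint of that sketch would be something like:

```typescript
// The design the tests converged on: sum over a list.
function sum(numbers: number[]): number {
  return numbers.reduce((total, n) => total + n, 0);
}

console.assert(sum([3, 4, 5]) === 12);
```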
On the second point, I find I have to do very little refactoring of my tests compared to re-stating my intent to get a better design. If I'm refactoring, often I'll just end up deleting some of the stepping-stone tests that led me to a design I'm happy with, or, if I see I'm asserting effectively the same thing in two different test cases, refactor out a new test. But even then, I'm not sure it's refactoring… validation-preserving change 🙂
Thanks for the clarification.
If I were doing "add two numbers", I'd go off and write the first test, and then I'd go write the implementation for add. I'd then go on to the next thing I needed; if it was to add up three numbers, then I'd make any changes to the API at that time. I'm not sure how that relates to what you are saying – are you saying that you are designing a series of tests before you start writing code?
No, more that you are searching for a design that suits what you want to express as a test….
What I'm saying is: if you write the first test and then implement "add", you miss what I thought you were getting at in your article, which is the mental step of asking, am I saying the right thing in the right way with my test? Now is your time to play with the design. Sometimes you do go and implement add, but you should come back and ask: did I express what I wanted? The idea being that you play with your design ideas in your test code till you feel you are saying the right thing.
Sometimes that process is done while also implementing the code, and as you implement you get inspiration for a better way, but quite often I find inspiration while writing the test.
In fact, the way I do it, I don't think of it so much as writing tests as thinking about how to express a design idea.
For simple problems like 'add' or 'roman numerals' or "bowling game", you can take a simplistic approach of just doing add and then adding support for three numbers when you need it, evolving like that. It's a nice way of learning TDD, but in reality, with more complex problems, you need to play at the "test" stage more.
add and sum are a bit too simple, but I'm trying to capture the idea that "add" was kind of the right thing to express, while sum was a better thing to express. In simplistic TDD, jumping from add to sum is kind of easy because they are known concepts. But more often than not you are dealing with semi-unknown concepts, and the jump from kind-of-right to a much better simplifying idea is trickier… and TDD in its simple form can trap you into "kind of right".
I dunno. I started in 1999 with all this, and I've taught a lot of people TDD, and in virtually all cases TDD helped people become better designers, but sometimes the designs came out a bit strange. Whenever I sat with someone and tried to work out why, I found the person hadn't really experimented enough with the idea. They ended up evolving something that didn't quite have enough design thought in it. Test-Code-Refactor helps find a lot of abstractions, but it's not quite enough.
Interesting.
I think we are mostly in agreement in principle but are talking about different things. I was talking about the simplicity of the test code (typically the setup of the test code), and putting in effort to try to make it simpler. I think you are talking more about the way the thing I am adding is expressed.