YAGNI and unit tests…
Thanks for your comments.
I decided to go ahead and write the unit tests for that layer, both because I knew what not writing them would be like, and I wanted to play with wrapping/mocking a system service.
I also decided – as some of you commented – to do the right thing and encapsulate it into a class. That would have happened long ago, but though I’ve written it several times, I don’t think I’ve ever duplicated it within a single codebase – and the codebases where I did write it are pretty disparate. Now, I have something where I could at least move the source file around…
Writing tests for this was a bit weird, because in some sense what I needed to do was figure out what the system behavior was, break that down, write a test against my objects, and then write mocks that allowed me to simulate the underlying behavior.
So, for example, I created a test to enumerate a single file in a single directory, wrote a wrapper around DirectoryInfo, and then created a mock on that object so I could write GetFiles() to pass back what I wanted. And so on with multiple files, sub-directories, etc.
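The post's code is C#/.NET (wrapping `DirectoryInfo`); as a rough Java sketch of the same wrap-and-mock pattern, with hypothetical names, the single-file case might look like this:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical wrapper interface over the system directory API
// (the post wraps .NET's DirectoryInfo; all names here are illustrative).
interface DirectoryWrapper {
    List<String> getFiles();
}

// Hand-rolled mock: returns whatever file list the test wants to simulate.
class MockDirectory implements DirectoryWrapper {
    private final List<String> files;
    MockDirectory(String... files) { this.files = Arrays.asList(files); }
    public List<String> getFiles() { return files; }
}

public class SingleFileTest {
    public static void main(String[] args) {
        // "Enumerate a single file in a single directory" case
        DirectoryWrapper dir = new MockDirectory("readme.txt");
        List<String> files = dir.getFiles();
        if (files.size() != 1 || !files.get(0).equals("readme.txt")) {
            throw new AssertionError("expected exactly readme.txt, got " + files);
        }
        System.out.println("single-file case passed");
    }
}
```

The multiple-file and sub-directory cases follow the same shape: each test constructs a mock that returns the structure being simulated.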
So, I did that, went to write the little bit of code that I needed in the real version (to use the real GetFiles() calls and package the data up), hooked it up to my real code, and it worked.
*But*, when I went back and looked at the code, I found that what I had really done was create two sets of code. There was the real code that called the system routines and shuffled the data into my wrapped classes. And then there was my mock code that let me control what files and directories got returned. But there wasn’t any common code that was shared.
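In Java terms (again a hypothetical sketch; the original is C#/.NET), those two disjoint bodies of code look roughly like this. Note that the real side contains nothing but the system call and data shuffling, and shares no logic with the mock:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical wrapper interface (illustrative names).
interface DirectoryWrapper {
    List<String> getFiles();
}

// The "real" side: nothing here but the system call and data shuffling.
class RealDirectory implements DirectoryWrapper {
    private final File dir;
    RealDirectory(String path) { this.dir = new File(path); }
    public List<String> getFiles() {
        List<String> names = new ArrayList<>();
        File[] entries = dir.listFiles();   // the actual system call
        if (entries != null) {
            for (File f : entries) {
                if (f.isFile()) names.add(f.getName());
            }
        }
        return names;
    }
}

// The "mock" side: canned data, sharing no code at all with RealDirectory.
class MockDirectory implements DirectoryWrapper {
    private final List<String> files;
    MockDirectory(String... files) { this.files = Arrays.asList(files); }
    public List<String> getFiles() { return files; }
}

public class RealVsMockDemo {
    public static void main(String[] args) {
        // The mock answers from canned data; the real one would hit the file system.
        DirectoryWrapper mock = new MockDirectory("a.txt", "b.txt");
        System.out.println(mock.getFiles()); // prints [a.txt, b.txt]
    }
}
```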
So, my conclusion is that I didn’t really get anything out of the tests I wrote: they only exercised the mocks I had written rather than the real code, since the only real code was the code that called the system functions.
In this case, TDD didn’t make sense, and I will probably pull those tests out of the system. TDD may make sense at the next level up, where I’ve written a new encapsulation around directory traversal, but it seems like the only code there is hookup code.
So, the result of my experiment was that, in this case, writing the tests was the wrong thing to do.
Couldn’t agree more. If there’s no business logic, there’s no point in writing tests.
This is a common complaint. TDD leads to mocking, and mocking leads to test bloat. The problem with mocking is that you are testing units in isolation, usually breaking the encapsulation of a sub-system to do so, which makes the tests dependent on implementation details and increases maintenance cost. Integration testing, whereby a sub-system or application is tested as a whole, suffers less from this problem and can often exercise the same code paths.
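An integration-style alternative for this particular case might look something like the following (a hypothetical Java sketch): set up a real temporary directory and exercise the actual system calls end to end, rather than mocking them away.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Arrays;

// Integration-style test: create a real (temporary) directory, put real files
// in it, and verify the real enumeration code path with no mocks involved.
public class DirectoryIntegrationTest {
    public static void main(String[] args) throws IOException {
        File tmp = Files.createTempDirectory("traversal-test").toFile();
        try {
            new File(tmp, "a.txt").createNewFile();
            new File(tmp, "b.txt").createNewFile();

            // The code under test is the real listing call, not a mock.
            String[] names = tmp.list();
            Arrays.sort(names);
            if (!Arrays.equals(names, new String[] {"a.txt", "b.txt"})) {
                throw new AssertionError("unexpected listing: " + Arrays.toString(names));
            }
            System.out.println("integration test passed");
        } finally {
            for (File f : tmp.listFiles()) f.delete();
            tmp.delete();
        }
    }
}
```

The trade-off is speed and isolation: this touches the real file system, but it tests the code that actually ships.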
I was intrigued by your original question on this; thanks for taking the time to answer your own question. I’d love to see how you addressed your problem. Think you could zip up your tests and mock classes and share them with the world? Thanks!
Interesting. When I read the original post, my gut feeling was that writing the tests was a waste of time, but I didn’t have enough evidence or argument as to why.
Thinking about it more, there is a balance between the following factors:
1. How complex is the code to be tested?
2. How complex will the tests need to be? (If your tests are very complex, how can you be sure they are correct?)
and in a pragmatic sense
3. How critical is the code? I work in an environment where a few non-critical bugs are preferred to spending another month testing.
Just for curiosity, would have a virtual file system like WinFuse (http://www.suchwerk.net/sodcms_FUSE_for_WINDOWS.htm) been an option in this case? Or what about an in-memory file system like that of Lucene.Net (look at the RAMDirectory class)?
Simone,
That might be a nice thing for integration tests. But it wouldn’t really help for unit tests, because I want to isolate away the dependencies.
I understand, sorry. I just gave your posts a quick read and maybe missed something about your intentions.