Unit Test Success using Ports, Adapters, and Simulators
There is a very cool pattern called Port/Adapter/Simulator that has changed my perspective about unit testing classes with external dependencies significantly and improved the code that I’ve written quite a bit. I’ve talked obliquely about it and even wrote a kata about it, but I’ve never sat down and written something that better defines the whole approach, so I thought it was worth a post. Or two – the next one will be a walkthrough of an updated kata to show how to transform a very simple application into this pattern.
I’m going to assume that you are already “down” with unit testing – that you see what the benefits are – but that you perhaps are finding it to be more work than you would like and perhaps the benefits haven’t been quite what you hoped.
Ports and Adapters
The Ports and Adapters pattern was originally described by Alistair Cockburn in a topic he called “Hexagonal Architecture”. I highly recommend you go and read his explanation, and then come back.
I take that back, I just went and reread it. I recommend you read this post and then go back and read what he wrote.
I have pulled two main takeaways from the hexagonal architecture:
The first is the “hexagonal” part, and the takeaway is that the way we have been drawing architectural diagrams for years (user with a UI on top, application code in between (sometimes in several layers), database and other external dependencies at the bottom) doesn’t really make sense. We should instead delineate between “inside the application” and “outside of the application”. Each thing that is outside of the application should be abstracted into what he calls a port (which you can just think of as an interface between you and the external thing). The “hexagonal” part is just a way of drawing things that emphasizes the inside/outside distinction.
Dealing with externals is a common problem when we are trying to write unit tests; the external dependency (say, the .NET File class, for example) is not designed with unit testing in mind, so we add a layer of abstraction (wrapping it in a class of our own), and then it is testable.
This doesn’t seem that groundbreaking; I’ve been taking all the code related to a specific dependency – say, a database – and putting it into a single class for years. And, if that was all he was advocating, it wouldn’t be very exciting.
The second takeaway is the idea that our abstractions should be based on what we are trying to do in the application (the inside view) rather than what is happening outside the application. The inside view is based on what we are trying to do, not the code that we will write to do it.
Another way of saying this is “write the interface that *you wish* were available for the application to use”. In other words, what is the simple and straightforward interface that would make developing the application code simple and fun?
Here’s an example. Let’s assume I have a text editor, and it stores documents and preferences as files. Somewhere in my code, I have code that accesses the file system to perform these operations. If I wanted to encapsulate the file system operations in one place so that I can write unit tests, I might write the following:
class FileSystem
{
    public void CreateDirectory(string directory) { }
    public string ReadTextFile(string filename) { }
    public void WriteTextFile(string filename, string contents) { }
    public IEnumerable<string> GetFiles(string directory) { }
    public bool FileExists(string filename) { }
}
And I’ve done pretty well; I can extract an interface from that, and then do a mock/fake/whatever to write tests of the code that uses the file system. All is good, right? I used to think the answer is “yes”, but it turns out the answer is “meh, it’s okay, but it could be a lot better”.
Cockburn’s point is that I’ve done a crappy job of encapsulating; I have a bit of isolation from the file system, but the way that I relate to the code is inherently based on the filesystem model; I have directories and files, and I do things like reading and writing files. Why should the concept of loading or saving a document be tied to this thing we call filesystem? It’s only tied that way because of an accident of implementation.
To look at it another way, ask yourself how hard it would be to modify the code that uses FileSystem to use a database, or the cloud? It would be a pretty significant work item. That also means that my encapsulation is bad.
What we are seeing – and this is something Cockburn notes in his discussion – is that details from the implementation are leaking into our application. Instead of treating the dependency technology as a trivial choice that we might change in the future, we are baking it into the application. I’m pretty sure that somewhere in our application code we’ll need to know file system specifics such as how to parse path specifications, what valid filename characters are, etc.
A better approach
Imagine that we were thinking about saving and loading documents in the abstract and had no implementation in mind. We might define the interface (“port” in Cockburn’s lingo) as follows:
public interface IDocumentStore
{
    void Save(DocumentName documentName, Document document);
    Document Load(DocumentName documentName);
    bool DoesDocumentExist(DocumentName documentName);
    IEnumerable<DocumentName> GetDocumentNames();
}
This is a very simple interface – it doesn’t need to do very much because we don’t need it to. It is also written fully using the abstractions of the application – Document and DocumentName instead of string, which makes it easier to use. It will be easy to write unit tests for the code that uses the document store.
Once we have this defined, we can write a DocumentStoreFile class (known as an “adapter” because it adapts the application’s view of the world to the underlying external dependency).
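Here’s a minimal sketch of what that adapter might look like. It assumes simple `Document` and `DocumentName` types with `Text` and `Name` properties, and the file layout (one `.txt` file per document under a root directory) is an illustration, not the one true mapping:

```csharp
// Hypothetical file-system adapter for IDocumentStore.
// Assumes Document has a Text property and DocumentName has a Name property.
using System.Collections.Generic;
using System.IO;
using System.Linq;

public class DocumentStoreFile : IDocumentStore
{
    private readonly string _rootDirectory;

    public DocumentStoreFile(string rootDirectory)
    {
        _rootDirectory = rootDirectory;
        Directory.CreateDirectory(_rootDirectory);
    }

    // All the file-system details – paths, extensions, valid characters –
    // live here, not in the application.
    private string GetPath(DocumentName documentName) =>
        Path.Combine(_rootDirectory, documentName.Name + ".txt");

    public void Save(DocumentName documentName, Document document) =>
        File.WriteAllText(GetPath(documentName), document.Text);

    public Document Load(DocumentName documentName) =>
        new Document(File.ReadAllText(GetPath(documentName)));

    public bool DoesDocumentExist(DocumentName documentName) =>
        File.Exists(GetPath(documentName));

    public IEnumerable<DocumentName> GetDocumentNames() =>
        Directory.EnumerateFiles(_rootDirectory, "*.txt")
                 .Select(path => new DocumentName(Path.GetFileNameWithoutExtension(path)));
}
```

Note that the application never sees a path or a filename; if we later swap in a database adapter, none of the application code changes.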
Also note that this abstraction is just what is required for dealing with documents; the abstraction for loading/saving preferences is a different abstraction, despite the fact that it also uses the file system. This is another way this pattern differs from a simple wrapper.
(I should note here that this is not the typical flow; typically you have code that is tied to a concrete dependency, and you refactor it to something like this. See the next post for more information on how to do that).
At this point, it’s all unicorns and rainbows, right?
Not quite
Our application code and tests are simpler now – and that’s a great thing – but that’s because we pushed the complexity down into the adapter. We should test that code, but we can’t, because it is talking to the non-testable file system. More complex + untestable doesn’t make me happy, but I’m not quite sure how to deal with that right now, so let’s ignore it for the moment and go write some application unit tests.
A test double for IDocumentStore
Our tests will need some sort of test double for code that uses the IDocumentStore interface. We could write a bunch of mocks (either with a mock library or by hand), but there’s a better option.
We can write a Simulator for the IDocumentStore interface, which is simply an adapter that is designed to be great for writing unit tests. It is typically an in-memory implementation, so it could be named DocumentStoreMemory, or DocumentStoreSimulator, either would be fine (I’ve tended to use “Simulator”, but I think that “Memory” is probably a better choice).
Nicely, because it is backed by memory, it doesn’t have any external dependencies that we need to mock, so we can write a great set of unit tests for it (I would write them with TDD, obviously) that will define the behavior exactly the way the application wants it.
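A sketch of such a simulator, assuming the same `Document` and `DocumentName` types as before (with value equality, e.g. C# records, so they work as dictionary keys):

```csharp
// Hypothetical in-memory simulator for IDocumentStore.
// A Dictionary stands in for the file system; no external dependencies.
using System.Collections.Generic;

public class DocumentStoreMemory : IDocumentStore
{
    // Assumes DocumentName has value equality (a record works well here).
    private readonly Dictionary<DocumentName, Document> _documents = new();

    public void Save(DocumentName documentName, Document document) =>
        _documents[documentName] = document;

    public Document Load(DocumentName documentName) =>
        _documents[documentName];

    public bool DoesDocumentExist(DocumentName documentName) =>
        _documents.ContainsKey(documentName);

    public IEnumerable<DocumentName> GetDocumentNames() =>
        _documents.Keys;
}
```

Because the whole thing is a dictionary, tests that use it are fast and need no setup or teardown of external state.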
Simulators are much nicer than the alternative – mock code scattered through the tests. They pull poorly-tested code out of the tests and put it into a place where we can test it well, and it’s much easier to do the test setup and verification by simply talking to the simulator. We will write a test that’s something like this:
DocumentStoreSimulator documentStore = new DocumentStoreSimulator();
DocumentManager manager = new DocumentManager(documentStore);
Document document = new Document("Sample text");
DocumentName documentName = new DocumentName("Fred");
manager.Save(documentName, document);
Assert.IsTrue(documentStore.DoesDocumentExist(documentName));
Assert.AreEqual("Sample text", documentStore.Load(documentName).Text);
Our test code uses the same abstractions as our product code, and it’s very easy to verify that the result after saving is correct.
A light bulb goes off
We’ve now written a lot of tests for our application, and things mostly work pretty well, but we keep running into annoying bugs, where the DocumentStoreFile behavior is different than the DocumentStoreMemory behavior. This is annoying to fix, and – as noted earlier – we don’t have any tests for DocumentStoreFile.
And then one day, somebody says,
These aren’t DocumentStoreMemory unit tests! These are IDocumentStore unit tests – why don’t we just run the tests against the DocumentStoreFile adapter?
We can use the simulator unit tests to verify that all adapters have the same behavior, and at the same time verify that the previously-untested DocumentStoreFile adapter works as it should.
This is where simulators really earn their keep; they give us a set of unit tests that we can use both to verify that the real adapter(s) function correctly and that all adapters behave the same way.
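One common way to structure this – a sketch using MSTest-style attributes, with the factory method name `CreateDocumentStore` and the file path chosen here purely for illustration – is an abstract test class that holds all the shared tests, plus one tiny subclass per adapter:

```csharp
// Hypothetical shared test fixture: every test written against IDocumentStore
// runs against every adapter; each subclass only supplies the adapter.
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public abstract class DocumentStoreTests
{
    // Each adapter's test class overrides this factory method.
    protected abstract IDocumentStore CreateDocumentStore();

    [TestMethod]
    public void SavedDocumentCanBeLoaded()
    {
        IDocumentStore store = CreateDocumentStore();
        DocumentName name = new DocumentName("Fred");

        store.Save(name, new Document("Sample text"));

        Assert.IsTrue(store.DoesDocumentExist(name));
        Assert.AreEqual("Sample text", store.Load(name).Text);
    }
}

[TestClass]
public class DocumentStoreMemoryTests : DocumentStoreTests
{
    protected override IDocumentStore CreateDocumentStore() =>
        new DocumentStoreMemory();
}

[TestClass]
public class DocumentStoreFileTests : DocumentStoreTests
{
    protected override IDocumentStore CreateDocumentStore() =>
        new DocumentStoreFile(Path.Combine(Path.GetTempPath(), "document-store-tests"));
}
```

Any new behavior you discover gets one new test in the abstract class, and every adapter is immediately held to it.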
And there was much rejoicing.
In reality, it’s not quite that good initially, because you are going to miss a few things when you first write the unit tests; things like document names that are valid in one adapter but not another, error cases and how they need to be handled, etc. But, because you have a set of shared tests and they cover everything you know about the interface, you can add the newly-discovered behavior to the unit tests, and then modify the adapters so they all support it.
Oh, and you’ll probably have to write a bit of code for test cleanup; the document that you stored during a test will still be there on the next run if you are using the file system adapter but not the memory adapter. These are simple changes to make, though.
Other benefits
There are other benefits to this approach. The first is that adapters, once written, tend to be pretty stable, so you don’t need to be running their tests very much. Which is good, because you can’t run the tests for any of the real adapters as part of your unit test suite; you typically need to run them by hand because they use real versions of the external dependencies and require some configuration.
The second is that the adapter tests give you a great way to verify that a new version of the external dependency still works the way you expect.
The simulator is a general-purpose adapter that isn’t limited to the unit test scenario. It can also be used for demos, for integration tests, for ATDD tests; any time that you need a document store that is convenient to work with. It might even make it into product code if you need a fast document cache.
What about UI?
The approach is clearest when you apply it to a service, but it can also be applied to the UI layer. It’s not quite as cool because you generally aren’t able to reuse the simulator unit tests the same way, but it’s still a nice pattern. The next post will delve into that a bit more deeply.