Port/Adapter/Simulator and error conditions

October 9, 2015 at 11:10 am

An excellent question on an internal alias came up today, and I wanted to share my response more widely.

The question is around simulating error conditions when doing Port/Adapter/Simulator.

For example, if my production adapter is talking to a database, the database might be unreachable and the real adapter would throw a timeout exception. How can we get the simulator to do that, so we can write a test that verifies that our code behaves correctly in that scenario?

Before I answer, I need to credit Arlo, who taught me at least part of this technique…

Implement common behaviors across all adapters

The first thing to do is to see if we can figure out how to make the simulator behavior mirror the behavior of the real adapter.

If we are implementing some sort of store, the real adapter might throw an “ItemNotFound” exception if the item isn’t there, and we can just make the simulator detect the same situation and throw the same exception. And we will – of course – write a test that we can use to verify that the behavior matches.

Or, if there is a restriction on the names in a store (say one of our adapters stores items in a file, and the name is just the filename), then all of the adapters must implement that restriction (though I’d consider whether I wanted to do that or use an encoding approach to get rid of the restriction for the file adapter).

Those are the simple cases, but the question was specifically about timeouts. Timeouts are random and non-deterministic, right?

Yes, they are non-deterministic in actual use, but there might be a scenario that will always throw a timeout. What does the real adapter do if we pass in a database server that does not exist – something like “DatabaseThatDoesNotExist”? If we can figure out a developer/configuration error that sets up the scenario we want, then we can implement the same behavior in all of our adapters, and our world will be simple.

However, the world is not always that simple…

Cheat

I’ll note here that Arlo did not teach me this technique, so any stupidity belongs to me…

If I can’t find a deterministic way to make a scenario happen, then I need to implement a back door. I do this by adding a method to the simulator (not the adapter) that looks something like this:

public void SimulateTimeoutOnLoad();

Note that it is named “Simulate” followed by the scenario, so that it’s easy to know that it isn’t part of the adapter interface and what it does. I will write a unit test to verify that the simulator does this correctly, but – alas – I cannot run this test against the real adapter because the method is not part of the adapter. That means it’s a decent idea to put these tests in a different file from the ones that target the adapter interface.

Now that I have the new method, the test is pretty simple:

FetcherSimulator fetcher = new FetcherSimulator();
ObjectToTest ott = new ObjectToTest(fetcher);

fetcher.SimulateTimeoutOnLoad();

ott.Load();
Assert.Whatever(…);

I’m just using the method to reach into the simulator and tell it to do something specific in a specific scenario.
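For reference, here’s a minimal sketch of what that back door might look like inside the simulator. I’m treating Fetcher as the port interface, and the Item type and the choice of TimeoutException are placeholders for this example; the real code should throw exactly whatever the production adapter throws:

public class FetcherSimulator : Fetcher
{
    private bool m_simulateTimeoutOnLoad;
    private List<Item> m_items = new List<Item>();

    // Back door: not part of the Fetcher port, so it exists only on the simulator.
    public void SimulateTimeoutOnLoad()
    {
        m_simulateTimeoutOnLoad = true;
    }

    public List<Item> Load()
    {
        // Throw the same exception the real adapter throws when the database times out.
        if (m_simulateTimeoutOnLoad)
        {
            throw new TimeoutException();
        }

        return m_items;
    }
}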

My goal is to do this as little as possible because it reduces the benefit we get from P/A/S, so I try to look very hard to find a way to not cheat.

I should also note that if you are using P/A/S on the UI side, you are pretty much stuck using this technique, because the things that you are simulating are user actions, and none of them can be triggered through using the real adapter.

Agile team evaluation

October 5, 2015 at 4:35 pm

I’ve been thinking a bit about team evaluation. In the agile world, this is often done by looking at practices – is the team doing pairing, are they doing story mapping, how long is their iteration length?

This is definitely a useful thing to do, but it can sometimes be too prescriptive; a specific practice needs to be a good fit for where the team is right now, and that isn’t always clear. I’m a big fan of not blocking checkin on code review, but I need something to replace it (continuous code review through pairing or mobbing) before it makes sense.

Instead, I’ve tried to come up with a set of questions that focus on the outcomes that I think are most important and whether the team is getting better at those outcomes. I’ve roughly organized them into four categories (call them “Pillars” if you must).

 

1: Delivery of Business Value

  • Is the team focused on working on the most important things?
  • Are they delivering them with a quality they are proud of?
  • Are they delivered in small, easy-to-digest chunks?
  • Is the team getting better?

2: Code Health

  • Is the code well architected?
  • Are there tests that verify that the code works and will continue to work?
  • Is the team getting better over time?
    • Is the architecture getting cleaner?
    • Is it easier to write tests?
    • Is technical debt disappearing?
    • Are bugs becoming less frequent?
    • Are better technologies coming in?

3: Team Health

  • Is the team healthy and happy?
  • Is there “esprit de corps” in the team?
  • Are team members learning to be better at existing things?
  • Are team members learning how to do new things?
  • Does the team have an experimental mindset?

4: Organization Health

  • Are changes in approaches by the team(s) leading to changes in the overall organization?
  • Are obstacles to increased speed and efficiency going away?
  • Are the teams trying different things and sharing their findings? Or is the organization stuck in a top-down, monocultural approach?
  • Is there a clear vision and charter for the organization?
  • Does the organization focus on “what” and “why” and let the teams control the “how”?

Agile management

July 9, 2015 at 1:43 pm

A friend at work posted a link to the following article:

I’m Sorry, But Agile Won’t Fix Your Products

and I started to write a quick note but thought it deserved a bigger response.

+++++++++++++

Okay, so, I agree with pretty much all of the article.

As is often the case, I went and wrote some analysis, didn’t like it, tried to make it better, and ended up abandoning it to write something else. I agree with the comments in the article about command-and-control, but I think there is another aspect that is worth discussing. I’ll note that some of this is observational rather than experiential.

Collectively, management tends to value conformity pretty highly. If, for example, your larger group creates two-year product plans, you will be asked about your two-year product plans and – even if you have a great reason for only doing three-month plans that your manager agrees with – you will become an outlier. Being an outlier puts you at risk; if, for example, things don’t go as well as expected for you, there is now an obvious cause for the problem – your nonconformity. Or, your manager gets promoted, and the new manager wants two-year plans.

This effect was immortalized in a saying dating back to the days of mainframes:

Nobody ever got fired for buying IBM…

Because of this effect, you end up with what I call a “Group Monoculture”, where process is mostly fixed, and inefficiency and lack of progress are fine as long as they are the status quo.

It is a truism that, whatever skills they might also possess, there is one commonality amongst all the managers; they possess the ability to be hired and/or promoted in the existing corporate culture. That generally means that they are good at following the existing process and good at conformity. This reinforces and cements the monoculture. Any changes that happen are driven from high up the chain and just switch the group to a different monoculture.

Different is bad, which, last time I checked, was not one of the statements in the Agile Manifesto…

How can agility happen in such an org? Well, it happens due to the actions of what I call process adapters. A process adapter adapts the process that exists above a group in the organization to be closer to the process that the team wants to have. For example, the adapter might keep that two-year plan up to date but allow the team below to work as if short planning cycles were the norm. Or an adapter might adapt the team’s one-week iteration cycle to the overall group’s 12-week cycle.

Adaptation is not a panacea. The adaptation is always imperfect and some of the process from above leaks down, and it can be pretty stressful to the adapter; they are usually hiding some details from their manager, fighting battles so that they can be different, and running a very real career risk. As their team gets more agile and self-guided, the adaptation gets more leaky, and the adapter runs more risks; the whole thing can be derailed by investing time in reducing technical debt which slows them down, some unexpected questions by the agile team members to management, or the adapter getting a new manager.

I’ve seen quite a few first-level (aka “lead”) adapters; leads tend to be focused more down at their team than up and out and can usually get away with more non-conformity; leads are viewed as less experienced and there’s often a feeling that they should have a lot of latitude in how they run their teams. Leads are also more likely to be senior and technically astute, which gives them more options to “explore different opportunities” both inside and outside the company.

I haven’t seen any second-level adapters be successful for more than a year or so, though I have seen a few try really hard.

Sometimes, adapters get promoted into the middle of the hierarchy or are hired from outside. This is often a frustrating position for the adapter. As Joe Egan and Gerry Rafferty wrote back in 1972:

Clowns to the left of me,
Jokers to the right,
Here I am
Stuck in the middle…

One of two things tends to happen.

Either the adapter gets frustrated with the challenges of adapting and trying to drive broader change and decides to do something else, or the adapter gets promoted higher. Further promotion often doesn’t have the hoped-for effect; as the adapter moves up they get broader scope, and the layers underneath them are managed by – you guessed it – the rank and file managers who are devoted to the existing monoculture. Not to mention that the agile “teams are self-organizing and drive their own approach” tenet means that adapters tend to give less direction to their reports.

A little something that made me happy…

June 2, 2015 at 2:15 pm

Last week, I was doing some work on a utility I own. It talked to some servers in the background that could be slow at times and there was no way to know what was happening, so I needed to provide some way of telling the user that it was busy.

I started writing a unit test for it, and realized I needed an abstraction (R U busy? Yes, IBusy):

public interface IBusy
{
    void Start();
    void Stop();
}

I plumbed that into the code, failed the test, and then got it working, but it wasn’t very elegant. Plus, the way I have my code structured, I had to pass it into each of the managers that do async operations, and there are four of those.

The outlook was not very bright, but I can suffer when required, so I started implementing the next set.

Halfway through, I got an idea. When I added in the asynchronous stuff, I needed a way to abstract that out for testing purposes, so I had defined the following:

public interface ITaskCreator
{
    void StartTask<T>(Func<T> func, Action<T> action);
}

This is very simple to use; pass in the function you want to happen asynchronously and the action to process your result. There is a TaskCreatorSynchronous class that I use in my unit tests; it looks like this:

public void StartTask<T>(Func<T> func, Action<T> action)
{
    action(func());
}
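A call site then looks something like this; the m_taskCreator, m_server, FetchStatus, and UpdateStatusDisplay names are hypothetical, just to show the shape of a call:

// Run the slow fetch on a background task, then process the result when it completes.
m_taskCreator.StartTask(
    () => m_server.FetchStatus(),
    status => UpdateStatusDisplay(status));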

What I realized was that the times I needed to show the code was busy were exactly the times when I was running a task, and I already had a class that knew how to do that.  I modified TaskCreator:

public class TaskCreator : ITaskCreator
{
    public EventHandlerEmpty StartedTask;
    public EventHandlerEmpty FinishedTask;

    public void StartTask<T>(Func<T> func,
        Action<T> action)
    {
        // Let listeners (such as a busy indicator) know that work is starting.
        if (StartedTask != null)
        {
            StartedTask();
        }

        // Run func on a background task; when it finishes, process the result and
        // raise FinishedTask back on the original synchronization context.
        Task<T>.Factory.StartNew(func)
            .ContinueWith((task) =>
            {
                action(task.Result);
                if (FinishedTask != null)
                {
                    FinishedTask();
                }
            }, TaskScheduler.FromCurrentSynchronizationContext());
    }
}

It now has an event that is called before the task is started and one that is called after the task is completed. All my main code has to do is hook up appropriately to those events, and any class that uses that instance to create tasks will automatically get the busy functionality.
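Assuming EventHandlerEmpty is just a parameterless void delegate, the hookup in the main code is a couple of event subscriptions; something like this, where m_busy is whatever IBusy implementation drives the busy indicator:

// Every task started through this creator now turns the busy indicator on and off.
m_taskCreator.StartedTask += m_busy.Start;
m_taskCreator.FinishedTask += m_busy.Stop;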

I am happy when things turn out so neatly.

What makes a good metric?

June 2, 2015 at 1:18 pm

I got into a discussion at work today about metrics – a discussion about correctness vs utility – and I wrote something that I thought would be of general interest.

——

The important feature of metrics is that they are useful, which generally means the following:

a) Sensitive to the actual thing that you are trying to measure (ie when the underlying value changes, the metric changes).

b) Positively correlated with the thing you are trying to measure (a change in the underlying value produces a move in the correct direction of the metric).

c) Not unduly influenced by other factors outside of the underlying value (ie a change in those other factors does not have a significant effect on the metric).

Those give you a decent measure. It’s nice to have other things – linearity, where a 10% change in the underlying value results in a 10% move in the metric – but they aren’t a requirement for utility in many cases.

To determine utility, you typically do a static analysis, where you look at how the metric is calculated, figure out how that relates to what you are trying to measure, and generally try to come up with scenarios that would break it. And you follow that up with empirical analysis, where you look at how it behaves in the field and see if it is generating the utility that you need.

The requirements for utility vary drastically across applications. If you are doing metrics to drive an automated currency trading system, then you need a bunch of analysis to decide that a metric works correctly. In a lot of other cases, a very gross metric is good enough – it all depends on what use you are going to make of it.

——

Two things to add to what I wrote:

Some of you have undoubtedly noticed that my definition for the goodness of a metric – utility – is the same definition that is used for scientific theories. That makes me happy, because science has been so successful, and then it makes me nervous, because it seems a bit too convenient.

The metrics I was talking about were ones coming out of customer telemetry, so the main factors I was worried about were how closely the telemetry displayed actual customer behavior and whether we were making realistic simplifying assumptions in our data processing. Metrics come up a lot in the agile/process world, and in those cases confounding factors are your main issue; people are very good at figuring out how to drive your surrogate measure in artificial ways without actually driving the underlying thing that you value.

Port/Adapter/Simulator and UI

May 12, 2015 at 8:19 am

I’ve been working on a little utility project, and I’ve been using port/adapter/simulator on both the server-side parts and on the UI parts. It has been working nicely, though it took me a while to get there.

Initially, I started with a single UI class. After a bit of extension, it looked a bit ugly, so I decided to break it apart by functional area – there’s a main working area, there’s a favorites area, there’s an executing area, and there’s a config area. For each area, it looks something like this:

IUIFavorites

-> UIFavorites
-> UIFavoritesSimulator (really more of a mock than a simulator)

FavoritesManager(IUIFavorites, IUIStore, etc. )

The UI side handles just that – the UI – and the manager part handles the business logic. The UI part exposes events for user actions and properties and methods for modification.

There was one slightly sticky part of this. There are times when the working area manager needs to add itself to the favorites. Options I thought of:

1) Passing the UIWorking object to the favorites manager.

2) Passing the working manager to the favorites manager.

3) Hooking the UIWorking event to a favorites manager method in the main creation code.

4) Hooking a working manager event to a favorites manager method in the main creation code.

I didn’t like #1 or #2, so I ended up doing #4. #3 also seemed okay.
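For what it’s worth, the #4 wiring in the main creation code looks roughly like this; the manager constructors, the event, and the method names are invented for the example:

var favoritesManager = new FavoritesManager(uiFavorites, uiStore);
var workingManager = new WorkingManager(uiWorking, uiStore);

// Option #4: hook a working manager event to a favorites manager method.
workingManager.AddToFavoritesRequested +=
    item => favoritesManager.AddFavorite(item);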

The no bugs journey part 3–what kind of bug is this?

April 1, 2015 at 9:45 pm

If you are in a buggy group, you have a lot of bugs.

After writing the preceding, I’ll endeavor to make the rest of this post slightly less obvious.

Anyway, so, you have a lot of bugs, and you need to get better. How can you get started?

Well, there’s a common technique in agile called root-cause analysis. It’s a good technique, but your team isn’t ready for it and you can’t afford to do it.

Instead, I recommend a technique I came up with (probably not uniquely) that I call “bug categorization”. I started this when a team that I was on had a notably buggy iteration, and I noticed that a fair number of the bugs were just sloppy – not reading the spec, not running the app after they had made a change, that sort of thing.

What I wanted was a way to help the team to clean up their act without being prescriptive.

So, I went through all the bugs – something like 120 – and did a really quick classification on each of them, putting them into one of the following categories:

  • Foreseeable: This is a bug that we should have caught.
  • External: Somebody else broke us.
  • Existing: This bug existed in past versions of our software.
  • Spec: There was a problem with the spec.
  • Other: Something else.

At our retrospective, I presented a graph of the bugs, and then put up a post-it that said, “too many bugs”.

And then I let the team work on the problem. And they did great, cutting the foreseeable bugs significantly in just a few iterations despite me expanding the definition of foreseeable to include more bugs.

And while my initial impetus was to reduce the sloppy bugs, the team spent a lot of time fixing bugs related to definition and communication. For example, they invented a dev/test/pm meeting at the start of a story to make sure everybody was on the same page.

So, that’s bug categorization.

No bugs journey episode 2: It’s a matter of values…

February 21, 2015 at 4:54 pm

Read episode 1 first.

Here we are at episode 2, and time for another question. Of the three, which one do you value the most?

  1. Shipping on schedule
  2. Shipping with a given set of features
  3. Shipping with high quality

Write down your answer.

Ha ha! It was a trick question. Pretty much everybody is going to say, “it depends”, because it’s a tradeoff between these three. Let’s try an alternate expression:

  1. Build it fast
  2. Build a great product
  3. Build it right

All of these are obviously important. Hmm. Let’s try a different tack…

If you get in a situation where you need to choose between schedule, features, and quality, which one would you choose?

  1. Shipping on schedule with the right set of features but adequate quality
  2. Shipping late with the right set of features and high quality.
  3. Shipping on schedule and with high quality but with fewer features than we had hoped

Write down your answer.

Let’s talk about option #1. First off, it doesn’t exist. The reason it doesn’t exist is that there is a minimum quality level at which you can survive and grow as a company. If you are trying to hold features and schedule constant, as you develop features you are forcing the quality level down – it *has to* go down because something has to give if you get behind (short estimates, etc.). That means you start with a product that you just shipped, and then it degrades in quality as you do features, and then at some point you want to ship, so you realize you need quality, so you focus on getting back to *adequate*. Unfortunately, fixing bugs is the activity with the most schedule uncertainty, so there is no way you are shipping on time.

It’s actually worse than this. Typically, you’ve reached a point where your quality is poor and you know that there is no way that you are going to get the features done and reach acceptable quality, so you cut features.

And you end up with a subset of features that ships late with barely adequate quality. There’s an old saying in software that goes, “schedule/features/quality – pick 2”, but what few people realize is that it’s very easy to pick zero.

I hope I’ve convinced you that the first approach doesn’t work, and given that I’m saying this series is about “no bugs”, you probably aren’t surprised. Let’s examine the other options.

I’ve seen #2 work for teams; they had a feature target in their head, built their software to high quality standards, and then shipped when they were done. This was common in the olden days when box products were common, when the only way to get your product out there was to produce a lot of CDs (or even diskettes) and ship them out to people. It worked, in the sense that companies that used the approach were successful.

It is, however, pretty disruptive to the business as a whole; it’s hard to run a business where you don’t know:

  • If new revenue is going to show up 1 year or 3 years from now
  • When you’ll be spending money on sales and marketing
  • Whether the market has any interest in buying what you build

Not to mention making it tough on customers who don’t know when they’ll get that bug fix and enhancement they requested.

Which leaves option #3, which through an amazing coincidence, is the one that is the most aligned with agile philosophy. Here’s one way to express it:

Given that:

  • It is important to the rest of the business and our customers that we be predictable in our shipping schedule
  • Bugs slow us down, make customers unhappy, and pose significant schedule risk
  • The accuracy at which we can make estimates of the time to develop features is poor
  • Work always expands (people leave the team or get sick, we need to spend time troubleshooting customer issues, engineering systems break), forcing us to flex somewhere

The only rational option that we have is to flex on features. If we are willing to accept that features don’t show up as quickly as we would like (or had originally forecast), it is possible to ship on time with high quality.

I chose the phrase “it is possible” carefully; it is possible to build such a system but not get the desired result.

Flexing on features effectively

Time for another exercise. Studies have shown that you will get a much better review of a plan if you spend some focused time thinking about how it might go wrong.

Thinking about the world I just described, what could go wrong? What might keep us from shipping on time with high quality?

Write down your answers.

Here’s my list:

  1. At the time we need to decide to cut features, we might have 10 features that are 80% done. If we move out all 10 of them to the next iteration, we have nothing to ship.
  2. We might have a hard time tracking where we are early enough to make decisions; most people have seen a case where all the features were on schedule until a week before shipping and suddenly 25% were behind by a week or more. If this happens, it may be too late to adapt.
  3. We might have teams that are dependent on each other; the only way to make my schedule is to cut work from team A, but that means that teams B & C can’t finish their work as planned, and they will have to adjust, using time we don’t have.
  4. This release was feature A and a few bugfixes, and feature A isn’t on schedule. We’ll look silly if we just shipped bugfixes.

(At this point I’m really hoping that you don’t have something important on your list that I’ve missed. If so, that’s what comments are for…)

How can we mitigate? Well, for the first one, we can focus on getting one feature done before we move on to the next one. That means that we would have 8 features 100% done instead of 10 features 80% done. This is one of the main drivers for the agile “work together as a team” approach.

This mitigation works for the second one as well. If we make our tracking “is the feature fully complete and ready to ship”, we can tell where we are (3/10 features currently done and ready to ship (“done done” in agile terminology)) and we have a better chance of predicting where we are going. This is another driver for “work together as a team”. Note that for both the first and the second one, the more granular our features are, the easier it is to make this work; it works great if the team has 5-10 items per iteration but poorly if it only has two. This is the driver for “small self-contained stories” and “velocity measurement” in agile.

I have a few thoughts on the third one. You can mitigate by using short cycles and having teams B and C wait until A is done with their work. You can try to break the work A does into parts so the part the other teams need can be delivered first. Or you can go with a “vertical team” approach, which works great.

For the fourth one, the real problem is that we put all of our eggs in one basket. Chopping feature A up will give us some granularity and the chance to get part way there. I also think that a shorter cycle will be our friend; if we are giving our customers updates every month, they will probably be fine with a message that says, “this month’s update only contains bugfixes”.

To summarize, if we have small stories (a few days or less) and we work on them sequentially (limiting how many we are working on at one time), our problems about tracking and having something good enough to ship become much more tractable. We can predict early what features are not going to make it, and simply shift them to the next cycle (iteration). That is our pressure-relief valve, the way that we can make sure we have enough time to get features completed on the shipping schedule with great quality.

The world isn’t quite as simple as I’ve described it here, but I’ve also omitted a number of advanced topics that help out with that, so I think it’s a pretty fair overview.

Before I go on, I’d like to address one comment I’ve heard in relation to this approach.

If we flex on features, then I’m not going to be able to forecast where we will be in 6 months

The reality is that nobody has ever been able to do that, you were just pretending. And when you tried, you were often spending time on things that weren’t the most important ones because of new priorities.

Instead, how about always working on whatever is the highest priority for the business, and having the ability to adjust that on a periodic basis? How does that sound?

The culture of commitment

Time for another question:

What are the biggest barriers to making this work in your organization?

Write down your answers.

I have a list, but I’m only going to talk about one, because it is so much more important than the rest. It’s about a culture of commitment.

Does your organization ask development teams to *commit* to being done in the time that they estimated? Does it push back when developer estimates are too large? Do developers get emails or visits from managers telling them they need to be done on time?

If so, you are encouraging them to write crappy code. You have trained them that being on the “not done” list is something to be avoided, and they will do their best to avoid it. They can either work harder/longer – which has limited effectiveness and is crappy for the company in other ways – or they can cut corners. That’s all they can do.

Pressure here can be pretty subtle. If my team tracks “days done” and I have to update my estimate to account for the fact that things were harder than I thought, that puts pressure on me to instead cut corners.

This is one reason agile uses story points for estimation; it decouples the estimation process from the work process.

Changing the culture

Here are my suggestions:

  1. Get rid of any mention of the words “committed” or “scheduled” WRT work. The team is *planning* what work they will attempt in the next iteration.
  2. Change the “are you going to be done?” interaction to ask, “is there anything you would like to move out to the next iteration?” Expect that initially, people aren’t going to want to take you up on this, and you may have to have some personal interactions to convince somebody that it’s really okay to do this.
  3. When you have somebody decide to move something out, make sure that you take public note of it.

    “To adjust the amount of work to the capacity of the team and maintain shippable quality, the paypal feature has been moved out to the next iteration.”

  4. When teams/individuals check in unfinished features (buggy/not complete), instead of letting those features make it into the common build, force them to remove them or disable them. Make this part of the team-wide status report for the iteration (“the ‘email invoice’ feature did not meet quality standards and has been disabled for this iteration”).
  5. I highly recommend switching to a team-ownership approach. Teams are much more likely to change how they work and improve over time than individuals.

You will need to be patient; people are used to working in the old ways.

This cultural change is the most important thing you can do to reduce the number of bugs that you have.

No Bugs Journey Episode 1: Inventory

February 6, 2015 at 8:06 am

Over the past few years I had the opportunity to work in an environment in which we achieved a significant reduction in bugs and an associated increase in quality and developer satisfaction.

This series will be about how to start the journey from wherever you currently are to a place with fewer bugs, and a bunch of things to think about and perhaps try on your team. Some will be applicable for the individual developer, some will be about an engineering team, some will be for the entire group.

As with all things agile, some techniques will work great for your team, some may require some modification, and some may not work at all. Becoming agile is all about adopting an experimental mindset.

Whether you are on a team awash in bugs or you are already working towards fewer bugs, I hope you will find something of value.

Inventory

We’re going to start by stepping back, taking stock of the situation, and practicing the “Inspect” part of the agile “Inspect and adapt” cycle. We’ll be using a feedback technique that is often used in agile groups.

<aside>

The agile software community is very effective at finding useful techniques outside of the field and adopting them, which is great. They are not great at giving credit for where those techniques were created, so there are a lot of “agile” practices that really came from somewhere else. Which is why I wrote “is often used in agile groups” rather than “agile technique”…

The technique we’re going to use is known as “affinity mapping”, which has been around for a long time.

</aside>

(Note: the following exercise works great as a team/group activity as well. If you want to do it as a group exercise, go find a few videos on affinity mapping and watch them first).

Find some free time and a place where you won’t be disturbed. Get a stack of sticky notes and a pen.

Your task is to write down the issues that are contributing to having lots of bugs, one per sticky note. Write down whatever comes into your head, and don’t spend a lot of time getting it perfect. You can phrase it either as a problem (“lots of build breaks”) or a solution (“set up a continuous integration server”).

Start with what is in the forefront of your mind, and write them down until you run out of things to write. There is no correct number to end up with; some people end up with 4 notes, some people end up with 20.

Now that you have done that, look at this list below, and see if thinking about those areas leads you to get any more ideas. Write them down as well.

  1. Planning
  2. Process tracking and management
  3. Reacting to changes
  4. Interactions between roles (dev, test, management, design, customer)
  5. Recognition – what leads to positive recognition, what leads to negative recognition
  6. Who is respected on the team, and why?
  7. Promotion – what sort of performance leads to promotion
  8. Developer techniques
  9. How is training handled
  10. Who owns getting something done

This second part of the exercise is about thinking differently (aka “out of the box”) and looking for non-obvious causes. I’m explicitly having you do it to see if you come up with some of the topics that I’m going to cover in the future.

Clustering

We now have a big pile of sticky notes. Our next task is to review the notes to see if there is any obvious clustering in them. Put all of the notes up on a wall/whiteboard, and look at them. If you find two notes that are closely related, put them next to each other. If you are doing this as an individual, you will probably only find one or two clusters; if doing it as a group, you will find more.

Note that we are not doing categorization, where each sticky is in a group. At the end I expect to see a lot of notes that aren’t in clusters. That is expected.

Reflection

Take a few minutes and look at what you have. My guess is that you have a decent list of issues, and my hope is that you thought of things that you weren’t thinking about before.

At this point, you might be saying, “If I know what the issues are, why shouldn’t I just go off and start working to address them? Why do I need to read what Eric says?”

If you said that, I am heartily in support of the first statement. Determine which sticky/cluster you (or your team) wants to tackle, figure out a small/cheap/low-risk experiment that you can try, and start making things better.

As for my value-add, I plan to provide it in two ways:

First, I’ve been through this a few times, and my guess is that you missed a few issues during the exercise. I know that I missed a ton of stuff the first time I tried this.

Second, I’m hoping to give some useful advice around what techniques might be used for a specific issue, common pitfalls, and ordering constraints (you may have to address issue B to have a decent chance of addressing issue A).

Next time

Next time I’m going to talk about the incentives that are present in a lot of teams and how they relate to bugs.

No bugs journey episode 2: Stop encouraging your developers to write bugs

It’s about to get real.

Unit test success using Ports, Adapters, & Simulators–kata walkthrough

December 1, 2014 at 4:27 pm

You will probably want to read my conceptual post on this topic before this one.

The kata that I’m using can be found at github here. My walkthrough is in the EricGuSolution branch, and I checked in whenever I hit a good stopping point. When you see something like:

Commit: Added RecipeManager class

you can find that commit on the branch and look at the change that I made. The checkin history is fairly coarse; if you want a more atomic view, go over to the original version of the kata, and there you’ll find pretty much a per-change view of the transformations.

Our patient

We start with a very simple Windows Forms application for managing recipes. It allows users to create/edit/delete recipes, and the user can also decide where to store their recipes. The goal is to add unit tests for it. The code is pretty tiny, but it’s pretty convoluted; there is UI code tied in with file system code, and it’s not at all clear how we can get it tested.

I’m going to be doing TDD as much as possible here, so the first thing to do is to dive right in and start writing tests, right?

The answer to that is “nope”. Okay, if you are trying to add functionality, you can use the techniques in Feathers’ excellent book, “Working Effectively with Legacy Code”, but let’s just pretend we’ve done that and are unhappy with the result, so we’re going to refactor to make it easier to test.

The first thing that I want you to do is to look at the application & code, and find all the ports, and then write down a general description of what each port does. A port is something that a program uses to interface with an external dependency. Go do that, write them down, and then come back.

The Ports

I identified three ports in the system:

  1. A port that loads/saves/lists/deletes recipes
  2. A port that loads/saves the location of the recipes
  3. A port that handles all the interactions with the user (ie “UI”)

You could conceivably break some of these up; perhaps the UI port that deals with recipes is different than the one that deals with the recipe storage directory. We’ll see what happens there later on.

If you wanted, you could go to the next level of detail and write out the details of the interface of each port, but I find it easier to pull that out of the code as I work.

How do I do this without breaking things?

That’s a really good question. There are a number of techniques that will reduce the chance of that happening:

  1. If your language has a refactoring tool available, use it. This will drastically reduce the number of bugs that you introduce. I’m working in C#, so I’m going to be using Resharper.
  2. Run existing tests (integrated tests, other automated tests, manual tests) to verify that things still work.
  3. Write pinning tests around the code you are going to change.
  4. Work in small chunks, and test often.
  5. Be very careful. My favorite method of being very careful is to pair with somebody, and I would prefer to do it even if I have pretty good tests.

Wherever possible, I used resharper to do the transformations.

Create an adapter

An adapter is an implementation of a port. I’m going to do the recipe one first. My goal here is to take all the code that deals with these operations and get it in one place. Reading through the code in Form1.cs, I see that there is the LoadRecipes() method. That seems like something our port should be able to do. It has the following code:

private void LoadRecipes()
{
    string directory = GetRecipeDirectory();
    DirectoryInfo directoryInfo = new DirectoryInfo(directory);
    directoryInfo.Create();

    m_recipes = directoryInfo.GetFiles("*")
        .Select(fileInfo => new Recipe { Name = fileInfo.Name, Size = fileInfo.Length, Text = File.ReadAllText(fileInfo.FullName) }).ToList();

    PopulateList();
}

I see three things going on here. First, we get a string from another method, then we do some of our processing, then we call the “PopulateList()” method. The first and the last thing don’t really have anything to do with the concept of dealing with recipes, so I’ll extract the middle part out into a separate method (named “LoadRecipesPort()” because I couldn’t come up with a better name for it).

private void LoadRecipes()
{
    string directory = GetRecipeDirectory();
    m_recipes = LoadRecipesPort(directory);

    PopulateList();
}

private static List<Recipe> LoadRecipesPort(string directory)
{
    DirectoryInfo directoryInfo = new DirectoryInfo(directory);
    directoryInfo.Create();

    return directoryInfo.GetFiles("*")
        .Select(
            fileInfo =>
                new Recipe
                {
                    Name = fileInfo.Name,
                    Size = fileInfo.Length,
                    Text = File.ReadAllText(fileInfo.FullName)
                })
        .ToList();
}

Note that the extracted method is static; that verifies that it doesn’t have any dependencies on anything in the class.

I read down some more, and come across the code for deleting recipes:

private void DeleteClick(object sender, EventArgs e)
{
    foreach (RecipeListViewItem recipeListViewItem in listView1.SelectedItems)
    {
        m_recipes.Remove(recipeListViewItem.Recipe);
        string directory = GetRecipeDirectory();

        File.Delete(directory + @"\" + recipeListViewItem.Recipe.Name);
    }
    PopulateList();

    NewClick(null, null);
} 

There is only one line there – the call to File.Delete(). I pull that out into a separate method:

private static void DeleteRecipe(string directory, string name)
{
    File.Delete(directory + @"\" + name);
}

Next is the code to save the recipe. I extract that out:

private static void SaveRecipe(string directory, string name, string directions)
{
    File.WriteAllText(Path.Combine(directory, name), directions);
}

That is all of the code that deals with recipes.

Commit: Extracted recipe code into static methods

<aside>

You may have noticed that there is other code in the program that deals with the file system, but I did not extract it. That is very deliberate; my goal is to extract out the implementation of a specific port. Similarly, if I had been using a database rather than a file system, I would extract only the database code that dealt with recipes.

This is how this pattern differs from a more traditional “wrapper” approach, and is hugely important, as I hope you will soon see.

</aside>

The adapter is born

I do an “extract class” refactoring and pull out the three methods into a RecipeStore class. I convert all three of them to instance methods with resharper refactorings (add a parameter of type RecipeStore to each of them, then make them non-static, plus a bit of hand-editing in the form class). I also take the directory parameter and push it into the constructor. That cleans up the code quite a bit, and I end up with the following class:

public class RecipeStore
{
    private string m_directory;

    public RecipeStore(string directory)
    {
        m_directory = directory;
    }

    public List<Recipe> Load()
    {
        DirectoryInfo directoryInfo = new DirectoryInfo(m_directory);
        directoryInfo.Create();

        return directoryInfo.GetFiles("*")
            .Select(
                fileInfo =>
                    new Recipe
                    {
                        Name = fileInfo.Name,
                        Size = fileInfo.Length,
                        Text = File.ReadAllText(fileInfo.FullName)
                    })
            .ToList();
    }

    public void Delete(string name)
    {
        File.Delete(m_directory + @"\" + name);
    }

    public void Save(string name, string directions)
    {
        File.WriteAllText(Path.Combine(m_directory, name), directions);
    }
}

Commit: RecipeStore instance class with directory in constructor

Take a look at the class, and evaluate it from a design perspective. I’m pretty happy with it; it does only one thing, and the fact that it’s storing recipes in a file system isn’t apparent from the method signature. The form code looks better as well.

Extract the port interface & write a simulator

I now have the adapter, so I can extract out the defining IRecipeStore interface.

public interface IRecipeStore
{
    List<Recipe> Load();
    void Delete(string name);
    void Save(string name, string directions);
}

I’ll add a new adapter class that implements this interface:

class RecipeStoreSimulator: IRecipeStore
{
    public List<Recipe> Load()
    {
        throw new NotImplementedException();
    }

    public void Delete(string name)
    {
        throw new NotImplementedException();
    }

    public void Save(string name, string directions)
    {
        throw new NotImplementedException();
    }
}

The simulator is going to be an in-memory implementation of the recipe store, which will make it very good for unit tests. Since it’s going to be in-memory, it doesn’t have any dependencies and therefore I can write unit tests for it. I’ll do that with TDD.
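For illustration, here’s a minimal sketch of where such a simulator might end up; the version in the kata branch may differ in details, but the core idea is a dictionary standing in for the directory on disk:

class RecipeStoreSimulator : IRecipeStore
{
    // Recipe name -> directions; this plays the role of the file system directory.
    private readonly Dictionary<string, string> m_recipes = new Dictionary<string, string>();

    public List<Recipe> Load()
    {
        return m_recipes
            .Select(pair => new Recipe { Name = pair.Key, Size = pair.Value.Length, Text = pair.Value })
            .ToList();
    }

    public void Delete(string name)
    {
        m_recipes.Remove(name);
    }

    public void Save(string name, string directions)
    {
        m_recipes[name] = directions;
    }
}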

Commit: RecipeStoreSimulator with tests

It was a very simple interface, so it only took me about 15 minutes to write it. It’s not terribly robust, however; it has no error-handling at all. I now have a simulator that I can use to test any code that uses the RecipeStore abstraction. But wait a second; the tests I wrote for the simulator are really tests for the port.

If I slightly modify my tests so that they use an IRecipeStore, I can re-purpose them to work with any implementation of that port. I do that, but I start seeing failures, because the tests assume an empty recipe store. If I change the tests to clean up after themselves, it should help…

Once I’ve done that, I can successfully run the port unit tests against the filesystem recipestore.
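One way to structure that sharing (not necessarily how the kata branch does it) is a base test class with a factory method that each adapter’s test class overrides:

public abstract class RecipeStorePortTests
{
    protected abstract IRecipeStore CreateRecipeStore();

    [TestMethod]
    public void when_I_save_a_recipe__load_returns_it()
    {
        IRecipeStore store = CreateRecipeStore();
        store.Save("Grits", "Stir");

        Recipe recipe = store.Load().Single(r => r.Name == "Grits");
        Assert.AreEqual("Stir", recipe.Text);

        // Clean up so the file system adapter starts empty on the next run.
        store.Delete("Grits");
    }
}

[TestClass]
public class RecipeStoreSimulatorPortTests : RecipeStorePortTests
{
    protected override IRecipeStore CreateRecipeStore()
    {
        return new RecipeStoreSimulator();
    }
}

[TestClass]
public class RecipeStoreFileSystemPortTests : RecipeStorePortTests
{
    protected override IRecipeStore CreateRecipeStore()
    {
        return new RecipeStore(Path.Combine(Path.GetTempPath(), "RecipeStorePortTests"));
    }
}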

Commit: Unit tests set up to test RecipeStore

RecipeStoreLocator

We’ll now repeat the same pattern, this time with the code that figures out where the RecipeStore is located. I make the methods static, push them into a separate class, and turn them back into instance methods.

When I first looked at the code, I was tempted not to do this port, because the code is very specific to finding a directory, and the RecipeStore is the only thing that uses it, so I could have just put the code in the RecipeStore. After a bit of thought, I decided that “where do I store my recipes” is a separate abstraction, and therefore having a locator was a good idea.

Commit: RecipeStoreLocator class added

I create the Simulator and unit tests, but when I go to run them, I find that I’m missing something; the abstraction has no way to reset itself to the initial state because the file persists on disk. I add a ResetToDefault() method, and then it works fine.

Commit: Finished RecipeStoreLocator + simulator + unit tests

Status check & on to the UI

Let’s take a minute and see what we’ve accomplished. We’ve created two new port abstractions and pulled some messy code out of the form class, but we haven’t gotten much closer to being able to test the code in the form class itself. For example, when we call LoadRecipes(), we should get the recipes from the store, and then push them out into the UI. How can we test that code?

Let’s try the same sort of transformations on the UI dependency. We’ll start with PopulateList():

private void PopulateList()
{
    listView1.Items.Clear();

    foreach (Recipe recipe in m_recipes)
    {
        listView1.Items.Add(new RecipeListViewItem(recipe));
    }
}

The first change is to make this into a static method. That will require me to pass the listview and the recipe list as parameters:

private static void PopulateList(ListView listView, List<Recipe> recipes)
{
    listView.Items.Clear();

    foreach (Recipe recipe in recipes)
    {
        listView.Items.Add(new RecipeListViewItem(recipe));
    }
}

And I’ll pull it out into a new class:

public class RecipeManagerUI
{
    private ListView m_listView;

    public RecipeManagerUI(ListView listView)
    {
        m_listView = listView;
    }

    public void PopulateList(List<Recipe> recipes)
    {
        m_listView.Items.Clear();

        foreach (Recipe recipe in recipes)
        {
            m_listView.Items.Add(new RecipeListViewItem(recipe));
        }
    }
}

This leaves the following implementation for LoadRecipes():

private void LoadRecipes()
{
    m_recipes = m_recipeStore.Load();

    m_recipeManagerUI.PopulateList(m_recipes);
}

That looks like a testable bit of code; it calls load and then calls PopulateList with the result. I extract it into a RecipeManager class (not sure about that name right now), make it an instance method, add a constructor to take the recipe store and ui instances, and pull the list of recipes into this class as well. I end up with the following:

public class RecipeManager
{
    private RecipeStore m_recipeStore;
    private RecipeManagerUI m_recipeManagerUi;
    private List<Recipe> m_recipes; 

    public RecipeManager(RecipeStore recipeStore, RecipeManagerUI recipeManagerUI)
    {
        m_recipeManagerUi = recipeManagerUI;
        m_recipeStore = recipeStore;   
    }

    public List<Recipe> Recipes { get { return m_recipes; } }

    public void LoadRecipes()
    {
        m_recipes = m_recipeStore.Load();

        m_recipeManagerUi.PopulateList(m_recipes);
    }
}

Commit: Added RecipeManager class 

Now to test LoadRecipes, I want to write:

[TestMethod()]
public void when_I_call_LoadRecipes_with_two_recipes_in_the_store__it_sends_them_to_the_UI_class()
{
    RecipeStoreSimulator recipeStore = new RecipeStoreSimulator();
    recipeStore.Save("Grits", "Stir");
    recipeStore.Save("Bacon", "Fry");

    RecipeManagerUISimulator recipeManagerUI = new RecipeManagerUISimulator();

    RecipeManager recipeManager = new RecipeManager(recipeStore, recipeManagerUI);

    recipeManager.LoadRecipes();

    Assert.AreEqual(2, recipeManagerUI.Recipes.Count);
    RecipeStoreSimulatorTests.ValidateRecipe(recipeManagerUI.Recipes, 0, "Grits", "Stir");
    RecipeStoreSimulatorTests.ValidateRecipe(recipeManagerUI.Recipes, 1, "Bacon", "Fry");
}

I don’t have the appropriate UI simulator, so I’ll extract the interface and write the simulator, including some unit tests.

Commit: First full test in RecipeManager

In the tests, I need to verify that RecipeManager.LoadRecipes() passes the recipes off to the UI, which means the simulator needs to support a property that isn’t needed by the new class. I try to avoid these whenever possible, but when I have to use them, I name them to be clear that they are something outside of the port interface. In this case, I called it SimulatorRecipes.
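In the simulator, that property ends up looking something like this (a sketch; the real class grows more members as the walkthrough continues):

class RecipeManagerUISimulator : IRecipeManagerUI
{
    // Test-only back door, named with a "Simulator" prefix because it is not part of the port.
    public List<Recipe> SimulatorRecipes { get; private set; }

    public void PopulateList(List<Recipe> recipes)
    {
        SimulatorRecipes = recipes;
    }
}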

We now have a bit of logic that was untested in the form class in a new class that is tested.

UI Events

Looking at the rest of the methods in the form class, they all happen when the user does something. That means we’re going to have to get a bit more complicated. The basic pattern is that we will put an event on our UI port; in the real UI class it hooks to the actual control event, and in the simulator it is raised by a SimulateClick() method.

Let’s start with the simplest one. NewClick() looks like this:

private void NewClick(object sender, EventArgs e)
{
    textBoxName.Text = "";
    textBoxObjectData.Text = "";
}

To move this into the RecipeManager class, I’ll need to add abstractions to the UI class for the click and for the two textbox values.

I start by pulling all of the UI event hookup code out of the InitializeComponent() method and into the Form1 constructor. Then, I added a NewClick event to the UI port interface and both adapters that implement the interface. It now looks like this:

public interface IRecipeManagerUI
{
    void PopulateList(List<Recipe> recipes);

    event Action NewClick;

    string RecipeName { get; set; }
    string RecipeDirections { get; set; }
}

And, I’ll go off and implement these in the UI class, the simulator class, and the simulator test class.
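As a sketch of the real adapter side, the WinForms event just gets forwarded to the port-level event. The constructor parameters and field names here are my guesses (the actual class takes more controls by the end of the walkthrough):

public class RecipeManagerUI : IRecipeManagerUI
{
    private ListView m_listView;
    private TextBox m_textBoxName;
    private TextBox m_textBoxObjectData;

    public event Action NewClick;

    public RecipeManagerUI(ListView listView, Button buttonNew, TextBox textBoxName, TextBox textBoxObjectData)
    {
        m_listView = listView;
        m_textBoxName = textBoxName;
        m_textBoxObjectData = textBoxObjectData;

        // Forward the WinForms click to the port-level event.
        buttonNew.Click += (sender, e) => { if (NewClick != null) NewClick(); };
    }

    public string RecipeName
    {
        get { return m_textBoxName.Text; }
        set { m_textBoxName.Text = value; }
    }

    public string RecipeDirections
    {
        get { return m_textBoxObjectData.Text; }
        set { m_textBoxObjectData.Text = value; }
    }

    public void PopulateList(List<Recipe> recipes)
    {
        m_listView.Items.Clear();

        foreach (Recipe recipe in recipes)
        {
            m_listView.Items.Add(new RecipeListViewItem(recipe));
        }
    }
}

On the simulator side, the matching SimulateNewClick() method just raises the NewClick event so a test can stand in for the user.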

<aside>

I’m not sure that NewClick is the best name for the event, because “click” seems bound to the UI paradigm. Perhaps NewRecipe would be a better name…

</aside>

Commit: Fixed code to test clicking the new button

Note that I didn’t write tests for the simulator code in this case. Because of the nature of the UI class, I can’t run tests across the two implementations to make sure they are the same (I could maybe do so if I did some other kind of verification, but I’m not sure it’s worth it). This code mostly fits in the “if it works at all, it’s going to work” category, so I don’t feel that strongly about testing it.

The test ends up looking like this:

[TestMethod()]
public void when_I_click_on_new__it_clears_the_name_and_directions()
{
    RecipeManagerUISimulator recipeManagerUI = new RecipeManagerUISimulator();

    RecipeManager recipeManager = new RecipeManager(null, recipeManagerUI);

    recipeManagerUI.RecipeName = "Grits";
    recipeManagerUI.RecipeDirections = "Stir";

    Assert.AreEqual("Grits", recipeManagerUI.RecipeName);
    Assert.AreEqual("Stir", recipeManagerUI.RecipeDirections);

    recipeManagerUI.SimulateNewClick();

    Assert.AreEqual("", recipeManagerUI.RecipeName);
    Assert.AreEqual("", recipeManagerUI.RecipeDirections);
}

That works. We’ll keep going with the same approach – choose an event handler, and go from there. We’re going to do SaveClick() this time:

private void SaveClick(object sender, EventArgs e)
{
    m_recipeStore.Save(textBoxName.Text, textBoxObjectData.Text);
    m_recipeManager.LoadRecipes();
}

We’ll try writing the test first:

[TestMethod()]
public void when_I_click_on_save__it_stores_the_recipe_to_the_store_and_updates_the_display()
{
    RecipeStoreSimulator recipeStore = new RecipeStoreSimulator();
    RecipeManagerUISimulator recipeManagerUI = new RecipeManagerUISimulator();

    RecipeManager recipeManager = new RecipeManager(recipeStore, recipeManagerUI);

    recipeManagerUI.RecipeName = "Grits";
    recipeManagerUI.RecipeDirections = "Stir";

    recipeManagerUI.SimulateSaveClick();

    var recipes = recipeStore.Load();

    RecipeStoreSimulatorTests.ValidateRecipe(recipes, 0, "Grits", "Stir");

    recipes = recipeManagerUI.SimulatorRecipes;

    RecipeStoreSimulatorTests.ValidateRecipe(recipes, 0, "Grits", "Stir");
}  

That was simple; all I had to do was stub out the SimulateSaveClick() method. The test fails, of course. About 10 minutes of work, and it passes, and the real UI works as well.

Commit: Added Save

Commit: Added in Selecting an item in the UI

Commit: Added support for deleting recipes

Supporting a change of the recipe directory required the recipe store to understand that concept. This was done by adding a new RecipeDirectory property and implementing it in both IRecipeStore adapters.
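In interface terms the change is small; roughly this (the exact signature is a guess on my part):

public interface IRecipeStore
{
    List<Recipe> Load();
    void Delete(string name);
    void Save(string name, string directions);

    // New: lets the manager change where recipes are stored at runtime.
    string RecipeDirectory { get; set; }
}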

Commit: Added support to change recipe store directory

All done

Let’s look at what is left in the form class:

public partial class Form1 : Form
{
    private RecipeManager m_recipeManager;

    public Form1()
    {
        InitializeComponent();

        var recipeManagerUI = new RecipeManagerUI(listView1, 
            buttonNew, 
            buttonSave, 
            buttonDelete, 
            buttonSaveRecipeDirectory, 
            textBoxName, 
            textBoxObjectData, 
            textBoxRecipeDirectory);

        var recipeStoreLocator = new RecipeStoreLocator();
        var recipeStore = new RecipeStore(recipeStoreLocator.GetRecipeDirectory());
        m_recipeManager = new RecipeManager(recipeStore, recipeStoreLocator, recipeManagerUI);
        m_recipeManager.Initialize();
    }
}

This is the entirety of the form class; it just creates the RecipeManagerUI class (which encapsulates everything related to the UI), the RecipeStoreLocator class, the RecipeStore class, and finally, the RecipeManager class. It then calls Initialize() on the manager, and, at that point, it’s up and running.

Looking through the code, I did a little cleanup:

  1. I renamed RecipeDirectory to RecipeLocation, because that’s a more abstract description.
  2. I renamed Recipe.Text to Recipe.Directions, because it has been buggin’ me…
  3. I added tests for Recipe.Size.

Commit: Cleanup