Agile Open Northwest 2016 recap

February 9, 2016 at 8:13 pm

Last week, I spent three days at Agile Open Northwest 2016, a small (300-ish people) agile conference held at the Exhibition Hall at Seattle Center.

Well, perhaps ‘conference’ is the wrong word; one of the first things that I read about the conference said the following:

Agile Open Northwest runs on Open Space Technology (OST).

I am generally both amused and annoyed by the non-ironic appending of the word "technology" to phrases, but in this case, I’m going to cut them some slack, because Open Space really is different.

I’ve done a lot of conferences in my time, and one of the problems with conferences is that they are never really about the things that you want them to be about. So, you pull out the conference program, try to figure out what the talks are really going to be like, and go from there.

Open space technology is basically a solution to that problem. It’s a self-organizing approach where a minimal bit of structure is put into place, and then the conference is put on by the attendees.

So, somebody like me could host a session on TDD and design principles and #NoMocks, and – in collaboration with a few others (including Arlo) – we could have a very fun and constructive session.

And – far more cool in my book – somebody like me who knew just a little about value stream mapping could host a session on applying value stream mapping to software projects and have the people who showed up teach me.

The other non-traditional thing with the conference is the law of personal mobility, which says that it’s okay – no, it’s required – that if you aren’t learning or contributing at a session you have chosen, you leave and find a better use of your time. Which means that people will circulate in and out of the sessions.

With the exception of one session, I enjoyed and learned something at all of the sessions that I went to.

The one downside of this format is that you need engaged people to make it work; if you take a bunch of uninterested people and ask them to come up with sessions, nobody is going to step up.

I also got to do a couple of Lean Coffee sessions at breakfast Thursday and Friday. These are a great way to get ideas quickly and cover a lot of topics in a short amount of time.

Overall, I had a great time. If you have passion around this area, I highly recommend this conference.

You suck at TDD #4 – External dependencies

January 26, 2016 at 1:48 pm

When I started doing TDD, I thought it was pretty clear what to do with external dependencies. If your code writes to a file system – for example – you just write a file system layer (what would typically be called a façade, though I didn’t know the name of the pattern back then), and then you can mock at that layer, and write your tests.

This is a very common approach, and it mostly works, and because of that I see a lot of groups stick at that level. But it has a significant problem: it is lacking an important abstraction. This lack of abstraction usually shows up in two very specific ways:

  • The leakage of complexity from the dependency into the application code
  • The leakage of implementation-specific details into the application code

Teams usually don’t notice the downside of these, unless a very specific thing happens: they get asked to change the underlying technology. Their system was storing documents in the file system, and it now needs to store them in the cloud. They look at their code, and they realize that the whole structure of their application is coupled to the specific implementation. The team hems and haws, and then comes up with a three-month estimate to do the conversion. This generally isn’t a big problem for the team, because it is accepted pretty widely that changing an underlying technology is going to be a big deal and expensive. You will even find people who say that you can’t avoid it – that it is always expensive to make such a change.

If the team never ends up with this requirement, they typically won’t see the coupling, nor will they see the downside of the leakage. In my earlier posts I talked about not being sensitive to certain problems, and this is a great example of that. Their lives will be much harder, but they won’t really notice.

Enter the hexagon

A long time ago in internet time, Alistair Cockburn came up with a different approach that avoids these problems, which he called the Hexagonal Architecture. The basic idea is that you segment your application into two different kinds of code – there is the application code, and then there is the code that deals with the external dependencies.

About this time, some of you are thinking, “this is obvious – everybody knows that you write a database layer when you need to talk to a database”. I’ll ask you to bear with me for a bit and keep in mind the part where if you are not sensitive to a specific problem, you don’t even know the problem exists.

What is different about this approach – Cockburn’s big insight – is that the interface between the application and the dependency (what he calls a “port”) should be defined by the application using application-level abstractions. This is sometimes expressed as “write the interface that you wish you had”. If you think of this in the abstract, the ideal would be to write all of the application code using the abstraction, and then go off and implement the concrete implementation that actually talks to the dependency.

What does this give us? Well, it gives us a couple of things. First of all, it typically gives us a significant simplification of the interface between the application and the dependency; if you are storing documents, you typically end up with operations like “store document, load document, and get the list of documents”, and they have very simple parameter lists. That is quite a bit simpler than a file system, and an order of magnitude simpler than most databases. This makes writing the application-level code simpler, with all of the benefits that come with simpler code.

Second, it decouples the application code from the implementation; because we defined the interface at the application level, if we did it right there are no implementation-specific details at the app layer (okay, there is probably a factory somewhere with some details – root directory, connection string, that sort of thing). That gives us the things we like from a componentization perspective, and incidentally makes it straightforward to write a different implementation of the interface in some other technology.
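
To make that concrete, here is a minimal sketch of what an application-defined port for document storage might look like. The IDocumentStore name matches the one used later in this post; the exact member signatures are my assumption rather than something the pattern prescribes.

using System.Collections.Generic;

// An application-defined port for storing documents; note that nothing here
// mentions files, directories, or connection strings.
public interface IDocumentStore
{
    void StoreDocument(string name, string contents);
    string LoadDocument(string name);
    IEnumerable<string> GetDocumentNames();
}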

At this point there is somebody holding up their hand and saying, “but how are you going to test the implementation of a port to make sure it works?” BTW, Cockburn calls the implementation of a port an “adapter” because it adapts the application view to the underlying dependency view, and the overall pattern is therefore known as “port/adapter”.

This is a real concern. Cockburn came up with the pattern before TDD was really big so we didn’t think about testing in the same way, and he was happy with the tradeoff of a well-defined adapter that didn’t change very often and therefore didn’t need a lot of ongoing testing because the benefits of putting the “yucky dependency code” (my term, not his) in a separate place was so significant. But it is fair to point to that adapter code and say, “how do you know that the adapter code works?”

In the TDD world, we would like to do better. My first attempt did what I thought was the logical thing to do. I had an adapter that sat on top of the file system, so I put a façade on the file system, and wrote a bunch of adapter tests with a mocked-out file system, and verified that the adapter behaved as I expected it to. That worked because the file system was practical to mock, but it would not have worked with a database because of the problems with mocking one.

Then I read something that Arlo wrote about simulators, and it all made sense.

After I have created a port abstraction, I need some way of testing code that uses a specific port, which means some sort of test double. Instead of using a mocking library – which you already know that I don’t like – I can write a special kind of test double known as a simulator. A simulator is simply an in-memory implementation of the port, and it’s generally fairly quick to create because it doesn’t do a ton of things. Since I’m using TDD to write it, I will end up with both the simulator and a set of tests that verify that the simulator behaves properly. But these tests aren’t really simulator tests, they are port contract tests.

So, I can point them at other implementations of the port (i.e. the ones that use the real file system or the real database), and verify that the other adapters behave exactly the way the simulator does. And that removes the requirement to test the other adapters in the traditional unit-tested way; all I care about is that all the adapters behave the same way. And it actually gives me a stronger sense of correctness, because when I used the façade I had no assurance that the file system façade behaved the same way the real file system did.
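
Sketched out, the simulator and the shared contract tests might look something like the following. The abstract-test-class shape is one common way to run the same tests against every adapter; the NUnit usage and all of the member details are my assumptions for illustration, building on the IDocumentStore sketch above.

using System.Collections.Generic;
using NUnit.Framework;

// An in-memory implementation of the port; it exists to make tests easy and fast.
public class DocumentStoreSimulator : IDocumentStore
{
    private readonly Dictionary<string, string> _documents = new Dictionary<string, string>();

    public void StoreDocument(string name, string contents)
    {
        _documents[name] = contents;
    }

    public string LoadDocument(string name)
    {
        return _documents[name];
    }

    public IEnumerable<string> GetDocumentNames()
    {
        return _documents.Keys;
    }
}

// Port contract tests; every adapter gets a derived test class that supplies its
// own instance, so all adapters are verified to behave the same way.
public abstract class DocumentStoreContractTests
{
    protected abstract IDocumentStore CreateStore();

    [Test]
    public void StoredDocumentCanBeLoaded()
    {
        IDocumentStore store = CreateStore();
        store.StoreDocument("readme", "hello");
        Assert.AreEqual("hello", store.LoadDocument("readme"));
    }
}

public class DocumentStoreSimulatorTests : DocumentStoreContractTests
{
    protected override IDocumentStore CreateStore()
    {
        return new DocumentStoreSimulator();
    }
}

A file system adapter would get its own derived test class that returns the real adapter, so both implementations are held to the same contract.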

In other words, the combination of the simulator + tests has given me an easy and quick way to write application tests, and it has given me a way to test the yucky adapter code. And it’s all unicorns and rainbows from then on. Because the simulator is a real adapter, it supports other uses; you can build a headless test version of the application that doesn’t need the real dependency to work. Or you can make some small changes to the simulator and use it as an in-memory cache that sits on top of the real adapter.

Using Port/Adapter/Simulator

If you want to use this pattern – and I highly recommend it – I have a few thoughts on how to make it work well.

The most common problem people run into is in the port definition; they end up with a port that is more complex than it needs to be or they expose implementation-specific details through the port.

The simplest way to get around this is to write from the inside out. Write the application code and the simulator & tests first, and then only go and write the other adapters when that is done. This makes it much easier to define an implementation-free port, and that will make your life far easier.

If you are refactoring into P/A/S, then the best approach gets a little more complex. You probably have application code that has implementation-specific details. I recommend that you approach it in small chunks, with a flow like this:

  1. Create an empty IDocumentStore port, an empty DocumentStoreSimulator class, and an empty DocumentStoreFileSystem class.
  2. Find an abstraction that would be useful to the application – something like “load a document”.
  3. Refactor the application code so that there is a static method that knows how to drive the current dependency to load a document.
  4. Move the static method into the file system adapter.
  5. Refactor it to an instance method.
  6. Add the method to IDocumentStore.
  7. Refactor the method so that the implementation-dependent details are hidden in the adapter.
  8. Write a simulator test for the method.
  9. Implement the method in the simulator.
  10. Repeat steps 2-9.

Practice

I wrote a few blog posts that talk about port/adapter/simulator and a practice kata. I highly recommend doing the kata to practice the pattern before you try it with live code; it is far easier to wrap your head around it in a constrained situation than in your actual product code.

Response to comments : You suck at TDD #3–Design sensitivity and improvement

December 18, 2015 at 8:58 am

I got some great comments on the post, and I answered a few in comments but one started to get very long-winded so I decided to convert my response into a post.

Integration tests before refactoring

The first question is around whether it would be a good idea to write an integration test around code before refactoring.

I hate integration tests. It may be the kinds of teams that I’ve been on, but in the majority of cases, they:

  1. Were very expensive to write and maintain
  2. Took a long time to run
  3. Broke often for hard-to-determine reasons (sometimes randomly)
  4. Didn’t provide useful coverage for the underlying features.


Typically, the number of issues they found was not worth the amount of time we spent waiting for them to run, much less the cost of creating and maintaining them.

There are a few cases where I think integration tests are justified:

  1. If you are doing something like ATDD or BDD, you are probably writing integration tests. I generally like those, though it’s possible they could get out of hand as well.
  2. You have a need to maintain conformance to a specification or to previous behavior. You probably need integration tests for this, and you’re just going to have to pay the tax to create and maintain them.
  3. You are working in code that is scary.


"Scary" means something very specific to me. It’s not about the risk of breaking something during modification, it’s about the risk of breaking something in a way that isn’t immediately obvious.

There are times when the risk is significant and I do write some sort of pinning tests, but in most cases the risk does not justify the investment. I am willing to put up with a few hiccups along the way if it avoids a whole lot of time spent writing tests.

I’ll also note that to get the benefit out of these tests, I have to cover all the test cases that are important. The kinds of things that I might break during refactoring are the same kinds of things I might forget to test. Doing this well makes the tests even more expensive.

In the case in the post, the code is pretty simple and it seemed unlikely that we could break it in a non-obvious way, so I didn’t invest the time in an integration test, which in this case would have been really expensive to write. And the majority of the changes were done automatically using Resharper refactorings that I trust to be correct.

Preserving the interface while making it testable

This is a very interesting question. Is it important to preserve the class interface when making a class testable, or should you feel free to change it? In this case, the question is whether I should pull the creation of the LegacyService instance out of the method and pass it in through the constructor, or instead use another technique that would allow me to create either a production or test instance as necessary.
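
For reference, here is a minimal sketch of the constructor-injection option, using the LegacyService name from the question; the ILegacyService interface and the Consumer class are hypothetical, purely for illustration.

// Hypothetical interface extracted from LegacyService so that a test double can be substituted.
public interface ILegacyService
{
    void Execute();
}

public class Consumer
{
    private readonly ILegacyService _legacyService;

    // The dependency is passed in through the constructor instead of being created
    // inside the method, so a test can pass a test double and production code can
    // pass the real LegacyService.
    public Consumer(ILegacyService legacyService)
    {
        _legacyService = legacyService;
    }

    public void DoWork()
    {
        _legacyService.Execute();   // previously: new LegacyService().Execute();
    }
}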

Let me relate a story…

A few years ago, I led a team that was responsible for taking an authoring tool and extending it. The initial part had been done fairly quickly and wasn’t very well designed, and it had only a handful of tests.

One day, I was looking at a class, trying to figure out how it worked, because the constructor parameters didn’t seem sufficient to do what it needed to do. So, I started digging and exploring, and I found that it was using a global reference to an uber-singleton that gave it access to five other global singletons, and it was using these singletons to get its work done. Think of it as hand-coded DI.

I felt betrayed and outraged. The constructor had *lied* to me, it wasn’t honest about its dependencies.

And that started a time-consuming refactoring where I pulled out all the references to the singletons and converted them to parameters. Once I got there, I could now see how the classes really worked and figure out how to simplify them.

I prefer my code to be honest. In fact, I *need* it to be honest. Remember that my premise is that in TDD, the difficulty of writing tests exerts design pressure, and that in response to that design pressure, I will refactor the code to be easier to test, which aligns well with "better overall". So I hugely prefer code that makes dependencies explicit, both because it is more honest (and therefore easier to reason about), and because it’s messy and ugly and that means I’m more likely to convert it to something that is less messy and ugly.

Or, to put it another way, preserving interfaces is a non-goal for me. I prefer honest messiness over fake tidiness.

You suck at TDD #2–Mocking libraries

December 10, 2015 at 8:20 am

Note: I am focusing only on the design impact of TDD. To better understand the overall impact, see this series of posts by Jay Bazuzi.

My first experience with TDD was back in 2002 or so, and it was in C++, so there weren’t any mocking libraries available. That meant that I had to use hand-written mocking classes.

When hand-mocking, you need to create separate classes, write each of the methods that you need, etc. If the scenario is complex, you may have to write several classes that coordinate with each other to accomplish the mock. It’s more than a little pain at times, and creating a new class always seems like a bit of an interruption in my train of thought.
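
For anyone who hasn’t done it, a hand-written mock is nothing fancy; here is a minimal sketch, using a hypothetical IMailSender dependency purely for illustration.

using System.Collections.Generic;

public interface IMailSender
{
    void Send(string address, string body);
}

// A hand-written mock: it records the calls it receives so a test can assert on them.
public class MailSenderMock : IMailSender
{
    public readonly List<string> SentAddresses = new List<string>();

    public void Send(string address, string body)
    {
        SentAddresses.Add(address);
    }
}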

That is as it should be. One of my foundational principles of running development teams is that pain – which, in this context, means, "tedious things I have to do instead of writing new code" – is a great incentive. That which is tedious is an automatic target for reduction and elimination. So, you have developers fix their own bugs because it will cause them pain to do so.

(This is, of course, not a panacea; there are plenty of organizations where this will not result in better quality because the incentive towards writing bugs is so strong, but I digress…)

Then mocking libraries showed up on the scene. No need to write new classes, just write some mocking code and you’re done. That reduced the pain and reduced the design pressure that "the test is hard to write" was exerting, which reduced the improvement.

Then a few mocking libraries showed up that let you mock statics, which allowed you to test things that were totally untestable before, and that further reduced the design pressure. You can do some pretty ugly things with those libraries…

Refactoring gets harder

Many of the mocking libraries have another problem – they use a string-based approach for defining their mocks. This means that they are not refactoring-friendly; after you refactor your code, you find out that your tests won’t even run, and you have to go and hand-modify them so that they match the refactored code. This makes refactoring more painful, and makes it more likely that you will just skip the refactoring.

Discussion

As you have probably gathered, I am not a fan of mocking libraries. They make it too easy to do things that I think you shouldn’t, and they short-circuit the feedback that you would otherwise be feeling. Embrace the pain of writing your own mocks, and use that to motivate you towards better solutions. I’ll be talking more about better solutions in future posts.

There is one situation where mocking libraries are great: when I need to bring an existing codebase under test so that I won’t break it as I work on it. In that case, I need their power, and I will plan to get rid of them in the longer term.

You suck at TDD #1: Rewrite the steps

December 4, 2015 at 8:05 am

I’ve been paying attention to TDD for the past few years – doing it myself, watching others doing it, reading about it, etc. – and I’ve been seeing a lot of variation in the level of success people are having with it. As is my usual approach, I wrote a long and tedious post about it, which I have mercifully decided not to inflict on you.

Instead, I’m going to do a series of posts about the things I’ve seen getting in the way of TDD success. And, in case it isn’t obvious, I’ve engaged in the majority of the things that I’m going to be writing about, so, in the past, I sucked at TDD, and I’m sure I haven’t totally fixed things, so I still suck at it now.

Welcome to "You suck at TDD"…

Rewrite the steps

The whole point of TDD is that following the process exerts design pressure on your code so that you will refactor to make it better (1). More specifically, it uses the difficulty in writing simple test code as a proxy for the design quality of the code that is being tested.

Let’s walk through the TDD steps:

  1. Write a test that fails

  2. Make the test pass

  3. Refactor

How does this usually play out? Typically, we dive directly into writing the test, partly because we want to skip the silly test part and get onto the real work of writing the product code, and partly because TDD tells us to do the simplest thing that could possibly work. Writing the test is a formality, and we don’t put a lot of thought into it.

The only time this is not true is when it’s not apparent how we can actually write the test. If, for example, a dependency is created inside a class, we need to do something to be able to inject that dependency, and that usually means some refactoring in the product code.

Now that we have the test written, we make the test pass, and then it’s time to refactor, so we look at the code, make some improvements, and then repeat the process.

And we’re doing TDD, right?

Well…. Not really. As I said, you suck at TDD…

Let’s go back to what I wrote at the beginning of the section. I said that the point of TDD was that the state of our test code (difficult to write/ugly/etc) forced us to improve our product code. To succeed in that, that means that our test code has to either be drop-dead-simple (setup/test/assert in three lines) or it needs to be evolving to be simpler as we go. With the exception of the cases where we can’t write a test, our tests typically are static. I see this all the time. 

Let’s try a thought experiment. I want you to channel your mindset when you are doing TDD. You have just finished making the test pass, and you are starting the refactor set. What are you thinking about? What are you looking at?

Well, for me, I am focused on the product code that I just wrote, and I have the source staring me in the face. So, when I think of refactoring, I think about things that I might do to the product code. But that doesn’t help my goal, which is to focus on what the test code is telling me, because it is the proxy for whether my product code is any good.

This is where the three-step process of TDD falls down; it’s really easy to miss the fact that you should be focusing on the test code and looking for refactorings *there*. I’m not going to say that you should ignore product code refactorings, but I am saying that the test ones are much more important.

How can we change things? Well, I tried a couple of rewrites of the steps. The first is:

  1. Write a test that fails

  2. Make the test pass

  3. Refactor code

  4. Refactor test

Making the code/test split explicit is a good thing as it can remind us to focus on the tests. You can also rotate this around so that "refactor tests" is step #1 if you like. This was an improvement for me, but I was still in "product mindset" for step 4 and it didn’t work that great. So, I tried something different:

  1. Write a test that fails

  2. Refactor tests

  3. Make the test pass

  4. Refactor code

Now, we’re writing the test that fails, and then immediately stopping to evaluate what that test is telling us. We are looking at the test code and explicitly thinking about whether it needs to improve. That is a "good thing".

But… There’s a problem with this flow. The problem is that we’re going to be doing our test refactoring while we have a failing test in our test suite, which makes the refactoring a bit harder as the endpoint isn’t "all green", it’s "all green except for the new test".

How about this:

  1. Write a test that fails

  2. Disable the newly failing assertion

  3. Refactor tests

  4. Re-enable the previously failing assertion

  5. Make the test pass

  6. Refactor code

That is better, as we now know when we finish our test refactoring that we didn’t break any existing tests.

My experience is that if you think of TDD in terms of these steps, it will help put the focus where it belongs – on the tests. Though I will admit that for simple refactorings, I often skip disabling the failing test, since it’s a bit quicker and it’s a tiny bit easier to remember where I was after the refactoring.

Agile Transitions Aren’t

October 31, 2015 at 9:37 pm

A while back I was talking with a team about agile. Rather than give them a typical introduction, I decided to talk about techniques that differentiated more successful agile teams from less successful ones. Near the end of the talk, I got a very interesting question:

"What is the set of techniques where, if you took one away, you would no longer call it ‘agile’?"

This is a pretty good question. I thought for a little bit, and came up with the following:

  • First, the team takes an incremental approach; they make process changes in small, incremental steps
  • Second, the team is experimental; they approach process changes from a "let’s try this and see if it works for us" perspective.
  • Third, the team is a team; they have a shared set of work items that they own and work on as a group, and they drive their own process.

All of these are necessary for the team to be moving their process forward. The first two allow process to be changed in a low-risk and reversible way, and the third provides the group ownership that makes it possible to have discussions about process changes in the first place. We get process plasticity, and that is the key to a successful agile team – the ability to take the current process and evolve it into something better.

Fast forward a few weeks later, and I was involved in a discussion about a team that had tried Scrum but hadn’t had a lot of luck with it, and I started thinking about how agile transitions are usually done:

  • They are implemented as a big change; one week the team is doing their old process, the next they (if they are lucky) get a little training, and then they toss out pretty much all of their old process and adopt a totally different process.
  • The adoption is usually a "this is what we are doing" thing.
  • The team is rarely the instigator of the change.

That’s when I realized what had been bothering me for a while…

The agile transition is not agile.

That seems more than a little weird. We are advocating a quick, incremental way of developing software, and we start by making a big change that neither management nor the team really understands, on the belief that, in a few months, things will shake out and the team will be in a better place. Worse, because the team is very busy trying to learn a lot of new things, it’s unlikely that they will pick up on the incremental and experimental nature of agile, so they are likely going to go from their old static methodology to a new static methodology.

This makes the "you should hire an agile coach" advice much more clear; of course you need a coach because otherwise you don’t have much chance of understanding how everything is supposed to work. Unfortunately, most teams don’t hire an agile coach, so it’s not surprising that they don’t have much success.

Is there a better way? Can a team work their way into agile through a set of small steps? Well, the answer there is obviously "Yes", since that’s how the agile methods were originally developed.

I think we should be able to come up with a way to stage the changes so that the team can focus on the single thing they are working on rather than trying to deal with a ton of change. For example, there’s no reason that you can’t establish a good backlog process before you start doing anything else, and that would make it much easier for the agile teams when they start executing.

Resharper tip #1: Push code into a method / Pull code out of a method

October 12, 2015 at 2:59 pm

Resharper is a great tool, but many times the operation that I want to perform isn’t possible with a single refactoring; you need multiple refactorings to get the result that you want. I did a search and couldn’t find these documented anywhere, so I thought I’d share them with you.

If you know where more of these things are described and/or you know a better way of doing what I describe, please let me know.

Push code into a method

Consider the following code:

static void Main(string[] args)
{
    DateTime start = DateTime.Now;

    DateTime oneDayEarlier = start - TimeSpan.FromDays(1);
    string startString = start.ToShortDateString();

    Process(oneDayEarlier, startString);
}

private static void Process(DateTime oneDayEarlier, string startString)
{
    Console.WriteLine(oneDayEarlier);
    Console.WriteLine(startString);
}

Looking at the code in Main(), there are a couple of variables that are passed into the Process() method. A little examination shows that things would be cleaner if they were in the Process() method, but there’s no “move code into method” refactoring, so I’ll have to synthesize it out of the refactorings that I have. I start by renaming the Process() method to Process2(). Use whatever name you want here:

class Program
{
    static void Main(string[] args)
    {
        DateTime start = DateTime.Now;

        DateTime oneDayEarlier = start - TimeSpan.FromDays(1);
        string startString = start.ToShortDateString();

        Process2(oneDayEarlier, startString);
    }

    private static void Process2(DateTime oneDayEarlier, string startString)
    {
        Console.WriteLine(oneDayEarlier);
        Console.WriteLine(startString);
    }
}

Next, I select the lines that I want to move into the method plus the method call itself, and do an Extract Method refactoring to create a new Process() method:

static void Main(string[] args)
{
    DateTime start = DateTime.Now;

    Process(start);
}

private static void Process(DateTime start)
{
    DateTime oneDayEarlier = start - TimeSpan.FromDays(1);
    string startString = start.ToShortDateString();

    Process2(oneDayEarlier, startString);
}

private static void Process2(DateTime oneDayEarlier, string startString)
{
    Console.WriteLine(oneDayEarlier);
    Console.WriteLine(startString);
}

Finally, inline the Process2() method:

class Program
{
    static void Main(string[] args)
    {
        DateTime start = DateTime.Now;

        Process(start);
    }

    private static void Process(DateTime start)
    {
        DateTime oneDayEarlier = start - TimeSpan.FromDays(1);
        string startString = start.ToShortDateString();

        Console.WriteLine(oneDayEarlier);
        Console.WriteLine(startString);
    }
}

Three quick refactorings got me to where I wanted, and it’s about 15 seconds of work if you use the predefined keys.

Pull Code Out of a Method

Sometimes, I have some code that I want to pull out of a method. Consider the following:

class Program
{
    static void Main(string[] args)
    {
        int n = 15;

        WriteInformation("Information: ", n);
    }

    private static void WriteInformation(string information, int n)
    {
        File.WriteAllText("information.txt", information + n);
    }
}

I have a few options here. I can pull “information.txt” out easily by selecting it and using Introduce Parameter:

class Program
{
    static void Main(string[] args)
    {
        int n = 15;

        WriteInformation("Information: ", n, "information.txt");
    }

    private static void WriteInformation(string information, int n, string filename)
    {
        File.WriteAllText(filename, information + n);
    }
}

I could use that same approach to pull out “information + n”, but I’m going to do it in an alternate way that works well if I have a chunk of code. First, I introduce a variable:

class Program
{
    static void Main(string[] args)
    {
        int n = 15;

        WriteInformation("Information: ", n, "information.txt");
    }

    private static void WriteInformation(string information, int n, string filename)
    {
        string contents = information + n;
        File.WriteAllText(filename, contents);
    }
}

I rename the method:

class Program
{
    static void Main(string[] args)
    {
        int n = 15;

        WriteInformation2("Information: ", n, "information.txt");
    }

    private static void WriteInformation2(string information, int n, string filename)
    {
        string contents = information + n;
        File.WriteAllText(filename, contents);
    }
}

And I now extract the code that I want to remain in the method to a new method:

class Program
{
    static void Main(string[] args)
    {
        int n = 15;

        WriteInformation2("Information: ", n, "information.txt");
    }

    private static void WriteInformation2(string information, int n, string filename)
    {
        string contents = information + n;
        WriteInformation(filename, contents);
    }

    private static void WriteInformation(string filename, string contents)
    {
        File.WriteAllText(filename, contents);
    }
}

And, finally, I inline the original method:

class Program
{
    static void Main(string[] args)
    {
        int n = 15;

        string contents = "Information: " + n;
        WriteInformation("information.txt", contents);
    }

    private static void WriteInformation(string filename, string contents)
    {
        File.WriteAllText(filename, contents);
    }
}

Port/Adapter/Simulator and error conditions

October 9, 2015 at 11:10 am

An excellent question on an internal alias came up today, and I wanted to share my response more widely.

The question is around simulating error conditions when doing Port/Adapter/Simulator.

For example, if my production adapter is talking to a database, the database might be unreachable and the real adapter would throw a timeout exception. How can we get the simulator to do that, so we can write a test that verifies that our code behaves correctly in that scenario?

Before I answer, I need to credit Arlo, who taught me at least part of this technique…

Implement common behaviors across all adapters

The first thing to do is to see if we can figure out how to make the simulator behavior mirror the behavior of the real adapter.

If we are implementing some sort of store, the real adapter might throw an “ItemNotFound” exception if the item isn’t there, and we can just make the simulator detect the same situation and throw the same exception. And we will – of course – write a test that we can use to verify that the behavior matches.

Or, if there is a restriction on the names in a store (say one of our adapters stores items in a file, and the name is just the filename), then all of the adapters must implement that restriction (though I’d consider whether I wanted to do that or use an encoding approach to get rid of the restriction for the file adapter).

Those are the simple cases, but the question was specifically about timeouts. Timeouts are random and non-deterministic, right?

Yes, they are non-deterministic in actual use, but there might be a scenario that will always throw a timeout. What does the real adapter do if we pass in a database server that does not exist – something like “DatabaseThatDoesNotExist”? If we can figure out a developer/configuration error that sets up the scenario we want, then we can implement the same behavior in all of our adapters, and our world will be simple.

However, the world is not always that simple…

Cheat

I’ll note here that Arlo did not teach me this technique, so any stupidity belongs to me…

If I can’t find a deterministic way to get a scenario to happen, then I need to implement a back door. I do this by adding a method to the simulator (not the adapter) that looks something like this:

public void SimulateTimeoutOnLoad();

Note that it is named “Simulate” + <scenario> + <method name>, so that it’s easy to know that it isn’t part of the adapter interface and what it does. I will write a unit test to verify that the simulator does this correctly, but – alas – I cannot run this test against the real adapter because the method is not part of the adapter. That means it’s a decent idea to put these tests in a different file from the ones that target the adapter interface.
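
To show where that method lives, here is a minimal sketch of a simulator with such a back door; the Fetcher port and its Load() signature are assumptions for illustration, and only the Simulate naming convention comes from the discussion above.

using System;

// Assumed port; the real one could have any shape.
public interface Fetcher
{
    string Load();
}

public class FetcherSimulator : Fetcher
{
    private bool _timeoutOnLoad;
    private string _contents = "";

    // Back-door method: it exists only on the simulator, not on the Fetcher port.
    public void SimulateTimeoutOnLoad()
    {
        _timeoutOnLoad = true;
    }

    public string Load()
    {
        if (_timeoutOnLoad)
        {
            throw new TimeoutException();   // assumed to mirror what the real adapter throws
        }

        return _contents;
    }
}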

Now that I have the new method, the test is pretty simple:

FetcherSimulator fetcher = new FetcherSimulator();
ObjectToTest ott = new ObjectToTest(fetcher);

fetcher.SimulateTimeoutOnLoad();

ott.Load();
Assert.Whatever(…);

I’m just using the method to reach into the simulator and tell it to do something specific in a specific scenario.

My goal is to do this as little as possible because it reduces the benefit we get from P/A/S, so I try to look very hard to find a way to not cheat.

I should also note that if you are using P/A/S on the UI side, you are pretty much stuck using this technique, because the things that you are simulating are user actions, and none of them can be triggered through using the real adapter.

Agile team evaluation

October 5, 2015 at 4:35 pm

I’ve been thinking a bit about team evaluation. In the agile world, this is often done by looking at practices – is the team doing pairing, are they doing story mapping, how long is their iteration length?

This is definitely a useful thing to do, but it can sometimes be too prescriptive; a specific practice needs to be good for a team where they are right now, and that’s not always clear. I’m a big fan of not blocking checkin on code review, but I need something to replace it (continuous code review through pairing or mobbing) before it makes sense.

Instead, I’ve tried to come up with a set of questions that focus on the outcomes that I think are most important and whether the team is getting better at those outcomes. I’ve roughly organized them into four categories (call them “Pillars” if you must).


1: Delivery of Business Value

  • Is the team focused on working on the most important things?
  • Are they delivering them with a quality they are proud of?
  • Are they delivered in small, easy-to-digest chunks?
  • Is the team getting better?

2: Code Health

  • Is the code well architected?
  • Are there tests that verify that the code works and will continue to work?
  • Is the team getting better over time?
    • Is the architecture getting cleaner?
    • Is it easier to write tests?
    • Is technical debt disappearing?
    • Are bugs becoming less frequent?
    • Are better technologies coming in?

3: Team Health

  • Is the team healthy and happy?
  • Is there “esprit de corps” in the team?
  • Are team members learning to be better at existing things?
  • Are team members learning how to do new things?
  • Does the team have an experimental mindset?

4: Organization Health

  • Are changes in approaches by the team(s) leading to changes in the overall organization?
  • Are obstacles to increased speed and efficiency going away?
  • Are the teams trying different things and sharing their findings? Or is the organization stuck in a top-down, monocultural approach?
  • Is there a clear vision and charter for the organization?
  • Does the organization focus on “what” and “why” and let the teams control the “how”?

Agile management

July 9, 2015 at 1:43 pm

A friend at work posted a link to the following article:

I’m Sorry, But Agile Won’t Fix Your Products

and I started to write a quick note but thought it deserved a bigger response.

+++++++++++++

Okay, so, I agree with pretty much all of the article.

As is often the case, I went and wrote some analysis, didn’t like it, tried to make it better, and ended up abandoning it to write something else. I agree with the comments in the article about command-and-control, but I think there is another aspect that is worth discussing. I’ll note that some of this is observational rather than experiential.

Collectively, management tends to value conformity pretty highly. If, for example, your larger group creates two-year product plans, you will be asked about your two-year product plans and – even if you have a great reason for only doing three-month plans that your manager agrees with – you will become an outlier. Being an outlier puts you at risk; if, for example, things don’t go as well as expected for you, there is now an obvious cause for the problem – your nonconformity. Or, your manager gets promoted, and the new manager wants two-year plans.

This effect was immortalized in a saying dating back to the days of mainframes:

Nobody ever got fired for buying IBM…

Because of this effect, you end up with what I call a “Group Monoculture”, where process is mostly fixed, and inefficiency and lack of progress are fine as long as they are the status quo.

It is a truism that, whatever skills they might also possess, there is one commonality amongst all the managers; they possess the ability to be hired and/or promoted in the existing corporate culture. That generally means that they are good at following the existing process and good at conformity. This reinforces and cements the monoculture. Any changes that happen are driven from high up the chain and just switch the group to a different monoculture.

Different is bad, which, last time I checked, was not one of the statements in the Agile Manifesto…

How can agility happen in such an org? Well, it happens due to the actions of what I call process adapters. A process adapter adapts the process that exists above a group in the organization to be closer to the process that the team wants to have. For example, the adapter might keep that two-year plan up to date but allow the team below to work as if short planning cycles were the norm. Or an adapter might adapt the team’s one-week iteration cycle to the overall group’s 12-week cycle.

Adaptation is not a panacea. The adaptation is always imperfect and some of the process from above leaks down, and it can be pretty stressful to the adapter; they are usually hiding some details from their manager, fighting battles so that they can be different, and running a very real career risk. As their team gets more agile and self-guided, the adaptation gets more leaky, and the adapter runs more risks; the whole thing can be derailed by investing time in reducing technical debt which slows them down, some unexpected questions by the agile team members to management, or the adapter getting a new manager.

I’ve seen quite a few first-level (aka “lead”) adapters; leads tend to be focused more down at their team than up and out and can usually get away with more non-conformity; leads are viewed as less experienced and there’s often a feeling that they should have a lot of latitude in how they run their teams. Leads are also more likely to be senior and technically astute, which gives them more options to “explore different opportunities” both inside and outside the company.

I haven’t seen any second-level adapters be successful for more than a year or so, though I have seen a few try really hard.

Sometimes, adapters get promoted into the middle of the hierarchy or are hired from outside. This is often a frustrating position for the adapter. As Joe Egan and Gerry Rafferty wrote back in 1972:

Clowns to the left of me,
Jokers to the right,
Here I am
Stuck in the middle…

One of two things tends to happen.

Either the adapter gets frustrated with the challenges of adapting and trying to drive broader change and decides to do something else, or the adapter gets promoted higher. Further promotion often doesn’t have the hoped-for effect; as the adapter moves up they get broader scope, and the layers underneath them are managed by – you guessed it – the rank-and-file managers who are devoted to the existing monoculture. Not to mention that the agile “teams are self-organizing and drive their own approach” tenet means that adapters tend to give less direction to their reports.