Response to comments: You suck at TDD #3–Design sensitivity and improvement

December 18, 2015 at 8:58 am

I got some great comments on the post, and I answered a few in the comments, but one response started to get very long-winded, so I decided to convert it into a post.

Integration tests before refactoring

The first question is around whether it would be a good idea to write an integration test around code before refactoring.

I hate integration tests. It may be the kinds of teams that I’ve been on, but in the majority of cases, they:

  1. Were very expensive to write and maintain
  2. Took a long time to run
  3. Broke often for hard-to-determine reasons (sometimes randomly)
  4. Didn’t provide useful coverage for the underlying features.

 

Typically, the number of issues they found was not worth the amount of time we spent waiting for them to run, much less the cost of creating and maintaining them.

There are a few cases where I think integration tests are justified:

  1. If you are doing something like ATDD or BDD, you are probably writing integration tests. I generally like those, though it’s possible they could get out of hand as well.
  2. You have a need to maintain conformance to a specification or to previous behavior. You probably need integration tests for this, and you’re just going to have to pay the tax to create and maintain them.
  3. You are working in code that is scary.

 

"Scary" means something very specific to me. It’s not about the risk of breaking something during modification, it’s about the risk of breaking something in a way that isn’t immediately obvious.

There are times when the risk is significant and I do write some sort of pinning tests, but in most cases the risk does not justify the investment. I am willing to put up with a few hiccups along the way if it avoids a whole lot of time spent writing tests.

I’ll also note that to get the benefit out of these tests, I have to cover all the test cases that are important. The kinds of things that I might break during refactoring are the same kinds of things I might forget to test, and covering them well makes such tests even more expensive.

In the case in the post, the code is pretty simple and it seemed unlikely that we could break it in a non-obvious way, so I didn’t invest the time in an integration test, which in this case would have been really expensive to write. And the majority of the changes were done automatically using Resharper refactorings that I trust to be correct.

Preserving the interface while making it testable

This is a very interesting question. Is it important to preserve the class interface when making a class testable, or should you feel free to change it? In this case, the question is whether I should pull the creation of the LegacyService instance out of the method and pass it in through the constructor, or instead use another technique that would allow me to create either a production or test instance as necessary.
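
To make the question concrete, here is a minimal sketch of the two shapes the class could take. LegacyService comes from the original post; the surrounding class, interface, and member names are hypothetical, made up just for illustration:

// Hypothetical illustration only.
// Option A keeps the existing interface and creates the dependency inside the method.
public class OrderProcessorA
{
    public decimal GetTotal(int orderId)
    {
        LegacyService service = new LegacyService();   // hidden dependency; hard to substitute in a test
        return service.FetchOrderTotal(orderId);
    }
}

// Option B changes the interface; the dependency is passed in through the constructor.
public class OrderProcessorB
{
    private readonly ILegacyService m_service;

    public OrderProcessorB(ILegacyService service)     // the dependency is visible to every caller
    {
        m_service = service;
    }

    public decimal GetTotal(int orderId)
    {
        return m_service.FetchOrderTotal(orderId);
    }
}

// Minimal supporting types so the sketch stands alone.
public interface ILegacyService
{
    decimal FetchOrderTotal(int orderId);
}

public class LegacyService : ILegacyService
{
    public decimal FetchOrderTotal(int orderId) { /* talks to the real service */ return 0m; }
}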

Let me relate a story…

A few years ago, I led a team that was responsible for taking an authoring tool and extending it. The initial part had been done fairly quickly and wasn’t very well designed, and it had only a handful of tests.

One day, I was looking at a class, trying to figure out how it worked, because the constructor parameters didn’t seem sufficient to do what it needed to do. So, I started digging and exploring, and I found that it was using a global reference to an uber-singleton that gave it access to 5 other global singletons, and it was using these singletons to get its work done. Think of it as hand-coded DI.

I felt betrayed and outraged. The constructor had *lied* to me, it wasn’t honest about its dependencies.

And that started a time-consuming refactoring where I pulled out all the references to the singletons and converted them to parameters. Once I got there, I could now see how the classes really worked and figure out how to simplify them.

I prefer my code to be honest. In fact, I *need* it to be honest. Remember that my premise is that in TDD, the difficulty of writing tests exerts design pressure, and that in response to that design pressure, I will refactor the code to be easier to test, which aligns well with "better overall". So I hugely prefer code that makes dependencies explicit, both because it is more honest (and therefore easier to reason about), and because it’s messy and ugly and that means I’m more likely to convert it to something that is less messy and ugly.

Or, to put it another way, preserving interfaces is a non-goal for me. I prefer honest messiness over fake tidiness.

You suck at TDD #2–Mocking libraries

December 10, 2015 at 8:20 am

Note: I am focusing only on the design impact of TDD. To better understand the overall impact, see this series of posts by Jay Bazuzi.

My first experience with TDD was back in 2002 or so, and it was in C++, so there weren’t any mocking libraries available. That meant that I had to use hand-written mocking classes.

When hand-mocking, you need to create separate classes, write each of the methods that you need, etc. If the scenario is complex, you may have to write several classes that coordinate with each other to accomplish the mock. It’s more than a little pain at times, and creating a new class always seems like a bit of an interrupt in my train of thought.
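
To make that concrete, here is roughly what a hand-written mock looks like; the IMailSender interface and the mock class are hypothetical names I'm using for illustration:

// Hypothetical dependency of the class under test.
public interface IMailSender
{
    void Send(string address, string body);
}

// A hand-written mock: a separate class, every member written by hand,
// recording calls so that the test can assert on them later.
public class MockMailSender : IMailSender
{
    public List<string> SentAddresses = new List<string>();

    public void Send(string address, string body)
    {
        SentAddresses.Add(address);
    }
}

In a more complex scenario you might need several of these classes coordinating with each other, which is exactly the friction described above.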

That is as it should be. One of my foundational principles of running development teams is that pain – which, in this context, means, "tedious things I have to do instead of writing new code" – is a great incentive. That which is tedious is an automatic target for reduction and elimination. So, you have developers fix their own bugs because it will cause them pain to do so.

(This is, of course, not a panacea; there are plenty of organizations where this will not result in better quality because the incentive towards writing bugs is so strong, but I digress…)

Then mocking libraries showed up on the scene. No need to write new classes; just write some mocking code and you’re done. That reduced the pain and reduced the design pressure that "the test is hard to write" was exerting, which reduced the improvement.

Then a few mocking libraries showed up that let you mock statics, which allows you to test things that were totally untestable before, and that further reduced the design pressure. You can do some pretty ugly things with those libraries…

Refactoring gets harder

Many of the mocking libraries have another problem – they use a string-based approach for defining their mocks. This means that they are not refactoring-friendly; after you refactor your code, you find out that your tests won’t even run, and you have to go and hand-modify them so that they match your new code. This makes refactoring more painful, and makes it more likely that you will just skip the refactoring.

Discussion

As you have probably gathered, I am not a fan of mocking libraries. They make it too easy to do things that I think you shouldn’t, and they short circuit the feedback that you would otherwise be feeling. Embrace the pain of writing your own mocks, and use that to motivate you towards better solutions. I’ll be talking more about better solutions in future posts.

There is one situation where mocking libraries are great: when I need to bring an existing codebase under test so that I won’t break it as I work on it. In that case, I need their power, but I plan to get rid of them in the longer term.

You suck at TDD #1: Rewrite the steps

December 4, 2015 at 8:05 am

I’ve been paying attention to TDD for the past few years – doing it myself, watching others doing it, reading about it, etc. – and I’ve been seeing a lot of variation in the level of success people are having with it. As is my usual approach, I wrote a long and tedious post about it, which I have mercifully decided not to inflict on you.

Instead, I’m going to do a series of posts about the things I’ve seen getting in the way of TDD success. And, in case it isn’t obvious, I’ve engaged in the majority of the things that I’m going to be writing about, so, in the past, I sucked at TDD, and I’m sure I haven’t totally fixed things, so I still suck at it now.

Welcome to "You suck at TDD"…

Rewrite the steps

The whole point of TDD is that following the process exerts design pressure on your code so that you will refactor to make it better (1). More specifically, it uses the difficulty in writing simple test code as a proxy for the design quality of the code that is being tested.

Let’s walk through the TDD steps:

  1. Write a test that fails

  2. Make the test pass

  3. Refactor

How does this usually play out? Typically, we dive directly into writing the test, partly because we want to skip the silly test part and get onto the real work of writing the product code, and partly because TDD tells us to do the simplest thing that could possibly work. Writing the test is a formality, and we don’t put a lot of thought into it.

The only time this is not true is when it’s not apparent how we can actually write the test. If, for example, a dependency is created inside a class, we need to do something to be able to inject that dependency, and that usually means some refactoring in the product code.

Now that we have the test written, we make the test pass, and then it’s time to refactor, so we look at the code, make some improvements, and then repeat the process.

And we’re doing TDD, right?

Well…. Not really. As I said, you suck at TDD…

Let’s go back to what I wrote at the beginning of the section. I said that the point of TDD was that the state of our test code (difficult to write/ugly/etc.) forced us to improve our product code. To succeed at that, our test code has to either be drop-dead simple (setup/test/assert in three lines) or it needs to be evolving to be simpler as we go. With the exception of the cases where we can’t write a test, our tests typically are static. I see this all the time.
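
For reference, this is the kind of drop-dead-simple test I'm talking about: setup, test, and assert, one line each. The example itself is mine (NUnit-style syntax), not from the original post:

[Test]
public void Adding_a_day_moves_the_date_forward()
{
    DateTime start = new DateTime(2015, 12, 4);          // setup
    DateTime result = start.AddDays(1);                  // test
    Assert.AreEqual(new DateTime(2015, 12, 5), result);  // assert
}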

Let’s try a thought experiment. I want you to channel your mindset when you are doing TDD. You have just finished making the test pass, and you are starting the refactor step. What are you thinking about? What are you looking at?

Well, for me, I am focused on the product code that I just wrote, and I have the source staring me in the face. So, when I think of refactoring, I think about things that I might do to the product code. But that doesn’t help my goal, which is to focus on what the test code is telling me, because it is the proxy for whether my product code is any good.

This is where the three-step process of TDD falls down; it’s really easy to miss the fact that you should be focusing on the test code and looking for refactorings *there*. I’m not going to say that you should ignore product code refactorings, but I am saying that the test ones are much more important.

How can we change things? Well, I tried a couple of rewrites of the steps. The first is:

  1. Write a test that fails

  2. Make the test pass

  3. Refactor code

  4. Refactor test

Making the code/test split explicit is a good thing as it can remind us to focus on the tests. You can also rotate this around so that "refactor tests" is step #1 if you like. This was an improvement for me, but I was still in "product mindset" for step 4 and it didn’t work that great. So, I tried something different:

  1. Write a test that fails

  2. Refactor tests

  3. Make the test pass

  4. Refactor code

Now, we’re writing the test that fails, and then immediately stopping to evaluate what that test is telling us. We are looking at the test code and explicitly thinking about whether it needs to improve. That is a "good thing".

But… There’s a problem with this flow. The problem is that we’re going to be doing our test refactoring while we have a failing test in our test suite, which makes the refactoring a bit harder as the endpoint isn’t "all green", it’s "all green except for the new test".

How about this:

  1. Write a test that fails

  2. Disable the newly failing assertion

  3. Refactor tests

  4. Re-enable the previously failing assertion

  5. Make the test pass

  6. Refactor code

That is better, as we now know when we finish our test refactoring that we didn’t break any existing tests.

My experience is that if you think of TDD in terms of these steps, it will help put the focus where it belongs – on the tests. Though I will admit that for simple refactorings, I often skip disabling the failing test, since it’s a bit quicker and it’s a tiny bit easier to remember where I was after the refactoring.

Agile Transitions Aren’t

October 31, 2015 at 9:37 pm

A while back I was talking with a team about agile. Rather than give them a typical introduction, I decided to talk about techniques that differentiated more successful agile teams from less successful ones. Near the end of the talk, I got a very interesting question:

"What is the set of techniques where, if you took one away, you would no longer call it ‘agile’?"

This is a pretty good question. I thought for a little bit, and came up with the following:

  • First, the team takes an incremental approach; they make process changes in small, incremental steps
  • Second, the team is experimental; they approach process changes from a "let’s try this and see if it works for us" perspective.
  • Third, the team is a team; they have a shared set of work items that they own and work on as a group, and they drive their own process.

All of these are necessary for the team to be moving their process forward. The first two allow the process to be changed in a low-risk and reversible way, and the third provides the group ownership that makes it possible to have discussions about process changes in the first place. We get process plasticity, and that is the key to a successful agile team – the ability to take the current process and evolve it into something better.

Fast forward a few weeks later, and I was involved in a discussion about a team that had tried Scrum but hadn’t had a lot of luck with it, and I started thinking about how agile transitions are usually done:

  • They are implemented as a big change; one week the team is doing their old process, the next week they (if they are lucky) get a little training, and then they toss out pretty much all of their old process and adopt a totally different process.
  • The adoption is usually a "this is what we are doing" thing.
  • The team is rarely the instigator of the change.

That’s when I realized what had been bothering me for a while…

The agile transition is not agile.

That seems more than a little weird. We are advocating a quick incremental way of developing software, and we start by making a big change that neither management nor the team really understand, on the belief that, in a few months, things will shake out and the team will be in a better place. Worse, because the team is very busy trying to learn a lot of new things, it’s unlikely that they will pick up on the incremental and experimental nature of agile, so they are likely going to go from their old static methodology to a new static methodology.

This makes the "you should hire an agile coach" advice much more clear; of course you need a coach because otherwise you don’t have much chance of understanding how everything is supposed to work. Unfortunately, most teams don’t hire an agile coach, so it’s not surprising that they don’t have much success.

Is there a better way? Can a team work their way into agile through a set of small steps? Well, the answer there is obviously "Yes", since that’s how the agile methods were originally developed.

I think we should be able to come up with a way to stage the changes so that the team can focus on the single thing they are working on rather than trying to deal with a ton of change. For example, there’s no reason that you can’t establish a good backlog process before you start doing anything else, and that would make it much easier for the agile teams when they start executing.

Resharper tip #1: Push code into a method / Pull code out of a method

October 12, 2015 at 2:59 pm

Resharper is a great tool, but many times the operation that I want to perform isn’t possible with a single refactoring; you need multiple refactorings to get the result that you want. I did a search and couldn’t find these described anywhere, so I thought I’d share them with you.

If you know where more of these things are described and/or you know a better way of doing what I describe, please let me know.

Push code into a method

Consider the following code:

static void Main(string[] args)
{
    DateTime start = DateTime.Now;

    DateTime oneDayEarlier = start - TimeSpan.FromDays(1);
    string startString = start.ToShortDateString();

    Process(oneDayEarlier, startString);
}

private static void Process(DateTime oneDayEarlier, string startString)
{
    Console.WriteLine(oneDayEarlier);
    Console.WriteLine(startString);
}

Looking at the code in Main(), there are a couple of variables that are passed into the Process() method. A little examination shows that things would be cleaner if they were in the Process() method, but there’s no “move code into method” refactoring, so I’ll have to synthesize it out of the refactorings that I have. I start by renaming the Process() method to Process2(). Use whatever name you want here:

class Program
{
    static void Main(string[] args)
    {
        DateTime start = DateTime.Now;

        DateTime oneDayEarlier = start - TimeSpan.FromDays(1);
        string startString = start.ToShortDateString();

        Process2(oneDayEarlier, startString);
    }

    private static void Process2(DateTime oneDayEarlier, string startString)
    {
        Console.WriteLine(oneDayEarlier);
        Console.WriteLine(startString);
    }
}

Next, I select the lines that I want to move into the method, plus the method call itself, and do an Extract Method refactoring to create a new Process() method:

static void Main(string[] args)
{
    DateTime start = DateTime.Now;

    Process(start);
}

private static void Process(DateTime start)
{
    DateTime oneDayEarlier = start - TimeSpan.FromDays(1);
    string startString = start.ToShortDateString();

    Process2(oneDayEarlier, startString);
}

private static void Process2(DateTime oneDayEarlier, string startString)
{
    Console.WriteLine(oneDayEarlier);
    Console.WriteLine(startString);
}

Finally, inline the Process2() method:

class Program
{
    static void Main(string[] args)
    {
        DateTime start = DateTime.Now;

        Process(start);
    }

    private static void Process(DateTime start)
    {
        DateTime oneDayEarlier = start - TimeSpan.FromDays(1);
        string startString = start.ToShortDateString();

        Console.WriteLine(oneDayEarlier);
        Console.WriteLine(startString);
    }
}

Three quick refactorings got me to where I wanted, and it’s about 15 seconds of work if you use the predefined keys.

Pull Code Out of a Method

Sometimes, I have some code that I want to pull out of a method. Consider the following:

class Program
{
    static void Main(string[] args)
    {
        int n = 15;

        WriteInformation("Information: ", n);
    }

    private static void WriteInformation(string information, int n)
    {
        File.WriteAllText("information.txt", information + n);
    }
}

I have a few options here. We can pull “information.txt” out easily by selecting it and using Introduce Parameter:

class Program
{
    static void Main(string[] args)
    {
        int n = 15;

        WriteInformation("Information: ", n, "information.txt");
    }

    private static void WriteInformation(string information, int n, string filename)
    {
        File.WriteAllText(filename, information + n);
    }
}

I could use that same approach to pull out “information + n”, but I’m going to do it a different way that works well if I have a chunk of code. First, I introduce a variable:

class Program
{
    static void Main(string[] args)
    {
        int n = 15;

        WriteInformation("Information: ", n, "information.txt");
    }

    private static void WriteInformation(string information, int n, string filename)
    {
        string contents = information + n;
        File.WriteAllText(filename, contents);
    }
}

I rename the method:

class Program
{
    static void Main(string[] args)
    {
        int n = 15;

        WriteInformation2("Information: ", n, "information.txt");
    }

    private static void WriteInformation2(string information, int n, string filename)
    {
        string contents = information + n;
        File.WriteAllText(filename, contents);
    }
}

And I now extract the code that I want to remain in the method to a new method:

class Program
{
    static void Main(string[] args)
    {
        int n = 15;

        WriteInformation2("Information: ", n, "information.txt");
    }

    private static void WriteInformation2(string information, int n, string filename)
    {
        string contents = information + n;
        WriteInformation(filename, contents);
    }

    private static void WriteInformation(string filename, string contents)
    {
        File.WriteAllText(filename, contents);
    }
}

And, finally, I inline the original method:

class Program
{
    static void Main(string[] args)
    {
        int n = 15;

        string contents = "Information: " + n;
        WriteInformation("information.txt", contents);
    }

    private static void WriteInformation(string filename, string contents)
    {
        File.WriteAllText(filename, contents);
    }
}

Port/Adapter/Simulator and error conditions

October 9, 2015 at 11:10 am

An excellent question on an internal alias came up today, and I wanted to share my response more widely.

The question is around simulating error conditions when doing Port/Adapter/Simulator.

For example, if my production adapter is talking to a database, the database might be unreachable and the real adapter would throw a timeout exception. How can we get the simulator to do that, so we can write a test that verifies that our code behaves correctly in that scenario?

Before I answer, I need to credit Arlo, who taught me at least part of this technique…

Implement common behaviors across all adapters

The first thing to do is to see if we can figure out how to make the simulator behavior mirror the behavior of the real adapter.

If we are implementing some sort of store, the real adapter might throw an “ItemNotFound” exception if the item isn’t there, and we can just make the simulator detect the same situation and throw the same exception. And we will – of course – write a test that we can use to verify that the behavior matches.

Or, if there is a restriction on the names in a store (say one of our adapters stores items in a file, and the name is just the filename), then all of the adapters must implement that restriction (though I’d consider whether I wanted to do that or use an encoding approach to get rid of the restriction for the file adapter).
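
Here is a sketch of what that shared test can look like for the ItemNotFound example above: one abstract test class written against the port, with a derived class per adapter supplying the instance. The names (IItemStore, ItemNotFoundException, CreateStore) are hypothetical, and the test syntax is NUnit-style:

// Hypothetical port and exception.
public interface IItemStore
{
    object Get(string itemName);
}

public class ItemNotFoundException : Exception { }

// Written once against the port; every adapter gets held to the same behavior.
public abstract class ItemStoreContractTests
{
    // Each adapter's test class derives from this and supplies its own adapter or simulator.
    protected abstract IItemStore CreateStore();

    [Test]
    public void Get_throws_ItemNotFound_when_the_item_is_missing()
    {
        IItemStore store = CreateStore();
        Assert.Throws<ItemNotFoundException>(() => store.Get("no-such-item"));
    }
}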

Those are the simple cases, but the question was specifically about timeouts. Timeouts are random and non-deterministic, right?

Yes, they are non-deterministic in actual use, but there might be a scenario that will always throw a timeout. What does the real adapter do if we pass in a database server that does not exist – something like “DatabaseThatDoesNotExist”? If we can figure out a developer/configuration error that sets up the scenario we want, then we can implement the same behavior in all of our adapters, and our world will be simple.

However, the world is not always that simple…

Cheat

I’ll note here that Arlo did not teach me this technique, so any stupidity belongs to me…

If I can’t find a deterministic way to make a scenario happen, then I need to implement a back door. I do this by adding a method to the simulator (not the adapter) that looks something like this:

public void SimulateTimeoutOnLoad();

Note that it is named with “Simulate”<scenario><method-name>, so that it’s easy to know that it isn’t part of the adapter interface and what it does. I will write a unit test to verify that the simulator does this correctly, but – alas – I cannot run this test against the real adapter because the method is not part of the adapter. That means it’s a decent idea to put these tests in a different file from the ones that target the adapter interface.

Now that I have the new method, the test is pretty simple:

FetcherSimulator fetcher = new FetcherSimulator();
ObjectToTest ott = new ObjectToTest(fetcher);

fetcher.SimulateTimeoutOnLoad();

ott.Load();
Assert.Whatever(…);

I’m just using the method to reach into the simulator and tell it to do something specific in a specific scenario.
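
Inside the simulator, the back door can be as simple as a flag that the normal code path checks. The Fetcher port and Item type below are hypothetical stand-ins, since the post doesn't show them; only SimulateTimeoutOnLoad comes from the example above:

// Hypothetical port and item type.
public interface Fetcher
{
    Item Load();
}

public class Item { }

public class FetcherSimulator : Fetcher
{
    private bool m_simulateTimeoutOnLoad;

    // Back door: defined only on the simulator, never on the Fetcher port.
    public void SimulateTimeoutOnLoad()
    {
        m_simulateTimeoutOnLoad = true;
    }

    public Item Load()
    {
        if (m_simulateTimeoutOnLoad)
        {
            throw new TimeoutException();
        }

        // Normal in-memory simulator behavior goes here.
        return new Item();
    }
}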

My goal is to do this as little as possible because it reduces the benefit we get from P/A/S, so I try to look very hard to find a way to not cheat.

I should also note that if you are using P/A/S on the UI side, you are pretty much stuck using this technique, because the things that you are simulating are user actions, and none of them can be triggered through using the real adapter.

Agile team evaluation

October 5, 2015 at 4:35 pm

I’ve been thinking a bit about team evaluation. In the agile world, this is often done by looking at practices – is the team doing pairing, are they doing story mapping, how long is their iteration length?

This is definitely a useful thing to do, but it can sometimes be too prescriptive; a specific practice needs to be good for a team where they are right now, and that isn’t always clear. I’m a big fan of not blocking checkin on code review, but I need something to replace it (continuous code review through pairing or mobbing) before it makes sense.

Instead, I’ve tried to come up with a set of questions that focus on the outcomes that I think are most important and whether the team is getting better at those outcomes. I’ve roughly organized them into four categories (call them “Pillars” if you must).

 

1: Delivery of Business Value

  • Is the team focused on working on the most important things?
  • Are they delivering them with a quality they are proud of?
  • Are they delivered in small, easy-to-digest chunks?
  • Is the team getting better?

2: Code Health

  • Is the code well architected?
  • Are there tests that verify that the code works and will continue to work?
  • Is the team getting better over time?
    • Is the architecture getting cleaner?
    • Is it easier to write tests?
    • Is technical debt disappearing?
    • Are bugs becoming less frequent?
    • Are better technologies coming in?

3: Team Health

  • Is the team healthy and happy?
  • Is there “esprit de corps” in the team?
  • Are team members learning to be better at existing things?
  • Are team members learning how to do new things?
  • Does the team have an experimental mindset?

4: Organization Health

  • Are changes in approaches by the team(s) leading to changes in the overall organization?
  • Are obstacles to increased speed and efficiency going away?
  • Are the teams trying different things and sharing their findings? Or is the organization stuck in a top-down, monocultural approach?
  • Is there a clear vision and charter for the organization?
  • Does the organization focus on “what” and “why” and let the teams control the “how”?

Agile management

July 9, 2015 at 1:43 pm

A friend at work posted a link to the following article:

I’m Sorry, But Agile Won’t Fix Your Products

and I started to write a quick note, but thought it deserved a bigger response.

+++++++++++++

Okay, so, I agree with pretty much all of the article.

As is often the case, I went and wrote some analysis, didn’t like it, tried to make it better, and ended up abandoning it to write something else. I agree with the comments in the article about command-and-control, but I think there is another aspect that is worth discussing. I’ll note that some of this is observational rather than experiential.

Collectively, management tends to value conformity pretty highly. If, for example, your larger group creates two-year product plans, you will be asked about your two-year product plans and – even if you have a great reason for only doing three-month plans that your manager agrees with – you will become an outlier. Being an outlier puts you at risk; if, for example, things don’t go as well as expected for you, there is now an obvious cause for the problem – your nonconformity. Or, your manager gets promoted, and the new manager wants two-year plans.

This effect was immortalized in a saying dating back to the days of mainframes:

Nobody ever got fired for buying IBM…

Because of this effect, you end up with what I call a “Group Monoculture”, where process is mostly fixed, and inefficiency and lack of progress are fine as long as they are the status quo.

It is a truism that, whatever skills they might also possess, there is one commonality amongst all the managers; they possess the ability to be hired and/or promoted in the existing corporate culture. That generally means that they are good at following the existing process and good at conformity. This reinforces and cements the monoculture. Any changes that happen are driven from high up the chain and just switch the group to a different monoculture.

Different is bad, which, last time I checked, was not one of the statements in the Agile Manifesto…

How can agility happen in such an org? Well, it happens due to the actions of what I call process adapters. A process adapter adapts the process that exists above a group in the organization to be closer to the process that the team wants to have. For example, the adapter might keep that two-year plan up to date but allow the team below to work as if short planning cycles were the norm. Or an adapter might adapt the team’s one-week iteration cycle to the overall group’s 12-week cycle.

Adaptation is not a panacea. The adaptation is always imperfect and some of the process from above leaks down, and it can be pretty stressful to the adapter; they are usually hiding some details from their manager, fighting battles so that they can be different, and running a very real career risk. As their team gets more agile and self-guided, the adaptation gets more leaky, and the adapter runs more risks; the whole thing can be derailed by investing time in reducing technical debt which slows them down, some unexpected questions by the agile team members to management, or the adapter getting a new manager.

I’ve seen quite a few first-level (aka “lead”) adapters; leads tend to be focused more down at their team than up and out and can usually get away with more non-conformity; leads are viewed as less experienced and there’s often a feeling that they should have a lot of latitude in how they run their teams. Leads are also more likely to be senior and technically astute, which gives them more options to “explore different opportunities” both inside and outside the company.

I haven’t seen any second-level adapters be successful for more than a year or so, though I have seen a few try really hard.

Sometimes, adapters get promoted into the middle of the hierarchy or are hired from outside. This is often a frustrating position for the adapter. As Joe Egan and Gerry Rafferty wrote back in 1972:

Clowns to the left of me,
Jokers to the right,
Here I am
Stuck in the middle…

One of two things tends to happen.

Either the adapter gets frustrated with the challenges of adapting and trying to drive broader change and decides to do something else, or the adapter gets promoted higher. Further promotion often doesn’t have the hoped-for effect; as the adapter moves up, they get broader scope, and the layers underneath them are managed by – you guessed it – the rank-and-file managers who are devoted to the existing monoculture. Not to mention that the agile “teams are self-organizing and drive their own approach” tenet means that adapters tend to give less direction to their reports.

A little something that made me happy…

June 2, 2015 at 2:15 pm

Last week, I was doing some work on a utility I own. It talked to some servers in the background that could be slow at times and there was no way to know what was happening, so I needed to provide some way of telling the user that it was busy.

I started writing a unit test for it, and realized I needed an abstraction (R U busy? Yes, IBusy):

public interface IBusy
{
    void Start();
    void Stop();
}

I plumbed that into the code, failed the test, and then got it working, but it wasn’t very elegant. Plus, the way I have my code structured, I had to pass it into each of the managers that do async operations, and there are four of those.

The outlook was not very bright, but I can suffer when required, so I started implementing the next set.

Halfway through, I got an idea. When I added in the asynchronous stuff, I needed a way to abstract that out for testing purposes, so I had defined the following:

public interface ITaskCreator
{
    void StartTask<T>(Func<T> func, Action<T> action);
}

This is very simple to use; pass in the function you want to happen asynchronously and the action to process your result. There is a TaskCreatorSynchronous class that I use in my unit tests; it looks like this:

public void StartTask<T>(Func<T> func, Action<T> action)
{
    action(func());
}
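
Calling code ends up looking something like this. LoadCustomers and ShowCustomers are hypothetical stand-ins for the real work; the point is that the caller neither knows nor cares whether the work runs in the background:

private List<string> LoadCustomers()
{
    return new List<string> { "Alice", "Bob" };    // stand-in for the real (slow) fetch
}

private void ShowCustomers(List<string> customers)
{
    // stand-in for updating the UI with the result
}

private void RefreshCustomerList(ITaskCreator taskCreator)
{
    // In production this gets a TaskCreator; in unit tests, a TaskCreatorSynchronous.
    taskCreator.StartTask(
        () => LoadCustomers(),
        customers => ShowCustomers(customers));
}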

What I realized was that the times I needed to show the code was busy were exactly the times when I was running a task, and I already had a class that knew how to do that.  I modified TaskCreator:

public class TaskCreator : ITaskCreator
{
    public EventHandlerEmpty StartedTask;
    public EventHandlerEmpty FinishedTask;

    public void StartTask<T>(Func<T> func, Action<T> action)
    {
        if (StartedTask != null)
        {
            StartedTask();
        }

        Task<T>.Factory.StartNew(func)
            .ContinueWith((task) =>
            {
                action(task.Result);
                if (FinishedTask != null)
                {
                    FinishedTask();
                }
            }, TaskScheduler.FromCurrentSynchronizationContext());
    }
}

It now has an event that is called before the task is started and one that is called after the task is completed. All my main code has to do is hook up appropriately to those events, and any class that uses that instance to create tasks will automatically get the busy functionality.
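
The hook-up in the main code is then just a couple of lines. The SpinnerBusy class below is a hypothetical IBusy implementation, and I'm assuming EventHandlerEmpty is a void, parameterless delegate:

// Hypothetical IBusy implementation.
public class SpinnerBusy : IBusy
{
    public void Start() { /* show the spinner */ }
    public void Stop() { /* hide the spinner */ }
}

// Done once, wherever the TaskCreator is created; every manager that shares
// this instance now gets the busy behavior for free.
public TaskCreator CreateTaskCreator(IBusy busy)
{
    TaskCreator taskCreator = new TaskCreator();
    taskCreator.StartedTask += () => busy.Start();
    taskCreator.FinishedTask += () => busy.Stop();
    return taskCreator;
}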

I am happy when things turn out so neatly.

What makes a good metric?

June 2, 2015 at 1:18 pm

I got into a discussion at work today about metrics – a discussion about correctness vs utility – and I wrote something that I thought would be of general interest.

——

The important feature of metrics is that they are useful, which generally means the following:

a) Sensitive to the actual thing that you are trying to measure (ie when the underlying value changes, the metric changes).

b) Positively correlated with the thing you are trying to measure (a change in the underlying value produces a move in the correct direction of the metric).

c) Not unduly influenced by factors outside of the underlying value (ie a change in those other factors does not have a significant effect on the metric).

Those give you a decent measure. It’s nice to have other things – linearity, where a 10% change in the underlying value results in a 10% move in the metric – but they aren’t a requirement for utility in many cases.

To determine utility, you typically do a static analysis, where you look at how the metric is calculated, figure out how that relates to what you are trying to measure, and generally try to come up with scenarios that would break it. And you follow that up with empirical analysis, where you look at how it behaves in the field and see if it is generating the utility that you need.

The requirements for utility vary drastically across applications. If you are doing metrics to drive an automated currency trading system, then you need a bunch of analysis to decide that a metric works correctly. In a lot of other cases, a very gross metric is good enough – it all depends on what use you are going to make of it.

——

Two things to add to what I wrote:

Some of you have undoubtedly noticed that my definition for the goodness of a metric – utility – is the same definition that is used for scientific theories. That makes me happy, because science has been so successful, and then it makes me nervous, because it seems a bit too convenient.

The metrics I was talking about were ones coming out of customer telemetry, so the main factors I was worried about were how closely the telemetry displayed actual customer behavior and whether we were making realistic simplifying assumptions in our data processing. Metrics come up a lot in the agile/process world, and in those cases confounding factors are your main issue; people are very good at figuring out how to drive your surrogate measure in artificial ways without actually driving the underlying thing that you value.