SIR Mountain Populaire 2006

September 24, 2006 at 8:11 pm

Today, I rode the Mountain Populaire 100K, put on by the Seattle International Randonneurs.

Randonneuring is a long-distance cycling discipline that originated in France (hence the name) way back in the 1800s. It’s organized around a series of rides known as “brevets”, which are pronounced exactly the way you would expect if you speak French. The goal is to finish a ride of a specific distance within a specific time limit. For example, the 200 km brevet typically has an overall time limit of 13:30, the 300 km a limit of 20:00, and so on – all the way up to 75 hours for a 1000 km ride.

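If you do the arithmetic on those limits (a quick sketch, using only the numbers mentioned above), they all work out to a minimum average pace of roughly 15 km/h, with a bit of extra slack on the longest rides – and remember that the clock runs the whole time, so stops for food and sleep count against you:

    # Brevet time limits mentioned above: distance (km) -> limit (hours)
    limits = {200: 13.5, 300: 20.0, 1000: 75.0}

    for distance_km, hours in limits.items():
        # The slowest average speed that still finishes inside the limit
        print(f"{distance_km} km in {hours} h -> {distance_km / hours:.1f} km/h minimum")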
Given the length of most of the brevets, some clubs host “populaires” – shorter events aimed at “introducing new riders to the ways of randonneuring”.

These rides are different from most organized rides in the following ways:

  • They are not supported. A few people follow the group around, but there’s no sag wagon, no mechanics, no food.
  • There are no Dan Henrys – the painted arrows that mark the course on most organized rides – so you need to follow the route sheet.
  • There are controls along the route. Some are manned, where somebody will sign your sheet. Some are unmanned, where you’ll need to answer a question to prove that you were there.
  • They’re typically much smaller.

Instead of hosting a typical introductory ride, the folks at SIR decided to put on a “Mountain Populaire” – a course with as much climbing as possible. (Note that I’m assuming SIR is different in this regard – it may be that all populaires are like this.)

In this case, the course packs 5,480 feet of climbing into 110 km – roughly 80 feet of climbing per mile – spread across seven named climbs. There is a claimed 8th hill, but I don’t recall exactly where it was.

So, how does this compare to the Summits of Bothell or 7 Hills? Well, 7 Hills has a lot of climbing, but only Seminary Hill and Winery Hill are really challenging. Summits of Bothell has a lot of steep climbs, but most of them aren’t very long. And both rides are in the 40ish mile range.

This ride is 69 miles, and while it does have a couple of fairly easy climbs – 164th and Tiger Mountain – it starts out with a 1000′ climb and finishes with a 700′ climb, both of which have slopes in excess of 15%. My legs were certainly tired by the time I got to Mountain Park; I had to tack back and forth to make it to the top (I was not the only one).

Definitely the hardest ride I’ve been on, and a nice way to end the season. Beautiful day, and a nice group to ride with.

Recommended

Yellow Sticky Exercise

September 21, 2006 at 10:25 pm


Take one pack of yellow stickies (aka “Post-it” brand sticky paper notes). Place them strategically on your hands and arms, and wave them around for 10 minutes.


Wait… That’s the wrong version.


The yellow sticky exercise is a tool for collecting feedback from a group in a manner that encourages everybody to participate, doesn’t waste time, and keeps the feedback (mostly) anonymous.


Microsoft has a tradition of doing “Post Mortems” on our projects, which are designed to figure out what went wrong, decide what should be done about it, and assign owners. What typically happens is that the group complains for an hour, three people dominate the conversation, several rat holes are encountered, a few things get written down as action items, and nothing ever happens with the results.


The yellow sticky exercise is an alternative. It works well whenever you want to figure out the combined opinion of a group. It was taught to me by a very sharp usability engineer.


Get everybody together in a room. Each person gets a pad of stickies and a pen. Everybody’s pens and stickies should be the same colors, so the notes stay anonymous.


In the first segment, each person writes down as many issues/problems as they can, one per sticky note. It’s useful to tell people about the exercise ahead of time so they can come with lists already prepared, but that’s not necessary. This generally takes about 10 minutes; continue until most people have run out of things to write down.


Ground rules for issues:



  1. They can’t be personal (i.e. no names on them).

  2. They should be specific.

  3. They should have enough detail that anybody in the room can figure out what they mean.


At this point you have a lot of diverse feedback items, and you need to get them placed into groups. That’s done by everybody together: ask people to put the stickies up on the wall or whiteboard in meaningful groups, and just let them go at it. When a group reaches 5 or more stickies, ask somebody to come up with a label for it and write it on the whiteboard or on a sticky. You should also ask people to move stickies around if they belong in a different group.


When everything is on the wall and people are happy with the groupings, the group portion is done. Somebody will need to stick the stickies onto sheets of paper and own typing them all up.

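If it helps, here’s a minimal sketch in Python of that typing-up step – the group labels and sticky text are hypothetical, but it shows the shape of the output: each labeled group, biggest first, with its items underneath.

    from collections import defaultdict

    # Hypothetical transcription of the wall: (group label, sticky text)
    stickies = [
        ("Build", "Daily build breaks too often"),
        ("Build", "No way to build just one component"),
        ("Communication", "Specs change without notification"),
        ("Build", "Build takes four hours"),
        ("Communication", "Triage decisions aren't published"),
    ]

    groups = defaultdict(list)
    for label, text in stickies:
        groups[label].append(text)

    # Biggest groups first - they're the issues the most people hit
    for label, items in sorted(groups.items(), key=lambda g: -len(g[1])):
        print(f"{label} ({len(items)} stickies)")
        for item in items:
            print(f"  - {item}")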

If you do this, I’m confident that the breadth and depth of the feedback will be much better than what you’d get from other methods.

Bugs, blogs, and surrogates

September 21, 2006 at 9:39 pm

Back in 1996, I was a QA lead for the C++ compiler, and our group wanted to incent people to fix and close bugs.


One of the other QA leads had the brilliant insight that Lego blocks make excellent currency amongst development teams, and I – because of my demonstrated aptitude at generating reports from our bug-tracking system – became the “Lego Sheriff” for the group, handing out blocks. I believe the going rate was three blocks per bug.


Not surprisingly, some people started to game the system to increase their block counts. Those of you who are surprised that somebody would go to extra effort to get blocks that retail at about a penny per block have never seen a millionaire fight to get a free $10 T-shirt.


But I digress.


That there was a system to game was due to a very simple fact. Our goal wasn’t really to get people to fix and close bugs; our goal was to get the product closer to shipping. But we didn’t have a good way to measure individual contributions to that, so we chose active and resolved bug counts as a surrogate measure – a measure that (we hoped) was well correlated with the actual one.


This was a pretty harmless example, but I’ve seen lots of them in my time at Microsoft.


The first one I encountered was “bugs per tester per week”. A lead in charge of testing part of the UI of Visual Studio ranked his reports on the number of bugs they entered per week, and if you didn’t have at least <n> (where <n> was something like 3 or 5), you were told that you had to do better.


You’ve probably figured out what happened. Nobody ever dropped below the level of <n> bugs per week, and the lead was happy that his team was working well.


The reality of the situation was that the testers were spending time looking for trivial bugs to keep their counts high, rather than digging for the harder-to-find but more important bugs that were in there. They were also keeping a few bugs “in the queue” by writing them down but not entering them, so they could make sure they hit their limit.


Both of those behaviors had a negative impact, but the lead liked the system, so it stayed.

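To make that concrete, here’s a toy model – the numbers are entirely made up – of what the incentive does. Suppose the real value of a bug is its severity, but the tester is scored on raw bug count, and trivial bugs are much faster to find than important ones:

    # Toy model, made-up numbers: trivial bugs are severity 1 and take an hour
    # to find; important bugs are severity 10 and take a full day of digging.

    def week_of_testing(hours_per_bug, severity_per_bug, week_hours=40):
        """One 40-hour week spent finding bugs of a single kind."""
        bugs_found = week_hours // hours_per_bug
        return bugs_found, bugs_found * severity_per_bug

    count, value = week_of_testing(hours_per_bug=1, severity_per_bug=1)
    print(f"chasing the count:    {count} bugs/week (surrogate), severity {value} (real)")

    count, value = week_of_testing(hours_per_bug=8, severity_per_bug=10)
    print(f"digging for big bugs: {count} bugs/week (surrogate), severity {value} (real)")

The surrogate says the first tester is eight times as productive; the real measure says the second one moved the product further. Manage to the count, and the count is what you’ll get.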

Another time I hit this was when we were starting the community effort in DevDiv. For a couple of months, we were tracked on things like “newsgroup post age”, “number of unanswered posts”, or “number of posts replied to by person <x>”.


Those are horrible measures. Some newsgroups have tons of off-topic messages that you wouldn’t want to answer. Some have great MVPs working them who answer so fast that there isn’t much left to say. Some have low traffic, so there really aren’t that many issues to address.


Luckily, sharper heads prevailed, and we stopped collecting that data. The sad part is that this is one situation where you *can* measure the real measure directly – if you have a customer interaction, you can *ask* the customer at the end of the interaction how it went. You don’t *need* a surrogate.


I’ve also seen this applied to blogging – measures like number of hits, number of comments, and so on. Just today somebody on our internal bloggers alias was asking for ways to measure “the goodness” of blogs.


But there aren’t any. Good blogs are good blogs because people like to read them – they find utility in them.


After this most recent instance of the phenomenon presented itself, I was musing over why this is such a common problem at Microsoft. And I remembered SMART.


SMART is the acronym you use to remember the attributes of a well-formed goal. M means Measurable (at least for the purposes of this post. I might be wrong, and in fact I’ve forgotten what all the other letters mean, though I think T might mean Timely. Or perhaps Terrible…).


So, if you’re going to have a “SMART goal”, it needs to be *measurable*, regardless of whether what you’re trying to do is measurable.


So, what happens is you pick a surrogate, and that’s what you measure. And, in a lot of cases, you forget that it’s a surrogate and people start managing to the surrogate, and you get the result that you deserve rather than the one you want.


If you can measure something for real, that’s great. If you have to use a surrogate, try to be very up-front about it, track how well it’s working, don’t compare people with it, and please, please, please, don’t base their review on it.