Summits of Bothell 2006

August 28, 2006 at 12:08 pm

Yesterday I rode the Summits of Bothell. Like its more popular cousin to the south, the Summits (I’m not a big fan of “cute” acronyms, so I’m not going to use the obvious one…) is a ride focused on hills – originally 7, but now 8 in a bid to outdo the competition.

Their other bid to outdo the competition is in steepness. The advertisement sheet says two very troubling things. The first is an offhand comment about equipment:

Triple cranks and <24″ low gear highly recommended as well as beefy brakes

24″ refers to “gear inches”, a measure of how far your bike moves forward for each pedal revolution. An average “riding on the flat” gearing for me is something like a 42-17 (teeth on the front/teeth on the rear), which is about 67 gear inches. My lowest gear (with my 12-27 rear cassette) is 30 gear inches. 24 gear inches is very low.
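If you want to check the math: gear inches are just (chainring teeth ÷ cog teeth) × wheel diameter, with a road wheel nominally 27 inches across. Here’s a quick sketch of the arithmetic – note that the 30-tooth small ring is my assumption, since it’s what makes the low-gear number come out:

#include <cstdio>

// Gear inches: (chainring teeth / cog teeth) * wheel diameter.
// A road wheel is nominally 27 inches in diameter.
double GearInches(double chainring, double cog, double wheel = 27.0)
{
    return chainring / cog * wheel;
}

int main()
{
    std::printf("42-17: %.0f gear inches\n", GearInches(42, 17)); // ~67 - riding on the flat
    std::printf("30-27: %.0f gear inches\n", GearInches(30, 27)); // 30 - my lowest gear (30t ring assumed)
    return 0;
}

Getting under the recommended 24 gear inches takes a smaller ring or a bigger cog than I’ve got.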

The second comment is a list of the hills:

Ascents – 14% (2), 16% (3), 18% (1).

Though I run a site for bicycle climbs, I am not a climbing specialist. But I have done some steep stuff. 10% is steep. 15% is painfully steep, and above that is just crazy.

But I’ve survived the Zoo multiple times, so I know what it takes to get up those kinds of hills. And I’m doing the Mountain 100 km Populaire in September, which includes the Zoo (and other such climbs), so I need the practice at the steeps.

In other words, my strategy is sound.

My tactical decisions, however, are fairly suspect. I’m riding with a few friends, but what I’ve forgotten is that rides like this are self-selecting – the people that show up are the ones that can *climb*. I have at least 25 pounds on all of these guys, and at 170 pounds on my 6’2″ frame, I’m not carrying a lot of weight.

Franklin doesn’t ride that much more than me, but is strong on the flats and fast up the hills. He “went on a long run” yesterday.

Steve is riding his steel Colnago. Steve is scary fit – on our group rides, he’ll just be talking to you at the bottom of a climb near the back of the group, and then ride past everybody on the ascent. He “rode 120 miles” yesterday. That is not a good sign; it’s a sign that I’m out of my league. Anybody who rides 120 miles on Saturday and then shows up for a pain-fest on Sunday is to be watched.

And finally, we have Joe. Joe has focused on the ride guarantee – “If not completely satisfied, you can ride the course a second time for FREE!”

He is planning on taking them up on that.

The start is very low key. A sign-in table, where you get a map (this is the 2005 map, and doesn’t show the 8th hill up Hollyhills Drive) and a T-shirt (included in the $25 fee). Some water and sports drink, and that’s about it. And there are only about 20 people milling around. A nice change from the 3000 people that do Flying Wheels and the 9000 that do STP.

We head out to the first hill (Hollyhills), and ride up it. I start slow at the bottom, but stay about 10 seconds behind the group. It’s fairly easy – 7%-8%, and I can compete on those grades. The group crests the top, circles once as I crest, and we descend down. One down.

The second hill (Bloomberg) is a different story. It starts at around 10%, the group gaps me, and then it kicks up to about 13%, and the gap grows. And then it gets steep – 16% (ish) for the last half block.

At this point, I part ways with the group. This is a very good thing – I killed myself on the southern cousin by trying to stay with this same group.

The next three hills – Beckstrom, Nike, and Westhill – blend together. They’re steep; I ride up them trying to keep my HR in the low 150s. The day is perfect, and I talk to the very few riders I run into as we ride between hills.

Which takes us to Finn Hill. I’ve climbed this hill a few times, but never in this direction. This hill is probably the steepest one of the ride. I ride slowly (around 4 MPH), but manage to get up it in one piece without having to tack back and forth across the hill. This is the only place I see people walking.

Norway Hill is next – but it’s Norway from the south side, the easy way up – then Brickyard Road, and back to the finish, my finish-line ice cream, and my Costco cookie.

I don’t have my full stats, but I do know my average speed was 13.4 MPH. There are no real flats on this ride, and there are a lot of stop signs and traffic lights, so don’t expect to be pace-lining it.

Discounting the considerable amount of pain involved in the climbs, this was a very enjoyable ride – I liked it more than its smaller cousin. Very low key – the rest stops were tiny, but had great volunteers, cold water, and the right food. And they had cookies at the end of the ride, when you really want something solid. The course markings were all great.

I do have one small complaint about the signage. When I’m climbing, it’s nice to know how to marshal my effort, and to do that I need to know when the pain will end. On several of the climbs, you’ll finish the really steep section, and then ride at least half a mile on gentle slopes or rolling terrain to get to the “summit”. Delaying the signage until that point diminishes the feeling of accomplishment at finishing the steep section, and makes you think there’s more steep to come. I think Norway is the only exception, where the climb finishes right at the top.

It also makes the stats seem weird. Bloomberg Hill is 440′ high at the true summit, but the climb distance is perhaps 7000 feet, giving it a gradient of only about 6%. But the section on 240th gains 230 feet in 1770 feet, putting it right at 13% – and that’s the average for that section, not the max.
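Checking that arithmetic – average grade is just rise over run:

#include <cstdio>

// Average grade: elevation gained divided by distance traveled.
double GradePercent(double riseFeet, double runFeet)
{
    return riseFeet / runFeet * 100.0;
}

int main()
{
    std::printf("Bloomberg, base to summit: %.1f%%\n", GradePercent(440, 7000)); // ~6.3%
    std::printf("Section on 240th:          %.1f%%\n", GradePercent(230, 1770)); // ~13.0%
    return 0;
}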

It would be nice to have an indication of the steeps on the map (perhaps with some beads of sweat on the route), and a sign that says, “Pain lessens” at the end of each steep section.

I was going to suggest that they get a real website, but that would encourage more riders to participate…

Feel the pain…

August 21, 2006 at 6:36 pm

Dare wrote a post talking about the advisability of making developers do operations.


Which is really part of a philosophical question…


When you’re setting up a software organization, how much specialization should you have, and where should you draw the lines around the responsibilities of the various groups?


Some orgs take a very generalized view of what people own, and others take a very specialized view. I’ve worked in both sorts of environments.


I’ve worked for a startup where, as a developer, I wrote the code, tested the code, built the code, made tapes to ship out to customers, and answered customer support calls.


And I’ve worked in other organizations where the job of developer was to implement what was written down in the spec and pass it off to the QA org. Those orgs typically had structures and policies designed to insulate the developers, so they wouldn’t be distracted.


That eliminated a bunch of the outside noise that they would otherwise have had to deal with, and made them more efficient at getting their development work done.


And how did those efficient organizations fare in their products?


Not very well.


They were reasonably good at shipping software, but their software didn’t turn out to be very good for users. New updates didn’t address issues that users had been hitting. New features were hard to use and/or didn’t hit the sweet spot. They answered questions that users didn’t ask.


All of this was because the developers were out of touch with the people who had to deal with their software. They didn’t feel the pain that users were experiencing setting up their software. They didn’t feel the pain when a bug in the software meant that the user’s business was losing money. And they didn’t understand why users were having trouble using features that seemed obvious to them.


All that happened in DevDiv, and the issues showed up in our customer satisfaction numbers. So, it was decided to let developers (and the testers, and PMs…) talk directly with customers.


There was a fair amount of angst around this decision. It would take up too much dev time. Developers would insult customers. Customers didn’t know enough to give good feedback.


But it turned out that all of those things were wrong. The developers liked to solve problems, and they also liked to help people. They remotely debugged customer issues on other continents. And they listened very closely to the detailed feedback customers gave about how the current software didn’t meet business needs and what was good and bad about future plans.


And the organization adapted what they were planning, so that it addressed the areas that needed addressing.


Distraction is not the enemy. Pain is not the enemy. Pain is to be embraced, because only through feeling pain are you motivated to make it go away.

Other views on programming sins…

August 8, 2006 at 1:31 pm

At the beginning of the sin-tacular, I asked for people to come up with their own lists. And here they are:



My original plan was to comment on some of the individual sins that people listed, but they’re all great – you should go and read them all.


I was a bit intrigued, however, by Chris’ comment (or should that be “Chris’ Comments’ comment”?):


Hey, Eric, what are the 7 Heavenly Virtues of Programmers?


Hmm…

Seven deadly sins of programming – Sin #1

August 3, 2006 at 5:24 pm

So, the time has come for the worst sin.


Just to recap – and so there is one post that lists them all – here are the ones that I’ve covered so far:



Some people have remarked that all of these are judgement calls, and really more a matter of aesthetics than actual sins.


That is true. I didn’t include things like “naming your variables i, j, & k” as sins, because I don’t think that’s a real problem in most of the code I’m likely to have to deal with, and there really isn’t much argument over whether it’s a good idea or not.


It perhaps would have been better to title this series, “Seven things Eric would really prefer that you don’t do in code that he has to work with”, but that is both ungainly and lacking the vitality of a post with the term “sin” in it.


It’s all marketing, you see – or you would if you were actually reading this post, but given my track record on the last six, it’s probably a good idea to cut your losses now and spend your time more productively, like in switching your entire codebase from tabs to spaces (or spaces to tabs…)


When I was a kid, I was fairly interested in WWII. I read a lot of books about it, from general histories about the war, to books on the warfare in the Pacific, to books about the ground war in Europe.


One of the interesting features of the military during that time – one that I didn’t appreciate until much later – was how they balanced the desire for advancement in their officer corps vs the need to only advance the most talented and capable. There were really two schools of thought at the time.


The first school advocated an approach where a lower-ranked officer – say, a colonel – would be promoted to fill a vacancy directly, on the theory that it made the chain of command cleaner, and you’d quickly find out if he had “the right stuff”.


The second group advocated using “field promotions”, in which a colonel would be temporarily promoted to see if he could perform in the job. The theory here was that the service would end up with only the best colonels promoted, and that it was much easier (and better for both the officer and the service) to let a field promotion expire rather than demote an officer already given a full promotion.


Over time, the approach advocated by the second group was borne out as having far better results, and the danger of the first approach was recognized.


Which brings us on our roundabout journey to our final sin:


Sin #1 – Premature Generalization


Last week I was debugging some code in a layout manager that we use. It originally came from another group, and is the kind of module that nobody wants to a) own or b) modify.


As I was looking through it, I was musing on why that was the case. Not to minimize the difficulty in creating a good layout manager (something I did a bit of in a previous life), but what this module does really isn’t that complex, and it has some behavior that we would really like to change.


The problem is that there are at least three distinct layers in the layout manager. I write a line of code that says:


toolbarTable.SetColMargin(0, 10);


and when I step into it, I don’t step into the appropriate TableFrame. I step into a wrapper class, which forwards the call on to another class, which forwards it on to another class, which finally does something.


Unfortunately, the relation between the something that gets done and the TableFrame class isn’t readily apparent, because of the multiple layers of indirection.


Layers of indirection that, as far as I can tell (and remember that nobody wants to become the owner of this code by showing any interest in it or, god forbid, actually making a modification to it…), aren’t used by the way we use the layout manager. They’re just mucking things up…
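To make the indirection concrete, here’s a minimal sketch of the shape of the thing. The layer names and member details are my invention (only TableFrame and SetColMargin appear in the real code), but this is what the debugger walk feels like:

#include <map>

// Innermost class - the only one that does any real work.
class TableImpl
{
public:
    void SetColMargin(int col, int margin) { colMargins_[col] = margin; }
private:
    std::map<int, int> colMargins_;
};

// Middle layer - forwards the call to the implementation.
class TableAdapter
{
public:
    void SetColMargin(int col, int margin) { impl_.SetColMargin(col, margin); }
private:
    TableImpl impl_;
};

// Outer layer - what the caller actually holds.
class TableFrame
{
public:
    void SetColMargin(int col, int margin) { adapter_.SetColMargin(col, margin); }
private:
    TableAdapter adapter_;
};

int main()
{
    TableFrame toolbarTable;
    toolbarTable.SetColMargin(0, 10); // three "step into"s before anything happens
    return 0;
}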


Why is this the #1 sin?


Well, as I’ve been going through the sins, I’ve been musing on how I ranked them. One of the primary factors that I used is the permanence of the sin.


And this one is pretty permanent. Once something is generalized, it’s pretty rare that it ever gets de-generalized, and in this case, I think it would be very difficult to do so.


<Agile aside>


This might be slightly different if there were full method-level tests for the component – one could consider pulling out that layer. But even with that, it would be hard to approach in a stepwise fashion – it could easily turn into one of those 3-hour refactorings that make you grateful that your source code control system has a “revert” feature.


</Agile aside>


Or, to put it another, fairly obvious way:


Abstraction isn’t free


In one sense this seems obvious – when you develop a component that is used by multiple clients, you have to spend a little extra effort on design and implementation, but then you sit back and reap the benefits.


Or do you?


It turns out that you only reap the benefits if your clients are okay with the generalized solution.


And there’s a real tendency to say, “well, we already have the ListManager component, we can just extend it to deal with this situation”.


I’ve known teams where this snowballed – they ended up with a “swiss army knife” component that was used in a lot of different scenarios. And like many components that do a lot, it was big, complex, and had a lot of hard-to-understand behavior. But developing it was an interesting technical challenge for the developers involved (read that as “fun and good for their careers”…)


The problem came when the team found that one operation took about 4 times as long as it should. But because of the generalized nature of the component doing the operation, there was no easy way to optimize it.


If the operation had been developed from scratch without using the “uber-component”, there would have been several easy optimization approaches to take. But none of those would work on the generalized component, because you couldn’t just implement an optimization in one scenario – it would have to work for all scenarios. You couldn’t afford the dev cost to make it work everywhere, and in this case, even if you could, it would cause performance to regress in other scenarios.


(At this point, I’m going to have to have anybody thinking “make it an option” escorted out of this post by one of our friendly ushers. How do you think it got so complex in the first place?)
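To make the bind concrete, here’s a contrived sketch – the names and scenarios are invented, not the team’s actual component. Every behavior the generalized component carries is load-bearing for *some* client, so the one scenario that needs speed still pays for all of it:

#include <algorithm>
#include <vector>

// The hypothetical uber-component: one Sort() serving every scenario.
class ListManager
{
public:
    void Sort(std::vector<int>& items)
    {
        // Scenario A depends on stability; scenario B depends on the change
        // notification. Neither can be dropped without breaking somebody,
        // so every caller pays for both.
        std::stable_sort(items.begin(), items.end());
        NotifyChanged();
    }
private:
    void NotifyChanged() { /* walk the observer list for scenario B */ }
};

// The from-scratch version for the one slow scenario: no observers and no
// stability requirement, so the cheaper sort is fine.
void SortForScenarioC(std::vector<int>& items)
{
    std::sort(items.begin(), items.end());
}

int main()
{
    std::vector<int> items = {3, 1, 2};
    ListManager manager;
    manager.Sort(items);     // the generalized path: everybody's requirements
    SortForScenarioC(items); // the specialized path: only scenario C's
    return 0;
}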


At that point, you often have to think about abandoning the code and redeveloping in the next version. And in the next cycle, this group *did* learn from their mistakes – instead of the uber-component, they built a small and simple library that different scenarios could use effectively. And it turned out that, overall, they wrote less code than before.


HaHA. I make joke.


What they really did was build *another* uber-component that looked really promising early (they always do), but ultimately was more complex than the last version and traded a new set of problems for the old ones. But, building it was a technical challenge for the developers involved, and that’s what’s really important…


How do you avoid this sin?


Well, YAGNI is one obvious treatment, but I think a real treatment involves taking a larger view of the lifecycle costs of abstraction and componentization.


Is that one general component really going to be better than two custom solutions?


(if you didn’t understand the story, look here and see what rank is above a colonel…)