Chris Flaat and Scrum

April 7, 2005 at 12:34 pm

Adding an overdue link.


I don’t think I’ve linked to Chris’ blog before. I worked with Chris in my early VC++ days when we were both on the VC++ QA team.


Chris is a developer now, and has been writing about his experiences with Scrum (not SCRUM), and about integrating it into the larger process in his division.

Design for Performance vs. Tune for Performance

March 23, 2005 at 10:55 am

A question related to performance came up today on a mailing list that I’m on, and I stated that the right way to do things was to do performance work based on measurements (i.e., don’t engage in speculative optimization).

Another list member wrote back and said, “I understand what you’re saying, but isn’t that at odds with what many books say about designing for performance?”

I wrote an answer, and decided that it would be a good thing to share. This is what I think about writing performant code – what do you think?

*****

Yes, those two perspectives (design up front vs. measure and tune) are somewhat at odds with each other.

To do good performance optimization requires that you understand the performance bottlenecks of the system. There are two ways to understand that: intuition, or measurement.

A considerable amount of performance work is done on intuition. The problem with intuition is that it turns out to be a poor predictor – the performance bottleneck in most systems ends up being somewhere different than where you expect it to be. There may be exceptions to this – consultants who repeatedly implement a similar design for different customers, perhaps – but for most software, you just don’t have a very good idea where the bottleneck is going to be.

While performance work is often enjoyable, it tends to be fairly time consuming. If you optimize an area that didn’t need to be optimized (e.g., your image shows up in 0.05 seconds instead of 0.07 seconds), you are spending time that you could be using to implement/polish other features, work on the performance of areas that do matter, or finish early. And you’re generally creating code that is harder to read and maintain.

So, that leaves measurement. The important dictum here is “measure early, measure often”. “Measure early” may even mean writing some prototype code to validate some overall assumptions about the system – is it possible to pull data from the database quickly enough over the existing network to support what you need to do? How fast can DirectX render a frame?
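To make “measure early” concrete, here’s a rough sketch of the kind of throwaway prototype measurement I have in mind, using .NET’s Stopwatch. The LoadCustomerRows method and the iteration count are made up – they stand in for whatever assumption you’re trying to validate:

```csharp
using System;
using System.Diagnostics;

class DataLoadPrototype
{
    // Hypothetical stand-in for “pull the data over the existing network”.
    static void LoadCustomerRows()
    {
        // ... issue the real query against the real database here ...
    }

    static void Main()
    {
        const int iterations = 50;

        Stopwatch watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            LoadCustomerRows();
        }
        watch.Stop();

        Console.WriteLine("Average load time: {0:F1} ms",
            watch.Elapsed.TotalMilliseconds / iterations);
    }
}
```

The point isn’t the harness – it’s that you get a real number early enough to change the design if the number is bad.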

Many software projects have time devoted to “performance tuning”. This is a “good thing” if you spend the time on an ongoing basis, measuring and improving as necessary. It’s a “bad thing” if you finish your implementation and then start looking at performance, as the kinds of changes that improving performance requires are generally the kinds that you don’t want to be making in the endgame. I’ve seen lots of examples where you get to a point where you a) understand the perf issue and b) understand how to fix it, but can’t because of where you are in the development cycle.

Using agile methods can help. If you have good unit tests, performance refactorings are less risky, and if you run on a short cycle, it’s harder to put things off. But you need to develop a “performance culture” in the team so that they care about performance all the time.
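One way to keep that culture honest is to fold a performance assertion into the normal unit test run, so that a regression shows up the day it happens rather than in the endgame. This is just a sketch – ReportGenerator and the 200 ms budget are invented for illustration, and the test assumes NUnit:

```csharp
using System.Diagnostics;
using NUnit.Framework;

// Stand-in for whatever operation your team has given a time budget.
public class ReportGenerator
{
    public void GenerateSummary()
    {
        // ... real work goes here ...
    }
}

[TestFixture]
public class ReportPerformanceTests
{
    [Test]
    public void GenerateSummary_StaysUnderBudget()
    {
        ReportGenerator generator = new ReportGenerator();

        Stopwatch watch = Stopwatch.StartNew();
        generator.GenerateSummary();
        watch.Stop();

        // Fails during the regular test run, while the change that caused it is still fresh.
        Assert.Less(watch.ElapsedMilliseconds, 200,
            "GenerateSummary exceeded its 200 ms budget");
    }
}
```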

Hope that makes sense.

Flow, coding, and math

March 9, 2005 at 9:09 am

Rory wrote a post entitled “Whole Brain Coding” a couple of days ago, in which he asserts that coding requires both the left and right halves of the brain, the left brain working on the sequential and analytical parts of the task, and the right brain working on the intuitive and holistic parts (reverse these if you live in the southern hemisphere…)

When things are going well and you’re in the “flow”, my guess is that you’re seeing involvement of both sides of the brain, but I’m not sure that that’s all there is to it (I’m not asserting that Rory said that). I did a few searches to see what research has been done on the “flow”, but didn’t come up with much. There is:

In the Zone: A Bio-Behavioristic Analysis of Csikszentmihalyi’s Flow Experience

but I have a hard time parsing sentences like:

Primarily, the decision making process behind such behaviors as disparate as creative thinking, problem solving, or walking to the store are all dependent upon and influenced by somatic or neural activation variables that are mediated by abstract environmental contingencies.

I think that’s saying, “The way we make decisions is dependent on what’s going on around us”, which makes me happy that I’m not a psychologist who has to read and write papers like that.

There’s also

Understanding the Psychology of Programming

which is a light intro to the topic.

On the whole math vs. coding thing, though I have a math minor and enjoyed my math classes up through linear algebra and multivariable calculus, I ended up in software for two reasons:

  1. There’s more opportunity in it
  2. Coding is way easier than math for me

 

Cyrus takes on Hungarian Notation

March 9, 2005 at 8:35 am

Cyrus takes on Hungarian Notation

I’ve written a fair bit of code that uses Hungarian and a fair bit that doesn’t. I think Hungarian works okay for C code, but when you get into the object-oriented world, you can’t really come up with prefixes that are both meaningful and short, so I currently prefer the .NET style of naming.
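To illustrate the difference (the identifiers below are invented, not from Cyrus’ post):

```csharp
// A stand-in class type, to show the prefix problem for your own types.
class CustomerList { }

class NamingStyles
{
    // Hungarian-style: the prefix tries to encode the type.
    int cCustomers;         // “c” = count
    string szConnection;    // “sz” is a C-ism that doesn’t mean much in .NET
    CustomerList lstCust;   // what short, meaningful prefix do you pick here?

    // .NET-style: say what the thing is; the type system already knows what type it is.
    int customerCount;
    string connectionString;
    CustomerList customers;
}
```

For built-in types the prefixes are merely redundant; for your own classes they stop being short, meaningful, or both.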

 

Excellent

January 10, 2005 at 9:26 pm

Note that you’ll have to speak the title using a Jeff Spicoli voice (as performed by Sean Penn in Fast Times…). Or perhaps from Bill and Ted…

I’m spending the week becoming a more excellent developer, in a five-day course for developers. “Excellence” has been big business since Tom Peters’ 1982 smash hit In Search of Excellence (which was followed by “A Passion for Excellence” in 1985, and then, finally, by “A Messy Divorce from Excellence” in late 1989).

Classes like this can be hit or miss – they cover a lot of ground that you’ve seen before – but this one has a lot of audience participation, which always brings up a few topics of interest.

Interestingly, I sat next to a guy who used to work in the same building I did back at Boeing Computer Services in the late ’80s, though we had never met.

Coping with Burnout…

December 12, 2004 at 9:47 pm

In a comment, Todd wrote:

http://blogs.msdn.com/ericgu/archive/2004/12/09/278797.aspx#278985

I have a few thoughts.

First, it takes a while to get burnt out, so you can’t expect to get un-burnt-out in a week.

For me, there have been a few things that have worked well:

  1. Change the situation that led to your burnout. In my case, it was mostly created by my job. If you can’t address the root cause, it’s hard to get better.
  2. Make a list of the things that you haven’t been getting done that you’d like to get done.
  3. Work at the list, slowly.

So what other advice can you give?

Design up-front vs. along-the-way

December 1, 2004 at 9:36 am

I had a discussion at lunch yesterday about the right way to do design.

Waaaaaay back when I was in school – when Van Halen’s “Jump” was at the top of the charts – we were introduced to the Waterfall Model of software development. (aside – Royce’s original model was actually iterative, but it was rarely discussed that way – the typical discussion broke the whole project down into phases).

Anyway, the waterfall model (and some other models as well) has distinct phases. In waterfall, you first collect requirements, then you design, then you implement, etc.

I’ve never found that to be very workable. As Field Marshal Helmuth von Moltke said, “No plan survives contact with the enemy” (actually, what he said was, “No operation plan extends with any certainty beyond the first encounter with the main body of the enemy” (though he likely said something like “Kein Betrieb Plan verlängert mit jeder möglicher Sicherheit über dem ersten Treffen mit dem Hauptkörper des Feindes hinaus” (or at least that’s what he would have said if he spoke English and used Babelfish…)))

That doesn’t mean that you shouldn’t have a design; it just means that you should expect your design to change along the way as you learn more. But without spending time on design, you are likely to make some errors of architecture that will be difficult or costly to fix.

And just going off and starting coding would not have been tolerated by Tony Jongejan, my high school programming teacher.

At least that’s what I used to think. These days, I’m not so sure, for most applications (see caveats later).

The problem is that, unless you’ve already built such a program (an observation codified in the prototyping and “build one to throw away” schools of thought), you rarely have enough information to make informed choices, and you won’t have that information until you actually write the code.

I think that you’re far better off if you focus on building clean and well-architected (at the class level) code that has good unit tests, so that you’ll have the ability to adapt as you learn more. It’s all about the resiliency of the source to the change that is going to come, and letting the code evolve to do what it needs to do.
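Here’s a small sketch of what I mean – PriceCalculator and the NUnit test are made up, but the idea is that the test pins down the behavior callers depend on, which leaves the internals free to change as you learn more:

```csharp
using NUnit.Framework;

// Hypothetical class whose implementation is expected to evolve.
public class PriceCalculator
{
    public decimal TotalFor(decimal unitPrice, int quantity)
    {
        decimal total = unitPrice * quantity;
        return quantity >= 10 ? total * 0.95m : total;   // 5% quantity discount
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void QuantityDiscountAppliesAtTenItems()
    {
        PriceCalculator calculator = new PriceCalculator();

        // The observable behavior is locked down here; how TotalFor computes it
        // can be redesigned as often as necessary without breaking callers.
        Assert.AreEqual(95.00m, calculator.TotalFor(10.00m, 10));
    }
}
```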

Caveats? Well, there are a few.

If you’re in the library business, you will at some point have to put a stake in the ground and ship the thing, which will severely constrain further modifications. Your library may also be used in ways you didn’t imagine.

I think the right way to solve that is with a little design up front, a lot of prototype use, and then a good review process before you ship it.

So what do you think? What amount of design is right before coding?

Coming soon to a medicine cabinet near you…

September 16, 2004 at 12:36 pm

Let’s see… band aids… Motrin… Ah, here they are!

Best “Improve Your Dev Skills” books…

August 20, 2004 at 9:43 am

I’ve been looking through a few of my “Improve your Dev Skills” books:

What are your favorite books in this genre? Why?

Jay and Properties…

April 29, 2004 at 9:55 pm

Jay wrote a post entitled Properties? Not my bag, baby.


When I first started writing C# code, I used properties for everything. But recently, I’ve felt that I was wasting a lot of time writing trivial properties. Yes, I know that in Whidbey I’ll be able to use expansions to write them easily, but that still means that I have to deal with the property bodies cluttering up my code.


So, that got me thinking about whether it makes sense to be writing properties in the first place. After a bit of thought, here’s my current position:


Properties are a great thing for component libraries. There are certainly cases where you would want the future-proofing and decoupling that properties give you.


But when you’re working on a single project that gets built all at once, I don’t think you’re getting any future-proofing benefits, and you have to pay the “property tax” the whole time.
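For illustration (Order and LeanOrder are made-up names), this is the “property tax” I’m talking about – in a single project that gets built all at once, I don’t see what the first version buys you over the second:

```csharp
// The “property tax”: boilerplate for a value with no interesting logic behind it.
public class Order
{
    private int itemCount;

    public int ItemCount
    {
        get { return itemCount; }
        set { itemCount = value; }
    }
}

// The project-internal alternative: just expose the field.
public class LeanOrder
{
    public int ItemCount;
}
```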


This may be heretical, since “use properties” has been the common guideline.


What do you think?