Design for Performance vs. Tune for Performance
A question related to performance came up today on a mailing list that I’m on, and I stated that the right way to do things was to do performance work based on measurements (i.e., don’t engage in speculative optimization).
Another list member wrote back and said, “I understand what you’re saying, but isn’t that at odds with what many books say about designing for performance?”
I wrote an answer, and decided that it would be a good thing to share. This is what I think about writing performant code – what do you think?
*****
Yes, those two perspectives (design up front vs. measure and tune) are somewhat at odds with each other.
To do good performance optimization requires that you understand the performance bottlenecks of the system. There are two ways to understand that: intuition, or measurement.
A considerable amount of performance work is done on intuition. The problem with intuition is that it turns out to be a poor predictor – the performance bottleneck in most systems ends up being somewhere different than where you expect it to be. There may be exceptions to this – consultants who repeatedly implement a similar design for different customers, perhaps – but for most software, you just don’t have a very good idea where the bottleneck is going to be.
While performance work is often enjoyable, it tends to be fairly time-consuming. If you optimize an area that didn’t need to be optimized (i.e., your image shows up in 0.05 seconds instead of 0.07 seconds), you are spending time that you could be using to implement/polish other features, work on the performance of areas that do matter, or finish early. And you’re generally creating code that is harder to read and maintain.
So, that leaves measurement. The important dictum here is “measure early, measure often”. “Measure early” may even mean writing some prototype code to validate some overall assumptions about the system – is it possible to pull data from the database quickly enough over the existing network to support what you need to do? How fast can DirectX render a frame?
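As a rough illustration of what that early measurement might look like, here’s a minimal Python sketch of a throwaway prototype that times one assumption in isolation; fetch_rows is a hypothetical stand-in for the real query or rendering call, and the 100 ms budget is only an example threshold.

```python
import statistics
import time

def fetch_rows():
    """Hypothetical stand-in for the operation whose cost you want to validate,
    e.g. pulling a result set over the existing network or rendering one frame."""
    time.sleep(0.01)  # placeholder work; replace with the real call

def measure(operation, runs=50):
    """Run an operation several times and return min/median/max wall-clock time."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        samples.append(time.perf_counter() - start)
    return min(samples), statistics.median(samples), max(samples)

if __name__ == "__main__":
    fastest, median, slowest = measure(fetch_rows)
    print(f"min {fastest * 1000:.1f} ms, median {median * 1000:.1f} ms, max {slowest * 1000:.1f} ms")
    # Compare against whatever budget the design assumes, e.g. 100 ms per call.
    assert median < 0.100, "prototype already misses the assumed budget"
```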
Many software projects have time devoted to “performance tuning”. This is a “good thing” if you spend the time on an ongoing basis, measuring and improving as necessary. It’s a “bad thing” if you finish your implementation and then start looking at performance, as the kinds of changes that improving performance requires are generally the kinds that you don’t want to be making in the endgame. I’ve seen lots of examples where you get to a point where you a) understand the perf issue and b) understand how to fix it, but can’t because of where you are in the development cycle.
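For the ongoing “measure often” part, a profiler is usually the quickest way to see where the time actually goes. Below is a minimal sketch using Python’s standard cProfile module; parse, summarize, and pipeline are hypothetical stand-ins for your own code paths.

```python
import cProfile
import pstats

def parse(lines):
    """Hypothetical parsing step."""
    return [line.split(",") for line in lines]

def summarize(records):
    """Hypothetical aggregation step."""
    return sum(len(r) for r in records)

def pipeline():
    lines = ["a,b,c"] * 200_000
    return summarize(parse(lines))

if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    pipeline()
    profiler.disable()
    # Show the ten most expensive calls by cumulative time; the hot spot
    # is often not where intuition said it would be.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```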
Using agile methods can help. If you have good unit tests, performance refactorings are less risky, and if you run on a short cycle, it’s harder to put things off. But you need to develop a “performance culture” in the team so that they care about performance all the time.
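One way to support that culture is a performance regression test that runs with the rest of the automated suite. This is only a sketch, assuming a hypothetical build_report function and an example time budget taken from earlier measurements; the actual numbers would come from your own requirements.

```python
import time
import unittest

def build_report(rows):
    """Hypothetical function under test; replace with the code path you care about."""
    return sorted(rows)

class ReportPerformanceTest(unittest.TestCase):
    BUDGET_SECONDS = 0.5  # example budget derived from earlier measurements

    def test_build_report_stays_within_budget(self):
        rows = list(range(100_000, 0, -1))
        start = time.perf_counter()
        build_report(rows)
        elapsed = time.perf_counter() - start
        # Fails loudly if a refactoring pushes the cost past the agreed budget.
        self.assertLess(elapsed, self.BUDGET_SECONDS)

if __name__ == "__main__":
    unittest.main()
```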
Hope that makes sense.

*****
Sure, experimentation is absolutely key in perf work, but one consequence is that experience becomes key too. As you measure and find bottlenecks, you gain experience and get better at designing in a way that is performant from the start.
Another point: performance tuning usually consists of small design changes, but what if a global redesign of the whole application seems to be the only way out? In that case, strong perf-oriented design from the start is a big time-saver. And it doesn’t have to rely on intuition; experience is much better.
Good article on performance. It would be even better with some concrete performance-measurement and tuning methodologies.
All too often, performance is not even a consideration during the initial design and implementation. Performance needs to be treated as a first-class citizen from the very beginning – perf requirements need to be defined just as the feature set needs to be defined (I would add reliability/robustness to that as well).
As others have pointed out, experience goes a long way toward making the right choices from the very beginning.
Some devs never think about perf at all…until the system is so slow that they’re forced to. By then it may require changes to large parts of the system to get performance where it needs to be.
Test early, test often, and include performance-specific unit and integration tests in the automated test suite.
Regarding your statement:
"The problem with intuition is that it turns out to be a poor predictor – the performance bottleneck in most systems ends up being somewhere different than where you expect it to be."
That is largely because you optimize the things you expect to be bottlenecks, so whatever bottleneck remains is naturally somewhere you didn’t expect.
Your statement is akin to saying:
"When you are searching for something, it is always in the last place you look."
I would hope it is the last place I look – I would hope the bottleneck is someplace I didn’t expect.
All that being said, I agree with you that it is easier, faster, more accurate, and less costly (time-wise) to performance-tune as you go along, stubbing as necessary.