Design up-front vs. along-the-way
I had a discussion at lunch yesterday about the right way to do design.
Waaaaaay back when I was in school – when Van Halen’s “Jump” was at the top of the charts – we were introduced to the Waterfall Model of software development. (aside – Royce’s original model was iterative, but it was rarely discussed that way – the typical discussion broke the whole project down into distinct phases).
Anyway, the waterfall model (and some other models as well) has distinct phases. In waterfall, you first collect requirements, then you design, then you implement, etc.
I’ve never found that to be very workable. As Field Marshal Helmuth von Moltke said, “No plan survives contact with the enemy” (actually, what he said was, “No operation plan extends with any certainty beyond the first encounter with the main body of the enemy” (though he likely said something like “Kein Betrieb Plan verlängert mit jeder möglicher Sicherheit über dem ersten Treffen mit dem Hauptkörper des Feindes hinaus” (or at least that’s what he would have said if he spoke English and used Babelfish…)))
That doesn’t mean that you shouldn’t have a design, it just means that you should expect your design to change along the way as you learn more. But without spending time on design, you are likely to make some errors of architecture that will be difficult or costly to fix.
And just going off and starting coding would not have been tolerated by Tony Jongejan, my high school programming teacher.
At least that’s what I used to think. These days, I’m not so sure, for most applications (see caveats later).
The problem is that, unless you’ve already built such a program (some of which is codified in the prototyping and “build one to throw away” schools of thought), you rarely have enough information to make informed choices, and you won’t have that data until you actually write the code.
I think that you’re far better off if you focus on building clean and well-architected (at the class level) code that has good unit tests, so that you’ll have the ability to adapt as you learn more. It’s all about the resiliency of the source to the change that is going to come, and letting the code evolve to do what it needs to do.
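The point about unit tests buying resiliency can be sketched concretely (a minimal example in Python; the function and tests are invented for illustration): the tests pin down *what* the code does, leaving you free to change *how* it does it as you learn more.

```python
def word_frequencies(text):
    """Count how often each word appears, case-insensitively.

    First-pass implementation; the internals can later be replaced
    (say, with collections.Counter) without touching the tests below.
    """
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# The tests state observable behavior only, so a rewrite of the
# implementation that preserves that behavior keeps them green.
assert word_frequencies("Jump jump JUMP") == {"jump": 3}
assert word_frequencies("to be or not to be") == {
    "to": 2, "be": 2, "or": 1, "not": 1}
assert word_frequencies("") == {}
```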
Caveats? Well, there are a few.
If you’re in the library business, you will at some point have to put a stake in the ground and ship the thing, which will severely constrain further modifications. Your library may also be used in ways you didn’t imagine.
I think the right way to solve that is with a little design up front, a lot of prototype use, and then a good review process before you ship it.
So what do you think? What amount of design is right before coding?
Step back from the design and see what drives that… most often it is the lack of proper requirements that leads to an incomplete design. The team then has to augment the design as the requirements become clearer, which makes it a constant challenge to keep the original elegance of the design intact while adding more and more pieces to it. So the bottom line is: try as best as possible to nail down the requirements.
A second useful thing, though a very unconventional one: have you ever thought of writing a test app for an app that you haven’t written yet? Think about it this way: when you sit down to write the test application, it forces you to think about what your target application does. It forces you to think about what you are going to test, which means that areas you didn’t seriously consider might come to mind. Second, again, it drives the requirements of the target application, which helps you achieve greater clarity.
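Here is what that might look like in practice (a hypothetical sketch in Python; `total_due` and its rules are invented): the tests are written first, and writing them is exactly what surfaces the questions the spec never answered.

```python
# Tests written *before* the application code existed.  Writing them
# forced two questions the original spec never answered: what about
# blank lines, and where in the line does the amount live?

def total_due(lines):
    # Implementation written only after the tests below pinned down
    # the answers: blank lines are skipped, and the amount is the
    # last comma-separated field.
    total = 0.0
    for line in lines:
        if not line.strip():
            continue
        total += float(line.rsplit(",", 1)[1])
    return total

assert total_due(["widget,2.50", "", "gadget,1.25"]) == 3.75
assert total_due([]) == 0.0
```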
Somehow I’m getting the feeling I have severely digressed from the original post. Forgive me please if that is the case! 🙂
It all depends on the application. For software controlling pacemakers or nuclear missiles I don’t think "a little design up front and a lot of prototyping" is good enough.
Generally speaking I think unit testing is useful for testing whether a class design is correctly implemented. But how do you validate the class design itself? That would require a thorough, complete, preferably formal specification of the properties that gave rise to the design, both the initial requirements and the additional properties resulting from design choices made along the way. This whole idea still seems to be in its infancy after 50 years of professional programming.

The best we can do is create high-level models of some relevant aspect of our application, e.g. a relational data model to capture the information structure used in a database application, or a workflow process model to capture the order dependencies in some kind of workflow support or task-handling system, and generate the application code from it. But that will rarely give you a finished, complete application, and usually there are many relevant properties that the high-level model or the application code generator isn’t concerned with. So that approach isn’t good enough in general.

We need methods to verify a given application design with respect to many different kinds of properties, and although a lot of academic work has been done in this area, it is something we still do not know how to do. Maybe it’s just too hard?
I believe that in order to create a very polished, user-oriented product, constant feedback is needed in the process. The best process for this type of development needs to accept a certain amount of hand-waving; development and the designers need to agree ahead of time on what the limits of change are. Great architectures should embrace change, because the fact of development is that requirements change. I have yet to encounter a designer who was able to create flawless specs that never changed; the inability to deal with change yields bad products.
I’ve never found the upfront design process to be entirely viable. It’s often the case that the client/user is unable or unwilling to get involved at the design phase, and then becomes more involved after something usable is put into their hands.
The key is to put something into the client’s hands as early as possible, and that something should be both useful, in that it fulfills some of the client’s needs, and should also act as a starting point for further discussion about what its future is, i.e. it should engage the client’s imagination.
I’ve never found the prototype-and-throw-it-away approach to work in the real world. Firstly, no one understands why you want to throw all your hard work away, and secondly, if your prototype is successful by any measure, people will just start using it.
What seems to have worked well on my latest project is a plug-in architecture. A single app within which cooperating plug-ins live, much like the eclipse rich client platform. Clients sense the potential for growth, and that growth can combine increments of addition and revision without too much pain.
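That kind of plug-in architecture can be sketched in a few lines (Python, purely illustrative; the real thing, like the Eclipse platform, adds discovery, lifecycle, and versioning): the host freezes one small contract, and growth happens by adding plug-ins rather than by changing the host.

```python
class Plugin:
    """The contract the host freezes early; each plug-in implements it."""
    name = "base"

    def execute(self, payload):
        raise NotImplementedError

class Host:
    """The host itself never changes when capability is added --
    only its plug-in registry grows."""
    def __init__(self):
        self._plugins = {}

    def register(self, plugin):
        self._plugins[plugin.name] = plugin

    def run(self, name, payload):
        return self._plugins[name].execute(payload)

# Adding a feature is an *addition*, not a revision of the host:
class UpperCase(Plugin):
    name = "upper"

    def execute(self, payload):
        return payload.upper()

host = Host()
host.register(UpperCase())
print(host.run("upper", "jump"))  # prints JUMP
```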
A previous poster suggested that an incremental approach isn’t suitable for pacemakers, and that’s clearly true. However, the requirements for a pacemaker are relatively simple; at least, they can be known ahead of time. For something like a nuclear power station, I would hazard a guess that there is an underlying iterative process going on, even if a waterfall process is prescribed.
There’s a really good book about this that I read a while back. It’s called "The Blind Men and the Elephant: Mastering Project Work" and is subtitled "Producing Meaningful Results from Fuzzy Responsibilities" – it’s by David Schmaltz.
http://www.amazon.com/exec/obidos/tg/detail/-/1576752534/103-4314954-3537466?v=glance
Since I just turned 0x028 years old, I remember when Jump was at the top of the charts.
I’ve never found any one model to be the end all of programming. Generally, I get as many requirements as I can, design off of that, code (including unit tests), go back and change the design, code more, test, change the design, code, and test. Multiple times in the cycle the requirements are modified, and the design and code reflect that.
For the past several years, I’ve worked on a project that gets vague requirements from multiple external sources. We design, code, test, and release in the space of four weeks. After the release, some requirements are updated or clarified, new ones are added, and the cycle starts all over again. For this, a well-architected app is key.
It really depends on the application.
As stated already, applications that require some sort of user interface really need to go through iteration near the final step. You can’t design a perfect UI — you need to test & develop the UI. Hence, more "design time" at the end.
However, for other applications that don’t have a user interface component, you don’t need the design time at the end.
If you go to the other end, and have some embedded code that may or may not include an RTOS, well, then, some prototype/proof-of-concept work coupled with a detailed design would probably work out ok.
You can’t apply the same process that you use for a RAD front end to a database on a piece of avionics equipment.
Since this is a Microsoft blog though, I’m going to assume that the focus is the former. 🙂
First, to resonate with a couple of previous posts, no matter how you try to put it, the iterative nature of the process is unavoidable. Absence of iterations implies attainability of perfection, and we all know that’s impossible. Whichever great deed you set out to do, you WILL have iterations. The only difference is whether the circles of iterations touch the end customer, and how close you want, or are able, to get to the said perfection. In the case of pacemakers, believe me, there are countless prototypes, test cases, and "back-to-the-drawing-boards", with only the final circle resulting in product acceptance.
The waterfall model is not a flawed process. It’s simply a naive, cropped view of the iterative process. Forgetting that the boomerang circles back around almost always results in a painful experience.
Unfortunately, the neat circle (or "nautilus" spiral, depending on who you read) of the iterative process is also misleading: in reality, there are multiple circles, nested and sometimes overlapping, as small as individual code unit cycles and as large as a company’s continual refinement of strategic direction (Buddhists would go even further and fit all of them into the wheel of life :).
Thus, the complexity of process management (project management) is not in realizing its iterative nature, but in creating an accurate (features) and realistic (money) vision of the process and how it fits into the larger picture (customer). Software development is trivial within the process that’s well formulated and understood by all participants.
I have been reading Craig Larman’s 3rd edition of Applying UML and Patterns, and it has a lot of good information on iterative development, evolutionary requirements, embracing change as well as good object oriented analysis and design. Here are a couple of links on my blog about it:
http://davidhayden.com/blog/dave/archive/2004/11/06/602.aspx
http://davidhayden.com/blog/dave/archive/2004/11/14/624.aspx
An iterative approach to development, which is preached by all the agile methods, allows you to strike a balance between design and providing deliverables to the customer that satisfy his/her need to see progress and provide quality feedback. The book gives a lot of interesting statistics as to the failures of the waterfall method and why these failures came about.
To make sure you get a sound architecture in place so you don’t have to go through a laborious redesign half-way through the project, the book recommends focusing on a combination of requirements based on Risk-Driven Development and Client-Driven Development in the early project iterations. Risk-Driven Development recommends focusing on those high risk functional requirements that are very critical to the core of the architecture. Client-Driven Development focuses on those functional requirements perceived to be of high value to the customer. Delivering on these combinations will satisfy those needs important to you as a developer and those needs important to the customer.
I can’t recommend the book enough. It has helped me out not only in my processes but also my object design skills.
I don’t believe in the grand up-front design. I have yet to see it work. But I do believe in a simple plan and small mini-design sessions just before I code.
When the requirements change, the plan changes, and so does the code. What I discover all the time is how much easier it is to change well-refactored code.
So I guess it pays to gold-plate just a little, but only where you need it.
In response to the design of pacemakers and nuclear missiles.
Unit testing is only half the equation; you still need comprehensive acceptance tests. Rather than have comprehensive requirements, I think for these systems you need to have comprehensive acceptance tests, which are really just requirements expressed as automated tests. You can still incrementally develop the design. What you want to do is spend a lot more time validating the end product. See http://www.testing.com/, which discusses how to do good testing using agile techniques.
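"Requirements expressed as automated tests" can be sketched like this (Python; the rate-limiting rule is invented and stands in for a real device requirement): the acceptance test reads like the requirement itself and exercises the component as a whole, while unit tests cover the pieces underneath.

```python
# Invented safety requirement, stated as an executable acceptance test:
# "The controller shall never command a pacing rate outside 40-180 bpm."

def clamp_rate(requested_bpm, low=40, high=180):
    """Pacing-rate governor (illustrative only, not a real device algorithm)."""
    return max(low, min(high, requested_bpm))

# The acceptance test sweeps the whole input range, including the
# boundaries, so the requirement is checked rather than assumed.
for requested in range(-100, 400):
    commanded = clamp_rate(requested)
    assert 40 <= commanded <= 180, f"requirement violated at {requested}"
```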
>>I don’t believe in the grand up-front design. I have yet to see it work.
Ok, so it’s apples and oranges, but do you live in a house? Work in an office?
It’s sad, in a way, that software "buildings" are so easy and inexpensive to destroy and rebuild. One would never think of starting to build an office building without comprehensive blueprints. Yet, 99.9% of all building projects I’ve ever seen have been successful.
Does anyone really believe that you could build a house, a space shuttle, a car, by sitting down with a stack of sheet metal, a wrench, and a bucket of bolts without blueprints? Call me crazy, but I don’t want to ride in an airplane built in this manner! Why do we feel confident betting our financial futures on software built without a solid design?
I’m not saying that changes shouldn’t be made after the product is released. Software IS more malleable than buildings, and we would be idiots not to take advantage of that. However, that doesn’t give us the right not to plan before we start spewing code out of the fire hose!
Software development is a relatively new endeavour for mankind. We’ve been building houses for thousands of years; software development is only a few decades old. I believe our industry is not yet mature enough to be able to rigorously produce the proper "blueprints" for all software applications yet. Give us a few centuries, and maybe we’ll get it right.
In reverse order:
"…doesn’t give us the right not to plan before we start spewing code…"
Please read my post again. I said I do believe in the plan. But I don’t believe we can plan for more than we know. We shouldn’t plan for requirements which are not there, we shouldn’t plan for undetailed requirements, and we shouldn’t detail them before we implement them, because requirements are known to change.
But that’s part of the bigger picture. I was referring to the activities related to code design.
"Yet, 99.9% of all building projects I’ve ever seen have been successful."
Can you define successful, please? Most buildings get finished, but that’s because it’s more expensive to terminate a building project than a software project. And just because the building is finished doesn’t mean it’s useful. And if you cheat, like someone did, it won’t stand the earthquake everyone knows will come.
"Most buildings get finished, but that’s because it’s more expensive to terminate a building project than a software project"
I disagree with that. Unless you’re offshoring the entire project to a cheaper-but-equally-capable labor country (and even then it’s not cheap to terminate), project terminations are hard decisions.
But then again, comparing software development to constructing a building is a bad idea. They’re definitely not the same.
By the way, how do you design something upfront when you’ve got new technology involved in that equation? Say your PM says, "Do this but do it using web-services-based SOA"… and say you haven’t really played in that technology space very much. How are you going to design without fully understanding the capabilities and limitations of your implementation platform?
I get so frustrated with people who whip out the building analogy.
Yeah, constructing software is like constructing a building. If you were putting it on Jupiter. And constructing it with entirely alien materials and tools.
Architects and construction workers have the luxury of working within the laws of physics. Software developers don’t. We work within the constraints of "some other guy’s idea of how things should work", which is considerably less.. reliable.
Here is my take.
Requirements are truly the heart of programming. On one hand they provide an objective way of assessing how successful a project was. So, at any time during the project one should strive to have clear requirements. These should include what you know, but also what you DON’T KNOW.
The big design up-front doesn’t work in practice because programmers try to do too much: they are not aware of the limited size of their own skulls (<a href="http://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EWD340.html">EWD340</a>). Instead of trying to specify the whole thing in one go you should try to devise a hierarchical design. A simple example was mentioned above: the plugin architecture is an instance of a 2-level design. When a project is very big 2 levels might prove not to be enough. Also, the binary compatibility implied by the word "plugin" is not essential as far as design goes. Nevertheless, the hierarchical design approach can be viewed as a generalization of the plugin architecture.
At a certain point in time you do the design for one level (going top->down: note that, as the plugin example illustrates, this does NOT mean that you don’t have a functional program at all times!). You should take into account the "what you know" part of the requirements and construct module specifications/interfaces from it. The "what you DON’T know" part should have the form: x is either x1, x2, …, or xn. (a trivial example: "we don’t know at this point what exchange rate the client wants to use here, but with high probability it is one of these…"; a more complex example: "we don’t know if the client wants to run this translator and get a result that compiles and runs, or only wants something readable and reasonably semantically resembling the original") You should take the "what you don’t know" part into account too. You must make sure that whichever the choice turns out to be, you won’t need to change what you have already specified but merely ADD to the specification.
It might sound strange so I’ll try to project what I said above on the plugins scenario, which I found to be very suitable (for this purpose and in practice in general :)). You should design the "plugin architecture" so that it does not need to change. When something needs to be added you just add a new plugin. Its design is done recursively.
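The exchange-rate example above can be made concrete (a Python sketch; names and figures are invented): the "what we don’t know" is captured as an interface, and each candidate answer becomes an implementation you ADD, leaving what was already specified untouched.

```python
from abc import ABC, abstractmethod

class RateSource(ABC):
    """What we *know*: some exchange rate will be needed.  *Which* rate
    is the open question, so it lives behind this interface."""
    @abstractmethod
    def rate(self, currency):
        ...

# Each candidate answer to the open question is an addition, not a change:
class ClosingRate(RateSource):
    def rate(self, currency):
        return {"EUR": 1.25, "GBP": 1.50}[currency]  # invented figures

class FixedContractRate(RateSource):
    def __init__(self, table):
        self._table = table

    def rate(self, currency):
        return self._table[currency]

def convert(amount_usd, currency, source):
    # Written once against the stable interface; unaffected when the
    # client finally picks (or later changes) the rate policy.
    return amount_usd * source.rate(currency)

assert convert(100, "EUR", ClosingRate()) == 125.0
assert convert(100, "GBP", FixedContractRate({"GBP": 2.0})) == 200.0
```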
The real problem is that people DO NOT ALWAYS KNOW WHAT THEY DON’T KNOW. It is yet another instance of the "limited skull size" principle :). When you realize that there was something you didn’t know that you didn’t know, you need to acknowledge the fact quickly and refactor/change the design as soon as possible. Ignoring the situation is a sure recipe for disaster.
But this doesn’t change the fact that you should strive to have clear requirements. Collect everything you know AND everything you know you don’t know. For this purpose using a language as clear as possible (be it even a graphical language) is indispensable.
A blog post of mine somehow related to this discussion: <a href="http://rgrig.blogspot.com/2004_01_01_rgrig_archive.html">here</a>.
PS: I hope the link html tag works..
Creating software is like creating blueprints. Running the install-program is like building the house…
It would surprise me if MS had to customize the INSTALL of (for instance) Office for each customer, or if a (building) architect didn’t have quite a few iterations with the customer.
In this light: I really can’t see the difference between building houses and building software.
Steinar: Try to live in a piece of software and you will get it 🙂
BTW, building is actually done with F7, ctrl-shift-B, make, or something similar, not with install. See http://www.bleading-edge.com/Publications/C++Journal/Cpjour2.htm
A great article.
Radu: I do of course know that in software-terminology building is the process of compiling the source to program files. 🙂
My point however is that the process of creating software can’t be mapped one to one to creating buildings. When building a house you are in a way just ‘installing’ the blueprint. It’s *creating* the blueprints that is the complex job. – The job of the architect. 🙂
Steinar: Of course I knew that you knew 🙂 It was partly a joke and partly an attempt to point out that "to build" is often used in software.. and the choice of words is not completely haphazard.
As somebody who designs, develops AND consumes my own financial derivatives software, my own experience is that it’s impossible to specify stringent requirements up front. The customers (myself plus others) are constantly evolving the requirements as their business environment changes. I guess that’s really my main finding from my own experiences: that the underlying business logic for many parts of an application tends to evolve randomly as time progresses.
For me, architecture choices tend to be made so that I commit the minimum amount of lock-in to the current business logic and architecture. Codebases just don’t last 20 years.
I’ve found out that the building (actually, more of a construction) analogy works for me sometimes, but not the usual (i.e. notoriously wrong) one: design -> build -> forget about it & relax & enjoy.
There are moments when it helps me to think of software in terms of joints, ripples, cracks in structure. In some contexts it fits just fine. Sure, with usual limits to every analogy.
Two different tools for two different problems.
Sometimes the requirements are sufficiently knowable that you can use big-design-up-front approach successfully. If you truly know the requirements, BDUF works really well (been there, done that, loved it).
Sometimes the true requirements will not be known, can not be known, until very near (or after) the customer gets the product. In that case, BDUF is a waste of time (been there for a month, not loving it at all).
It can be very hard to know whether you truly understand the requirements though – "understand" in the sense that what you think you need to create really is what the end user wants.
My favorite quote ever: "It’s just what I asked for, but it’s not what I want."