Unit testing through the UI

October 9, 2007 at 11:24 am

One of my readers asked whether there were any UI unit testing tools.


While I have seen some ASP.net tools like this, in general I’d expect that you would unit test a UI by making the UI a very thin layer (one that doesn’t really need testing), and writing the unit tests to talk to the layer underneath.


Though I haven’t had the opportunity to try it on a full project, I think that Presenter First has a lot going for it.
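To make the “thin UI layer” idea concrete, here’s a toy sketch (in Python, with invented names, so take it as an illustration rather than a recipe): the view is dumb enough that it doesn’t really need testing, and the presenter holds all the logic, so it can be unit tested without any real UI in the loop.

```python
# Sketch of the "thin view" idea: all decision-making lives in a
# presenter, so the tests never have to drive real UI widgets.

class SaveView:
    """Deliberately thin; a real one would wrap toolkit widgets."""
    def __init__(self):
        self.status_text = ""

    def show_status(self, text):
        self.status_text = text


class SavePresenter:
    """The logic under test; it talks to the view through simple calls."""
    def __init__(self, view, store):
        self.view = view
        self.store = store

    def save_clicked(self, document):
        if not document:
            self.view.show_status("Nothing to save")
            return
        self.store.save(document)
        self.view.show_status("Saved")


class FakeStore:
    """Stand-in for persistence, so the test stays headless."""
    def __init__(self):
        self.saved = []

    def save(self, document):
        self.saved.append(document)


view = SaveView()
presenter = SavePresenter(view, FakeStore())
presenter.save_clicked("my document")
print(view.status_text)  # -> Saved
```

The point is that the test exercises everything interesting, and the only untested code is the trivial wiring between real widgets and the view interface.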

Beautiful code…

August 14, 2007 at 11:33 am

O’Reilly publishes Beautiful Code



Jonathan Edwards counters with a beautiful explanation.


Now, I haven’t read the new book, but I have a strong resonance with what Edwards wrote. You should definitely read the whole thing, but a few sentences jumped out at me.


A lesson I have learned the hard way is that we aren’t smart enough. Even the most brilliant programmers routinely make stupid mistakes. Not just typos, but basic design errors that back the code into a corner, and in retrospect should have been obvious.


and


It seems that infatuation with a design inevitably leads to heartbreak, as overlooked ugly realities intrude. 


Precisely.


If there’s anything that agile says, it says that we should build things simply and with an eye to revision, because not only are we “just not smart enough”, there are also too many unknowns when we start.


The problem with “beautiful code” as a concept is that it is closely related to “beautiful design”, and I’ve mostly come to the conclusion that any design effort that takes more than, say, 30 minutes is a waste of time.


The concept also gets confused about what the goal of software is anyway. The goal is not to have beautiful, elegant, code. The goal is to have *useful* code that does what you need it to do.


YAGNI and unit tests…

June 28, 2007 at 12:48 pm

Thanks for your comments.


I decided to go ahead and write the unit tests for that layer, both because I knew what not writing them would be like, and I wanted to play with wrapping/mocking a system service.


I also decided – as some of you commented – to do the right thing and encapsulate it into a class. That would have happened long ago, but though I’ve written it several times, I don’t think I’ve ever duplicated it within a single codebase – and the codebases where I did write it are pretty disparate. Now, I have something where I could at least move the source file around…


Writing tests for this was a bit weird, because in some sense what I needed to do was figure out what the system behavior was, break that down, write a test against my objects, and then write mocks that allowed me to simulate the underlying behavior.


So, for example, I created a test to enumerate a single file in a single directory, wrote a wrapper around DirectoryInfo, and then created a mock on that object so I could write GetFiles() to pass back what I wanted. And so on with multiple files, sub-directories, etc.
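For the curious, here’s roughly what that experiment looks like, translated into Python (the original was .NET’s DirectoryInfo; the names below are made up for the sake of the example):

```python
# A wrapper around the file system plus a mocked test against it -
# a rough Python analogue of the experiment described in this post.
import os
from unittest.mock import Mock


class DirectoryWrapper:
    """Thin wrapper around the file system, so tests can substitute a mock."""
    def __init__(self, path):
        self.path = path

    def get_files(self):
        return [f for f in os.listdir(self.path)
                if os.path.isfile(os.path.join(self.path, f))]


def enumerate_files(directory):
    """The 'real' code under test: package up what the wrapper returns."""
    return sorted(directory.get_files())


# The test: a mock stands in for the file system.
mock_dir = Mock()
mock_dir.get_files.return_value = ["config.xml"]
assert enumerate_files(mock_dir) == ["config.xml"]
# Note that DirectoryWrapper.get_files() itself - the only code that
# touches the real file system - is never exercised by this test.
```

Notice what the final comment points out: the mock replaces exactly the code that does the real work, which is the problem this post goes on to describe.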


So, I did that, went to write the little bit of code that I needed in the real version (to use the real GetFiles() calls and package the data up), hooked it up to my real code, and it worked.


*But*, when I went back and looked at the code, I found that what I had really done was create two sets of code. There was the real code that called the system routines and shuffled the data into my wrapped classes. And then there was my mock code that let me control what files and directories got returned. But there wasn’t any common code that was shared.


So, my conclusion is that I really didn’t get anything out of the tests I wrote: they only tested the mocks rather than the real code, because the only real code was the code that called the system functions.


In this case, TDD didn’t make sense, and I will probably pull those tests out of the system. TDD may make sense at the next level up, where I’ve written a new encapsulation around directory traversal, but it seems like the only code there is hookup code.


So, the result of my experiment was that, in this case, writing the tests was the wrong thing to do.

Does YAGNI ever apply to tests?

June 27, 2007 at 1:03 pm

I’ve been writing a small utility to help us do some configuration setup for testing. It needs to walk a directory structure, find all instances of a specific XML file, and then make some modifications to the file.
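In Python terms, the overall shape of the utility is something like this (the file name and the actual modification are invented for the example; the post doesn’t give those specifics):

```python
# Walk a directory tree, find every instance of a given XML file,
# and apply a modification to each one found.
import os
import xml.etree.ElementTree as ET


def update_config_files(root_dir, filename="config.xml"):
    """Return the paths of every matching file that was modified."""
    updated = []
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        if filename in filenames:
            path = os.path.join(dirpath, filename)
            tree = ET.parse(path)
            # The "make some modifications" step - here, a made-up
            # attribute change standing in for the real edit.
            tree.getroot().set("environment", "test")
            tree.write(path)
            updated.append(path)
    return updated
```

The directory-walking half is exactly the part in question: it leans entirely on `os.walk` and the live file system, which is what makes it awkward to isolate for testing.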


I TDD’d the class that does the XML file stuff, and I’m confident that it’s working well. I’m now going to do the class that walks the directory structure and finds the files.


And there is my dilemma. I don’t know if I’m going to do TDD on that.


I know exactly how to write it, I’ve written it before, and my experience is that it’s code that never changes or breaks. And figuring out how to write tests for it is going to be somewhat complex, because I’ll have to isolate out the live file system parts.


So, I’ve already decided what I’m going to do, but I’m curious what you think. Does YAGNI apply to test code, or is that the first step to the dark side?

Reducing the risk of co-development

March 1, 2007 at 11:56 am

Software is always built on other software – your dependencies – and the dependencies that you choose have a considerable influence on your success. Choose the existing technology that you know, and you have good predictability, but you might not produce a great product, or it might take too long to finish. Choose a hot new technology, and it’s harder to predict what will happen. Maybe the benefits will be great and you’ll finish faster (ASP.NET vs old ASP…). Maybe things won’t be as good as promised (insert the name of a technology that you were “disappointed with” in the past).


Or maybe it’s not finished when you need it. Welcome to the wonderful world of co-development, where you are depending on features that aren’t implemented yet. How do you reduce the risk of features/APIs not showing up, or being substantially different than you expected?


Well, the first (and best) way to reduce this risk is simply not to do it. If you only depend on features and APIs that are currently available, you know they are there.


If you can’t wait a full release cycle, then perhaps you can take some sort of incremental approach, where you plan to use feature <X> but don’t *commit* to using it until it’s actually there. My preference would be an agile approach (such as Scrum), so that when feature <X> shows up, it’s actually finished and working.


That’s really just the same thing I said first – don’t take on the dependency until something is done.


But what do you do if you really need that feature – if your plans would be derailed unless the other team finishes the feature? I have four things in mind that can help:


Accept the Risk


First, you have to accept that you are taking on risk. Software scheduling beyond a period of a month or two is not only an unsolved problem, I believe it isn’t a tractable problem. Decades of project slippage have demonstrated that, and we should just embrace the uncertainty involved rather than trying to “do better”.


Note that while there are teams out there that can give good estimates for tasks in the next month (and perhaps up to two months), you can’t assume that you are dealing with such a team. There are many teams who are essentially unpredictable even in short timeframes.


Understand the Risk


Second, you need to understand the risk. This will require you to work with the team that’s building whatever you are needing. You need to understand where your feature ranks in the things that they are doing. It might be a feature that they absolutely have to have to ship, or it might be a “nice to have” feature. You need to understand this. It’s closely related to how close your requested feature is to their main charter. You do not want to be the outlier group amongst all their clients, the customer they don’t want to have.


You also need to understand when they’re building the feature. If it’s very early in the cycle, then it’s likely to get done. If it’s late in the cycle, it’s less likely to get done.


If they don’t think of features in this way and/or are working on features in parallel, it’s more risky.


It would also help to understand what development methodology they use, and their history of being done when they guess they will be done.


Plan for Mitigation


What are you going to do if things don’t work out, if the feature is late or is cut? Even in the best organizations, people get married, are out for months on medical leave, have accidents, or leave to form their own companies.


What is your group going to do when this happens?


Track the Risk


In an ideal world, the group you depend on would give you regular updates about the feature you’re waiting for. And some groups do do this, but it’s your risk, and you’re going to need to stay on top of it. The details of that depend on the groups involved.


Accept the Outcome


If things work, great. But if the feature doesn’t show up, remember that you were the one who accepted the risk in the first place.  


 

The siren song of reuse…

December 5, 2006 at 1:08 pm

We’ve been doing some planning ’round these parts – planning that I unfortunately can’t talk about – but it’s led to a fair amount of discussion about architecture, both inside the team and outside the team.


Which has got me thinking about reuse.


Reuse has been one of the Holy Grails of software development for a long time, along with… Well, work with me, I’m sure there are others. True AI! That’s another.


Anyway, reuse has been discussed since time immemorial (October 13th, 1953), for some pretty sound reasons:



  • Software is hard and expensive to develop

  • We already break things apart into “components” to simplify development

  • We’ve all written subroutines that are used in multiple places.

It seems that if we did a little more planning, paid a little more attention, were just a little smarter, we could build our components in a more general way, and others could benefit from them.


And yet, people have been trying to do this for a long time, and have mostly failed at it. There are successes – widely-used successes – but they’re fairly small in number. Surprisingly, people are still optimistic about going down the reuse path, and since they are likely to fail anyway, I therefore present some rules that can help them get there faster.


Eric’s N rules for failing at reuse


Authoring reusable components:



  • It really helps to have delusions of grandeur and importance. You are going to be the ones who succeed at doing what others have failed at.

  • Pick a wide/diverse scope. You’re not building a good UI framework for client applications, your framework is going to work for both client and web applications.

  • Plenty of technical challenge. That’s what makes it fun and keeps you interested.

  • No immediate clients. You are building a *Component*, and when you are done, clients can use it.

In my experience, that’s more than enough by itself, but it helps if you can throw in some obscure algorithms and quirky coding styles. I’m already assuming that you don’t have any real tests.


Consuming other people’s components:



  • You absolutely *must* sign up to work with something as it is being developed. There is simply no better way to waste vast amounts of time. Unfinished implementations block you, regressions set you back, and even if you don’t get those, you at least get hours of refactoring to switch to the new version. It’s best if you do this on a “milestone” basis, roughly every 6 weeks or so. That’s short enough that you’re randomized often, but long enough that waiting for bug fixes is really painful.

  • Commit to using the other component early and deeply. If somebody asks, “what will you do if <x> is too buggy, too late, or doesn’t do what you need?”, you can’t have an answer.

  • Make sure that your use of the component is a variant of what the component is really being used for. In other words, the creator of the component is building the thing to do <x>, and you want to do <y>, which is kind of like <x>.

  • A quick prototype should be enough to ensure that you can do it.

  • If you can’t get the authoring group to commit to producing what you want, take a snapshot of their code and work on it yourself.

  • Don’t plan any schedule time or impact to deal with issues that come up.

  • If possible, buy the component you’re using. Because if you paid for it, the quality will be higher, and they’ll have to give you good support.

I hope these tips help you.


If you’re a bit leery of reuse, then good for you. I have only a few thoughts to offer:


If you’re thinking about doing something, it’s always a build vs buy decision. Even the best general-purpose framework out there is just that – a general-purpose framework. It’s not designed to do exactly what you want to do.


In the abstract, there are three phases of using a component in your code:


Phase 1 is great. The component is doing what you want, and it’s quick and easy to do it. Let’s say for sake of argument that this gets you to the 80% point in your project, and it gets you there quick.


Phase 2 is harder. You’re starting to reach the limits of the component, and it’s tough to get it to do what you want. Tough enough that it’s taking more time, and you’re using up the time that you saved in phase 1. But you still feel like it was the right decision.


Phase 3 is much harder. It’s taken you as long to get here as a custom-written solution would have taken, and making further progress is considerably slower than if you had written everything. Worse, you can see the point where you’ll reach a wall where you can’t do anything more, and it’s close.


Different projects obviously reach different phases. Some never venture out of phase 1, and others are deep in phase 3. It’s hard to tell where you’ll end up, but if a given component is central to what you do, you are much more likely to end up in phase 3.  


The obvious problem is that prototyping is always done in phase 1, and the rapid progress you make there is oh-so-tempting. The whole application UI is laid out in a week of work using Avalon. I got this demo with moving pictures done in 3 days using XNA. We all want to believe that it’s really going to be that easy.


Stay strong against the lure of the siren song.

scrumbut

October 13, 2006 at 12:05 pm

As somebody who is interested in Scrum but hasn’t yet had a chance to try it, I’ve been paying attention to the various experiences people are having with it.


I’ve been noticing something for a while, but I didn’t really realize that there was something bigger going on.


I call that phenomenon “Scrumbut”. It shows up in the following way:


We’re doing Scrum but…



  • our sprints are 12 weeks long…

  • we do two normal sprints and one bugfix sprint…

  • we do all our planning up front…

  • we skip the daily meeting…

  • our managers decide what’s in each sprint…

  • we haven’t read the books yet…

  • our team has 30 people…

I’m not a strict methodologist – a specific methodology may need to be adapted to a specific situation. But most of these are anti-scrum rather than modified-scrum.


That this phenomenon exists may not be news to you, and it wasn’t to me. But what I realized this last week is that scrumbut has led to another phenomenon…


Namely, it has led to scrum being a naughty word. Managers are working with groups that say they are doing scrum, and then when scrumbut doesn’t work, they decide that scrum doesn’t work.


How to approach this? Well, I think you need to advocate specific principles rather than advocating scrum. If you tell your management that you are going to be “ready to release” on a monthly basis and that they get to give feedback on what has been done and what to do next every month, I think you will likely get a better response.

Agile development and the Software Researcher

October 3, 2006 at 10:42 am

A while back I came across this post about agile.


I originally was going to write a comment about that post, but I think the commenters have done a good job at that. I will point out that the environment described in the post is a lot like many of the internet boom startups, few of which are around right now.


But in writing some thoughts down, I realized that I was really writing about why I found agile to be interesting. So that’s what this post is about.


I’ve worked on a ton of projects, some short, some long, some small teams, some big teams. But fundamentally, writing software is an exercise in:



  • Making progress in the face of imperfect information

and



  • Managing change

These shouldn’t be a surprise to anybody. When we start coding, we always have imperfect information. We don’t know if our design will really meet the customer’s needs, we don’t know how long it’s going to take us to write, we don’t know if our architecture is feasible to build, we don’t know if it will perform well enough,  etc. And things will change along the way. Market conditions will change. Customers will modify their priorities. New technology will become available. People will join or leave the team.


There’s a name for trying do something in that sort of environment, and it’s not “engineering”.


It’s research.


Applied research, to be specific.


And doing research requires a different approach than engineering.


Way back in 1961, President Kennedy made his famous “moon speech”. At that time, the only US astronaut was Alan Shepard, who had flown a 15 minute sub-orbital flight. The ability to send people to the moon seemed out of reach. But the leaders at NASA came up with a plan – they decided on three different programs: Mercury, Gemini, and Apollo. Each program had a theme and a set of goals, and each flight had a set of smaller goals.


Sometimes things worked, sometimes they didn’t. After each flight, they looked at what had happened, decided how to adapt, and went on to the next flight. And over the span of 8 years, they accomplished their goal. All by following the “design a little, build a little, fly a little” approach.


After Apollo, NASA decided to build the shuttle. They did a lot of design up front, made a lot of promises (both schedule and capabilities) and then started building. They ran into lots of difficulties with the main engines (ultimately solved), and the thermal protection system (not really solved). Ultimately, they finished the vehicle, but it took years longer than they expected and ultimately didn’t do what they had designed it to do or what they needed it to do. And it has very high maintenance costs.


The analogy to software development projects should be obvious. Shuttle is like most big software projects – lots of planning up front, lots of promises, followed by delays as problems are addressed, and then the ultimate release of something that doesn’t measure up to the original idea. We convince ourselves that “this time it will be different”, despite the fact that the same thing has happened every time we take this approach.


Incremental development accepts the realities of the environment, and takes a different approach. You know that you don’t know enough to finish the project and you know that things are going to change, so you bite off a manageable portion – say a month’s worth of work – do enough design, write some code, and then see how well it works.


And after that month, you modify your plan based on what you learned.


Design a little
Code a little
Fly a little


That’s what agile means to me.

Feel the pain…

August 21, 2006 at 6:36 pm

Dare wrote a post talking about the advisability of making developers do operations.


Which is really part of a philosophical question…


When you’re setting up a software organization, how much specialization should you have, and where should you draw the lines around the responsibilities of the various groups?


Some orgs take a very generalized view of what people own, and others take a very specialized view. I’ve worked in both sorts of environments.


I’ve worked for a startup where, as a developer, I wrote the code, tested the code, built the code, made tapes to ship out to customers, and answered customer support calls.


And I’ve worked in other organizations where the job of developer was to implement what was written down in the spec and pass it off to the QA org. Those orgs typically had structures and policies designed to insulate the developers, so they wouldn’t be distracted.


That eliminated a bunch of the outside noise that they would otherwise have to deal with, and made them more efficient at getting their development work done.


And how did those efficient organizations fare in their products?


Not very well.


They were reasonably good at shipping software, but their software didn’t turn out to be very good for users. New updates didn’t address issues that users had been hitting. New features were hard to use and/or didn’t hit the sweet spot. They answered questions that users didn’t ask.


All of this was because the developers were out of touch with people who had to deal with their software. They didn’t feel the pain that the users were experiencing setting up their software. They didn’t feel the pain when a bug in the software meant that the user’s business was losing money. And they didn’t understand why users were having trouble using features that seemed obvious to them.


All that happened in DevDiv, and the issues showed up in our customer satisfaction numbers. So, it was decided to let developers (and the testers, and PMs…) talk directly with customers.


There was a fair amount of angst around this decision. It would take up too much dev time. Developers would insult customers. Customers didn’t know enough to give good feedback.


But it turned out that all of those things were wrong. The developers liked to solve problems, and they also liked to help people. They remotely debugged customer issues on other continents. And they listened very closely to the detailed feedback customers gave about how the current software didn’t meet business needs and what was good and bad about future plans.


And the organization adapted what they were planning, so that it addressed the areas that needed addressing.


Distraction is not the enemy. Pain is not the enemy. Pain is to be embraced, because only through feeling pain are you motivated to make it go away.

Other views on programming sins…

August 8, 2006 at 1:31 pm

At the beginning of the sin-tacular, I asked for people to come up with their own lists. And here they are:



My original plan was to comment on some of the individual sins that people listed, but they’re all great – you should go and read them all.


I was a bit intrigued, however, by Chris’ comment (or should that be “Chris’ Comments’ comment”?):


Hey, Eric, what are the 7 Heavenly Virtues of Programmers?


Hmm…