Bug Driven Design


I wrote the other day about delaying design decisions until the last responsible moment, per the belief that only at that point do I have sufficient knowledge to design the feature to meet current needs. I can anticipate future needs, in the sense that I do not work to prevent them, but sufficient unto each release are the troubles thereof.

This principle rests upon the assumption that I have sufficient knowledge at each last responsible moment. Perhaps I've talked to the people who want the feature and have a detailed list of behaviors and expectations I can turn into tests. Perhaps I have a specification or RFC or test suite I must pass. Perhaps I'm writing the feature for myself and know exactly what I want.

Still, bugs happen.

I'm not necessarily happy about this, but I do like the feedback. Not only are people using the software, but they care enough about it to tell me what they expect it to do. More important, a good bug report gives me additional details about user expectations.

Would it be nice to have them before adding the feature? Of course! Would it be nice to have them before releasing the software? Undoubtedly. Would I like to avoid bugs in general? You know it!

Yet bugs happen.

Some bugs are bugs of understanding. I may write a lovely JIT for Parrot and receive effusive praise and a repurposed bowling trophy, but when someone says "That's 32-bit x86 only, and we'd really like it on our 64-bit processors," I know I have more work to do. That's good and that's valuable -- not only to give me a better understanding of what real users need, but to remind me to talk to people to discover what they really want and really need.

Some bugs are bugs of implementation. Perhaps I don't understand the pipeline and instruction scheduling system of a 64-bit processor and the resulting JIT code is unaligned and four times slower than it should be. As much as I'd like to avoid these problems, they happen. That's good and it's valuable, not just to improve the software for users, but also to help improve the test suite.

Furthermore, analyzing the causes of both types of bug can help me improve the process of creating software. Perhaps I'm not testing enough. Perhaps the characteristics of my sample workloads do not match real world uses. Perhaps the assumptions I've made about how people use the software need to change; perhaps their goals and values have changed.

I wish I could get this information reliably before I add a feature, but I'm a practical guy. Sometimes the best feedback you can expect is "Hey, it doesn't work!" That you can fix.

1 Comment

This is analogous to the Tracer Bullet idea of application design: it's far better to have some approximation of the feature you want that at least fails in interesting ways than to have no feature at all.

I reckon that this is one of the fundamental Agile ideas; the hard part of anything is starting, so make your initial implementation as minimal as you can get away with and still have passing tests. Then you have a working but buggy system, and it's all maintenance programming from then on. I find it surprisingly hard to discipline myself to do things that way, but whenever I do, results seem to flow remarkably naturally.
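The comment's approach can be sketched in Perl with Test::More: write the smallest tests that pin down current expectations, then the smallest implementation that passes them, and let bug reports drive everything after. The `slugify` function and its behavior here are hypothetical examples, not anything from the original post.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 2;

# Minimal implementation: just enough to pass today's tests.
# Edge cases nobody has reported yet stay unhandled on purpose.
sub slugify {
    my $text = lc shift;
    $text =~ s/^\s+|\s+$//g;   # trim leading/trailing whitespace
    $text =~ s/\s+/-/g;        # collapse internal whitespace into hyphens
    return $text;
}

is slugify('Bug Driven Design'), 'bug-driven-design', 'lowercases and hyphenates';
is slugify('  spaces  '),        'spaces',            'trims surrounding whitespace';
```

When a user reports "slugify mangles Unicode" or "it keeps punctuation," each report becomes a new test case and a small maintenance change, rather than speculative design up front.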




About this Entry

This page contains a single entry by chromatic published on September 23, 2009 8:24 AM.

Necessity Driven Design was the previous entry in this blog.

Genericity versus Optimization is the next entry in this blog.

Find recent content on the main index or look in the archives to find all content.

