I wrote the other day about delaying design decisions until the last responsible moment, per the belief that only at that point do I have sufficient knowledge to design the feature to meet current needs. I can anticipate future needs, in the sense that I do not work to prevent them, but sufficient unto each release are the troubles thereof.
This principle rests upon the assumption that I have sufficient knowledge at each last responsible moment. Perhaps I've talked to the people who want the feature and have a detailed list of behaviors and expectations I can turn into tests. Perhaps I have a specification or RFC or test suite I must pass. Perhaps I'm writing the feature for myself and know exactly what I want.
Still, bugs happen.
I'm not necessarily happy about this, but I do like the feedback. Not only are people using the software, but they care enough about it to tell me what they expect it to do. More important, a good bug report gives me additional details about user expectations.
Would it be nice to have them before adding the feature? Of course! Would it be nice to have them before releasing the software? Undoubtedly. Would I like to avoid bugs in general? You know it!
Yet bugs happen.
Some bugs are bugs of understanding. I may write a lovely JIT for Parrot and receive effusive praise and a repurposed bowling trophy, but when someone says "That's 32-bit x86 only, and we'd really like it on our 64-bit processors," I know I have more work to do. That's good and that's valuable -- not only to give me a better understanding of what real users need, but to remind me to talk to people to discover what they really want and really need.
Some bugs are bugs of implementation. Perhaps I don't understand the pipeline and instruction scheduling system of a 64-bit processor and the resulting JIT code is unaligned and four times slower than it should be. As much as I'd like to avoid these problems, they happen. That's good and it's valuable, not just to improve the software for users, but also to help improve the test suite.
Furthermore, analyzing the causes of both types of bug can help me improve the process of creating software. Perhaps I'm not testing enough. Perhaps the characteristics of my sample workloads do not match real world uses. Perhaps the assumptions I've made about how people use the software need to change; perhaps their goals and values have changed.
I wish I could get this information reliably before I add a feature, but I'm a practical guy. Sometimes the best feedback you can expect is "Hey, it doesn't work!" That I can fix.