Language design is hard work, not only because maintaining a consistent vision is laborious, but because the only way to know what users will do with your language is to wait until they use it to do things. I wrote several questions for my friends to use in their book Masterminds of Programming, and one question I recommended asking every language designer was "How do you plan for an uncertain future when the only thing you know is that you'll want to change something you got wrong?"
I'm rushing one of my side projects to the point where a hand-picked group of friends and family can use it for practical purposes. They're my target customers, and I'm neither so charming nor so dashing that they won't tell me, with brutal honesty, what works, what doesn't, and what I never should have expected them to do.
No plan survives first contact with the enemy.
I have tremendous respect for the brave release managers of Perl, as a seemingly innocuous change could cause countless hours of work for thousands, tens of thousands, or even hundreds of thousands of developers. (One time I almost broke half of the CPAN myself.) The need to get everything right exerts immeasurable pressure to not do anything wrong.
The difference between "doing it right" and "not doing it wrong" is a vast abyss.
Consider: I extract a CPAN module from code I'm using. I make it more generic and reusable. I polish it. I publish it. Then the first report I get about it is "This would be great, if...", and that's a great thing. In truth, it's an expected part of Perl culture. That's one reason CPAN version numbers stay below the magical candy-flavored rainbow sparkle 1.0 threshold for so long (POE took ten years to reach 1.0, and it's by no means the only example): we want the right to be wrong and to change what's wrong.
That's one reason the common answer to "How do I get a new feature into the Perl core?" is "Write it as a CPAN extension first." (That answer is, as often as not, wrong, but it's right for philosophical reasons and wrong for merely pragmatic ones, and the latter are at least solvable.)
That's one reason the Moose backwards compatibility policy exists. When inventing things no one has invented before, it's easy to get them only mostly right, and it's impossible to know what's wrong without people using them and reporting on what's difficult or impossible or ugly.
The lack of that feedback is one reason that Perl 6's language design and implementation have gone through cycles of reinvention and foundering. With over a decade of no real users and no usable implementations, it's not easy to see where the language works and where it doesn't. Even though you can argue that Perl 6 gets a lot of things right in practice, maintaining consistency of vision and design has required rearranging the middle layers of its philosophy. Above all, there's no substitute for running code to discover how a system actually works, if it works at all. As a consequence, I've seen systems freeze themselves into poor designs because there's no solid evidence as to what users actually need. (Ask yourself why P6 has stagnated.)
That's one reason test-driven design and frequent, small, timeboxed iterations are part of agile or lean or whatever buzzword name you want to slap on the very practical, flexible, effective development process most of the great developers I know use—not because that's the only way to write great software, but because you can learn so much so quickly about what you really need when you give yourself the right to be wrong with little consequence.
I'm not praising the desire to rewrite willy-nilly, nor am I suggesting that the right approach to supporting software is to leave users to read changelogs and run their comprehensive test suites to decide whether and when to upgrade to new versions where everything is a candidate for major changes. I'd also never suggest that the right to be wrong is a substitute for careful thought and design based on the best information you have at the moment. We're professionals, after all, and we need to bring our best professionalism and knowledge and talent and care to our work.
Yet I am suggesting that we work with incomplete information—information that's expensive to gather in toto, if that's even possible—and that we have to do our best in the face of those limitations. Without the freedom to make little mistakes (however we apply that to our projects), we limit our possibility to make big wonderfuls.