By now it's clear that the people behind the Perl 5 release process consider "stability" one of their primary constraints. "Support" isn't easy to define for community-produced software; neither is "stable". That's no reason not to try.
Expectations of Stability
A reasonable first definition of "stable" is that an average user should be able to follow the documented process of obtaining, configuring, and building the software on a supported platform. After that, the user should expect that the automated tests pass.
There may be glitches in that process, but any software release for which the developers cannot make that guarantee is not stable.
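For Perl 5 itself, that documented process looks roughly like this. This is a typical Unix sequence; the version number is illustrative, and flags and steps vary by platform and release, so treat it as a sketch rather than a recipe:

```shell
# Fetch, configure, build, and test a Perl 5 release (typical Unix steps).
# The version here is illustrative; substitute the release you're evaluating.
wget https://www.cpan.org/src/5.0/perl-5.20.0.tar.gz
tar -xzf perl-5.20.0.tar.gz
cd perl-5.20.0

# Accept Configure's default answers (-d), run non-interactively (-e),
# and keep the output quiet (-s); install into a local prefix.
sh Configure -des -Dprefix="$HOME/localperl"

make          # build perl and the core modules
make test     # run the automated test suite -- "stable" means this passes
make install  # install only after the tests pass
```

If `make test` fails on a supported platform when you've followed these steps, that's exactly the kind of failure the definition above rules out.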
Any disagreements? Good.
Does "stable" guarantee that all tests will pass on your system? No; your system may have unique characteristics that the developers did not anticipate. No plan survives first contact with the enemy (and feel free to quote me on this truism: no test plan survives first contact with entropy). Careful design and revision can ameliorate this, as can extended testing in heterogeneous environments, but the only true test of whether the software works as intended is when real users use it for real purposes.
This is why we have bug trackers, patches, and subsequent releases.
Does "stable" guarantee that all previous uses of the software will continue to work as expected? No; your specific uses may rely on the presence of bugs, unspecified features, happy accidents, or other behaviors not captured in the test suite. Again, careful attention to detail can ameliorate this, but you can never guarantee it unless you vet all uses of your software.
This is why we have bug trackers, patches, subsequent releases, deprecation policies, and comprehensive documentation.
Does "stable" guarantee that the software will meet all of your needs? No; the software only does what it says it will do. You may be able to coax it to perform other functions beyond those documented and supported, but if you depart from what the developers expect, intend, and promise, you are on your own.
This is why we have roadmaps, overviews, and comprehensive documentation.
Does "stable" guarantee that there are no bugs? No; though whole-program analysis and computer-aided proofs can assist in verifying algorithms, this is infeasible for most software. There will be bugs of implementation, bugs of unclear specification, bugs of poor documentation, bugs of misunderstood requirements, bugs of portability, bugs of testing, and bugs of many other types.
This is why we have bug trackers, bug reports, subsequent releases, and you get the picture by now.
Does "stable" guarantee that you can replace the previous version of the software with the new version in your production environments on the day of release and sleep soundly that night? No; I cannot imagine anyone who would not call you a fool for doing so.
That's why you have comprehensive test suites, test plans, deployment environments, and access to the bug tracker, release notes, deprecation plans, support policies, and lines of communication to the developers.
That's why you have access to testing releases (not that anyone uses them).
That's why maintaining your software is your responsibility -- not because developers are jerks who hate you and would burn down your office if they could, but because the only effective way to prove that a piece of software is sufficient for your needs is for you to verify it against your needs.
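One way to meet that responsibility is to build the new release (or a release candidate) alongside your production version and run your own test suite against it before deploying anything. Here's a sketch using perlbrew; the tool choice and version numbers are my assumptions, and any mechanism that gives you an isolated build of the new release works just as well:

```shell
# Hypothetical pre-deployment check: install the new release side by side
# and run *your* test suite against it before it goes near production.
perlbrew install perl-5.20.0          # or a release candidate: perl-5.20.0-RC1

# Install your application's dependencies under the new perl, then run
# your own suite against it -- the only verification that counts for you.
perlbrew exec --with perl-5.20.0 cpanm --installdeps .
perlbrew exec --with perl-5.20.0 prove -lr t/
```

If that `prove` run fails, you've found a problem before your users did, and the bug tracker and release notes are there for exactly that conversation.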
Should developers get sloppy about their release processes? Of course not. Automated testing, exploratory testing, smoke testing, and other techniques to keep code quality high are amazingly important. If anything, most projects don't take them seriously enough.
Do developers get a free pass if a release has embarrassing bugs? Of course not. Mistakes are mistakes, and software that's not suitable for its intended uses needs immediate attention.
Are users part of the testing and verification process? They always have been, especially for general purpose tools such as a programming language. If software developers could get it all right the first time, we wouldn't need to argue over release processes or write automated test suites or include FAQs and disclaimers in our documentation. You wouldn't need to test software against your own purposes.
We don't live in that world, though.
Can regressions and embarrassing bugs be rare? Of course. The quality of the code and development process is exceedingly important. A well-maintained test suite verifying a well-factored and maintainable codebase updated regularly with thoughtful analysis of feedback, bug reports, and feature requests from active users can achieve very high quality. Mistakes will happen, but there are ways to reduce their risk and frequency.
Is it worth pretending that we can achieve perfect stability by delaying the release of software? In my opinion, no. It's more important to me to reduce risk through the flexibility and agility of frequent releases.
I suppose that a development process that has to coordinate dozens of separate projects to converge upon a stable point on multiple platforms simultaneously, while operating under all sorts of other constraints, can eventually converge on that miraculous point of stability... but it'll still have bugs. There'll still be patches. Users will still need to test their own software against it.
Are users better off waiting months and months for fixes to problems they have right now than they are dealing with mythical bugs and regressions and problems they might have in a future so-called "stable" release? Given that you can't know those problems until users encounter them, and given that users don't test release candidates, they'll discover these problems sooner or later, and only after a release.
The question is whether it's better to address those problems sooner or later. My bias is toward fixing problems as people report them. Certainly released software with fixes for known bugs must be more stable than unreleased and untested software.