Of all the languages I've used for pay and for hobby, none compare to Perl in terms of testing culture and ecosystem.
Sure, with a few seconds and your favorite search engine, anyone can find countless examples of awful code written by people who had no concern for writing good code. (That's not a language problem.) Sure, you can find countless examples of Perl code written to the standards of 1992 with little regard for documentation or formatting or robustness or even the minimum effort at basic procedural programming. (That's not a language problem either.)
Shameless plug: I wrote a book called Modern Perl. You can buy Modern Perl: the book in paperback from Amazon (and other booksellers) or buy Modern Perl: the book in Kindle format—or read it online or download it as PDF or ePub for free. Furthermore, if you'd like to talk about how to improve the testing of your product or project, I am available for consulting.
One of the reasons we can talk about such things as Modern Perl is the quality of Perl's testing culture. I had a front-row seat for almost all of the Perl testing and quality revolution, starting in 2000.
Actually, it starts in 1987. If you find and download a Perl 1.0.x tarball, you'll see that it includes a tiny test harness and a series of language tests. This predates the notion of Test-Driven Development. It predates even Test-First Development. (As far as I can tell, it even predates the invention of SUnit, the Smalltalk test framework that inspired xUnit, arguably the most popular style of testing in most languages.)
Update: As Larry Wall himself said in a 1997 interview with Linux Journal:
You can restructure all your code into modules and unit test it in a jiffy because the Perl interpreter is so handy to invoke.
In 2000 and 2001, Perl 5 started taking testing more seriously. Even though Perl 5 has no formal specification outside of "Whatever the implementation does, as long as the documentation agrees," a group calling itself Perl QA took up the banner of developing tests, a testing system, and a testing culture to help Perl grow and evolve safely and on purpose through the next phase of its life.
As part of that process, Michael Schwern and I developed a library called Test::Builder to unify the internals of multiple test libraries and to allow future test libraries to share the same backend.
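To make that shared backend concrete, here's a minimal sketch of what a test library built on Test::Builder looks like. The module name and the `is_even` function are invented for illustration; the point is that every such library talks to the same Test::Builder singleton, so its assertions number and report correctly alongside any other library's.

```perl
package Test::IsEven;

use strict;
use warnings;

use Test::Builder;
use Exporter 'import';

our @EXPORT = ('is_even');

# Test::Builder->new returns a shared singleton, so this library's
# assertions interleave correctly with those of every other
# Test::Builder-based library loaded in the same test file.
my $Test = Test::Builder->new;

sub is_even {
    my ($num, $name) = @_;
    return $Test->ok( $num % 2 == 0, $name );
}

1;
```

A test file can then mix `is_even` with `is` and `ok` from Test::More, and the harness sees one consistent stream of results.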
It's been wildly successful.
It's been so successful that you can download from the CPAN today hundreds of testing libraries, all of which are composable and play together nicely in the same process in the same file. They all work with the standardized test harness and reporting tools, because the Perl world does agree on formal standards. See TAP.
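For readers who haven't seen TAP, it's plain text. Here's a minimal Test::More file (the test names are invented for illustration) along with the stream it emits:

```perl
use strict;
use warnings;

use Test::More tests => 3;   # declare the plan: three tests

is(   2 + 2, 4,                'arithmetic works' );
like( 'Hello, Perl', qr/Perl/, 'greeting mentions Perl' );
ok(   defined $^V,             'interpreter reports a version' );

# Running this file prints TAP:
#   1..3
#   ok 1 - arithmetic works
#   ok 2 - greeting mentions Perl
#   ok 3 - interpreter reports a version
```

prove(1) and every other TAP consumer parse that stream, which is why test libraries from different authors can all report through one harness.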
(You don't even have to use Perl to take advantage of TAP. I've written TAP emitters and Test::Builder libraries in multiple languages.)
That's one area of success. Another area of success is the adoption of testing and testing tools by people who don't write testing tools. (Of course people in Perl QA would use these tools, but if they never reach anyone else, what's the point?)
After Schwern and I made Test::Builder, I started to work on test coverage for Perl 5.8 and its core library, as did other people. The number of tests of the core language and its libraries quadrupled. So did their quality, as newer, better test libraries saw adoption and as we gained experience writing good tests and learned how to write better ones.
The quality and test coverage of CPAN and deployed Perl applications improved, too.
As I wrote in Why I Use Perl: Reliability, it's reasonable to expect that you can install a new version of Perl 5, run all of the tests for CPAN dependencies, run all of the tests for your application, and everything will just work. This isn't magic. It's science.
As part of the process of developing Perl 5, a few people run automated test runs of the CPAN against commits to the new version of Perl 5 in progress. Read that sentence again carefully. Automated processes can tell you when a commit to Perl 5 in progress causes a test to fail on the CPAN—not merely one of Perl 5's core language tests or a test in the Perl 5 core library, but a test in a CPAN distribution. An automated process will notify the maintainer of that CPAN distribution as well as the developers of Perl 5 with a link to the offending commit.
The collective test suite of the CPAN (as of this writing, 108889 modules in 25473 distributions, for a collection of millions of tests) is the continuous integration test suite of the Perl 5 language itself.
Similarly, a larger army of automated test runs reports the test results of new CPAN uploads against a huge array of platforms and Perl 5 versions. This is CPAN Testers. Within a few minutes of uploading a new distribution to the CPAN, you may get back a test result. Within a couple of days, you will have plenty of test results.
The infrastructure is there. The will to quality is there. The history of encouragement with code and documentation and tools and community expectation is there. The onus is on people writing Perl to take advantage of the testing ecosystem to write the right code.
(Did I mention that other great tools exist to test your code coverage, to test the coverage of your documentation, and even to add tests for coding standard and awkward semantic violations?)
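As one concrete example of that tooling, the conventional way to test documentation coverage is a tiny test file built on Test::Pod::Coverage; this sketch follows that module's documented pattern (the filename `t/pod-coverage.t` is a convention, not a requirement). Devel::Cover's `cover` command and Perl::Critic's `perlcritic` play the corresponding roles for code coverage and coding standards.

```perl
# t/pod-coverage.t — skip gracefully when the module isn't installed
use strict;
use warnings;

use Test::More;

eval "use Test::Pod::Coverage 1.08";
plan skip_all => "Test::Pod::Coverage 1.08 required for testing POD coverage"
    if $@;

# One test per module found under lib/, each checking that every
# public subroutine has documentation.
all_pod_coverage_ok();
```

Because it's just another TAP-emitting test file, it runs under the same harness as the rest of your suite.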
I've written code in a lot of languages. I've debugged code in all of those languages. I've used TDD in most of those languages (excluding PostScript and minicomputer BASIC). While I've seen a focus on good testing in many of those language communities, Perl's is the only one I've seen that takes testing really seriously.
(Addendum: the existing Perl Testing book is still decent, but I have this recurring notion of writing a new one. If I started a Kickstarter project to gauge interest, would you pay for early access to electronic versions of the book in progress?)