If you use a search engine to find a beginner's Perl tutorial, you're more likely to find a lousy Perl tutorial than a good one. (Perl Beginner's site is a good place to start instead.) The problem isn't Perl so much as it is a systemic problem with modern search engines.
Summary for skimmers:
- New doesn't automatically mean better
- "Best" is a term that requires context
- The popular has a tyrannical inertia
- The solution isn't as easy as "Just publish more!"
If you remember the early days of the web, Yahoo's launch was a huge improvement. Finally, a useful and updated directory to the thousands of new websites appearing every month! Then came real search engines and search terms, and we started to be able to find things rather than navigating hierarchies or trying to remember if we'd seen a description of them.
(It seems like ages ago I managed to download 40 MB of scanned text of ancient Greek manuscripts to create my own concordance for research purposes, but this was 1996.)
Then came Google, and by late 1998 it had become my most useful website. The idea behind PageRank was very simple (and reportedly understood by a few other large companies who hadn't figured out what to do with it): people link to what they find useful. (Certainly I oversimplify PageRank, but you can test current versions inductively to see that it still suffers from this problem.)
PageRank and Wikipedia have the same underlying philosophical problem: reality and accuracy are not epiphenomena arising from group consensus. (An epiphenomenalist or a full-fledged relativist might disagree, but I refute that by claiming I was predestined to believe in free will. Also Hegel is self-refuting, so there.)
PageRank's assumption is that people choose the best available hyperlink target. (For another example of the "rational economic actor" fallacy, see modern economics.) This is certainly an improvement over manually curated links, but without a framework for judging what "best" means in the author's intent or the author's historical context at the time of writing, PageRank users cannot judge the fitness of a link for their own purposes.
(While I'm sure some at Google will claim that it's possible to derive a measurement of fitness from ancillary measures such as "How many users clicked through, then performed a search again later?" or "Did the search terms change in a session and can we cluster them in a similarity space?", you're very unlikely to stumble upon the right answer if the underlying philosophy of your search for meaning is itself meaningless. The same problem exists even if you take into account the freshness of a link or an endpoint. Newer may be better. It may not be. It may be the same, or worse.)
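To make the rich-get-richer dynamic concrete, here's a toy power-iteration PageRank of my own devising (a sketch of the published idea, not Google's production algorithm) on a hypothetical four-page web: three tutorials, old and new alike, all link to the one incumbent tutorial, and quality never enters the calculation.

```python
# Hypothetical link graph. "popular" is the entrenched tutorial;
# "best" is a newer, better one that nothing links to yet.
links = {
    "popular": [],
    "old_a":   ["popular"],
    "old_b":   ["popular"],
    "best":    ["popular"],  # even the better tutorial links to the incumbent
}

def pagerank(links, damping=0.85, iterations=50):
    """Classic power-iteration PageRank: rank flows along hyperlinks."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page gets a small "teleport" share regardless of links.
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # A page splits its rank evenly among the pages it links to.
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += damping * share
            else:
                # Dangling page: redistribute its rank evenly everywhere.
                for p in pages:
                    new[p] += damping * rank[page] / n
        rank = new
    return rank

ranks = pagerank(links)
# "popular" accumulates the lion's share of rank; "best" scores no higher
# than any other unlinked page, no matter how good its content is.
```

Nothing in the computation measures fitness for a reader's purpose; it only measures who is already linked to, which is exactly the self-perpetuating cycle at issue.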
In simple language, searching Google for Perl tutorials sucks because consensus-based search engine suckitude is a self-perpetuating cycle.
Wikipedia and Google distort the web and human knowledge by their existence. They are black holes of verisimilitude. The top 1% of links get linkier even if something in the remaining 99% is better (though I realize it's awkward to use the word "better" devoid of context, at least I let you supply your own context for that word).
It's not that I hate either Google or Wikipedia, but they share at least one systemic flaw.
Certainly a fair response to my critique is that a concerted effort by a small group of people to improve the situation may have an eventual effect, but I'm discussing philosophical problems, not solutions, and even so I wear a practical hat. A year of effort to improve the placement of great Perl tutorials in Google still leaves a year's worth of novices reading poor tutorials. (At least with Wikipedia you can sneak in a little truth between requests for deletion.)
Of course this effort is worth doing! Yet I fear that the tyranny of the extant makes this problem more difficult than it seems.
Edit to add: there's no small irony in that the tyranny of the extant applies to some of the Perl 5 core documentation as well. I saw a reference to "You might remember this technique from the ____ utility for VMS!" just the other day.