October 2011 Archives

Update: PDFs, ePub, and Mobi files are available from Modern Perl: The Book, and you can read Modern Perl: The Book online now!

Modern Perl: the book is getting a second edition.

We're not calling it a second edition. We're calling it the 2011-2012 edition, because it covers Perl 5.12 and 5.14.

I've edited the book again, updated it for new features of Perl 5.14, fixed several reported bugs in the first edition, and improved the coverage of new ideas and trends in the world of modern Perl, while removing some things that turned out not to have been good ideas.

I'd like to get the book to the printer in the next couple of weeks. Here's how you can help.

The draft of Modern Perl: The Book, 2011-2012 edition is available as a letter-sized PDF. I'll leave it up for a couple of weeks. Please report errors of content at the Modern Perl book repository. (Please also update CREDITS if you provide a pull request, or remind me to do so if you're not already credited.) Feel free to ignore any formatting errors; there are likely a few.

This edition will also have free electronic copies when the printed edition comes out. We've also discussed hosting a web version of the book on this very site; is that interesting to you?

Thanks for reading.

If you use a search engine to find a beginner's Perl tutorial, you're most likely to find a lousy Perl tutorial. (The Perl Beginner's site is a good place to start instead.) The problem isn't Perl as much as it is a systemic problem with modern search engines.

Summary for skimmers:

  • New doesn't automatically mean better
  • Best is a term with necessary context
  • The popular has a tyrannical inertia
  • The solution isn't as easy as "Just publish more!"

If you remember the early days of the web, Yahoo's launch was a huge improvement. Finally, a useful and updated directory to the thousands of new websites appearing every month! Then came real search engines and search terms, and we started to be able to find things rather than navigating hierarchies or trying to remember if we'd seen a description of them.

(It seems like ages ago I managed to download 40 MB of scanned text of ancient Greek manuscripts to create my own concordance for research purposes, but this was 1996.)

Then came Google, and by late 1998 it had become my most useful website. The idea behind PageRank was very simple (and reportedly understood by a few other large companies who hadn't figured out what to do with it): people link to what they find useful. (Certainly I oversimplify PageRank, but you can test current versions inductively to see that it still suffers this problem.)

PageRank and Wikipedia have the same underlying philosophical problem: reality and accuracy are not epiphenomena arising from group consensus. (An epiphenomenist or a full-fledged relativist might disagree, but I refute that by claiming I was predestined to believe in free will. Also Hegel is self-refuting, so there.)

PageRank's assumption is that people choose the best available hyperlink target. (For another example of the "rational economic actor" fallacy, see modern economics.) This is certainly an improvement over manually curated links, but without a framework for judging what "best" means in the author's intent or the author's historical context at the time of writing, PageRank users cannot judge the fitness of a link for their own purposes.

(While I'm sure some at Google will claim that it's possible to derive a measurement of fitness from ancillary measures such as "How many users clicked through, then performed a search again later?" or "Did the search terms change in a session and can we cluster them in a similarity space?", you're very unlikely to stumble upon the right answer if the underlying philosophy of your search for meaning is itself meaningless. The same problem exists even if you take into account the freshness of a link or an endpoint. Newer may be better. It may not be. It may be the same, or worse.)

In simple language, searching Google for Perl tutorials sucks because consensus-based search engine suckitude is a self-perpetuating cycle.

Wikipedia and Google distort the web and human knowledge by their existence. They are black holes of verisimilitude. The 1% of links get linkier even if something in the remaining 99% is better (though I realize it's awkward to use the word "better" devoid of context, at least I let you put your own context on that word).

It's not that I hate either Google or Wikipedia, but they share at least one systemic flaw.

Certainly a fair response to my critique is that a concerted effort by a small group of people to improve the situation may have an eventual effect, but I'm discussing philosophical problems, not solutions, and even so I wear a practical hat. A year of effort to improve the placement of great Perl tutorials in Google still leaves a year's worth of novices reading poor tutorials. (At least with Wikipedia you can sneak in a little truth between requests for deletion.)

Of course this effort is worth doing! Yet I fear that the tyranny of the extant makes this problem more difficult than it seems.

Edit to add: there's no small irony in that the tyranny of the extant applies to some of the Perl 5 core documentation as well. I saw a reference to "You might remember this technique from the ____ utility for VMS!" just the other day.

What Perl 5's use Really Does

No programming language will prevent anyone so inclined from writing bad code, but some programming language features lend themselves to misuse. A poorly written macro can wreak havoc on a C, C++, Lisp, or Scheme program. An unchecked file open can cause a seemingly harmless PHP program to execute remote code.

Sometimes Perl 5 lets you put the verb in front of the subject in OO code.

I've written before about the problems with indirect object notation (also called the dative case). While it's a lot of work to excise this syntax from examples in the core documentation, it's even more work to convince people not to use this fragile syntactic construct.


Consider this example from perldoc perlmod:

Perl modules are included into your program by saying

    use Module;

or

    use Module LIST;

This is exactly equivalent to

    BEGIN { require Module; import Module; }

or

    BEGIN { require Module; import Module LIST; }

Unfortunately, that's not true. (I've submitted a patch for this.)


#!/usr/bin/env perl

sub JSON { die "Did you expect this?" }

use JSON;

Experienced Perl 5 programmers should agree that this is effectively the same as:

#!/usr/bin/env perl

sub JSON { die "Did you expect this?" }

BEGIN { require 'JSON.pm'; 'JSON'->import; }

... but that behaves very differently from what the documentation suggests:

#!/usr/bin/env perl

sub JSON { die "Did you expect this?" }

BEGIN { require JSON; import JSON; }

The biggest problem with the latter example is that the parser has to guess what JSON means. It's obviously easy to trick the parser with regard to require and import. You can't trick the parser with regard to use with such ease; the parser actually emits a method call for import and avoids parsing anything. It's always a method call, never an ambiguous resolution which may be a method call.

perlmod does waffle a little bit about how require MODULE gives the parser some hints, but waffles are food, not facts.

I've encountered this problem personally in the past couple of months with XML and JSON modules. Lest you think that this is a contrived example, remember also that you don't have to declare a function with a colliding name to cause this problem. You can merely import one, whether you intend to or not. Worse, the order in which you use modules can hide or expose this bug.

If you still don't think this is a problem, imagine how you'd explain to a novice what's going on and how to fix it. (If you think telling a new Perl 5 developer to read the comments in the language parser which explain the rules of interpreting barewords and divining what might be a method call is an acceptable explanation, perhaps you should reconsider your calling as a teacher.)

Alternately, the documentation and our code examples could use the unambiguous syntax and avoid subtle but difficult to debug incorrectnesses.

Reimplementing the Wheel, not the Road

A significant portion of my day job is the publishing side of Onyx Neon. We have invested in a toolchain which takes manuscripts written in PseudoPod and produces XHTML, PDF, ePub, and print-ready documents. (We're happy to build on work which has come before, such as LaTeX and POD and parsers and so forth.)

The existing PseudoPod formatters had their flaws, though (because we hadn't pushed them hard enough to admit to ourselves that everything is a compiler). In a small business like ours, the best thing to do now is often the simple and easy fix—if you're careful that you don't delay doing the right thing for too long.

Doing things the right way is much easier now that I've improved our tools to create a real document object model which can be traversed correctly.

The secret is twofold:

  • Think really hard about the problem you're trying to solve, especially the edge cases which are neither obvious nor easy.
  • Reuse existing tests as much as possible.

The latter point is far subtler than it seems. Many, many Internet discussions debate endlessly the pros and cons of test-driven design. Many, many people make the point that unit tests can be fragile and cumbersome and may not provide the most practical benefits we'd all like to get. (Every debate degenerates to this, as if anyone seriously argued that highly specific unit tests were the prime goal of test driven design.)

I am fortunate that writing a document formatter and transliterator has a well-defined input and a well-defined output. (I suspect that many programs have such mappings.) I have an input document with all of the features the translator should support, and I know what kind of output they should produce.

Reimplementing this formatter was a matter of making each test file pass, a few assertions at a time.

With that said, testing a few hundred assertions in less than a dozen files is a relatively small job—perhaps a few hundred lines of code. Yet I believe the principle applies, especially if you have well-factored and well-tested components.

You can see the same principle at work in Ward Cunningham's Fit project. Making tests reusable and retargetable allows the possibility of reimplementation with a baseline of correctness.

I don't have specific suggestions about how to write tests that are so useful, but I've noticed that these tests as well as the tests of web interfaces on other projects have tended to converge on a model of producing careful input and examining output for very specific results. Loose coupling isn't just for code components.

It's interesting to read Things You Should Never Do, Part I and The CADT Model in this context. (This is perhaps the one thing that Rakudo hasn't screwed up; at least they have a test suite, mostly thanks to the foresight of Audrey Tang.)

Adding Dates to Modern::Perl


The Modern::Perl metapragma needs some attention. I'd like to add autodie by default and enable unicode_strings on 5.12 and 5.14, but there's a dilemma.

This day was bound to come anyway. What was modern in 2009 for the Perl 5.10 era won't be modern soon.

I don't mind if you write use Modern::Perl; and get the most modern semantics of the version you have installed. (I'm tempted to check if the distribution is more than two years out of date and print a single warning that Perl 5 has marched on, but you may be able to talk me out of that.)

I think the best option for the module is to recommend the use of a date, as in:

use Modern::Perl 2010; # Perl 5.12 semantics
use Modern::Perl 2009; # Perl 5.10 semantics + autodie
use Modern::Perl 2011; # Perl 5.14 semantics

This helps document the intent of the code, as well as the degree of modernity (or decrepitude) of existing code which uses this metapragma.

I know how to implement this. What do you think of the interface?
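One wrinkle worth noting: a bare year after the module name triggers Perl's use Module VERSION semantics, so the implementation has to override VERSION to capture the year before import runs. Here's a minimal sketch under that assumption; the MyModern name and year-to-bundle mapping are illustrative, not the real Modern::Perl source:

```shell
cat > MyModern.pm <<'EOF'
package MyModern;
use strict;
use warnings;

# Hypothetical mapping of edition years to feature bundles.
my %bundle = ( 2009 => ':5.10', 2010 => ':5.12', 2011 => ':5.14' );

# 'use MyModern 2011;' calls MyModern->VERSION(2011) before import(),
# so capture the year here instead of treating it as a version check.
my $year;
sub VERSION { (undef, $year) = @_; 1 }

sub import {
    my $y = defined $year ? $year : 2011;
    undef $year;
    my $tag = $bundle{$y} or die "Unknown MyModern edition '$y'";
    # Enable strictures, warnings, and the feature bundle in the caller.
    strict->import;
    warnings->import;
    require feature;
    feature->import($tag);
}

1;
EOF
perl -I. -e 'use MyModern 2011; say "say() is enabled"'
```

Because strict, warnings, and feature all work on the currently compiling scope, calling their import methods from inside MyModern::import affects the caller's scope, exactly as the real metapragma does.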

Two Paths Diverge


Why are Puppet and Chef written in Ruby?

Because Rails was a sufficiently easier onramp for simple database-backed web programming than Java and sufficiently cleaner than PHP.

I'll let you ponder that for a second, until you're no longer composing angry responses to this article. When you've convinced yourself that that's indeed correct, the next paragraph is one blank line away.

I've oversimplified, of course. Luke Kanies (creator of Puppet) told me directly that he wrote Puppet in Ruby because he wanted to learn Ruby. (I won't claim that Adam Jacob told me anything similar, but I can believe it.)

Rails was Ruby's killer application in 2005 and 2006: it brought new attention to Ruby, and that new attention brought Ruby to new places. The Internet will certainly debate whether Rails is Ruby's primary driver right now, and the Internet incessantly debates whether Ruby or Python or Perl or shell or whatever is better or more popular or more fashionable for non-Rails tasks such as administration or automation, but in 2011 your eyebrows don't automatically raise themselves when you run across an automation tool written in Ruby as they would have in 2005 or 2004 or 2001.

The sudden popularity of Rails brought a second wave of popularity for Ruby for non-Rails uses.

I suspect you don't see the same popularity for command-line PHP because no one adopts PHP because it's better than something else. People adopt PHP because it's easier than everything else.

(I also don't know how to characterize Node.js, except that Netscape tried it back in the '90s and it didn't work, and it reminds me of the AOLServer dilemma, except that Node.js has a little more shiny for some reason. Maybe a JIT and something resembling a working module system will do that for you. Then again, Perl's second wave came because system administrators already had a pretty decent language for system administration, and isn't it easier to write web programs in this than in shell or C? (I own a book about web programming with the Korn shell. I am not making this up.))

Conventional thinking during Perl 6's beginning claimed that Perl 6 would be the natural evolution of Perl 5, just as Perl 5 was the natural evolution of Perl 4. The RFC process demonstrates this—we want a better object system! Better threading! Improvements here! Improvements there! More consistency! Fewer edge cases!

Even the early talk of Parrot and Ponie bore this out. At some point, Perl 5.12 was to run in Parrot, and the two languages would merge together.

Some people are happy that didn't happen. I admit I don't understand their view, but I'm honest enough to admit that they may have a point. I want a language with a great object system, with malleable control flow, with macros, with native grammars, with optional typing, with an optimizing compiler, with better garbage collection, with a great native interface, with serializable bytecode, with proper language introspection, and with finally and ultimately function signatures, and, yes, access to the CPAN.

I don't have 14 years of legacy code to worry about, in part because I have App::perlbrew, but in part because I run my code on the latest stable releases of Perl as they come out.

Maybe Perl will gradually become Perl 6 over the next decade or so and solve that problem, but I have to write and deploy code on October 14, 2011, so that doesn't really help right now.

It should be pretty obvious that if Perl 6 ever comes out and ever gets any adoption, it won't do it by being a better Perl. At least, at the time of this writing, there's no evidence to suggest such a thing ever happening. (Don't confuse the fervor of a few true believers with evidence of anything but the fervor of true belief.)

In other words, if you're looking for the pragmatism of Perl with some of the language flaws fixed but with access to the CPAN, Rakudo isn't it and won't be it for as long as anyone can foresee.

There is another path.

I first used Ruby in late 2000 or early 2001. Dave Thomas told me about this Rails thing in August 2004. Rails took off in January 2005. While I don't believe that the web will eat all software ("The script on this page is taking a long time to recompile your IDE. Would you like to stop it or continue waiting?"), Rails did offer those poor plebes in the J2EE and PHP worlds something far better than what they had.

The problem is that Perl 6 has no reason to exist. There's no singular problem that it solves. It has no users and no use cases, and the only user feedback it gets from people who aren't interested in it because they believe it's fun to write a compiler is "It's slow and buggy and you can't do anything with it."

It's difficult to bring a product to the market without customers and it's difficult to find customers if you don't know what they want or need.

Thing is, you can't easily predict what that will be or when and especially why. You can't optimize for that based on user feedback. Even so, I trust Larry's instincts and good taste. I believe him when he says he believes it's possible to avoid Worse is Better and, for once, get a Better is Better cycle.

Maybe the best way to explain what Perl 6 is, as it stands today, is as a revolution in search of a cause, but that's not exactly a roadmap to relevance.

In Search of Minimum Viable Utility


I've spent most of 2011 focusing on my business. Sure, I write heaps of code, but I direct most of my attention to figuring out how I can build a sustainable business which produces value and wealth.

Eric Ries's the Lean Startup contains a great deal of wisdom (and his Build chapter is the best non-code metaphor for test-driven design that I have encountered). In particular, he argues that the only way to figure out what customers are willing to pay for is to try to sell them something. The best predictor of why people will give you money is what people give you money for.

Yet you're here to read about Perl and programming.

My rant a couple of months ago about Why My Side Project Doesn't Use Perl 6 came from deep frustration. After eleven years of development and more than a year after the long-awaited Rakudo Star ("finally ready for early adopters!") release, it was incredibly frustrating to discover that, even as someone with commit access to the entire Rakudo stack from specs to tests to Parrot to NQP to PCT to Rakudo itself, the whole thing just wasn't ready for me to use even in a simple side project.

The biggest users of Perl 6 (via Rakudo) code right now are, as far as I can tell, Rakudo itself (via a stripped-down bootstrapping mechanism), a single blog written in Perl 6, and Rosetta Code chrestomathy examples.

Yet I'm not here to bury Perl 6.

I've spent some time writing JavaScript, and I can't understand why people say there's a nice language in there despite its flaws. (In my mind, it's mostly decent, despite its flaws.) I suppose that's like saying "Wow, Python supports functional programming!" if you learned Java because Scheme was too scary or "Ruby is basically Smalltalk!" if everything you know about object systems comes from realizing that PHP continues to get it wrong.

I've been thinking about how, if Dart is not the language of the future, what the language of the future might be, and what Larry says about evolution versus revolution.

Arc turned out not to be the language of the future either.

Gradual improvement is well and good, and it's starting to serve Perl 5 pretty well. The ability to shed cruft and mistakes and poor designs holding back future improvements (a real metamodel! slimmer memory footprint! JIT! safe exceptions! function signatures! multiple dispatch! safe sandboxing! cheaper and easier parallelism! implementations on other backends!) is crucial.

Feedback makes that happen.

Sure, you can predict that a project which relentlessly focuses on one and only one hot thing (Write a blog in 15 minutes! Manipulate DOM elements on the client! Deploy a dynamic web page just as easily as writing HTML!) will have a success, but real revolutionary success comes from making a new niche, like web development in 1994.

As much as I'd like to argue that you can iterate your way to a revolution through carefully gathering and considering real user feedback, I don't know that that's the case.

I worry that once you release a programming language with minimum viable utility, you ossify early decisions of design and implementation such that you can't quite yank the floorboards out from underneath any users you have, lest they stop being users and stop providing feedback.

Somehow you have to balance the desire to preserve an existing community with the need to attract new members to the community, and that may be the hardest problem in programming language development.

If you read Dan's Parrot post-mortem, you see those two tensions. Parrot's two biggest technical failings from the start were writing the wrong code and keeping it around. The revolution of Pugs wasn't that Haskell is magic or that all Perl needs is the superhuman effort of a superhero (though Audrey certainly gave it her best shot), but that there is no substitute for working code.

Parrot doomed itself for a decade because there was no serious working Perl 6 implementation for years. (There were three half-hearted attempts, each a rewrite of the previous.) Parrot lumbered along implementing things it thought Perl 6 might probably need without getting real working feedback by actually running Perl 6 code. (Dan's been out of Parrot longer than he was in Parrot, and some of that wrong code is still around, because you can't yank the foundation out from under people's houses without some structure and warning and planning.)

Yet I'm not here to praise or bury Parrot either.

If I'm right that normal people adopt languages to get stuff done, you have to take a pragmatic approach. Make something radically better at getting something revolutionary done, and you have a chance.

(PHP is radically better at making simple web pages than just about anything else, even though the language itself is just about the worst thing imaginable. Similarly, Java is radically better than C++ because of garbage collection and a break from the PDP-8 type system of C, and also it was cheaper than Smalltalk, at least until Oracle bolts a coin slot onto the JVM.)

In conclusion, I don't know.

On paper, Perl 6 is a nicer Perl than Perl 5.

In practice, none of its implementations reach the Minimal Viable Utility stage for me, a fairly normal and fairly positive early adopter. I don't know if that's good or bad for Perl 6.

It's good in the sense that it gives Perl 6 freedom to experiment to find its revolution. It increases the technical possibility that it won't fall into the "Wow, that's all Python 3000?" or "Arc? Scheme with some renamed builtins?" or "PHP 5 is the new PHP 6" or "That's all the better they could do with Go?" or "17,000 lines to write 'Hello, world!' in Dart, and they misunderstand OO so badly that they think Java interfaces are a good idea?" trap.

It's bad in the sense that I'm not sure that any of the big three Perl 6 users right now are representative of the kind of revolution Perl 6 needs—plus after eight years of trying to get something useful shipped, it's hard to work up the motivation to donate even more time and effort toward that always nebulous someday.

Sure, Pugs demonstrated that writing tests for the specifications was incredibly valuable. By no means should anyone diminish that.

Sure, Rakudo demonstrated that a serious implementation of Perl 6 on Parrot helped focus both projects.

Yet ultimately, both Parrot 1.0 and Rakudo Star were more fizzle than sizzle, and I think that's because neither one rose to the level where normal people (for however you want to characterize the kind of people you'd consider normal users) could do useful things.

I know, I know it's really difficult to start a revolution, especially when you don't know where it will end up, but very especially when you don't know what it will be. I know you want to give yourself all sorts of escape hatches and valves to help guide your evolution into a revolution....

... but at some point, I have to get work done, and if the shortest distance between here and there is using a lesser tool and if I'm not sacrificing too much future to do it, I have to hold back a tear and wish that Perl 5 had native method signatures or multiple dispatch or malleable control flow with continuations or macros.

(What about Niecza? I really shouldn't have to explain this, but IRC commentary on how I'm a bad person cannot seem to get this right. In mathematical terms, the set of things which are not promissory estoppel contains the opinions of everyone who isn't legal counsel or a director at Microsoft about how it's safe for my business to use Mono. See also Microsoft Sues TomTom. If you live in a country without the threat of software patents, or if your livelihood otherwise can't be threatened by the possibility of a lawsuit against your use of a project in a way a known patent aggressor (with patents on fundamental technologies you use and threatening indemnification agreements with now-defunct licensees) doesn't like, good for you. We take your well-intentioned legal advice under advisement.)

The JFDI Theory of Language Adoption


Perhaps you've seen a proposal to reboot PHP. How telling that its explanation of new features focuses on syntax issues.

I don't know if phpreboot is doomed. I do know that PHP isn't doomed, at least not because it has awful syntax (it does) or baffling feature inconsistencies (it does) or lends itself to awful code (it does).

Consider also JavaScript, a halfway decent language tied for the longest time to implementations which competed to out-awful each other, bound to the horrible DOM, and limited by syntactic inconsistencies, a mish-mash of ideas from Scheme and Self, a strange paucity of built-in data structures, and a scoping implementation so deliberately wrong you might scratch your head and wonder if its designer accidentally stumbled upon the other principles of Scheme and Self.

JavaScript's not doomed, either. (It's not the all-singing, all-dancing crud of the world, and if you liked awful desktop software written in VB and Access, you absolutely must look forward to a world where the only software is web software written in client-side pre-ECMAScript 5 JavaScript, but it's not doomed.)

Programmers and techies have a theory that the best product will win, even though we all know this is a silly theory. To start, we don't all agree on what's best. Some people like mixing code with HTML. Other people have good taste. Some people like semicolons and sigils. Other people like their code to look like a big homogeneous mishmash of run-on sentences. The degree to which this matters as a general explanation for the whole set of potential programming language adoptees is slim, because aesthetics matter much, much less to techies than we want to admit. (How strange that we complain about slick marketers in suits nattering on about presentation, then argue over the placement of braces and the presence or absence of parentheses on method calls.)

The JFDI theory of language adoption is this: the primary concern of a novice adopting a programming language is the distance between zero and just getting something done.

For PHP, that's dumping a new tag in the existing HTML file you already know how to write and upload.

For JavaScript, that's copying and pasting code from View Source into an existing HTML file you already know how to write and upload.

For Perl CGI programs, that's dumping a slightly-better-than-shell script in a directory on the server where you already know how to create an HTML file.

The quality of PHP or JavaScript or any implementation is relevant to the JFDI Theory only in so far as an implementation which blocks this short onramp prevents the initial success. Any other implementation or design problems do not matter.

The semi-obvious corollary to the JFDI Theory is the Nomad Theory of Language Unadoption: the people to whom semicolons and pseudo-DSLs and sigils are Vitally Important Issues and Markers of Good Taste and Reasons Your 20 Year Old Language Family will Soon be Irrelevant will move on en masse to something else in six to eighteen months anyway, so why bother pleasing them anyhow?

In other words, the primary driver of the possibility of large-scale adoption of a new language or an existing language in a new niche is the ease with which regular people can get something done.

That's why projects like ActiveState's PPM repository—much improved over a couple of years ago—and Strawberry Perl and Perlbrew and Plack and cpanminus and hopefully Carton are important to Perl. Continuing to ease the onramp to Perl and the CPAN can help attract and retain new users.

Test::Tutorial needed some attention. I've referred to it from various places in the decade-plus of its existence, but when I mentioned it most recently in the new edition of the Modern Perl book, I looked at it again.

The tutorial holds up pretty well for its age, but it didn't mention done_testing, which is a great solution to the "Should my test file declare a plan?" question. In short, you have three options for figuring out if your test program ran to completion: count all of the tests you expect and predeclare that number, skip it all and hope, or call done_testing at the normal exit point of your program. If done_testing runs, the test run succeeds. If done_testing doesn't run, an exception or other abnormal exit interrupted the test run, and the test run fails.

Along the way, I took the opportunity to clean up a few other things. In particular, I removed the -w flag from the hash-bang lines at the start of the test programs. (A flag enabling global behavior? In a test file? In 2011? More than a decade after the introduction of the perfectly cromulent lexical warnings implementation? DELETE!)

Schwern objected, as is his right, giving four reasons Perl tests should run with -w. In particular, he believes that it is the responsibility of the consumer of a dependency to handle warnings in dependencies.

I can't agree.

Consider the possible cases where a dependency may produce a warning.

package MayWarn;

sub no_warning
{
    my $foo;
    return "<$foo>";
}

use warnings;

sub lexical_warning
{
    my $foo;
    return "<$foo>";
}

no warnings 'uninitialized';

sub disabled_warning
{
    my $foo;
    return "<$foo>";
}

1;

It's pretty clear that interpolating an undefined value into a string will cause a "Use of uninitialized value" warning in two circumstances: where global warnings are enabled and where lexical warnings are enabled.

No Lexical Warnings

Compare calling no_warning with and without -w:

$ perl -Ilib -MMayWarn -e 'MayWarn::no_warning()'

$ perl -Ilib -MMayWarn -w -e 'MayWarn::no_warning()'
Use of uninitialized value $foo in concatenation (.) or string at lib/MayWarn.pm line 6.

In this case, -w affects the behavior of code written without regard for lexical warnings.

Lexical Warnings Enabled

Compare calling lexical_warning with and without the -w flag:

$ perl -Ilib -MMayWarn -e 'MayWarn::lexical_warning()'
Use of uninitialized value $foo in concatenation (.) or string at lib/MayWarn.pm line 14.
$ perl -Ilib -MMayWarn -w -e 'MayWarn::lexical_warning()'
Use of uninitialized value $foo in concatenation (.) or string at lib/MayWarn.pm line 14.

The -w flag is irrelevant to the reporting of warnings, because lexical warnings are always in effect.

Lexical Warnings Enabled, Uninitialized Warnings Disabled

Compare calling disabled_warning with and without the -w flag:

$ perl -Ilib -MMayWarn -e 'MayWarn::disabled_warning()'

$ perl -Ilib -MMayWarn -w -e 'MayWarn::disabled_warning()'

The -w flag is again irrelevant, because lexical behavior overrides global behavior.

-w in Test Suites

I see the point that the rightest thing to do is to track down every CPAN module which isn't warnings-clean and try to convince the authors to Do The Right Thing, but isn't blindly applying global behavior in the hopes of finding bugs and convincing other people to fix them the same argument so many people made against UNIVERSAL::isa? (Just consider how much buggy code is still out there, years into that argument.)

Furthermore, modules which use lexical warnings properly (code written to at least Perl 5.6 standards, a March 2000 level of modernity) receive no benefit from -w. That flag is irrelevant.

Code written to the previous millennium's standards does offer the possibility of benefit from -w applied...

... but I'm not going to be the one arguing that it's sensible for Test::Harness to magically add flags I didn't ask for, and that that obligates other authors to modify their code.

Maybe I'm wrong, and maybe the desire to improve the CPAN trumps my desire to be able to reason about what my code does (and to, you know, use features lexically as I see fit), but it seems to me that the correct place to apply the big hammer of All CPAN Code Should be Free of Warnings, Globally (But Thou Canst Use Lexical Warnings, You Philistine) should be in the CPAN testers, which can test things in and of themselves.

Even so, I can't justify blindly injecting global behavior into what should be protected lexical scopes. Test::Harness doesn't have enough information to decide why a dependency lacks either use warnings; or no warnings;. Perhaps the author doesn't know about the pragma. Perhaps the author doesn't care. Perhaps the code is free of warnings, but the author distributes it without the pragma enabled to save time or memory (yes, that's probably misguided, but it happens). Further, Test::Harness can't know whether any warnings represent actually dubious code or whether they're useless noise. (A handful of experienced, respected, and intelligent Perl 5 programmers have argued that the uninitialized value warning does more harm than good, and sometimes I agree.)

What do you think?

Happy families are all alike; every unhappy family is unhappy in its own way.

— Leo Tolstoy, Anna Karenina

I've noticed that most folks who default to Perl tend to write Perl in a manner largely inconsistent with the next guy's.

eropple, Hacker News discussion of Puppet, Chef, Ruby, Python, and Perl

A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines.... "Ah, so you shall be sure to be misunderstood." — Is it so bad, then, to be misunderstood? Pythagoras was misunderstood, and Socrates, and Jesus, and Luther, and Copernicus, and Galileo, and Newton, and every pure and wise spirit that ever took flesh. To be great is to be misunderstood.

— Ralph Waldo Emerson, Self-Reliance

I suspect, but cannot yet prove, that the internal consistency of a program is an emergent property discovered through active development, maintenance, feedback from real users, and the concomitant reframing of features and refactoring of code.

I believe, from long and often painful experience, that every good program is good in its own way, while bad programs are all alike.

I understand that I will understand my problem domain better as I learn more by solving it in parts. (But I'm happy to borrow a well-tested implementation of the All-Pairs Shortest Path algorithm when it's obvious I have a graph traversal problem.)

I laugh at the people who recoil in mock horror at the use of sigils and variable declarations and the general principle of TIMTOWTDI, but encourage everyone else to monkeypatch global classes to create their own parser-abusing "internal DSLs".

Then again, the dominant criticism of Lisp seems to be "Look at all the parentheses!" and not "Wow, you mean I have to learn the semantics of this series of macros and their possible combinations before I can help maintain this program?"

May we laud our languages and tools and ecosystems for the ways in which they support our discovery and promotion of deep consistencies.


About this Archive

This page is an archive of entries from October 2011 listed from newest to oldest.

September 2011 is the previous archive.

November 2011 is the next archive.

Find recent content on the main index or look in the archives to find all content.
