January 2014 Archives

Fatal Warnings are a Ticking Time Bomb

When Perl 5.6.0 introduced lexical warnings, it gave programmers finer control over what parts of their program would produce warnings and how to handle those warnings. The most important feature is that the warnings pragma has a lexical effect. In other words, rather than enabling warnings for the entire program unilaterally, some files and blocks can request stricter or looser warning categories than others.
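A minimal sketch (the variable names are mine) shows the lexical effect: the file enables warnings globally while one block opts out of a single category:

```perl
#!/usr/bin/env perl
use strict;
use warnings;               # enabled for the whole file...

my $undef;
my $warning_count = 0;
local $SIG{__WARN__} = sub { $warning_count++ };

{
    # ...but this block lexically opts out of one category
    no warnings 'uninitialized';
    my $quiet = "value: $undef";    # no warning here
}

my $noisy = "value: $undef";        # warns: Use of uninitialized value

print "warnings seen: $warning_count\n";
```

Only the interpolation outside the block triggers a warning, so this prints warnings seen: 1.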

Whole-Program Robustness is Not a Library Concern

This is especially important for Perl applications which are often a combination of custom code and CPAN modules. Even though the CPAN is free and open source software and even though many of the newer CPAN distributions have public source control repositories to which you can submit patches and feature requests, there's often a thick line between "code we maintain for this project" and "code someone else maintains". Lexical warnings mean that the maintainers of CPAN distributions can choose their own warning strategies and your team can choose your own warning strategy and those strategies don't have to be the same.

Keep that separation of concerns in mind for a moment.

Blurring the Lines Between Warnings and Exceptions

The warnings pragma also included a feature which can promote any warnings caused in a lexical scope into exceptions. If, for the purpose of security or clarity or maintainability or coding standards, your team decides that uninitialized value warnings are so severe that they should be exceptional conditions, you can ask the warnings pragma to promote that warning to an exception within a lexical scope. This is a powerful feature that you should use with caution; unless you're prepared to catch those exceptions and deal with them (or not catch them and deal with the consequences), those exceptions will cause your program to terminate. That may be what you want, but if you've asked for it explicitly, Perl believes you know what you're doing.
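A sketch of promoting a single category (the choice of category here is arbitrary); note that the result is an ordinary exception which eval can catch:

```perl
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'uninitialized';   # this one category now throws

my $undef;

# the interpolation below would normally warn; now it dies instead
my $result = eval { "value: $undef" };

print defined $result ? "no exception\n" : "caught: $@";
```

Every other warning category still warns as usual; only uninitialized-value warnings become exceptions in this lexical scope.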

Perl also gives you the option to promote all warnings in every warning category to an exception with use warnings FATAL => 'all';. If enabling fatal warnings for one warning category is serious, promoting all warnings to exceptions is grave.

Promoting Library Warnings to Exceptions

The warnings pragma also added an interesting feature by which modules can register their own categories of warnings. Perl will treat them as lexical warnings just as if they were core warnings. In other words, a module which chooses to register its own warning category will be able to emit warnings which respect a use warnings; or no warnings; in caller scopes.

This is a wonderful feature because it respects the separation of concerns. It's not up to a library to dictate what the users of that library consider warnable conditions. The users of that library can elect to accept warnings or disable them as they see fit.

Warnings registered with warnings also fall under the purview of the fatal warnings promotion code. If you've enabled fatal warnings and if any of the modules you use within that lexical scope emit a warning, that module's warning will become an exception.
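The core warnings::register pragma provides all of this. A sketch (the package name and message are invented) shows three callers: one disabling the category, one enabling it, and one promoting it to an exception:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

package My::Module;
use warnings::register;     # registers a 'My::Module' warning category

sub risky {
    # warns only if the *caller's* lexical scope enables the category
    warnings::warnif('this looks dubious');
    return 42;
}

package main;

my $warning_count = 0;
local $SIG{__WARN__} = sub { $warning_count++ };

{
    no warnings 'My::Module';           # this caller opts out: silent
    My::Module::risky();
}

{
    use warnings 'My::Module';          # this caller opts in: warns
    My::Module::risky();
}

my $survived = eval {
    use warnings FATAL => 'My::Module'; # this caller gets an exception
    My::Module::risky();
    1;
};

print "warnings: $warning_count; fatal: ", $survived ? 'no' : 'yes', "\n";
```

The library code never changes; each caller decides, lexically, what happens.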

Fatal Warnings are Not Future Proof

If you have the mindset for reading awkward bug reports, Perl RT #121085 demonstrates one serious danger of unilateral fatal warnings. Because you've asked Perl to treat all warnings as fatal, any warning you get will be fatal.

Because Perl, by default, does not treat warnings as fatal (because Perl uses the word "exception" to mean "exception" and "warning" to mean "warning", thus distinguishing between the two as a matter of syntax and semantics), it's possible that a newer major release of Perl may emit more warnings than an older major release of Perl. In fact, that happens. As Perl evolves and improves, as Perl users discover more interesting and useful interactions with the language, and as bugs get fixed, conditions that were previously silent become worth warning about.

If you've written use warnings FATAL => 'all';, you've accepted the responsibility for checking to see if a newer release of Perl emits newer warnings. Even if you didn't know you had that responsibility, you should have known. That's what you asked for. Your program threw an exception and you didn't catch it. That's your responsibility.

Many people know that. That fact is rarely controversial. If you read the RT #121085 bug report, you'll see the argument that p5p should be extra careful adding new warnings because some people choose to treat them as exceptions, and so they're effectively exceptions, and p5p is causing previously valid programs to break—but that's a silly argument.

Perl developers brag rightly about Perl's commitment to backwards compatibility, but an understated corollary to backwards compatibility is forwards compatibility. It's your responsibility as a developer to write code that's compatible with future releases of Perl (within reason) or to face the consequences. If you write code that is likely to break when upgrading to a new version of Perl, that's your fault. p5p will do its best to ease the transition (that's why deprecations are multi-year processes which begin with warnings and eventually become exceptions), but you have a responsibility too.

That fact is a little bit more controversial, but it's pretty well established on p5p. It's not at all well established on the CPAN, which is where the problem gets a lot more serious and subtle.

Remember how modules can register their own lexical warnings? Just as Perl may add new warnings in major releases, modules can add new warnings—except that CPAN modules can add new warnings whenever they see fit, because there are no normative community standards about when it's acceptable to add new warnings, how often releases will happen, how long the support period for releases is, and any commitment to backwards compatibility or deprecation.

If you've enabled fatal warnings, you've asked for any warnings in any CPAN module that you didn't write and you don't maintain to become exceptions which may terminate your program. The risk isn't "We upgrade our core Perl once a year and read the list of changes then run our entire test suite and only then deploy a new version". The risk is "We've updated CPAN modules and now have to audit all of them for new warnings."

That's a bigger risk—but it gets worse.

Fatal Warnings Do Not Belong in Module Code

What if a CPAN module you use but do not maintain uses fatal warnings? Any change to its dependencies which throws a warning the module did not account for may now throw an exception. Because it's in code you do not maintain, it's time to break out the debugger to figure out what went wrong, then fire off a patch and hope the maintainer releases a new version. In the meantime, your code is broken because a CPAN maintainer did not think about the risk of promoting warnings to exceptions for circumstances which are outside of his or her control.
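The failure mode needs no real CPAN module to demonstrate (the package and function names here are invented, and the package-block syntax needs Perl 5.14 or newer): a library compiled under fatal warnings throws the moment a dependency hands it an undef it never used to see:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

package Some::CPAN::Module {
    use warnings FATAL => 'all';    # the risky choice, in library code

    sub describe {
        my ($value) = @_;
        # suppose a dependency always supplied a defined value here,
        # and a new release legitimately starts returning undef...
        return "result: $value";    # ...this now throws, not warns
    }
}

package main;

my $ok = eval { Some::CPAN::Module::describe(undef); 1 };
print $ok ? "survived\n" : "died in library code: $@";
```

Nothing about the library's own code changed; its fatal-warnings policy turned someone else's new warning into a crash.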

"Wait," you say. "Which CPAN author would be silly enough to write use warnings FATAL => 'all'; in library code? That's awful, and I wouldn't use that code."

The problem is that Moo enables strictures by default in any code that simply writes use Moo;, and strictures enables fatal warnings in any code which uses it. Writing use Moo; therefore enables fatal warnings in the enclosing lexical scope.

In other words, any CPAN module which uses Moo is, by default, vulnerable to a change in any of its dependencies which may legitimately produce new warnings. By "vulnerable" I mean "your program may start crashing in library code you didn't write and do not maintain and, by the way, may be in a dependency several levels deep that you didn't even know uses Moo."

One solution is to use Moo::Lax instead of Moo in code you write for the CPAN. That doesn't fix existing code, but it doesn't make the problem worse.

I suppose the well-intentioned people who wrote strictures could argue "You should be submitting patches against CPAN modules anyhow!", but I have trouble reconciling that with their justification for enabling fatal warnings, which seems to be "Novices should take exceptions seriously!" The argument then becomes "If new Perl programmers aren't willing to learn enough about Perl to submit upstream patches when our code crashes their programs, there are many other languages they can use with library ecosystems which aren't hostile to new developers", and that's a silly argument to make too. (Then again, strictures has the misfeature, documented in RT #79504, that its behavior changes if you run code in or slightly under a directory managed by a VCS, even if you're not a Perl developer, even if it's not a Perl project, even if the program you're running has nothing to do with that directory, and even if strictures is a dependency of library code several levels deep.)

In Other Words, strictures is Broken?

I suppose you could claim that it's Moo that's broken, for inflicting the silliness that is the current behavior of strictures on the CPAN and all downstream code that's unfortunate enough to somehow, somewhere use code that depends on Moo, but the real problem is that use warnings FATAL => 'all'; does not belong in library code that you're handing other people to use. It's just too risky.

Perl's Special Named Code Blocks

If you read perlmod's "BEGIN, UNITCHECK, CHECK, INIT, and END", you'll learn that Perl has several special named code blocks that let you run arbitrary code at various points in the compilation and execution process.

David Mertens wrote about this recently in Lexical closures with END blocks warn, but Just Work. His post explains a couple of features of these blocks but doesn't tell the entire story.

These code blocks are code blocks in the same way that function definitions are code blocks. They exist in lexical environments. They provide their own lexical environments. They don't get stored in the symbol table the way you might expect a normal function to be, but their lexical bindings get created and fixed up in the same way as for named functions. In fact, you can write them just as easily either way:

sub BEGIN { say 'In first BEGIN'  }
BEGIN     { say 'In second BEGIN' }

The declaration is the same either way. That ought to make the behavior David wrote about more clear; the error message he saw is the same as the one you get when you declare a function within another function. Yes, the inner function will close over the lexical scope of the first function, but they won't share inner lexicals. If you've done much work with closures in Perl, that ought to make sense to you.
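A sketch of the Just Works half of David's observation: an END block closes over file-scoped lexicals exactly as a named function would, so at process exit it sees the variable's final value:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use feature 'say';

my $status = 'never ran';

# END closes over $status like any named sub declared in this scope
END { say "at exit, status is: $status" }

$status = 'finished normally';
say 'main body done';
```

This prints main body done, then at exit, status is: finished normally.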

You might find this surprising though:

#!/usr/bin/env perl

use Modern::Perl;

exit main( @ARGV );

sub main {
    my $foo = Foo->new;
    $foo->greet;    # no such method; Perl dispatches to AUTOLOAD

    return 0;
}

package Foo;

sub new { bless {}, $_[0] }

AUTOLOAD {
    our $AUTOLOAD;
    say "Hit autoload for $AUTOLOAD";
}

DESTROY { say 'Destroying!' }

Perl's tokenizer (toke.c in the Perl source code tree) has special cases for the five special code blocks as well as AUTOLOAD and DESTROY. This allows you to provide or elide the leading sub from the declaration of any of the five code blocks, the AUTOLOAD function, or the DESTROY method—though for the sake of clarity, you should probably use sub on the latter two and not on the former five.

(You may be able to make a philosophical argument for treating these seven code blocks the same way, because they never get called directly with arguments; in most normal code, you expect Perl's runtime to call them implicitly. That falls apart only a little bit because a subclass's DESTROY might want to redispatch upward, but Moose's DEMOLISH strategy solves that anyhow.)

I've worked with Ovid before on a couple of projects. We've consulted for several companies before, so we've seen companies succeed and companies struggle. If you asked him for tales of bizarre project management failure, he and I would eventually get around to telling the same story.

When he wrote ditching a language, I'm sure that his experiences on one of our shared projects came to mind.

In the decade-plus we've worked together off and on, I've come to see the business reasons for making project decisions that wouldn't necessarily make sense from the purely technical viewpoint. (I also want to write great software and hate deadlines and like pure abstractions, so I see how my tendencies would frustrate business users.) You have to read Ovid's post with one question in mind: How can I identify and minimize the risk to my business if things go wrong?

Good programmers are paranoid. They catch exceptions. They check the return values of system calls. They vet and validate user input. They make their assumptions explicit and clear, and usually this serves them well when things go wrong. (Things tend to go wrong less often with good programmers.)

Good programmers should be able to understand business risks the way they do technical risks. Similarly, good businesspeople should be able to understand technical risks. Sometimes we have to put these technical/business risks in terms businesspeople will understand:

Before you approve a full-scale rewrite of your software, ask yourself if it will destroy your business.

You have to make one assumption first. If you don't believe this, there's no hope for you. You have to assume that software costs time and money to get right. If you think software is free—if you think that programming is merely transcribing a feature request described vaguely by a user—then you have no business running a project and you've already failed.

If, however, you can acknowledge that getting things right costs time and money and requires talented people who know what they're doing, then you can ask yourself several questions to figure out if rewriting software from the beginning will destroy your business:

  • Why do you want to rewrite the software?
  • Does it have any users?
  • Is it making any money?
  • Are you going to tell the users?
  • If you told them, would they go away? (If you're not going to tell them, what are they going to do when they find out?)
  • Why are you rewriting the code?
  • If it's a mess, what makes you think it'll be any better this time?
  • No, seriously. If it's a mess, what have you changed in your development process to make it better this time?
  • Do you know what the existing software does? Do you have a list of features?
  • Do you know which features are used? Do you know who uses them? Do you know how often they're used? Do you know which customers would leave if those features disappeared?
  • Do you have a semi-objective measurement of how much duplication the existing software has?
  • Do you have a semi-objective estimate of how large the replacement system will be?
  • Do you have test cases for the features the new system needs to implement from the old system? How comprehensive are these test cases? Are any of them executable?
  • Will the existing developers be performing the rewrite? If so, what makes you think they'll do any better this next time? (What has changed to help them do a better job?)
  • If a new team will be performing the rewrite, will they have access to the original developers? (If not, are you in a position to resign? If you don't know why you should resign, how do you even have a job at all? If you think they will have access, but you're laying off the old team, do you really think the old team will help you out without charging an arm and a leg?)
  • Will the developers have access to customers, or at least to a project manager who understands the necessary features?
  • What's the timeline for the rewrite? Do you have an estimate based on the complexity of the project and the features? (It doesn't count if you say "Well we can probably shrink the SLOC by an order of magnitude, and it took two years to write, so probably two to three months.")
  • How are you going to deploy the new version?
  • Is there a flag day at which everyone will switch over?
  • What if you find a missing feature?
  • What if the new version has a bug?
  • If you're hiring a new team, do they have experience with businesses like yours? Do they know customers like yours? Have they written software like this before?
  • Do you have a plan for the new software missing its target date? Have you prepared to talk to customers about that? (Have you even notified them?)
  • If you have customer support staff, have you prepared them for the switchover? Have you set staffing levels appropriately?
  • Do you need to migrate data? How are you going to do that?
  • If you need to run the two systems in parallel for a while, which system is the primary repository of the data? How are you going to synchronize it between the two systems?
  • Are you certain that you have to throw everything away and start over? Is there a grown-up somewhere in your organization with software experience who says the same thing? That there's absolutely no other option? Maybe someone who's read Working Effectively with Legacy Code?

There may be other questions, but answering these questions in detail would have helped every rewrite I've seen. There are two special cases, but the same questions apply.

What If the Code is Completely Unmaintainable?

Why is it unmaintainable? What circumstances produced that situation? Until and unless you identify and address those, you're going to spend a lot of money and time getting yourself back into the same situation. It's wasteful to throw out a working mess only to replace it with a different kind of mess.

What If I Can't Hire Developers in the Original Language?

Then you're probably not paying them enough and either your business model is broken or it will be. (Alternately, hire good developers and train them, because you're going to have to train them in your business anyway. Sure, it takes some perks to convince people to stick around to maintain a crufty codebase, but if you make it a real, actual priority to clean things up in place, you can find good people who specialize in that.)

Sometimes the right solution is to throw away everything and start over, but usually when you have customers and competitors and are making real money for your business from software, the risks of things not going 100% right are real and important.

(It's software. It's complicated. It never goes 100% right. That's why you're in this mess. Let that sink in for a while.)

Secrets of cpanm cpanfile Support

On a previous client's project, I set up Carton to manage dependencies. This worked out really well, but I came to realize that most of the benefit on that project was using a single cpanfile to list all dependencies.

(Allison wrote a tiny Makefile to manage something else on the project, so it made a lot of sense to add make targets to "update all dependencies" or "use Sqitch to deploy the current version of the database".)

Carton tends to work best if you want to bundle everything into a single directory you can either deploy as is or check into source control. It reminds me of Java's Maven, but without the awful craziness. That's way more than I need for my current small project, where I want a couple of really simple things:

  • Manage database migrations with a single command
  • Manage module installations with a single command

Sqitch gives me the former. Carton's more than I need for the latter, but cpanfile support is great. Fortunately, I have control over the deployment environment. First I set up perlbrew with Perl 5.18.2. I installed Perl locally so that the deploying user account on the server has write access to module installation directories. Then I found the special cpanm incantation to install dependencies from a cpanfile. Here's the Makefile target:

        cpanm --cpanfile    ${PWD}/config/cpan/cpanfile \
              --installdeps ${PWD}/config/cpan

That's it. Keep in mind three things. One, you must use both the --cpanfile and --installdeps options; you can't get by without --installdeps. Two, you need a recent version of cpanminus; I updated to 1.7001 and that works fine. Three, the paths you provide to both arguments must be absolute. I struggled for quite a while until I realized that last requirement.
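For reference, the cpanfile itself is a tiny Perl DSL; this sketch uses invented module names and versions:

```perl
# config/cpan/cpanfile
requires 'Plack', '1.0030';     # minimum version constraint
requires 'DBIx::Class';         # any version

on test => sub {
    requires 'Test::More', '0.98';
};
```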

This is a small project, but even this little bit of automation has been incredibly helpful. I'll have no trouble keeping my code and database synchronized between my development machines and my production server.

A couple of years ago, I wrote Expression Visions for Perl 5 and Vision and the Perl 5 Ecosystem. I've changed my mind a little bit since then.

What do you use Perl for? It does a little bit of everything for me:

  • text processing
  • system administration
  • dynamic web sites
  • batch processing
  • database automation
  • graphic generation
  • statistical programming
  • textual analysis
  • report generation

I've even written a game or two in Perl. All of this is possible because Perl is a general purpose language. Much of it is practical because the CPAN hosts plenty of working, well tested, easy to install, freely usable and redistributable code. As Matt Trout is fond of quoting Audrey Tang: Perl is my VM and CPAN is my language. (Thanks to things like Moops and Moose::Exporter, this is doubly true.)

Sometimes you hear people talk about the original intent of a language to prove (or disprove) something or other—usually to prove that you're a fool for using a specific language. For example, you might well hear that programming Perl for modern web sites is silly because "it was designed for system administration and later CGI", or that "Go is a great language because it was designed for systems programming". (Then again, PHP was designed as a templating language because TT2 didn't exist, and you don't see people writing microframeworks for TT2.)

A vision should guide the design and maintenance of a project or a programming language, but that vision should not constrain the uses of the language. For example, Java used to be Oak, a project to create a small language for set top boxes and not the language used by Sun to sell memory modules for Solaris boxes and the language currently used by Oracle to sell expensive licenses for everything from phones to database servers. Does that make Java the wrong language for Android programming? Does Guido's desire to write a better beginner language than ABC (one with library support) mean that Python can't be a decent general purpose programming language?

Vision can be a dangerous thing, too. Setting out to write the next great programming language for the next 20 years can doom you to perpetual rewrites, cause community schisms (D might qualify, and Python 2 versus Python 3 currently does), or dim the hopes of widespread adoption if your language fails to impress with its changes (Arc). That vision in and of itself is too vague: you don't know what challenges the next twenty years will bring.

(You can predict pretty well that finding a way to handle concurrency and parallelism is important, but you have to keep in mind that they're two very different things. If you need to use as much computing power as possible, your language probably needs really careful control over memory, to the point of knowing which pages are available to which processors and what the cache implications are. If, on the other hand, you need to avoid blocking as much as possible, you probably need a system of isolated work units like you might find in the Erlang VM, plus high-level language constructs to mark work units and their synchronization points without necessarily worrying about assembly-level composition semantics.)

Being all things to all people is difficult, because without direct feedback from real users, you can only guess at the problems they're trying to solve. Worse, you'll probably default to the problems you're trying to solve. (I'd make a lousy pumpking because I can easily afford to upgrade my production software to remove smartmatch, for example, so the cost of deprecation is low for me.)

Yet you also want to allow your language to evolve to meet the needs of other users doing things you never anticipated. That's why I've come around to the idea that Perl is best suited as a solid core which makes using the CPAN easy and experimenting with new features on the CPAN easy. This leaves room for adding things like the p5-mop to the core while not requiring any one specific object system syntax.

In one sense, my evolving vision for Perl is a lot smaller. I'd like less XS and a faster runtime and less memory use, but from the language design, there's not much missing. (The aforementioned MOP and function signatures come to mind, but they're on the way.) The rest can happen on the CPAN—and that's fine by me.


About this Archive

This page is an archive of entries from January 2014 listed from newest to oldest.
