July 2012 Archives

Investing in Infrastructure


Software bitrots.

Yes, it's a metaphor. Hopefully the little pieces of rust and semiconductors that hold our software don't actually rot (at least before we have backups), but change happens to our software like change happens to our physical artifacts, and we have to account for that change.

You probably know that you can get the source code for unsupported versions of Perl 5—say 5.004 or 5.6.0. You may not know that the software as released may not build on modern computers with modern toolchains. (Patches and workarounds exist.)

Although we'd like to pretend that software is immune to physical laws (except for those well understood properties of memory latency and speed-of-light issues, and sometimes those cosmic rays which flip single bits of memory and cause bizarre once-in-a-lifetime errors we couldn't have coded ourselves) and that, once deployed, software will never need maintenance to keep running, things happen. A disk in your RAID needs replacement. The fan in your server fails. You must physically move a computer between locations. Someone trips over the power cord to the cooling system.

Expecting you can deploy software once to a fixed, frozen configuration that will never, ever change is risky and naïve.

(You're welcome to take as many risks as you like. Be aware of them.)

Clever people have devised many strategies to mitigate these risks. They all have flaws. (Even virtualization has its problems: you must continue to demonstrate that your virtualization platform exhibits the same behaviors you expect—and that your provider doesn't scuttle the project.)

The most reasonable strategy I've seen is to acknowledge this risk.

I upgrade to new stable releases of Perl 5 soon after each release, because each stable release comes with a two-year window of dedicated support. Within that window, I can test upgrades to newer releases, report bugs, and perform any migrations my software needs.

The newly predictable Perl 5 release schedule helps greatly.

Similarly, I pay attention to CPAN Testers because my projects build on several layers. If Perl 5 is the core infrastructure, CPAN is the plumbing, and it's up to me and my teams to lay out the house and choose the fixtures.

Enough about architecture and construction. Let's talk finance. Specifically, let's talk about how businesses grow. Apart from Silly Con Valley startup nonsense, a business is valuable because it brings in cash. A business is more valuable in the future because it will bring in more cash in the future. You can figure out the value of a business now by projecting how much real cash it will generate over its lifetime.

(I'll talk about that more in other venues; I don't have the announcement quite ready yet.)

A business can make itself more valuable in a couple of ways: selling more products or services (bringing in more money), cutting costs (keeping more of the money it brings in), or performing some weird financial manipulations (this is how grocery stores and Amazon.com work, as they lose money on many sales but more than make up for it with volume—and if you think that's an impossible non sequitur, you're not cut out for the financial industry).

A company has a couple of choices with what to do with the free cash it generates. It can pay that out to owners of the company in dividends. It can buy back its own stock to make each share more valuable (if it's a public company). It can invest in the business to make it more efficient or able to sell more (hire more workers, build a new factory, migrate to a faster network, whatever). Sometimes it must invest in the business just to stay in business (a semiconductor manufacturer builds a new fab to produce its new line of chips or an automaker retools a factory for a new car).

That last point is ferociously important.

A business can only survive if it brings in more money than it has to spend to make that money (or as long as someone's willing to pour money into that business until it reaches that point). If it costs you a hundred dollars right now to prop up a lemonade stand that will bring in only ninety dollars this summer, you're better off setting a $5 bill on fire and shuttering the lemonade stand. A business that isn't growing is suddenly worth less than it was before—otherwise you're far better off putting your money in something safer that is growing.

Here's the connection to software.

Ideally, our software gets easier to write over time. If we do our jobs well, designing the software to meet real needs and finding and fixing bugs and discovering the natural patterns which emerge from our code, maintaining our software should get cheaper over time.

... unless our infrastructure changes beneath us in unpleasant or undesirable ways. If Perl 5.18 gets a little faster and uses a little less memory and upgrading is effectively free (and Perl 5.16 worked out very well for me in that respect), it's all to the good. If Python 2.7 worked well for me but Python 3.3 requires me to switch from a mature and important library to something missing an important feature, I'm worse off.

If I relied on the Blub library and it suddenly went unmaintained and started to bitrot, I'd be in real trouble.

We who rely on an amazing ecosystem of free and open source software have an embarrassment of riches from which to select, but we also have an obligation. That ecosystem only exists because of investments of time and talent and interest and effort. Some of that comes from other businesses. Some of that comes from individuals. All of that together allows us to build amazing things we'd never be able to achieve on our own individually.

Yet we're more like the poor semiconductor companies and auto manufacturers, and even the municipalities that maintain public roads and utility systems: we have to invest in keeping that infrastructure going, because software marches on and things that don't keep up bitrot.

You, of course, get to manage your own risks. Consider, however, what would happen if something you care about went away for good.

That's why my business contributes to the ecosystem. Sometimes the short-term cost is a little higher than merely taking from the commons and never giving back, but over the long term I really believe we'll all come out ahead.

The singular trick of any extensible programming language is designing the capability for people to do arbitrary things the language designer never thought of, without tempting them into the Turing tarpit. (I know how to use continuation-passing style in assembly and how to write any loop in terms of compare and jump. Ask me if I ever want to do that again.)

Some languages do this better than others. Lisp's answer is "All code looks basically the same and code operates on code, so get used to manipulating trees." Forth's answer is "Push and pop are complements. Get to work." Tcl's answer is "Everything is a string, and we're happy to parse all the time." Smalltalk's answer is "Even conditionals are passed messages." Java's answer is "Look! The delegation pattern!"

Ruby's answer is "Every part of speech—including pronouns, articles, and gerunds if you're really cool—is a chained method call. Also, have you heard of blocks?"

I make no secret that I consider Moose a transformative technology in modern Perl. (The Modern Perl book even explains object oriented Perl using Moose before mentioning that you can bless references if you really want to.) Moose is really two and a half things:

  • A set of much better defaults for an object system, encouraging you to do the right thing by default
  • A well designed metaobject protocol with great opportunity for core flexibility and extended customization
  • A syntactic pattern to enable the first two items

You can think of this design as three layers, and that's a fair model. In practice, code written with Moose looks something like:

package MyApp::App::Role::HasRS;
# ABSTRACT: App role to represent ResultSets used in other App roles

use Modern::Perl;
use Moose::Role;

requires 'schema';

has [qw( update_rs analyze_rs )], is => 'ro', lazy_build => 1;

sub _build_update_rs  { shift->schema->resultset( 'Stock' ) }
sub _build_analyze_rs { shift->schema->resultset( 'Stock' ) }

around 'update_rs'  => \&reset_rs;
around 'analyze_rs' => \&reset_rs;

# reset_rs (elided here) wraps both accessors

1;

... where requires() declares a method the consuming class must provide, has() introduces attributes, and around() wraps existing methods.

Moose novices often get confused by the punctuation of this code. Moose and Moose::Role export has(), requires(), around(), and a few other helper functions. These functions take a name followed by a list of key/value pairs which influence what happens.

That's it. That's the magic. The syntax you see is merely convention which takes advantage of a couple of pieces of core Perl syntax. There's nothing special or magical going on here.
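To see how little machinery is involved, consider a pure-Perl sketch. The has() below is a hypothetical stand-in, not Moose's, but it receives its arguments the same way:

```perl
use strict;
use warnings;

# Hypothetical stand-in for a Moose-style keyword: a plain function
# taking a name followed by key/value pairs.
sub has {
    my ($name, %options) = @_;
    return { name => $name, %options };
}

# The fat comma autoquotes the identifier on its left, so these two
# calls pass exactly the same flat list of strings:
my $attr_fancy = has update_rs => ( is => 'ro', lazy_build => 1 );
my $attr_plain = has( 'update_rs', 'is', 'ro', 'lazy_build', 1 );
```

Both calls produce identical data; the declarative look is nothing more than convention layered on ordinary function calls.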

Okay, that's not true. There is magic, but it's not in the syntax. I've recently started to use HTML::FormHandler to great effect. HTML::FormHandler::Moose takes the Moose syntax one step further:

package WineCatalog::Form::Wine;
# ABSTRACT: the wine editing form

use Modern::Perl;
use HTML::FormHandler::Moose;
use namespace::autoclean;

extends 'WineCatalog::Form::Base';

my $start_year   = 1900;
my $current_year = (localtime)[5] + 1900;    # (localtime)[5] is years since 1900

has '+item_class', default => 'Wine';

has_field 'link', type => 'Text', label => 'link to the wine on your site';

has_field 'year', type           => 'Select',
                  label          => 'Year',
                  required       => 1,
                  options_method => \&year_options,
                  empty_select   => '-- Select a Year --';

has_field 'image';
has_field 'description', type     => 'Text',
                         label    => 'The long description of the wine',
                         required => 1;

has_field 'winetype', type         => 'Select',
                      required     => 1,
                      empty_select => '-- Select a Type --',
                      label        => 'Wine type',
                      messages     => {
                          select_invalid_value => 'Invalid wine type selected',
                      };

has_field 'submit', type => 'Submit', value => 'Add Wine!';

sub form_name { 'wine_edit_form' }

sub year_options {
    map { { value => $_, label => $_ } }
        reverse $start_year .. $current_year;
}

1;


The result is a Moose object which allows you to be very specific about what happens with each field of the form: a field renders as an HTML select element, or accepts only a certain type or range of values, or has a specific initial value, or something else. You denote this with a mostly declarative syntax which should look very familiar to Moose users.

Mostly declarative?

Here's the drawback. has() and has_field() are function calls; Perl knows nothing special about them. The attributes passed to these calls are just values in a flat list. If you're a Perl programmer, you pass lists of values to functions all day.

This shouldn't work as well as it does.

What happens if you mistype the name of a key in that big bag of arbitrary metadata pairs? Hopefully something somewhere validates it when the program starts. What happens if you accidentally duplicate a key? That depends on how the pair processor processes those pairs. What happens if you leave off a key or a value and you no longer have a list of pairs? Again, that depends on the implementation.
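Core Perl will demonstrate those hazards for you. When a flat list collapses into a hash, a duplicated key silently wins and an odd-length list earns only a warning:

```perl
use strict;
use warnings;

# Duplicate keys: the last occurrence silently overwrites the first.
my %dupes = ( is => 'ro', is => 'rw' );
# %dupes now holds only ( is => 'rw' )

# An odd-length list still becomes a hash; the orphan key pairs with undef.
# Under 'use warnings' this emits "Odd number of elements in hash assignment".
my %odd;
{
    no warnings 'misc';    # silence that warning for this demonstration
    %odd = ( type => 'Select', 'required' );
}
# $odd{type} is 'Select'; $odd{required} is undef
```

Nothing stops the mistake at compile time; whatever validation happens is up to the pair-processing code.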

If you're extending Moose or something built on Moose, how do you register that you support a couple of new attributes? How do you register that you require the presence of a couple of attributes? How do you do so without conflicting with attributes of the same name that another extension uses?

If you're reading the code, how do you know which attributes are available? How do you know what they mean? How do you know if they're being used correctly?

If you're using an IDE or something with syntax highlighting, how do you know which attributes are available? How do you know which are correct? How do you know the difference between an arbitrary list of key/value pairs that's present only because it's expedient and one that's custom as in the case of Moose? (You can hardcode the popular ones, but that doesn't scale.)

If you're using an optimizer or a code scanner which can analyze your code and suggest improvements or perform automated refactorings or look for error conditions, where do you even start?

Again, this works well in practice. It works very well almost all of the time (Moose's proclivity to verbose stack traces notwithstanding). It probably shouldn't work as well as it does, but it does, and that's good.

The opportunity exists, however, to consider ways to improve this syntax even further. Perhaps allowing a file of parser hints would work. Perhaps finding ways to extend syntax or to perform partial precompilation would help. I'm not sure.

The fact that this technique works as well as it does is a testament to several core features of Perl 5 which deserve respect: the arbitrary arity of @_, the importing of new symbols, and the autoquoting of the fat comma's left operand. That's one way Perl 5 has stayed fresh into its second decade (and soon its third).

The anecdote goes that, during a meeting of Perl 5 Porters at The Perl Conference in 2000, Jon Orwant seized a moment to start throwing coffee mugs at a wall. (Jon leaves little to chance.) With the dot-com bubble bursting, fierce infighting on the p5p mailing list, and a language with lots of possibility but no galvanizing principle, Perl seemed in danger of stagnation.

Most people reading this know the punchline: a dramatic rethinking of what Perl is and could be, coupled with a backwards-incompatible rewrite of the internals, intended to come out sometime around the end of 2002. (Time will tell if a usable P6 ever appears.)

Legend has it that the announcement of "the next major version of the Perl programming language" was as much about revitalizing the Perl community itself as it was about any technical achievement. Certainly Larry's desire to see a kinder, gentler, less fractious community has taken quite some time—but things have improved.

From the technical side, I could list improvements in release process, testing, documentation, usability, performance, library availability, and syntax, but that approaches the question from the wrong angle.

The most important question in July 2000 about Perl's future is still the most important question in July 2012 about Perl's future. That question is this:

Is it reasonable to write new code in Perl right now?

The answer depends on the domain and your preferences and the dynamics of your coauthors and many other factors. You could list a dozen, and they might match the dozen I could list. The specific answer and the reasoning behind that answer matters much less than the gestalt of that answer. Who is answering that question? Why?

To the extent that technical concerns (stability of the language, future maintenance possibilities, library availability, presence or absence of bugs) exist, the Perl community knows how to manage those concerns. We've done a credible job in many areas, and we've even blazed new trails in others. (We're still a tribe of syncretic polyglots though: for every mod_perl which gives way to Plack, a PPI spawns a Perl::Critic and an ithreads is better than a GIL but otherwise somewhat unspectacular. For every Moose, someone says "Don't Tcl and Lua all require you to do something similar?")

To the extent that human factors (availability of developers, credibility of developers, cost of developers, and whether that fixie-riding hipster would deign to give up his precious for a language with a working lexical model and an actual library system) exist, we haven't solved the problem.

The problem is, as usual, one of communication and visibility.

It's not that Perl is or isn't a credible language for writing important applications. (It is, and it's an order of magnitude better in 2012 than it was in 2000 as I see it.) It's that the perception of Perl is more important to people outside the Perl community than any technical factors.

Certainly the inability of Rakudo's developers to release anything that much of anyone cares about has hurt Perl—although deconstructing the Osborne myth is interesting. It's not the only reason Perl appears to have stagnated, but splitting the community and offering a hazy potential future for more than a decade didn't help, to put it lightly.

Perl circa 2000 struggled with the success of Perl 4, the ubiquity of Perl on Unix systems, and the CGI model of writing web applications. This meant a lot of people had seen a lot of awful chewing-gum-and-baling-wire monolithic code written and barely maintained. The good news is that a decade of focusing on things like comprehensive testing, coding standards, infrastructure, and a relentless sense of improvement to design and implementation have given people who know what they're doing a fine set of master tools to craft beautiful things.

(That's one reason I like the term "Modern Perl" and others like "Enlightened Perl" or talk about "The Perl Renaissance". If I wrote the same code in 2012 that I wrote in 2000, I'd be a terrible programmer.)

The better news is that even though a novice likely won't understand what a parametric role is let alone when to use it or why, he or she can still get stuff done with minimal ceremony and no fuss. (Sure, the resulting code will be awful, but at least one aspect of the programming world can be honest about novice code: it's awful. It's never not going to be awful, even if you teach them SML or Eiffel or Ada or Python. People who don't know how to program well won't program well. Embrace the tautology.)

If I were handed Moose, Catalyst, DBIx::Class, HTML::FormHandler, Dist::Zilla, Plack, and a few other tools in 2000, I would have deleted thousands of lines of code. Immediately. We have amazing technical features. We have a vibrant community. (Look at attendance rates at YAPC and growth rates of CPAN. We still have a growing community.)

These are all good things, but the coffee mug question and the answers are still relevant and important and vital.

(Note carefully: the question says nothing about "maintaining existing code". That concern is far different and the answer from a business perspective necessarily must consider things like opportunity and sunk costs.)

The answer isn't obvious. The answer isn't easy. (The answer isn't "Can someone throw some rounded corners and drop shadows and Lobster fonts on Perl web properties?") Then again, the answer isn't "Fork and rename" or "Buy an ad in an IT weekly rag" either. Part of the answer may be "Continue to improve the core and out-evolve everything else in any ecological niche possible," but the biggest part of the answer is this:

Build credible new things. Brag about them. Repeat.



We programmers live in a bubble, where the best technology always wins, where business concerns are the fever dreams of empty-headed suits, and where marketing means lying to customers to get their money. (We also believe we're perfectly logical Vulcans, even in the face of our passive aggressive sarcasm motivated by emotional reactions in Internet flamewars.)

One aspect of our brokenness is that we undervalue the things we know how to do, saying "Oh, it's just setting up a new Debian installation. Anyone can drop in a DVD and navigate a few prompts!" or "It's a little CRUD app anyone could whip up in a day and a half."

(My buddy Dave and I have reimplemented an 8-bit microcomputer BASIC/Logo programming environment with HTML 5 and JavaScript. The fact that you can run this in any modern web browser—including many smart phones—amazes me.)

When we undervalue our skills and underestimate the complexity of the things we do, we distort both the market for our abilities (why are our salaries and contract rates lower than they should be for what we do?) and the onramps for new programmers.

Without discussing whether PHP is a good programming language, consider what it takes to deploy your first PHP application:

  • Write a document in a text editor. Even Notepad on Windows will do.
  • Use an FTP client to upload it to your ISP. They've given you instructions. They've probably also helped you download the FTP client.

Sure, you have to set up your FTP client and figure out your username and password and put the file in the right directory, but that's it. You only have to understand the basics of HTML and some details of file paths in an FTP client. (I don't mean to downplay the learning of HTML; it's daunting for people who've never done any sort of programming or markup before, and it's a real learning experience, but it's something anyone doing web programming will have to learn and thus it's a wash in this comparison.)

Now do some Java web development. (Easy target, I know.) First, download Eclipse. Then set it up. Then install some plugins. Then configure your workspace. Then make a project. Then install a local deployment server or configure the remote application server. Then import all of the JARs you need and, if you're lucky, avoid Maven.

I'm no expert Java programmer by any means, but I've done this a few times and something in that process often trips me up. (I think I get stuck at a known bug with IcedTea and Geronimo on Linux where the app server suddenly wanders out of the outfield to chase butterflies and the only solution is to do a hard restart and clear out some cache directories manually from the command line. I think.)

I can't imagine doing the first part of that on Windows. (Actually I can, because I've done it, and I have scared neighbors by almost drop kicking the laptop off of my balcony.)

I realize that in this modern age of web development, no good-hearted person would ever use something as (gasp) 1990s as Java, so consider something that was cool as recently as 2006. Here's how you deploy a Rails application using Heroku:

  • Install Rails.
  • Generate your application.
  • Make it print "Hello, world!"
  • Create a git repository.
  • Check in your application.
  • Create a new Heroku application.
  • Add a git remote for the Heroku repository.
  • Push your application to the Heroku repository.

Easy, right? Oh, right—first you have to install Git and the Heroku application, and you have to know enough about Git to work with repositories and remote repositories. (I wouldn't even begin to think about how you'd make this all work on Windows, but I've already outed myself as a member of the technocratic elite by admitting I've used Linux on the desktop for almost a decade and a half.)

Those are good to know, but in terms of sheer complexity, that's a little bit different from "FTP this file to that directory".

I'm not saying that PHP is better or worse than Java is better or worse than Perl is better or worse than Ruby. I have my opinions and you have yours. I freely admit that Java application servers have their advantages over the alternatives and that Perl's Plack offers more useful features than raw PHP and that Heroku is better than me being my own system administrator in certain domains.

What I am saying is that the more someone has to learn to start something—the more hurdles from zero to "Hello, world!"—the more likely it seems that people will gravitate toward the easy thing. Scoff all you want at the idea of a programming environment in a web browser, for example (it would be difficult to give up Vim and the command line), but ignore the advantages such a thing would provide for a lot of people at your own risk.

The flexibility of Catalyst offers a lot of value, though occasionally at a price. I've been impressed at the abstractions it offers, as I develop a couple of web applications that are growing in features and complexity.

One of those projects requires user registration. I've chosen an email verification system to help ensure that the system can notify users for the alerts they select. To make notifications work, I created a Catalyst model which sends emails. It looked something like this:

package Stockalyzer::Model::UserMail;
use strict;
use warnings;
use base 'Catalyst::Model::Adaptor';

package Stockalyzer::Mailer;

use MIME::Base64;
use Authen::SASL;
use Net::SMTP::SSL;

use base 'Mail::Builder::Simple';

sub send_verification {
    my ($self, $c, $user) = @_;
    my $code              = $user->verification_code( force => 1 );
    my $email             = $user->email_address;
    my $register_link     = $c->uri_for(
        $c->controller( 'Users' )->action_for( 'verify' ),
        { VERIFY_USER_email => $email, VERIFY_USER_code => $code },
    );

    $self->send(
        subject   => ...,
        to        => ...,
        from      => ...,
        plaintext => ...,
    );
}

sub send_reset {
    my ($self, $c, $user) = @_;
    my $code              = $user->reset_code;
    my $email             = $user->email_address;
    my $reset_link        = $c->uri_for(
        $c->controller( 'Users' )->action_for( 'reset_password' ),
        { USER_RESET_email => $email, USER_RESET_code => $code },
    );

    $self->send( ... );
}

sub send_feedback {
    my ($self, $c, $params) = @_;
    $params->{path}       ||= 'no path found';
    $params->{details}    ||= '';
    $params->{type}       ||= 'severe type error';
    my $username            = $c->user ? $c->user->username : '(no user)';

    $self->send( ... );
}

1;

This is a standard Catalyst model. With the appropriate configuration (specifically, setting the class attribute of the Model::UserMail model to Stockalyzer::Mailer), I can access the model from within Catalyst like:

sub send_feedback :Path('/send_feedback') :Args(0) {
    my ($self, $c) = @_;
    my $method     = lc $c->req->method;

    return $c->res->redirect( '/users' ) unless $method eq 'post';

    my $params     = $self->get_params_for( $c, 'feedback' );
    $c->model( 'UserMail' )->send_feedback( $c, $params );

    return $c->res->redirect( $params->{path} || '/users' );
}
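That "appropriate configuration" is worth spelling out. Catalyst::Model::Adaptor wants a class key (and an optional args hashref passed to the adapted class's constructor). A sketch of the wiring in the application class might look like this; the args contents are app-specific and elided:

```perl
# In the main application class (sketch)
__PACKAGE__->config(
    'Model::UserMail' => {
        class => 'Stockalyzer::Mailer',
        # args => { ... },   # forwarded to Stockalyzer::Mailer->new
    },
);
```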

Then the time came to add a new feature—emailing users about event notifications. The event notification processing system runs every day in an automated process separated from the web application. It doesn't use Catalyst at all.

I was glad to have my mail configuration set up so easily within Catalyst, and I wanted to reuse that model for the offline mailer—keeping all of those actions in one place makes sense. That meant decoupling the mail sending actions from Catalyst altogether, both in the configuration system and in the arguments to the model's methods.

Changing the method signatures was easy. Instead of passing in the Catalyst request context, I changed them to pass in the URL:

sub send_verification {
    my ($self, $uri, $user) = @_;
    my $code                = $user->verification_code( force => 1 );
    my $email               = $user->email_address;
    $uri->query_form( VERIFY_USER_email => $email, VERIFY_USER_code => $code );

    $self->send( ... );
}
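With that signature, any caller can supply the link base without knowing about Catalyst. The sketch below avoids non-core modules, so the query-string assembly is hand-rolled; real code would use the URI module's query_form(), which also handles percent-escaping. All names and values here are made up for illustration:

```perl
use strict;
use warnings;

# Hand-rolled stand-in for $uri->query_form (does no escaping; real
# code should use the URI module instead).
sub with_query_form {
    my ($base, %params) = @_;
    my $query = join '&', map { "$_=$params{$_}" } sort keys %params;
    return "$base?$query";
}

# An offline mailer can now build the verification link on its own.
my $link = with_query_form(
    'http://example.com/users/verify',
    VERIFY_USER_email => 'wine@example.com',
    VERIFY_USER_code  => 'abc123',
);
```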

The configuration was a little trickier. Configuration file parsers are like templating systems: everyone wants something a little different, and for every ten developers, twelve modules exist on the CPAN. In the interest of expedience (and with the knowledge that I'll change this later), I made the single place which instantiates my mailer object in the offline processing programs grab the existing configuration:

package Stockalyzer::App::Role::DailyUpdate;

use Modern::Perl;
use Moose::Role;
use Stockalyzer::Mailer;

has 'mailer', is => 'ro', lazy_build => 1;

sub _build_mailer {
    my $self        = shift;
    my $config      = do 'stockalyzer_local.pl';
    my $mail_client = $config->{'Model::UserMail'}{mail_client};
    return Stockalyzer::Mailer->new( $mail_client );
}

1;

As ugly as that is, the code already operates under the assumption that it's running from the root directory of the application. Besides that, any code which handles these updates already has to do something like this.
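The reason the trick works: do FILE evaluates the named file and returns the value of its last expression, so a configuration file can be ordinary Perl that ends with a hashref. A self-contained sketch (the file contents here are hypothetical):

```perl
use strict;
use warnings;
use File::Temp qw( tempfile );

# Write a hypothetical stockalyzer_local.pl, then read it back with do(),
# which returns the file's last expression -- here, a hashref.
my ($fh, $filename) = tempfile( SUFFIX => '.pl' );
print {$fh} <<'CONFIG';
{
    'Model::UserMail' => {
        mail_client => { smtp_server => 'smtp.example.com' },
    },
};
CONFIG
close $fh;

my $config      = do $filename;
my $mail_client = $config->{'Model::UserMail'}{mail_client};
# $mail_client->{smtp_server} is now 'smtp.example.com'
```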

With those two changes made, I was able to send email from the offline process using the same underlying model that the web application uses. Extracting it took moments, and generalizing it to be independent of Catalyst took a couple of minutes.

Catalyst's tutorials suggest using an adaptor layer between your web application and your models, and they're correct—but even if you don't do things "right" from the start, being careful about defining the boundaries between layers of your system means that you can make these changes later.

In addition to your development shop's technical maturity and culture, a good Perl shop has certain characteristics.

As always, you don't have to answer these questions the same way I do. It's not a contest—but if you haven't considered these questions, or if you don't understand them, or if you think they don't really matter, you'll probably have trouble finding and retaining talented developers.

The best development shops I know have remarkably similar answers to all of these questions.

  • Do you build your own Perl?
  • Do you build your own Perl because you have custom patches?
  • Do you stick with the vendor Perl because it's what's available?
  • Do you use CPAN modules?
  • Do you have a defined process for adding new CPAN modules to your stack?
  • ... in less than a week?
  • Do you have a defined process for updating to new versions of CPAN modules?
  • Do you report bugs in CPAN modules?
  • Do you report bugs in Perl?
  • Do you include test cases or patches when applicable?
  • Do your developers have PAUSE accounts?
  • Do they contribute to CPAN (docs, tests, patches, feature requests, bug reports, mailing lists)?
  • ... on work time?
  • ... in their personal time?
  • Have you extracted CPAN modules from your code and released them?
  • Does your interview process include writing (and explaining) real world Perl code?
  • Do you reuse code between projects?
  • Do you have your own company framework?
  • Is it built around a CPAN framework?
  • Is it a CPAN framework?
  • Do you use automated tests with the standard Perl testing tools?
  • Do you participate in a local user group?
  • ... sponsor it?
  • ... sponsor a workshop?
  • ... sponsor a YAPC?
  • Do you post job openings on jobs.perl.org?
  • Does your company library include multiple copies of the standard Perl books?
  • Do you use the standard CPAN tools for organizing your code?
  • ... for documenting your code?
  • ... for testing your code?
  • ... for distributing your code?
  • Do you have your own DarkPAN?
  • Do you test it against new Perl 5 releases when considering when to upgrade?
  • Do you test it against Perl 5 release candidates?
  • Do you test it against monthly blead releases?
  • Has your company contributed time or money to Perl 5 core development?
  • ... to the development of a CPAN distribution?

Many great shops can't answer "yes" to every one of those questions, and that's fine. If you answer "yes" to at least half of them, you're in good and rare company.

What else should be on the list?

If you'd like a detailed discussion of how to apply this list to your own company, I'm available for consulting.

In addition to concerns for your development shop's technical maturity, cultural concerns are also important. How do developers get along? How do you manage hiring and training? How do you treat your staff?

In short, what's it like to work for you in non-technical ways?

  • Do you have a coding competence test when hiring?
  • Does it include real code?
  • Did your developers have a hand in writing it?
  • Do you have code reviews?
  • Before deployment?
  • Before merge?
  • If you have multiple developers, do they all have access to every piece of code you have?
  • Do you pay a prevailing developer wage for your region?
  • ... commensurate with experience?
  • Do you have overtime?
  • ... required?
  • Do you allow telecommuting?
  • ... part time?
  • ... full time?
  • Do you have a training budget?
  • ... for books?
  • ... for travel?
  • Do you have well-defined roles?
  • ... technical leadership roles?
  • How do you resolve conflicts?
  • Do you have a defined process for scheduling features?
  • ... triaging bugs?
  • ... resolving schedule conflicts?
  • How do you handle surprises?

I'm sure you can think of several other important questions. Again, you don't have to have the same answers I have in mind for these questions, but I believe that mature development shops that want to hire good developers need to consider these questions.

If you'd like a detailed discussion of how to apply this list to your own company, I'm available for consulting.

What would you add?

(Next time: Perl-specific concerns.)

In the spirit of helping recruiters understand the Perl community better and how to identify a good Perl programmer, perhaps it's useful to discuss ways to identify a good Perl programming shop.

I see three aspects of maturity in any programming job: technical, cultural, and specific to the programming ecosystem.

These aren't rigorous guidelines. I'm not writing a purity test, and you don't have to answer all of these the same way I do to get a perfect score. (Why would you want a perfect score from me anyway? I run my own business and, while I'm available for contracting and consulting, I'm not currently looking for full-time work.)

Disclaimer aside, if you find yourself disagreeing with these questions (or if you've never considered them), your workplace may not be inviting to good programmers. Process for its own sake isn't interesting, but the ability to define what should happen when and who does what and how to resolve conflicts is vital to having a healthy business.

Let's start with the technical questions.

  • Do you use source control?
  • Do you stage deployments?
  • Do you have a defined process for deployment?
  • Do you have a defined process for rolling back a failed deployment?
  • Do you have code that "no one knows what it does"?
  • Do you have critical business code written more than five years ago that people are afraid to touch?
  • Do you have coding standards?
  • Does most of your code follow them?
  • Can you tell who wrote each piece of code by its style?
  • Do you have a standard technology stack?
  • Across multiple applications?
  • If some applications don't meet it, do you have plans to refactor them?
  • Do you refactor at all?
  • Do you have a defined process for handling bugs?
  • ... for handling feature requests?
  • ... for scheduling delivery?
  • Do you have a training or mentoring process?
  • Do you have multiple developers?
  • Can you retain developers for longer than one year? Five years?
  • Do you use automated tests?
  • Do your tests all pass?
  • ... before you check in?
  • ... before you deploy?
  • Do you have backups?
  • ... for servers?
  • ... for developer workstations?
  • ... and do you test them regularly?
  • Are developers their own system administrators?

In short, how predictable is your development process? Can you manage risk? Do you? When surprises happen, how much work is it to recover? The better your answers to these questions, the more likely that you can attract and retain talented developers—and make the most out of their abilities.

What other questions would you put in this list? Why?

If you'd like a detailed discussion of how to apply this list to your own company, I'm available for consulting.

(Next time: cultural questions.)

One of the drawbacks of little languages for things like configuration and database management and state transitions and templating is that they don't often avail themselves of the tooling and ecosystem that full languages boast. (Does your little language have a debugger? Does it have a working module system? How do you test things written in that language?)

When I work with templating in Perl, I often use Template::Toolkit, because it's well established, well tested, well documented, and popular. I often play fast and loose with the output of the templates, in part because I try to keep my templates very simple, but also because I've never found it easy to test templates and their output.
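If you haven't seen Template::Toolkit before, the core API is small: hand process() a template, a hash of variables, and somewhere to put the output. (This is a minimal sketch; the template text and variable name are invented for illustration, not taken from this project.)

```perl
use strict;
use warnings;
use Template;

# render an inline template with one substituted variable
my $tt       = Template->new;
my $template = 'This company has a [% quality %] dividend yield.';

$tt->process( \$template, { quality => 'high' }, \my $output )
    or die $tt->error;

print $output, "\n";
```

That scalar reference passed as the third argument is what makes templates testable at all: capture the rendered text, then assert against it.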

I haven't found the ultimate answer, but some code I wrote in the past couple of days works well enough that I have a lot more confidence in the quality of my output now.

This project performs financial analysis of publicly traded stocks. It looks at certain facts and figures, compares them, and categorizes each of them based on their quality. For example, a company which pays no dividend falls into the "Does not pay a dividend" category, while a company with a dividend yield better than the aggregate dividend yield of the S&P 500 index gets a high dividend score. (Of course there's a disclaimer that chasing only a high dividend yield leads you to some risky stocks with very low prices or weird financial tricks trying to raise the price, but here I write about testing, not financial analysis.)

The method which rates a dividend yield is part of a role which performs these types of analyses. It looks something like this simplified version:

sub dividend_yield_rating
{
    my $self  = shift;
    my $yield = $self->dividend_yield;

    return 'NONE'        if $yield == 0;
    return 'SPECTACULAR' if $yield >= 3.0;
    return 'HIGH'        if $yield >= 2.5;
    return 'MEDIUM'      if $yield >  1.74;
    return 'LOW';
}

All of this textual analysis is present in a special template component devoted to displaying this analysis for a stock. It looks something like:

[%- dividend_yield_rating = stock.dividend_yield_rating -%]

[%- IF dividend_yield_rating == 'SPECTACULAR' -%]
This company has a large dividend yield!
[%- ELSIF dividend_yield_rating == 'HIGH' -%]
This company has a high dividend yield.
[%- ELSIF dividend_yield_rating == 'MEDIUM' -%]
This company has an average dividend yield.
[%- ELSIF dividend_yield_rating == 'LOW' -%]
This company has a low dividend yield.
[%- ELSIF dividend_yield_rating == 'NONE' -%]
This company pays no dividend.
[%- ELSE -%]
Oops! We haven't figured out this company's dividend_yield yet.
[%- END -%]

How would you test this?

The design here is deliberate; data drives the behavior. A stock has a dividend yield (another part of the system, verified with model tests). My Catalyst stack knows how to find stocks by name or symbol and display their analysis pages. I have separate model tests for the method in the display role:

sub test_dividend_yield_rating
{
    my $stock  = shift;
    my %values =
    (
        map( { $_ => 'NONE'        } ( 0,    0.00        ) ),
        map( { $_ => 'LOW'         } ( 1.74, 1.00, 0.01  ) ),
        map( { $_ => 'MEDIUM'      } ( 2.00, 2.49, 1.75  ) ),
        map( { $_ => 'HIGH'        } ( 2.50, 2.99        ) ),
        map( { $_ => 'SPECTACULAR' } ( 3.0,  5.00, 12.00 ) ),
    );

    while (my ($amount, $rating) = each %values)
    {
        $stock->update({ dividend_yield => $amount });
        is $stock->dividend_yield_rating, $rating,
            "dividend_yield_rating for $amount should be $rating";
    }
}

... where $stock is fixture data I don't mind modifying in place (every test file gets its own in-memory SQLite database, thanks to DBICx::TestDatabase). That tests one part of the system.

Testing the template is as easy as testing the whole stack. Effectively this is an end-to-end test (or an integration test or a customer test), because if this works, I know everything fits together correctly:

sub test_dividend_yield_rating
{
    my ($schema, $ua) = @_;
    my $stock         = $schema->resultset( 'Stock' )->find({ symbol => 'AA' });
    my $spectacular   = 'This company has a large dividend yield!';
    my $great         = 'This company has a high dividend yield.';
    my $medium        = 'This company has an average dividend yield.';
    my $modest        = 'This company has a low dividend yield.';
    my $none          = 'This company pays no dividend.';

    my %rates =
    (
         12.0 => $spectacular,
         2.50 => $great,
         1.75 => $medium,
         1.00 => $modest,
         0.00 => $none,
    );

    while (my ($rate, $description) = each %rates)
    {
        $stock->update({ dividend_yield => $rate });
        $ua->get( '/stocks/AA/view' );
        $ua->content_contains( "<p>$description</p>" );
    }
}

Again, I don't mind manipulating the fixture data in place. It may feel a little dirty, but it's a lot simpler than working up some sort of mock object framework that doesn't actually tell me anything interesting about the system as a whole.

I can be less rigorous about the test data values I use for the dividend yield because I've been more exhaustive about the corner cases in the model tests. (That's what the model tests are for, after all.)

This code could be more robust, though. In particular, it'd be nice to specify an XPath or CSS selector to say "the textual contents of this DOM fragment should contain this literal string or should match this regex", but I haven't needed anything more than this yet. (That would also make test failures much easier to debug.)
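As a sketch of that idea (not code from this project — Mojo::DOM is one of several CPAN modules that can parse HTML and answer CSS selector queries, and the markup here is invented), asserting against a specific DOM fragment might look like:

```perl
use strict;
use warnings;
use Mojo::DOM;

# stand-in for the response body after fetching a stock page
my $html = <<'END_HTML';
<div class="analysis">
  <p>This company has a high dividend yield.</p>
</div>
END_HTML

my $dom  = Mojo::DOM->new( $html );
my $text = $dom->at( 'div.analysis p' )->text;

print "$text\n";
```

Because at() returns the first node matching the selector, a test can say exactly which fragment of the page should contain the description, rather than scanning the entire response for a substring.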

I wrote this rating code in full-on TDD style with this approach, and it helped me catch and fix two real bugs I would otherwise have deployed. I didn't spend any time flipping between my code and a web browser, refreshing things, to make sure that they work. I know this template will always behave to the degree I tested it.

It wasn't even as much work as I feared it might be.

Sure, I could test templates in isolation, figuring that individual templates are separate entities on disk, thus they need separate and unique tests, but what would that gain me? I really care that when my father looks at this page I've made for him, he can see a textual description of some of the numbers my program has produced, and that they make more sense to him this way.

In other words, I've tested the behavior of the code from the point of view of the user experience, because that's what matters most. I still verify correctness, but it's correctness of the aggregate of use, not the seams of architecture.

That's subtle, but it seems important.

Perl grew up in the rich soil of linguistics, the Unix philosophy, and a cheerful syncretism of multiple programming languages (I count among its ancestors C, shell, Ada, awk, and even a little SNOBOL).

Perl is happy to let you make a mess, if you get your work done, because its theoretical axe to grind is "it's important to let consenting adults get their work done".

Perl as originally envisioned occupies the sweet spot between writing code in C (a real programming language for real programmers who are disciplined enough to cross their Ts and dot their Is, lest they corrupt real data) and shell (an ephemeral toolkit for constructing temporary edifices out of soap bubbles and shoestrings). It's more powerful and flexible than the shell (but it never puts the shell out of reach) and it's more forgiving and gentle than C (but it keeps C within arm's length).

In other words, Perl was always intended to be the last tool you reach for if you don't have a specialized tool that fits the job exactly. You might even say that, like other languages of the same lineage and timeframe (Tcl certainly, perhaps Rebol or Rexx, or Lisp if you had a Lisp machine or Smalltalk if you never had to interact with the outside world or Forth if you always built your own OS from first principles on each piece of new hardware), Perl was a toolkit for building tools.

After all, that's the Unix way.

Then came Perl 5.

Actually, first came Perl 4 and the World Wide Web and forms of user interaction far beyond "I want to automate a common task". If you look at the features added to Perl 5 in the Perl 5.000 release, you see that the intent was always there to make Perl 5 into a powerful programming language for doing far more. Perl 5 is a language for writing big programs.

(The evolution of Perl 5 into what some people call Enlightened Perl and some people call Modern Perl is the process of discovering how to use Perl 5 effectively for programs small to large and how to manage that process. That's why we care so much about libraries, encapsulation, testing, and abstraction.)

Yet Perl retained its iconoclasm. You can still write Perl 1 programs in Perl 5. You can still write one liners that would give awk's creators pause. You can still decide how much formality or ceremony you want to undertake or forego.

Unlike programs written in, for example, Python, Perl programs don't all have to look the same. (Yes, that's most often a superficial criticism, but bear with me. The original intent of Python was to produce a language suitable for teaching people with little or no programming ability. That single design goal, for good or ill, still governs discussions of which features to add or promote in Python. Python's lambda is what it is because the syntactic concerns of 1991 are still more important than the practical concerns of 2012, and Python's functional programming support is what it is because the perceived accessibility to novices in 1991 is still more important than the practical concerns of 2012, at least if you agree with my definition of the word "practical".)

That ceremony is important. Consider also Java.

The second program I almost deployed as a professional programmer was a website update notification system in 1998. I'd written a small help desk tracking application for HP earlier that year. Another group in the customer service department wanted customers to be able to subscribe to their knowledge base articles such that the system would send out notification emails when significant updates appeared on the public web site. (RSS was years away.)

I wrote a prototype for the backend service in six or seven lines of Bourne shell on HP-UX 10. Perhaps it was HP-UX 9. It used a flatfile. It integrated with HP's existing network. It strung together a couple of shell commands in a loop. It took five minutes to write and five minutes to debug.

The Java prototype (we barely had access to Java 1.2, so good luck finding modern libraries for such exotic protocols as SMTP) was at least an order of magnitude larger just to initialize the mail service. I spent about a week on it. I never finished it. (I was switching departments already.)

I wish I'd learned my lesson then: you can be pragmatic, but some people want you to wave your hands and utter magic incantations and pretend like you've done something impressive even if solving the problem takes a few lines of shell script.

Maybe I'm far too pragmatic, or far too honest to pretend that some problems are intractable or at least difficult, when they aren't. That story came to my mind when I read Scott Walters's unscientific observations about why you can't hire good Perl programmers.

Sure, sometimes Perl isn't the right choice. Sure, some Perl programmers are far better suited for system administration and automation than they are writing large applications. Sure, a really good programmer should know several languages well enough to be effective in any of them (with practice and focus). Corner any of the world's best Perl programmers and he or she will happily tell you about flaws in the language and the ecosystem. The same goes for the top programmers of almost any language. (COBOL and Fortran perhaps excluded, because they're ecologically the Galapagos Islands.)

... but any company which wants to measure success in programming does itself and programming a disservice by making the presence of ceremony or the number of lines of code or the complexity of tooling essential markers of Really Serious Programming.

If you think that all Perl is is stringing together a few lines of gobbledygook thrown in a cgi-bin/ directory somewhere to vomit some CSV exported from Excel onto a web site, go ahead and offer $15/hour on Craigslist and require an NDA before you let your new hire check out anything from CVS (and woe to him if he wants to check it out from home!).

If, on the other hand, you really want to solve problems, hire good programmers. Hire good programmers you can trust, then trust them. Work with them to figure out what you want to do. Pay them well, in both money and respect. Then let us solve problems.

Some of those solutions will be small automations. Great! Never underestimate the efficiency gains of small solutions accruing over time.

Some of those solutions will be bigger programs. We can manage this. We do it all the time—sometimes by breaking big problems into small problems, and other times by using the amazing global ecosystem of Perl and the CPAN and the Unix universe and the rich vocabulary of actions and information the modern Internet enables.

Sometimes we'll use Perl, because we're so amazingly productive with it. Sometimes we'll use something else, because we're so relentlessly pragmatic.

Yet keep in mind what you're paying for. If you want throwaway code written on throwaway contracts, the good news is that you won't have to spend much to replace it in six months. If you want good code written by great programmers who know how to solve problems small and large, treat us well and we'll deliver.

We'll write great code and we'll document and test it relentlessly. We'll reuse well-designed and well-tested libraries—in many cases, we'll even take on the maintenance burden of them so you don't have to—and we'll solve problems from small to large to the best of our abilities.

Just don't expect us to write more code than we need, and don't expect us to wrap the whole thing in some sort of mystical wrapper of ceremony and indirection. Whipupitude means that, yes, Perl is exceptionally well suited for programming in the small, but the continued evolution of Perl 5 and the CPAN and the Perl Renaissance of modernity and enlightenment means that we apply the same pragmatism to programming in the large.

Underestimate us at your own risk.

Modern Perl: The Book

cover image for Modern Perl: the book

The best Perl Programmers read Modern Perl: The Book.




About this Archive

This page is an archive of entries from July 2012 listed from newest to oldest.

June 2012 is the previous archive.

August 2012 is the next archive.

Find recent content on the main index or look in the archives to find all content.

Powered by the Perl programming language
