December 2012 Archives

A friend asked me a really confusing question today:

Do you think that Perl 6 is a well-designed language?

(Okay, that's not the confusing part.) I told him that I believe so, for the most part. Perl 6 has a coherence and a consistency that Perl 5 lacks. When people call Perl 5 "a cleaned up single dialect of the wonderful mess that is Unix", it's true. It's just never quite been cleaned up enough.

For a good example of this, look at the inconsistencies among regular expressions, PCRE, and the other Perlish extensions to regular expressions. That's definitely one source of the complaints about line noise. (I say this as someone who's written far too many gentle introductions to regular expressions.)

For other examples of Perl 6's consistency and coherence, look at the small unifications that make sense: an improved type system, a unified object system with an intelligent metamodel, metaoperators, and a clearer system of context which provides for such niceties as pervasive laziness.

In our further discussion, I said "It's a difficult language to implement." (I've patched at least four implementations of Perl 6 over the nine years I worked on the language, and that number might be low.)

Then I confused myself by saying "I think it's easier to implement than Perl 5 though."

In one example, Perl 6 has a well-defined grammar which requires only single-pass parsing. By contrast, the Perl 5 parser weaves in and out of parsing and lexing and even actively running code to determine how to parse various constructs. Most of that is practically unnecessary, but it happens anyhow.
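To see that weaving in miniature, consider how a prototype, installed by code Perl runs while it's still compiling, changes how a later line parses. This is a small illustration of my own; the function name is invented:

#!/usr/bin/env perl

use Modern::Perl;

# Perl *runs* this block during compilation; the ($) prototype it
# installs changes how the parser reads the call below.
BEGIN { eval 'sub first_of ($) { return shift }' }

# Thanks to the prototype, this parses as ( first_of(1), 2 ),
# not as first_of( 1, 2 ).
my @list = ( first_of 1, 2 );
say scalar @list;    # prints 2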

(Of course, Perl 6 also allows users to override the grammar, even to the point of defining what the language considers whitespace, so you don't have much chance of writing a fast or efficient Perl 6 parser, because you can't even assume that you can skip over whitespace in fast code; you always have to look up the current rule for what whitespace is. A really good specializing JIT could probably inline some of these rules and assumptions, but someone would have to write that, and there's no indication that that will happen for Perl 6 any time soon. If you've ever asked "Why is the Perl 6 parser so slow?" or "Why does Rakudo use so much memory?" or "Why does Rakudo take so long to start?", now you know the answer: you're paying for flexibility in the parser that almost no one is ever going to use.)

As another example, the optional type system of Perl 6 allows improvements in optimization that are difficult to implement in Perl 5. Knowing that certain arguments can only ever be read—never written—means that an implementation can cheat on the question of whether it passes arguments by reference or by value. Multidispatch as a fundamental strategy—resolved at the point of compilation—reduces runtime branching and can allow aggressive inlining. So can the decision to mark a class or module as closed to further modification.
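Here's a minimal Perl 6 sketch of the multidispatch point (my example, nothing from the specification): because these candidates differ by parameter type, an implementation can often pick the right one at compile time instead of branching at runtime, and the parameters are read-only by default:

    # Perl 6: two candidates distinguished by parameter type.
    multi sub double(Int $n) { $n * 2 }
    multi sub double(Str $s) { $s x 2 }

    say double(21);      # 42
    say double('ab');    # abab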

What confused me?

Why does Perl 5 exist? Why has Perl 6 been a dead end for so many years, and why doesn't it look like that will change any time soon?

It's easy to say Worse is Better or Second System Effect, but I'm not sure either one really applies fully.

As I see it, Perl 6 suffered for its first few years with the lack of any usable implementation at all. (Parrot had a couple of half-hearted implementations from almost its start, but the less said about those, the better.) Only when Pugs appeared did the language designers have any chance at all of testing their ideas with even a modicum of working code. The best thing Pugs ever did was spur the development of a test suite.

Unfortunately, Parrot had already become a tangled mess of poor code written to the wrong design under the aegis of scope creep by that point. We had the opportunity to reduce some of that wrongness, but it never happened. (Yes, I'm responsible for part of that mess in design, implementation, and even management.)

Pugs imploded shortly after anyhow. (Heroics aren't sustainable.)

One of the drawbacks of an independent test suite is that anyone can write an implementation to that test suite. It's tempting to throw out a big mess of working code and start over, figuring that at least you can salvage your tests. (I've never seen a project of any size succeed by throwing out everything but its tests, but I've seen several projects improve by committing to and delivering a relentless series of improvements.)

I've claimed on multiple occasions that the current trajectory of Perl 6 could best be characterized by a desire to reinvent the world every twelve to eighteen months. (I also get a lot of feedback that my claims are misleading and wrong and mean-spirited, but at this point there's a lot of history to support my claims and not a lot of success to counter them, so there's that.)

At that point in the conversation, my buddy asked about Parrot's current use case. "What's it for?" he asked. "What does it do well? What is it supposed to do?"

As far as I can tell, Parrot is on life support. It exists primarily to ensure that something can run Rakudo until the Rakudo developers finish porting Rakudo to another VM. (The current target I've seen is the JVM, which as everyone knows supports Perl semantics about as well as a universal Turing machine supports the lambda calculus.)

In other words, if you tied your hopes of having a usable and useful Perl 6 implementation any time soon to Rakudo running on NQP running on Parrot, I'm sorry.

If Perl 5 is a worse language with a worse implementation, how did it succeed? It was usable from almost the start. Perl 4.0.36 came out in February of 1993 and Perl 5.0 appeared in October of 1994. That's 20 months—a long time in Internet time, but a blip in Perl 6 history.

Perl 6 is a lot more ambitious than Perl 5 to be sure, but is it seven or eight times more ambitious? Ten? If Perl 6 is generally usable by February 2017, that's 200 months. Is that an appropriate length of time for an order of magnitude increase in ambition? I don't know.

Could things change? Sure. It's just code. Parrot could have a sudden infusion of energetic developers who take great pleasure in removing awful code, revisiting inconsistent interfaces, improving poor designs, and generally adopting the useful semantics that a modern VM needs to express to have any chance of running dynamic code effectively and efficiently while not ruling out extension or optimization. (The lessons of Smalltalk, Dis, and Lua are relevant, even though the world seems to have a strange hangup on LLVM's unfulfilled promises in this realm—not to say anything bad about LLVM and Clang for what they do well, but optimizing C++ is very, very different from optimizing everything else. So there's also that.) Unfortunately, Parrot won't change. Rakudo's seen to that; by spending years complaining about Parrot, sabotaging Parrot, and driving off Parrot's developers, Rakudo's all but guaranteed that.

Could Rakudo change? I doubt it; the rifts between Rakudo and Parrot have their roots back as far as 2001, when Parrot's design took the road marked on the map as "A better VM for Perl 5.6". If, in the 2008 time period, Rakudo and Parrot had mended things such that the features Rakudo currently has bolted on to NQP to support Rakudo properly had made their way into Parrot proper, things might have turned out differently. (Parrot's current object model was designed to the satisfaction of absolutely no one—everyone involved in its design looked at it and thought "Wow, that's wrong in so many ways" but we could never communicate well enough to figure out something more correct. I suppose it's darkly amusing that the person who implemented Parrot's object model went on to spend several years alternately revising an object model for Perl 6 and telling Parrot developers not to change Parrot's object model.)

The design flaws in Parrot will never get addressed. Then again, porting NQP to other VMs will demonstrate that Parrot was still the easiest target for NQP, warts and all. Rakudo in December 2013 will look a lot like Rakudo in December 2012: modest improvements to be sure, but incomplete and generally unusable for most purposes. The state of Perl 6 in 2013 will be overpromises, underdelivery, and disappointment—just as the state of Perl 6 has been since about 2003.

Despite all that, I do believe that the design of Perl 6 as a language is solid. How unfortunate that the task of implementing the language has suffered as much as it has.

Creating yet another half-hearted (choose whichever body part you find most appropriate) implementation won't gain you much momentum, and you'll spend a lot of time reimplementing basic features that aren't all that interesting before you reach any point where you do something useful. Parrot's all but dead, and NQP is probably the next piece to suffer the inexorable rot of "Hey, this big wad of code doesn't do the right thing, so let's add yet another layer of abstraction so we don't have to touch it" because abstractions leak and leak and leak like batteries bulging in a toy left outside for the winter. I suppose it's possible that a huge failure to get NQP and Rakudo working anywhere but Parrot will cause people to look at Parrot again, but that presumes that Parrot will survive until then and that the NQP retargeting will have dramatic and obvious failures rather than modest but subtly lingering time-sucking flaws. Alternately, I could be wrong and NQP on the JVM or the CLR or the LuaVM or some JavaScript engine could work wonderfully, and building up all of the infrastructure Perl 6 needs could go very quickly and it won't suffer from major problems like the impedance mismatch of an alien memory model. I give that a 30% possibility.

Perl 5 has huge flaws in implementation, but it delivered working code in a reasonable time period, and it's not going away. In the two and a half years since the first release of Rakudo Star, it's perpetually overpromised and underdelivered. Nothing I've seen since then has convinced me it'll be anything more than a toy for the foreseeable future. In other words, Rakudo Perl 6 has failed and will continue to fail unless it gets some adult supervision.

When I write code that really matters, I prefer to design in the test-driven style:

  • Figure out the next behavior that I need to implement
  • Answer the question "What single test can I write that demonstrates I'm one step closer to implementing that behavior?"
  • Write it and see that it fails (a failure shows I'm testing something sensible; a pass means I've already finished that step)
  • Implement that part of a feature
  • Clean up the tests and the code
  • Repeat until done
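As a concrete (and entirely invented) example of one turn through that loop, in Test::More style:

use Test::More;

# The next behavior: slugify() should collapse whitespace to dashes.
# This assertion failed first, because slugify() didn't exist yet.
is slugify('Modern Perl'), 'modern-perl', 'slugify should dash spaces';

# The simplest implementation that makes the assertion pass.
sub slugify {
    my $text = lc shift;
    $text =~ s/\s+/-/g;
    return $text;
}

done_testing();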

Not everyone on my team does this, and that's fine—it's more important that we deliver working, well-tested, robust features than that we all use the same style. Sometimes, however, the other developers check in features and ask me to help them write tests.

You can take this idea too far even as a rule of thumb, but I'm starting to believe that the balance between the difficulty of testing a feature and the difficulty of implementing it says something about the quality of the design.

In other words, when Allison said to me the other day "This code was really easy to write!" and I said "It was more difficult to test than to write (but it wasn't difficult to test)", that's a good sign that we've found great abstractions and discovered effective APIs in our code.

The difficulty of the tests is in building and selecting the right test data to expose all of the branches in the code.

You can obviously take this rule of thumb too far: some code is difficult to write and tedious to test because of low quality. Some code is easy to write and difficult to test because it does too much in a very obvious and straightforward way that merits some serious refactoring.

Still, when the balance of work in my programming goes toward crafting effective and useful and correct tests, I start to believe that I'm on the right track to crafting great code.

Improve Your Extracted Traversal

The code from Extract Your Traversal solved my immediate problem elegantly. It had the feel of discovering the secret behind a puzzle which previously seemed difficult—like when you first understand that calculus is about calculating infinitely small things by subdividing your measurements into infinitely small pieces which you can reason about as a group.

I like those realizations, especially in code.

Breaking Reference Circularity

Contrary to what I wrote, the code does have a little more clunkiness than I liked due to practical considerations. The code as I presented it there had a memory leak. (I know; how unfair of me to leave it in there for you to find on your own, but that would have made the article even longer.) In particular, because the traversal function closes over the action dispatch hash and the action dispatch hash entries close over the traversal function, Perl 5's simple reference counting garbage collector will never collect either data structure.
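Distilled to its skeleton (structure only; the real code recurses into child nodes), the cycle looks like this:

    # %actions closes over $traverse...
    my $traverse;
    my %actions = (
        p => sub { $traverse->( shift ) },
    );

    # ... and $traverse closes over %actions.
    $traverse = sub {
        my $node   = shift;
        my $action = $actions{ $node->tag() } || $traverse;
        # ... descend into $node's children via $action ...
    };

    # Each structure holds a reference to the other, so neither
    # reference count ever reaches zero.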

Fixing this is easy. There are three ways:

  • Use weak references (via Scalar::Util)
  • Break the cycle manually
  • Embrace continuation passing

Perhaps the most well-known approach is to use weak references, but I prefer the second option. Here's the resulting code:

sub html2text {
    my ( $html ) = @_;

    my $tree = HTML::TreeBuilder->new_from_content($html)->elementify();
    my $top  = $tree->find_by_tag_name( 'body' ) || $tree;

    # declare here to close over in hash
    my $traverse;
    my %actions = (
        p => sub {
            my $text = $traverse->( shift );
            return '' unless $text;
            return $text . "\n\n";
        },
        a => sub {
            my $node = shift;
            my $text = $traverse->( $node );
            my $link = $node->attr( 'href' );

            return $text . ":\n\n" . $link . "\n\n";
        },
    );

    $traverse = sub {
        my $node = shift;
        my $text = '';
        for my $child ($node->content_list()) {
            if (ref $child) {
                my $tag    = $child->tag();
                my $action = $actions{ $tag } || $traverse;
                $text .= $action->( $child );
            } else {
                $text .= $child;
            }
        }

        return $text;
    };

    my $text = $traverse->( $top );

    $text =~ s/^\s+//g;
    $text =~ s/\s+$//g;

    # break the cycle
    undef $traverse;
    return $text;
}

Undefining the traversal function lets Perl collect it, and collecting it reduces the reference count on the hash so that Perl can eventually collect the hash too.

In truth, the weak reference code is very difficult to get correct on its own because of the anonymity of the traversal function: it closes over itself.
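For the record, here's roughly how you'd thread that needle (a sketch, not code we shipped): keep one strong reference alive in a separate variable, and let the closures capture only the variable you weaken:

    use Scalar::Util 'weaken';

    my $traverse;                  # the closures capture this variable
    my %actions = (
        p => sub { $traverse->( shift ) },
        # ... other handlers as before ...
    );

    # $strong keeps the function alive for the rest of html2text();
    # weakening $traverse means the captured references form no cycle.
    my $strong = $traverse = sub {
        # ... traversal as before, recursing via $traverse ...
    };
    weaken $traverse;

    my $text = $strong->( $top );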

Recursing Anonymously

One solution is to make that function a named function. That's a fine solution, but it felt wrong to me, as I was trying to avoid making a class to follow the visitor pattern I've used in other contexts. (If you prefer making named functions, great! That's more than acceptable.)

Perl 5.16's current_sub feature offers an alternative which works with both named and anonymous functions. Instead of forcing the anonymous function to close over itself, it can refer to itself with __SUB__:


    use feature 'current_sub';    # __SUB__ needs this (or: use v5.16;)

    $traverse = sub {
        my $node = shift;
        my $text = '';
        for my $child ($node->content_list()) {
            if (ref $child) {
                my $tag    = $child->tag();
                my $action = $actions{ $tag } || __SUB__;
                $text .= $action->( $child );
            } else {
                $text .= $child;
            }
        }

        return $text;
    };

That's one potential memory leak gone, but the reference to $traverse in the hash and vice versa still requires either explicit undef $traverse or weak ref handling. Is there another way?

Controlling Execution

When I originally thought of this approach, I approached it like a compiler writer might (you can hang up your hat, but it still probably fits). I wanted a different kind of control flow than Perl usually gives.

I planned to pass $traverse as a parameter to all of the action functions in the %actions hash. I stopped before I did that because I realized the code as written works just fine as it is, but what if I'd gone that far?


    use feature 'current_sub';    # __SUB__ needs this (or: use v5.16;)

    my %actions = (
        p => sub {
            my ($node, $traverse) = @_;
            my $text = $traverse->( $node );
            return '' unless $text;
            return $text . "\n\n";
        },
        a => sub {
            my ($node, $traverse) = @_;
            my $text = $traverse->( $node );
            my $link = $node->attr( 'href' );

            return $text . ":\n\n" . $link . "\n\n";
        },
    );

    $traverse = sub {
        my $node = shift;
        my $text = '';
        for my $child ($node->content_list()) {
            if (ref $child) {
                my $tag    = $child->tag();
                my $action = $actions{ $tag } || __SUB__;
                $text .= $action->( $child, __SUB__ );
            } else {
                $text .= $child;
            }
        }

        return $text;
    };

Now all of the variables in each of the actions are truly lexical. They don't close over any scopes but their own. Their (recursive) control flow truly depends on the function references they receive. There are no memory leaks in this code (using the current_sub Perl 5.16 feature) and there's no need for explicit code to work around circular references.

The code is a little bit longer and, in my mind, a little bit uglier than the previous version. It is, however, a little bit more flexible. It allows different traversal mechanisms in different handlers. (A similar technique could allow different formatting mechanisms in different handlers, perhaps to change indentation levels or font display or whatever you might prefer.) Someone either more clever or motivated might even figure out a way to perform tail call elimination in certain cases.
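For example (purely hypothetical; we never needed it), a blockquote handler could post-process whatever its passed-in traversal returns, indenting the nested content:

        blockquote => sub {
            my ($node, $traverse) = @_;
            my $text = $traverse->( $node );
            $text =~ s/^/    /mg;    # indent every line
            return $text . "\n\n";
        },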

Aside from the fix to the memory leak, I left the code as it is. (While we're very likely to migrate to Perl 5.16 by the end of January for Unicode improvements, we haven't done so yet, so the obvious __SUB__ improvement isn't yet an option.) I've spent a while thinking about it, and my conclusion is this:

It's fun to ponder alternate mechanisms of writing the same code and to analyze them for strengths and weaknesses, but when I reached a solution that I could explain to the other developers on the project and which met our needs, I stopped. I don't need to generalize a new mechanism of control flow we use elsewhere in the project, at least for this. (I did the other day for a separate task, but that's a story for another time.) I only need to make the code strong enough that it does the job without known flaws while keeping it clear enough that it's sufficiently obvious how it does what it does.

This code is open to modification to add new handlers. So far we've only needed two, but they're easy enough to write. They don't require the additional memory management code they might need if I had used weak references. (The more adornment your code requires to satisfy the runtime, the easier it is to get things wrong when you have to modify it or copy, paste, and modify.) The traversal function is also easy enough to understand and to modify if necessary (though that seems very unlikely).

As much as the little compiler writer in my head who's studied things like Haskell and Scheme and closure optimization in JavaScript would like to go off on wild tangents to explore just how far it can push some of these ideas, the Perl programmer in my head steals techniques wherever it can find them, catalogs them as tools and patterns, and holds me accountable for writing sufficient and necessary code.

I'm glad I have those tools, but I'm even more glad for a language and ecosystem and community with such a practical and pragmatic mindset.

Extract Your Traversal

One reason I hate writing parsers is that I hate writing traversal code. I know how to make a tree-like data structure. I know how to traverse that data structure depth-first and breadth-first. (I even understand that a tree is a specific case of a graph and I've traversed graphs.)

These problems are math problems, and having solved them once, I've satisfied my curiosity and would rather be off solving new problems.

Last week, Allison offered me a bug. "Here's an HTML stripper," she told me. "It doesn't handle nested tags correctly."

The correct way to process a language of arbitrary complexity with effectively infinite nesting is to parse it, rather than using regular expressions. (There's math behind that as well. I've done it to my satisfaction. If you want to learn more, look into the theory of grammars and natural languages and automata.) Parsing means using a parser.

Parsing often means creating a graph-like data structure. In the case of HTML, if it's anything even close to well-formed, it's a tree with a single root element and an arbitrary number of children. To find only the textual content—to remove all markup correctly—you have to traverse the tree from the root to the leaves. You can't assume, for example, that you'll only ever have to turn something like:

<p>Hello, world!</p>

... into "Hello, world!", because as soon as you do that, a user will track chaos and entropy all over your clean floor by expecting you to turn:

<p>Hel-<em>LO</em>, world!</p>

... into "Hel-LO, world!", which is the bug Allison asked me to fix. (The fact that the naïve parser we had produced "Hel-, world!", not that users make demands. I don't have the source code to the human race in the preferred form to make and distribute modifications.)

The code had a nice comment in it that said, in short, "Yes, this doesn't handle nested tags. You'll have to use a more complicated recursive solution." At least it used HTML::TreeBuilder and had that comment.

As you can guess from the name, HTML::TreeBuilder turns a flat wad of text in HTML format into a tree. Then you can walk the tree, inspecting each node all the way from the root to the leaves.
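In miniature, with invented input, the first level of that walk looks like:

use Modern::Perl;
use HTML::TreeBuilder;

my $tree = HTML::TreeBuilder->new_from_content(
    '<p>Hel-<em>LO</em>, world!</p>'
)->elementify();

my $body = $tree->find_by_tag_name( 'body' );

# A node's content_list() returns its children: each is either a
# plain string of text or another HTML::Element to descend into.
for my $child ($body->content_list()) {
    say ref $child ? 'element: ' . $child->tag() : "text: $child";
}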

That means recursion.

(The problem with the buggy code is that it had a hard-coded limit of looking only at the direct descendants of the root node. I could have made it look two levels down, but what if users nested tags even further? You end up with code indented so far to the right that not even your modern widescreen laptop will save you from all of the whitespace on the left, or you realize that that's why recursion exists, or someone hands you The Little Schemer and all of a sudden you get the functional enlightenment and vow to learn Emacs for real and when you wake up in a cold sweat a week later, you're better off for it.)

I hate writing recursive code to handle parsing and data transformation. It's always an exercise in writing code that's so very similar to the last code I wrote to do almost the same thing that it triggers my "Why don't you just automate this once and for all and be done with it?" guilt center, as irrational as I know that feeling to be. I've used the Visitor Pattern sometimes and I've used roles to compose in visitor methods, but I've never found an approach I don't end up hating a little bit.

In the spirit of the original code, I decided to embrace limitations (after all, our users are great about filing bugs) and handle only two tags explicitly. Paragraph tags are obvious. Text inside paragraph tags should appear pretty much verbatim, and they need blank lines between them. Hyperlinks need a little more formatting; they ought to appear on their own lines, to make them easier to click.

Any other tag needs its textual components extracted, and we can worry about things like tables and weird spans later. (If that ever occurs, our users will file bugs.)

Because every tag may have nested content, the part of the parser that handles the content must be able to descend into any taggy children to look for hyperlinks or paragraphs or text.

Then inspiration struck. (I've written this kind of code far too many times. Perhaps it wasn't inspiration. Perhaps it was repetition fatigue.) You traverse the children of an HTML::Element node (returned from HTML::TreeBuilder) with something like:

    $traverse = sub {
        my $node = shift;
        my $text = '';
        for my $child ($node->content_list()) {
            if (ref $child) {
                my $tag    = $child->tag();
                my $action = $actions{ $tag } || $traverse;
                $text .= $action->( $child );
            } else {
                $text .= $child;
            }
        }

        return $text;
    };

In plain English, the nodes returned from an Element's content_list() method are either simple scalars of textual content or HTML::Element objects themselves. You gather up the text and return it and recurse into the objects to find their textual contents, and on and on until you've visited the leaves.

... but as you can see in that code, it doesn't actually do anything other than gather text. It just recurses, and it looks up each tag in a hash called %actions. The complete code is:

sub html2text {
    my ( $html ) = @_;

    my $tree = HTML::TreeBuilder->new_from_content($html)->elementify();
    my $top  = $tree->find_by_tag_name( 'body' ) || $tree;

    # declare here to close over in hash
    my $traverse;
    my %actions = (
        p => sub {
            my $text = $traverse->( shift );
            return '' unless $text;
            return $text . "\n\n";
        },
        a => sub {
            my $node = shift;
            my $text = $traverse->( $node );
            my $link = $node->attr( 'href' );

            return $text . ":\n\n" . $link . "\n\n";
        },
    );

    $traverse = sub {
        my $node = shift;
        my $text = '';
        for my $child ($node->content_list()) {
            if (ref $child) {
                my $tag    = $child->tag();
                my $action = $actions{ $tag } || $traverse;
                $text .= $action->( $child );
            } else {
                $text .= $child;
            }
        }

        return $text;
    };

    my $text = $traverse->( $top );

    $text =~ s/^\s+//g;
    $text =~ s/\s+$//g;

    return $text;
}

You can see the two special rules for handling paragraphs and hyperlinks. Anything else gets treated as plain old boring tags which may eventually contain text nodes. All of the recursion is in $traverse. Handling a heading is as easy as adding another entry to the hash. As long as that entry calls $traverse, the contents of that heading get handled correctly, without me having to write any more recursive code.
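For instance, a hypothetical heading handler (not in our code) would be just one more entry in the hash:

        h1 => sub {
            my $text = $traverse->( shift );
            return '' unless $text;
            return $text . "\n" . ( '=' x length $text ) . "\n\n";
        },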

I like not worrying about me or someone else getting the recursion and its end cases correct. (I wrote it wrong the first time, in fact.) I really like having the default case do the right thing for almost everything. I very much like having the mechanism of the recursion separate from the formatting.

Mark Jason Dominus makes the point in his Higher Order Perl that this sort of code is the bread and butter of computer science, and it's the sort of thing that a great computer science education (did I mention Scheme before and Structure and Interpretation of Computer Programs?) will teach you. He also says that Perl's syntax is uglier than it needs to be.

I hate disagreeing with MJD, because he usually ends up right, but I don't mind the look of this code all that much. I prefer the anonymous functions to the structure of setting up and naming functions; that seems to me to spread out the essential behavior, whereas this code keeps a data structure near a traversal function in a pleasing way.

With that said, I can think of at least two ways to improve this code, or at least write it differently. One of them requires Perl 5.16, but the other requires you to rethink control flow. More on both later.

Not the Final Word on Mock Objects

I've been critical of mock objects for testing for quite a while now (Mock Objects Despoil Your Tests). A persistent argument some people put in favor of mock objects is that they enable isolation of testable components.

As if that's a good thing in and of itself!

The argument often suggests that only through rigorous isolation (or at least primarily through rigorous isolation) can you determine which component fails due to a change. To this I respond "So what?" because it's wrong.

The goal of testing as I practice it is to give me confidence that code behaves as I expect it to (positive reinforcement) and that future changes will retain that behavior (negative feedback). I don't care very much for isolation as a design principle in and of itself because it's more important to me that I test real code in situations as realistic as possible and that I have good coverage. Setting up a framework for mock object injection of every component possible reminds me of mathematicians in a convention hotel when the fire alarm goes off. Everyone shuffles out of bed, looks at the map of exits, then goes back to bed relieved that, to quote, "A solution exists."

Some people push back on this (not mathematicians, who need the joke explained). "But how do you debug tests when many of them might fail when one component fails?" The same way you debug anything: you look at things and think about them and understand the system. It helps if you work in small steps and run your tests every few seconds or couple of lines of code. (I do this.) You would have to do the same thing if you had only one test assertion for every unique thing which could go wrong, and in that case, even the heartiest mocker admits that you have to test your system as a whole, because you don't know if it works together until you prove that it all works together.

I'm not suggesting that you should throw everything in one big bag and test it all end to end, every piece of it, but in the same way, I think you're silly to break it into tiny pieces and test every one of them in isolation, especially when you have to end up writing or generating lots of fake code you never actually use to drive it.

(Similarly, if you practice rigorous isolation because you're afraid your tests will be slow: don't write slow tests.)

I don't care about isolation as much as I care about testing the real APIs I've produced.

I made a really silly typo. It went a little something like this:

#!/usr/bin/env perl

use Modern::Perl;

package Oops {
    use Moose;

    has 'typo ', is => 'ro', default => 'oops';
}

Yes, there's an extra space in 'typo '. (If I were using the fat comma's autoquoting behavior, I wouldn't have made this typo, but I have my Moose attribute style for good or for ill, and this is one of the drawbacks.)
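The fat comma autoquotes only what parses as a simple identifier, so the trailing-space name can't even be written in that style; here's a sketch of the safer spelling:

package Oops {
    use Moose;

    # A bareword before => must be a plain identifier; there's no
    # way to smuggle a trailing space into the attribute name here.
    has typo => ( is => 'ro', default => 'oops' );
}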

Moose doesn't care. Should it? (No.) Perl does. Should it? (As far as it affects how the parser works, yes.)

The result is plain:

my $oops = Oops->new;
my $meth = 'typo ';
say $oops->$meth;    # works; prints "oops"
say $oops->typo;     # dies; there is no method named "typo" without the space

The rules that you've drummed into your head about what Perl allows as an identifier or this or that are mostly rules about what you can get past Perl's parser. After that, you can do almost anything you want. Because Moose attribute declarations don't have to pass the normal parser rules about identifiers (if you specify them as strings, as I did), you have a great deal of freedom, if you're willing to use that indirection throughout the rest of the code.
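You can watch the same loophole work without Moose: symbolic references will happily install (and later call) a method whose name the parser would never accept as a bareword. A contrived sketch:

#!/usr/bin/env perl

use Modern::Perl;

package Sneaky { }

{
    # The parser rejects `sub odd name`, but a glob assignment only
    # needs a string, so any name at all goes into the symbol table.
    no strict 'refs';
    *{'Sneaky::odd name'} = sub { 'still callable' };
}

my $method = 'odd name';
say Sneaky->$method;    # prints "still callable"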

The moral of the story for novices is this: the computer doesn't care what you name things, if you can somehow get the name past the parts which care.

The moral of the story for experienced developers is this: abstractions leak, and sometimes that works to your advantage. (It's not that I've defined a private attribute, but the trailing space gives me a naming convention a little stricter than the usual leading underscore.)

The moral of the story for gurus is this: choose your coding style with caution, if you want to make bugs like this impossible.
