August 2011 Archives

What is an Array Anyway?

Suppose Perl 5.20 let you write:

class IceCreamSundae;

method add_toppings(Array @toppings)
{
    ...
}

Ignore class and method. Concentrate on Array. What does that mean?

You can easily assume that this method adds zero or more toppings to an ice cream sundae. Again though, what does that mean?

Does the method access the Array as a queue? As a stack? As a linked list? As a double-ended queue? With standard iteration? With indexed iteration? Is it destructive? Does it hold references to the elements of the array, thus increasing their lifespan with regard to garbage collection?
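The ambiguity is easy to demonstrate in plain Perl 5. Both of these hypothetical implementations plausibly satisfy "takes an array of toppings", yet only one leaves the caller's data intact (the class and method names here are mine, invented for illustration):

```perl
package IceCreamSundae;

sub new { bless { toppings => [] }, shift }

# Destructive: consumes the caller's array as a queue.
sub add_toppings_destructive
{
    my ($self, $toppings) = @_;
    push @{ $self->{toppings} }, shift @$toppings while @$toppings;
}

# Non-destructive: iterates without modifying the caller's array.
sub add_toppings
{
    my ($self, $toppings) = @_;
    push @{ $self->{toppings} }, $_ for @$toppings;
}

package main;

my @toppings = qw( fudge sprinkles cherry );
my $sundae   = IceCreamSundae->new;

$sundae->add_toppings( \@toppings );             # @toppings still has 3 elements
$sundae->add_toppings_destructive( \@toppings ); # @toppings is now empty
```

A type annotation of `Array` admits both; nothing about it tells the caller whether `@toppings` survives the call.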

Sure, you can make cheap and easy type checking work in this case (it's a big patch in Perl 5, but it's not an impossible patch), but what does this code mean?

If we're going to use languages where we don't have to worry about the precise layout of our data structures in memory (because C has the type system of PDP-8 assembly language), we should aim for something more specific and useful and denotative than "Everyone knows what an array is!"

Well-factored programs walk a fine line between overspecifying their interfaces, thus coupling themselves too tightly to specific representations, and launching architecture astronauts into space with the unuseful Gordian-knot complexity of too much genericity.

... but it's tempting to use the language's core primitives, because everyone knows what an array means even though no one knows exactly what you're doing with it, which is really the important part of what a good type system can specify.

(Yes, I've written Haskell code. I appreciate Hindley-Milner and consider myself privileged to have corresponded with Dr. Milner about language design in depth. That doesn't change the fact that I can write trivial mathematical code in Haskell which looks obviously correct both to readers and the type checker but which generates infinite loops due to simple arithmetic principles. If you choose the wrong types, you will get the wrong behavior.)

Would that our programming languages encouraged us to express the intention of code rather than making vague promises about its structure.

My little application (10 Years Later, Only 250 SLOC) has grown to almost 500 SLOC, and it'll grow some more as I add user accounts and login. I'm happy to use Catalyst for the dynamic portions, as the abstractions and plugins are useful and reduce the amount of code I have to write.

Using modern Perl for one portion of the project does not necessarily require the use of Perl for all of the project.

I used to be an advocate of Perl 6, the language. Was it worth using Perl 6 for the other part of the project? The big contender was obviously Rakudo, because it is the most active project and runs on a free software stack I can debug myself. (The other possible candidate implementation, Niecza, runs on Mono, which disqualifies it from my considerations. Your personal technology choices may not be mine. I have no interest in discussing Mono here.)

I posted the other day my list of requirements to use Rakudo for practical purposes. What do I need technically?

This little app must:

  • Communicate with a database. I don't necessarily need DBIx::Class for a project of this size, but I do need the ability to work with SQLite now and PostgreSQL later. Without DBIC, I'd have to do more work mapping tables and rows to objects, but the schema is simple enough it's not too onerous.
  • Communicate with websites. I do need the equivalent of LWP. While I do use at least one specialty web service consumer module from the CPAN, I could as easily perform HTML scraping.
  • Manage regular expressions, not grammars. Part of the project requires HTML scraping. If I were prone to overengineering, I'd write a full grammar for this. As it is, some quick and dirty data munging is more than sufficient right now for this proof of concept.
  • Work with a decent templating system. I know there's no Template Toolkit system for Rakudo, but I don't use all of TT's power. I need only a few features: variable interpolation, loops, conditionals, and subtemplates. If I had to reinvent a template system or work around it I could, but why would I want to? (I am in no mood to reinvent HTML escaping or to correct UTF-8 encoding either.)
  • Work with dates and times. I use Perl's DateTime. Never again will I reinvent date or time handling. Never.
  • Deploy easily. I use Dist::Zilla to manage bundling and testing. The simpler the work to deploy new versions and corrections, the more frequently I will do it. Again, Dzil does far more than I need, but what it needs I hate to give up.
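Without DBIC, the row-to-object mapping really can stay tiny. Here's a minimal sketch (the class name and columns are invented for illustration) of blessing the hashref that DBI's fetchrow_hashref() returns and generating read-only accessors on the fly:

```perl
package App::Row;

# Bless a row hashref (as from DBI's fetchrow_hashref) into an object.
sub from_hashref
{
    my ($class, $row) = @_;
    my $self          = bless { %$row }, $class;

    # Generate a read-only accessor for each column, once per class.
    for my $column (keys %$row)
    {
        no strict 'refs';
        *{ $class . '::' . $column } = sub { $_[0]{$column} }
            unless $class->can( $column );
    }

    return $self;
}

package main;

# Pretend this came from $sth->fetchrow_hashref()
my $row  = { id => 1, title => 'Some Page', url => 'http://example.com/' };
my $item = App::Row->from_hashref( $row );

print $item->title, "\n"; # "Some Page"
```

For a schema of a few tables, that's most of the object mapping the app needs.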

Speed doesn't really matter for this application. It's a batch process which runs once a day and takes 10 seconds in Perl, mostly because I haven't done any work to exploit the embarrassingly parallel nature of the processing. If Rakudo ran it in ten minutes, that would be fine. (With that said, the data set will probably eventually scale by two orders of magnitude, so improving the Perl version's parallelism is useful but would likely help a Rakudo version less.)

Memory use does matter, because I've deployed this application to a shared server and want to be a good citizen.

The stability of Rakudo as a target platform also matters, because I've set up this program to run without manual intervention for days, weeks, and even months. I understand that one of Rakudo's goals is to produce new and better versions every month or so, and I understand that me babysitting a Perl 6 version of this little app would provide valuable feedback to Rakudo developers...

... but I can't justify turning a fire-and-forget tiny side project into the tip of a spear intended to open the tent for further, larger projects.

I could work around the lack of most of the few modules I need. (Database access is the only real sticking point.) Yet even for a small side project—a toy project, even—with minimal needs, I find myself not wanting to use Rakudo because I'd rather spend my small side project time working on the small side project and not trying to get the technology stack beneath that small side project to stay put.

The conventional wisdom in #perl6 has long been that P6 needs more people playing with it to find bugs and more people building its equivalent of CPAN to attract more people to play with it. In my experience, that misses the point. After over a decade of development, P6 hasn't even produced anything which provides what Rakudo Star was supposed to be. Even years after the first Rakudo Star release, it isn't that.

Maybe your project is different, and maybe your needs are different from mine. Maybe you won't find Rakudo buggy, incomplete, unstable, and slow. Maybe you don't mind babysitting code which won't run in a month because the specifications are still changing after a decade and a half.

All I know is that P6 is still looking for people who don't mind that level of pain. I think they'll be looking for a long time.

One of my small projects (10 Years Later, Only 250 SLOC) is a mostly static website generated by some backend programs. (Some of the UI is JavaScript, but it's all data-driven.)

I've spent plenty of time tweaking the page generation, and as such I need to know how things look after running through all of the data-driven templates. While I could deploy to a live web server after running the "Regenerate the desired pages" program, I pulled out the Plack hammer.

Remember how stupidly simple Plack is? A Plack application is just a function. Plack comes with some demonstration applications which are a page or two of code apiece—including the almost exactly right Plack::App::File. You can even run it as a Plack one-liner.

I wanted something a little more: a program I could run with a single argument, the name of the directory root from which to serve files. Something like:

$ plackfile .
$ plackfile root/static/

... would do. This took only a few lines of code:

#!/usr/bin/env perl

use Plack::Runner;

my $app    = Plack::App::IndexFile->new({ root => shift })->to_app;
my $runner = Plack::Runner->new;
$runner->parse_options( '--access-log' => '/dev/null', @ARGV );
$runner->run( $app );

package Plack::App::IndexFile;

use parent 'Plack::App::File';

sub locate_file
{
    my ($self, $env) = @_;
    my $path         = $env->{PATH_INFO} || '';

    return $self->SUPER::locate_file( $env ) unless $path && $path =~ m{/$};
    $env->{PATH_INFO} .= 'index.html';
    return $self->SUPER::locate_file( $env );
}

Plack::Runner is the module at the core of plackup, and Plack::App::File is a Plack core application which serves static files beneath a root directory.

Unfortunately, Plack::App::File doesn't default to index.html when the user requests a directory, so I had to override its locate_file() method to add this behavior. It's a touch fragile, as a better approach would be to let P::A::F take an extra optional coderef constructor parameter to use as a last resort when the user has requested a directory. Still, this works.
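The coderef-as-constructor-parameter idea looks something like this sketch. I've written it without Plack so it stands alone, with invented names, but the shape is the same: the class tries its normal lookup, then falls back to an optional caller-supplied coderef as a last resort:

```perl
package File::Locator;

sub new
{
    my ($class, %args) = @_;
    bless {
        root       => $args{root},
        on_missing => $args{on_missing}, # the optional coderef extension point
    }, $class;
}

sub locate_file
{
    my ($self, $path) = @_;
    my $file          = "$self->{root}/$path";

    return $file if -f $file;

    # Last resort: let the caller decide what a miss means.
    return $self->{on_missing}->( $self, $path ) if $self->{on_missing};
    return;
}

package main;

use File::Temp 'tempdir';

# Set up a throwaway directory with an index file to serve.
my $root = tempdir( CLEANUP => 1 );
mkdir "$root/docs";
open my $fh, '>', "$root/docs/index.html" or die $!;
print {$fh} "<html></html>\n";
close $fh;

my $locator = File::Locator->new(
    root       => $root,
    on_missing => sub
    {
        my ($self, $path) = @_;
        return unless $path =~ m{/$};
        return $self->locate_file( $path . 'index.html' );
    },
);

print $locator->locate_file( 'docs/' ), "\n"; # .../docs/index.html
```

With that design, the index-file behavior lives in the caller's coderef instead of a subclass, and no one has to override a method they don't own.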

The only other interesting part of this code is the argument handling. Plack::Runner takes the same arguments as the plackup utility, but this code defaults to redirecting the access log, normally printed to STDOUT, to /dev/null. This is quieter, which matches my normal purposes, but it's overridable if necessary.

Two minutes of work on my part (plus 10 writing this post) will save me a tremendous amount of time. Hopefully it will do the same for you.

No Policy Can Save Wrong Code

I've written before that software projects need sane, published deprecation policies, and I still believe that...

... but listen to a tale of two intertwined projects with a seemingly-sane and published deprecation policy that sounds great but doesn't actually work.

Parrot and Rakudo used to be much closer together. The Parrot repository had a subdirectory languages/ which held several language implementations running on Parrot. This was well and good (How did a lion get rich? It was the olden days!)—whenever Parrot underwent an API change, a simple ack of the source tree could find most places that needed to change, and running the test suites of the projects under languages/ could find the rest.

(Getting people to run the full test suite is a different story, but in that case at least guilt and a good bisection tool work post hoc magic.)

Then one day, Parrot left the Perl-centric world of perl.org and kicked languages/ out of the nest. languages/perl6 went to Rakudo.org and the troubles began.

You see, back in the olden days when a lion could get rich and the Perl 6 VM was like the Perl VM but a little bit better, the way to extend Parrot was, obviously, write a whole wad of C code. Want to add a new data type? Write a wad of C code. Want to change how a core wad of C code works? Write a whole wad of C code. If hacking on Perl had taught anyone anything (and I can't believe I can write this with a straight face), it's that finding hackers ready, willing, and able to write big wads of C code to work around or customize or extend other big wads of C code so that someone else doesn't have to write C code is a really easy prospect.

Anyhow, the Parrot execution and extension and embedding model has always assumed, as does Perl, that the VM should expose a big wad of C functions that extenders and embedders should use to manipulate big wads of C structs and other data types defined in C.

On one hand, customization and flexibility is good. On the other hand, people like me who know both C and Perl write in Perl when we can and C only when we have to for several very, very good reasons.

Imagine a ventriloquist with a Parrot puppet, except he doesn't stick his hand in the puppet to pull levers and flip switches and wiggle his fingers. He has a robot hand, operated by a joystick, and that robot hand is half robot and half organic and it tickles actual organs inside the puppet and the bird goes off and does its thing. Oh, and he's operating that joystick with his toes. Plus he's underwater, wearing a blindfold.

In other words, Rakudo has an intimate knowledge of the guts of Parrot and does some strange and bizarre things. (If you look through the revision history of those parts of Rakudo, you'll see I've fixed some of those strange and bizarre things and I've perpetuated others.)

When Rakudo and other languages left the nest, a deprecation policy came about. Imagine this vaudeville act:

HLL Developer: "Hey, you changed this API!"

Parrot Developer: "You weren't supposed to use it."

HLL Developer: "No one told me that, but regardless, you broke my project."

Parrot Developer: "Oh, sorry. Well here's how to fix it."

HLL Developer: "Wish you'd told me that months ago."

Parrot Developer: "Yeah, we'll do better next time. By the way, what in the world were you doing with that?"

HLL Developer: "No one told me to do it differently, so I just threw together something that works."

You can see where this is going.

Thus began the great Parrot version numbering wars. The end result was that Parrot would have two supported releases every year, one coming every six months. The initial deprecation policy promised no backwards incompatible changes without notification in at least one supported release.

In other words, if a project which was previously in languages/ had moved somewhere else where a well-meaning Parrot developer didn't notice that a simple change broke backwards compatibility, that project had the ability to request a reversion of the change, a documentation of the change as a compatibility change, and a delay of up to six months before making the change...

... depending on when the project noticed.

I've mentioned the idea of technical friction before, but get out your calendar.

This policy didn't only protect languages from unintended changes, it protected languages from desired changes. For example, if Rakudo wanted a new feature in Parrot, and if the best way to add that feature in Parrot meant breaking backwards compatibility (changing the order of search paths, for example), both Rakudo and Parrot developers had to mark their calendars.

The deprecation policy period is now three months, for hopefully obvious reasons.

Even so, now there are at least two projects on separate timetables with separate goals for features and improvements and deprecations. When Rakudo wants a feature from Parrot that Parrot doesn't yet support (or Rakudo developers believe it doesn't support fully or at all or in the precise way Rakudo wants), Rakudo tends to toe the joystick. When Parrot makes a change that exposes the fragility of tickling bird gullets with robot fingers, Rakudo gets to complain. The deprecation policy followed to the letter prevents Parrot from adding the features Rakudo demands at the same time that it prevents Parrot from improving the features that Rakudo uses.

You might read this and think that the problem is the deprecation policy. It's not—at least, that's not the root cause.

I hate the calorie-free polite fiction that "Perl 6 is a research project", because it's a polite way to suggest that Perl 6 is an embarrassing pipe dream no true Perl hacker takes seriously, but implementing Perl 6 has required a lot of research and fits and false starts. (Before you next complain that it's taking a while, you write a performant grammar engine which allows lexical overriding of any grammatical rule or category.) When Parrot began, no one knew exactly how a Perl 6 VM should implement several important features. That was also true in 2005. That was also true of Parrot's 1.0 release in March 2009.

Making an effective Perl 6 means continually thinking and rethinking and simplifying and inventing new things.

Now tie an anchor around the feet of a live bird and pretend you're a ventriloquist.

Granted, Parrot's also long had the goal of running other dynamic languages efficiently and effectively. Its endgame is more than merely Perl 6 (and Perl 5). Yet a Parrot that won't run Perl 6 well has failed at its primary goal, and subsequent goals are less interesting.

The right solution is to invent a time machine and not kick Rakudo out of the nest. (The rightest solution is to invent a time machine and start implementing Perl 6 on Parrot from the first day as a first-class citizen and not a rat's nest of Perl parsing madness which had, to my count, at least two replacements before Rakudo.)

Rakudo's developers—who had commit access to Parrot, of course, so they could fix things if they wanted—declared that the only real solution would be to add another layer, another project external to both Parrot and Rakudo, which would sit between the two and abstract away the Parrot-specific parts in the hope that someday someone will add another VM backend so that Rakudo doesn't have to be tied to Parrot forever. In other words, throw away all of the work that went into Parrot, start over from scratch, and hope that this rewrite will be the one that sticks.

That went over about as well as you'd expect. (Chasing away Parrot's developers set Rakudo back by another couple of years.)

If the notion that the way to work around a broken deprecation policy—broken because of too little coupling between two projects so intimately intertwined—is to add another layer of coupling to separate those two projects seems a little strange to you, you're not alone. By my count, that's three projects in a driver's seat built for one. If you can figure out which project is upstream and which project is downstream, and how to debug with reasonable effort which change where broke which test and who gets to fix it, you're a far, far better developer than I am.

Any decent web (or GUI) application designer will tell you that the separation of concerns recommended by patterns such as Model-View-Controller is critical to the maintainability of any project with business logic and some sort of UI.

This, of course, leads to all sorts of mistakes by people who follow the letter of the pattern and not the spirit of the pattern. (The spirit of the pattern is "put things where they belong, like with like, and mix the two only along well-defined encapsulation boundaries".) That's why abominations such as XSLT exist: by creating a Turing-complete programming language in which no sane programmer would voluntarily perform anything more complex than simple substitution more than once, you've ensured that a template will never contain anything as complex as iteration. ("Iteration? Isn't that business logic? Shove it on in the model then!")

Me, I've used a slightly different take on this pattern on a few recent projects, and I've found it exceedingly useful.

Consider the TT2 code:

[% META title = 'some page' %]
[%- USE Math -%]
<div id="fullcontent">
<h1>Some Page</h1>

[% INCLUDE 'components/copy/some_page_text.tt' %]

...
</div>

Where I'd previously have included components where they perform parametric template abstractions (that is to say, where I'd use a function in code), I've started to use components for the text portions of pages.

I first noticed this as a useful technique when I had to add a legal disclaimer and click-through in a user registration system. My customer told me it'd probably change in the next several months. Rather than explain how Template Toolkit works, I decided to explain "Edit this single file as HTML to change the license" and be done with it.

Since then, I've used it on a couple of other projects where the presentation of data and the explanation of data may change separately. Rather than searching through an entire tree of templates for a wording change, I can search through a subtree of copy. (I suspect this technique may also lend itself to localization, but I have no direct experience of that.)

If I were to hew to a strict reading of MVC or another pattern governing the separation of concerns, I might have stuck with the fallacious idea that all view code is view code and all model code is model code, and any and all mingling in a layer is okay. (I exaggerate for effect.)

Fortunately, using a good foundation or library or toolkit allows the separation of concerns within a layer.

10 Years Later, Only 250 SLOC

I had a crazy idea last Thursday, and I reached for Perl.

I used Dist::Zilla to create a new project. I used Git over ssh to set up a new repository.

I built a small database schema for SQLite and used DBIx::Class::Schema::Loader to build DBIx::Class schema for it.

I found a custom CPAN distribution for a web service I wanted to use and added a very small piece of code using WWW::Mechanize to scrape the rest from another website.

I added a couple of Template Toolkit templates and a loop to turn my database objects into nicely-formatted HTML. Then I added a bit of JQuery to add some UI niceties.

I wrapped this all in a slightly modified design from Open Source Web Design and the silly little project has gone from "Hey, how would that work?" into something I can show my friends and family to explain just what I do.

I'm a better programmer than I was a decade ago certainly (some of my design decisions made a lot of code I'd have had to write then completely unnecessary now), but the tools and code available now are amazing.

David A. Wheeler's SLOCCount says I have ~250 lines of Perl 5 in this, and I could cut that down a bit further. This is truly a great time to be a programmer, especially in a community like the Perl community.

Ben Hengst gave a talk at Portland Perl Mongers last night about dependency injection and inversion of control in Perl. He said something completely true. "You've probably already done this, even though you didn't know it was called this."

At its core, dependency injection is a formalization of the design principle "Don't hard-code your dependencies." Consider the code:

sub fetch
{
    my ($self, $uri) = @_;
    my $ua           = LWP::UserAgent->new;
    my $resp         = $ua->get( $uri );

    ...
}

That's not bad code by any means, but it's a little too specific and a little less generic due to the presence of the literal string LWP::UserAgent. That might be fine for your application, but that hard-coding introduces a coupling that can work against other uses of this code. Imagine testing this code, for example. While you could use Test::MockObject to intercept calls to LWP::UserAgent's constructor, that approach manipulates Perl symbol tables to work around a hard coded dependency.

An alternate approach uses a touch of indirection to allow for greater genericity:

use Moose;
has 'ua', is => 'ro', default => sub { LWP::UserAgent->new };

sub fetch
{
    my ($self, $uri) = @_;
    my $ua           = $self->ua;
    my $resp         = $ua->get( $uri );

    ...
}

Now the choice of user agent is up to the code which constructs this object. If it provides a ua to the constructor, fetch() will use that. Otherwise, the default behavior is to use LWP::UserAgent as before.

Adding one line of code and changing one line of code has provided much more flexibility.
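That flexibility is exactly what makes testing pleasant. Here's a minimal Moose-free sketch of the same pattern (FakeUA and its canned response are invented for the test): inject a stub user agent and fetch() never touches the network:

```perl
package Fetcher;

sub new
{
    my ($class, %args) = @_;

    # Load LWP::UserAgent lazily, and only when no agent was injected.
    my $ua = $args{ua} || do { require LWP::UserAgent; LWP::UserAgent->new };
    bless { ua => $ua }, $class;
}

sub fetch
{
    my ($self, $uri) = @_;
    my $resp         = $self->{ua}->get( $uri );
    return $resp->content;
}

# A stub user agent for testing; no network access required.
package FakeUA;

sub new { bless {}, shift }
sub get { FakeResponse->new }

package FakeResponse;

sub new     { bless {}, shift }
sub content { 'canned content' }

package main;

my $fetcher = Fetcher->new( ua => FakeUA->new );
print $fetcher->fetch( 'http://example.com/' ), "\n"; # "canned content"
```

No symbol table surgery, no Test::MockObject interception of someone else's constructor: the test passes in the double through the front door.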

An alternate approach is to allow setting ua through an accessor instead of a constructor, but as far as I can tell the only reason to do this is if you're stuck in The Land of the Java Bean Eaters.

While the existing literature on inversion of control and dependency injection tends to throw around big words which have the effect of obfuscating rather than enlightening, the basic concept is simple. You've probably already done it. Now you know what it's called and why it's useful, and you can probably also find ways to use it where it helps.

(See also: how you select a DBD with DBI.)

Everything is a Compiler

With multiple new Perl books in progress as well as some fiction and other books, I've spent a lot of time lately working on our publishing process. Part of that is building better tools to build better books, but part of that process is improving the formatting based on what those books want and need to do.

It's a good thing I know how compilers work.

(If you're reading this and you're a self-taught programmer, you're in good company. I took a BASIC programming class in 1983 and a typing class in high school, and that's all the formal education I have. Everything else I've learned through stubborn experimentation over the course of several years. I can't promise that you'll have the same opportunities I've had, but I can promise that the learning material exists.)

I've spent a lot of time customizing Pod::PseudoPod to enable additional features, and much of that time I've spent working around the limitations of its processing model.

POD is a line-oriented format. It's reasonably simple. A heading is =head1 in the left-most column. A paragraph is a chunk of text separated by one or more newlines from another chunk of text.

Then you get to things like lists:

=over 4

=item * A list item

=item * Another list item

=item * A third list item

=back

The lack of ending tags means that a parser has to have heuristics to decide when something ends. Sometimes that works, and sometimes it doesn't.
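A toy sketch of such a heuristic shows the problem. Nothing marks the end of an =item, so the parser has to decide that the *next* =item or =back ends the current one (this is my illustration of the idea, not Pod::Simple's real algorithm):

```perl
# Collect =item texts from POD lines; an item ends only when the
# next directive begins -- a heuristic, not an explicit end tag.
sub parse_items
{
    my @lines = @_;
    my (@items, $current);

    for my $line (@lines)
    {
        if ($line =~ /^=item\s*\*?\s*(.*)/)
        {
            push @items, $current if defined $current;
            $current = $1;
        }
        elsif ($line =~ /^=back/)
        {
            push @items, $current if defined $current;
            undef $current;
        }
        elsif (defined $current && $line =~ /\S/)
        {
            # A continuation paragraph belongs to the current item... probably.
            $current .= ' ' . $line;
        }
    }

    return @items;
}

my @items = parse_items(
    '=over 4', '', '=item * A list item', '',
    '=item * Another list item', '', '=back',
);
# @items is ('A list item', 'Another list item')
```

Every "probably" in a parser like that is a place where someone's document will eventually surprise it.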

P::PP and its emitters tie together the acts of recognizing the start and end of semantic elements (headings, text, lists, list items) and emitting a different representation of those semantic elements (HTML, XHTML, LaTeX). That is to say, the method called when the parser determines that a list item has ended may well emit a chunk of XHTML.

When extending and modifying this emitter, you must be very careful not to rule out possibilities you hadn't thought of (can you nest lists in table cells?) while not breaking existing code. Eventually the rules for handling literal sections, where whitespace is significant, set flags read by the rules for emitting code to delay emitting until the literal section has really, truly ended—the system which was always a state machine has become an ad hoc state machine with its rules spread all over.

In other words, it's like just about every other ad hoc parsing system that's grown too big for its initial design. The mechanism of emitting a transformed representation dictates too much policy of how that representation should work.

P::PP has an advantage in that emitters are objects, so you can override specific methods and customize behavior, but as the state machine transitions spread like non-native kudzu in the American southeast, overriding becomes as much about not doing something as about doing something. That's a really bad sign.

It's a good thing I know how compilers work.

A compiler works by parsing source code. Every character in the source code is part of a lexical component: an identifier, an operator, a declaration. A grammar groups together lexical items into rules which determine which programs are valid and which contain syntactic errors.

The result of running source code through a (modern/efficient/effective/non-toy) grammar is some sort of tree structure. A tree has the interesting property that you can traverse it from its single root node and evaluate it to a result. That is to say:

say 'Hello' . ' world!';

... could become a tree where the root is the say operation, its only child is the concatenation operation, and its children are the two constant strings. To evaluate the program, you descend depth first until you can descend no further, then evaluate the leaves of that tree in a defined order and walk back up the tree.
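A tree like that and its depth-first evaluation fit in a few lines of Perl. This sketch uses plain hashrefs for nodes and a dispatch table for operations (my representation, not any real compiler's):

```perl
# Each node is { op => ..., kids => [...] } or { op => 'const', value => ... }.
my %eval_op = (
    const  => sub { $_[0]{value} },
    concat => sub { join '', map { evaluate($_) } @{ $_[0]{kids} } },
    say    => sub { my $s = evaluate( $_[0]{kids}[0] ); print "$s\n"; $s },
);

sub evaluate
{
    my $node = shift;
    return $eval_op{ $node->{op} }->( $node );
}

# The tree for: say 'Hello' . ' world!';
my $tree = {
    op   => 'say',
    kids => [{
        op   => 'concat',
        kids => [
            { op => 'const', value => 'Hello' },
            { op => 'const', value => ' world!' },
        ],
    }],
};

evaluate( $tree ); # prints "Hello world!"
```

The recursion *is* the depth-first walk: each operation evaluates its children before producing its own result.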

You can represent a document in the PseudoPod sense in the same way. A book contains chapters which contain sections. A table contains rows which contain cells. A cell may or may not (depending on the formatting rules for your document) contain headers or lists or other formatting.

If Pod::PseudoPod built a tree of objects in a document object model and emitters visited those nodes and produced their transformations, producing the correct transformations would be easier.
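That design separates the two jobs the current code mixes. In this sketch (my node layout, not Pod::PseudoPod's actual internals), the parser's only product is a tree of dumb nodes, and an emitter is nothing but a table of per-node-type visitors:

```perl
# Document nodes know nothing about output formats.
my $doc = {
    type  => 'section',
    title => 'Lists',
    kids  => [
        { type => 'para', text => 'Two kinds of toppings:' },
        { type => 'list', kids => [
            { type => 'item', text => 'fudge' },
            { type => 'item', text => 'sprinkles' },
        ]},
    ],
};

# One emitter per output format; swapping formats swaps only this table.
my %emit_xhtml = (
    section => sub { "<h1>$_[0]{title}</h1>\n" . emit_kids( $_[0] ) },
    para    => sub { "<p>$_[0]{text}</p>\n" },
    list    => sub { "<ul>\n" . emit_kids( $_[0] ) . "</ul>\n" },
    item    => sub { "<li>$_[0]{text}</li>\n" },
);

sub emit      { $emit_xhtml{ $_[0]{type} }->( $_[0] ) }
sub emit_kids { join '', map { emit($_) } @{ $_[0]{kids} || [] } }

print emit( $doc );
```

A LaTeX emitter is then a second table, not a second pile of overridden methods tangled with parsing state.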

You can still get the correct results with the existing approach (or a whole suite of regular expressions or XSLT or... whatever), but one of the nice parts about learning a little bit more about computer science is finding that the solution to one problem helps solve many, many other problems, even in domains you might have thought were unrelated. If you don't believe me, read Steve Yegge's Rich Programmer Food.

The only problem for me is that I hate writing parsers.

Christie Koehler published an insightful comparison of codes of conduct and censorship in technical communities. Christie's experience as an organizer of the excellent Open Source Bridge conference—especially the Open Source Bridge Code of Conduct—is instructive. In particular:

... we are entitled as a community to exclude a few in order to welcome the many that have been marginalized time and time again.

... but also perhaps more important:

We wanted a document that emphasized the idea of open source citizenship.

When you use, patch, document, recommend, create, test, or otherwise participate in the development of free and open source software, your actions take place within a community. As well documented in many places, the social actions of the community are important to the current and long-term health of the community...

... and so are your technical actions.

Consider this: I have at various times in my career written a templating system, an object model, and a test framework. I no longer use nor maintain those projects, and you will not find them in anything other than the Internet's vast elephant graveyard (if they even exist there).

These projects are long gone in part because other projects are (now) obviously better and I have exceedingly good taste in predicting the future, but (more seriously) because the value of those projects to the community was far less than the value of competing projects.

I have the right to publish a new templating system to the CPAN, but without a staggeringly compelling reason to do so (such as that it uses a different technical approach or that it provides the most useful subset of features of an existing system with far fewer requirements or much better resource usage), the community is probably better off if I refrain from doing so.

See also Plack and its ascent from "Hey, that's an interesting idea. What did Python call it again?" to "Of course we use Plack, just like all good-hearted people!" in just over a year. (Plack is even more interesting in that it builds on a standard, PSGI, which allows a project such as Mojolicious to build its own Plack-alike which supports and interoperates through PSGI while following the Mojolicious technical decision of reducing dependencies.)

In the same way that codes of conduct and community values and standards of behavior encourage good citizenship, open doors to all participants, and cooperation over conflict, perhaps our communities can find ways to promote technical collaboration and cooperation. Rather than promoting the creators of projects as übercoders, perhaps we can recognize maintainers, documenters, releasers, speakers, bug-reporters, and users. Rather than individual fame and glory, perhaps we can celebrate re-use and adoption and collaboration. Rather than dozens of semi-compatible object systems on the CPAN, perhaps we can coalesce to a standard MOP in the Perl 5 core upon which multiple object systems can interoperate.

There's room for invention and reinvention and customization to meet unique deployment or development concerns. May we always meet those needs! Yet may we also consider more often the good of the community above our own desires.

A Blooming Garden of Codenames

Hopefully you don't remember this (success breeds complacence), but a couple of years ago the Parrot project had an extended debate about version numbers. What's a major version? What's a minor version? What does a simple set of numbers encode and communicate about API changes and backwards compatibility and security updates and the need to upgrade?

Those are all good questions, if misguided. The intent behind those questions—and the desire to cram all of that information into a couple of integers and a couple of dots—comes from a genuine desire to communicate effectively with users.

For all of the problems with stuffing extra conceptual weight into a couple of numbers, everyone can agree that version numbers communicate one thing effectively: which version is newer. While not everyone can remember that Perl 5.8.0 came out in July 2002, it's fairly obvious that Perl 5.14 is newer than Perl 5.8.0. The Perl 5 numbering scheme satisfies that single important criterion, but it could be better.

Consider the case of Ubuntu GNU/Linux releases. While it's obvious that Zealous Zebra is much more recent than Bowdlerized Bonobo, it's not as easy to tell when either one came out (and what happens after the Zebra stampedes off to a nice quiet pasture somewhere as Autonomous Asparagus appears fresh and new?). Fortunately, these releases have date-based version numbers as well: 10.04 (April 2010) is older than 10.10 (October 2010), and 11.04 is newer than both. The additional tag of LTS indicates that certain releases will have longer support periods than others, but that numbering scheme has several advantages.

Apple has a similar codename scheme for Mac OS X releases, where most of the talk seems to prefer the code name (Lion, Pancake, Squirrel) to the version number. One problem with this system is that comparing version names is difficult because Pancakes and Squirrels belong to such different ontologies.

Perl can't easily change to a yearly numbering scheme (the gyrations version.pm already undertakes are nothing if not heroic), but adopting code names in addition to the standard 5.16, 5.18, 5.20 numbers might improve the ability to see at a glance how new or old any release of Perl is.

If I were to do this, I'd use an alphabetical scheme named after flowers or something equally non-offensive: Alyssum, Bluebonnet, Crocus, et cetera. By the time Perl 5.68 rolls around, the alphabet can start over. This alphabetical approach also offers interesting branding possibilities for enhanced distributions such as Strawberry Perl and Task::Kensho.
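Such a scheme amounts to a simple lookup between release numbers and names. A sketch of what that mapping might look like, using the hypothetical flower codenames above (the names and pairings are illustrative, not any real Perl naming scheme):

```perl
use strict;
use warnings;

# Hypothetical mapping of stable release series to alphabetical
# flower codenames: because the names are alphabetical, sorting by
# codename also sorts by release order.
my %codename_for = (
    '5.16' => 'Alyssum',
    '5.18' => 'Bluebonnet',
    '5.20' => 'Crocus',
);

for my $version (sort keys %codename_for) {
    printf "Perl %s (%s)\n", $version, $codename_for{$version};
}
```

The alphabetical property is what makes relative age visible at a glance, without anyone having to memorize release dates.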

(Another option is to include the year of the first release of a new major series when talking about the name in a context where compatibility and obsolescence are relevant, but what fun is that?)

Sharpening Your Saw at Work


Sometimes it seems as if there are two kinds of working programmers. The first kind clocks in at 9 am, puts in a day of work, and then goes home and doesn't think about programming. That's all well and good—some days I don't want to think about my work outside of working hours.

The second kind of programmer puts in a day of work, then practices more programming (hopefully not work) outside of work, whether reading books or essays, attending user group meetings, or contributing to free software projects.

These are stereotypes and generalizations, two foci of an ellipse which contains all working programmers, but they're useful insofar as they reflect reality.

The danger for the second type of programmer is burnout. I find that I'm a much better designer/developer/documenter when I have a range of interests and activities which help me rest and rejuvenate and widen and improve my perspective.

The danger for the first type of programmer is a hyperfocus on a specific domain and very specific techniques within that domain. (You can see this with Google. When every product or service must fit within a highly scalable, highly available, big data, huge support framework which absolutely must produce single-identity, Internet-scale tracking of users and their activities, you get a bunch of mediocre products held together by the desire to sell eyeballs rather than help users solve problems.)

The first group might benefit the most from sharpening the saw.

I've helped companies improve their design and scheduling and problem-solving skills by working through exercises unrelated (and partially related) to their work during brown bag sessions at lunch. Yet I've rarely heard of companies which encourage programming challenges or exercises at lunch, or on Friday afternoons, or whenever.

(With that said, Google deserves a lot of credit for Testing on the Toilet — while it demonstrates design flaws of Java more often than you might imagine, it's a creative approach to solving institutional problems.)

How does your company encourage developers to improve their skills? Is there a systemic approach? Is there training? Are there books?

Full disclosure: I'm interested in getting books like Modern Perl read more widely, and if it takes producing questions or exercises suitable for brown bag sessions, I'm interested.


About this Archive

This page is an archive of entries from August 2011 listed from newest to oldest.

July 2011 is the previous archive.

September 2011 is the next archive.

Find recent content on the main index or look in the archives to find all content.

