February 2013 Archives

I like to build applications in layers. You've probably heard of MVC, which is popularly misunderstood to mean that you have business objects, a UI layer (even if it's web templates), and some logic to connect them.

Unit testing purists suggest that you need to test each of these layers in isolation. I find that silly; I prefer to test my applications in terms of the behavior they provide to various users. If an asynchronous process without a user interface manipulates business objects, I want to know that the necessary operations work as intended. Similarly, a web application that never tests that the HTML forms provided to real users match the parameters processed by the controllers is a web application that's going to break sometime.

One drawback of this integration-style testing (as unit testing purists might call it) is that debugging a problem in one layer isn't always as easy as it would be if you had comprehensive tests for that layer in isolation. The tradeoff is almost always easy for me to make; I know that the behavior I want my applications to support is tested in terms of every layer. Furthermore, I don't have to pay the cost of writing and maintaining unit tests to enforce this isolation.

Instead, I occasionally pay the cost of debugging.

For example, one codebase has a database-backed persistence mechanism provided by some Moose metaprogramming. It's a little bit clever, but it has a very nice interface and it's simplified most of the rest of the code around it dramatically. The persistence mechanism goes through SQL::Abstract::Complete and eventually the DBI.

My task list includes the ominous task "Write a better error handling mechanism", but I haven't made it there yet.

When a query fails, we don't yet get good information about what fails. (Fortunately, the part of the code which tends to exhibit test failures when we change something has been the NoSQL component, for various definitions of the word "fortunately".)

In cases like this, I do what most programmers probably do. I resort to print-style debugging.

Most of the persistence calls end up going through a method called _make_sth, which does exactly what it sounds like it does:

sub _make_sth {
    my ($self, $sql, @vars) = @_;

    my $sth = $self->prepare( $sql );
    $sth->execute( @vars );

    return $sth;
}

I usually end up adding a line or two of debugging code:

sub _make_sth {
    my ($self, $sql, @vars) = @_;

    my $sth = $self->prepare( $sql );
    $sth->execute( @vars );
    ::diag( "<$sql> [@vars]" ) if $ENV{DD_DB};

    return $sth;
}

... which, as you can see, gets toggled by the truthiness of the environment variable DD_DB.

In my tests, I simply local $ENV{DD_DB} = 1; just before the failing test case and examine the output.
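
That looks something like the following sketch. (Test::More's diag() is presumably what the ::diag call resolves to once the test file loads Test::More; the My::App::User class and its fetch_by_email() method are hypothetical stand-ins.)

use Test::More;

subtest 'debug the failing persistence case' => sub {
    # the flag is dynamically scoped, so diagnostics appear
    # only for queries made within this block
    local $ENV{DD_DB} = 1;

    my $user = My::App::User->fetch_by_email( 'someone@example.com' );
    ok $user, 'fetch should succeed';
};

done_testing;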

(Yes, I should run the debugger and figure out an easy way to toggle it to break on the given line, but I'm insufficiently lazy for that. Yet.)

The nice part about using a dynamic environment variable—where the value of the variable is scoped to the enclosing block—is that I can run an entire test file and get only debugging output for the code I want. Yes, it's a global variable. Yes, it's lazy. Yes, it's bad in the sense that all print-style debugging is bad, but it does the trick far too often for me to give it up without an amazing replacement.

Unicode versus Passwords versus Digests


"Hmm," I found myself thinking the other day. "I've found and fixed quit a few potential bugs in this client application related to Unicode. Allison and I just went through and normalized user input so as to avoid casefolding errors. I wonder what happens if I try to register with a UTF-8 password."

Like all applications with a decent security policy, this application immediately hashes user passwords (it uses SHA-1 hashing instead of Bcrypt, but one thing at a time). When it creates a new user record, it uses Perl's Digest::SHA to hash the password before storing it in the database. When a user attempts to log in, the application performs a database query to look up the provided email address and the password, with SQL something like:

SELECT person_id FROM person WHERE primary_email = ? AND passphrase = sha1(?);

The assumption seemed reasonable; because SHA-1 is an algorithm with its details widely published and implemented, both PostgreSQL and Perl should provide the same hash, given the same input.

I took Tom's example from the Perl Unicode Cookbook's casefolding recipe (because I felt like this work was the data equivalent of rolling a boulder up a hill) and added a case to our registration tests with a password of Σίσυφος.

Boom.

Digest::SHA croaked, complaining about wide characters.

I looked over the code again. I'd enabled UTF-8 literals. I'd saved the file with the proper encoding. We'd fixed the encoding of input and we were normalizing all input to the NFC form. Everything looked right.

Then, buried in the documentation of Digest::MD5, I found a reference suggesting that the module explicitly does not handle wide characters; it only works on strings of 8-bit characters. Anything outside of Latin-1 is just out.

The documentation suggested explicitly encoding the string to UTF-8 octets, then performing the digest...

... but when I did that, Perl and PostgreSQL disagreed about the resulting hash.
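
That attempt looked something like this sketch (the \x{...} escapes spell Σίσυφος):

use Digest::SHA qw( sha1_hex );
use Encode      qw( encode_utf8 );

# encode the character string to UTF-8 octets, then hash the octets
my $password = "\x{3a3}\x{3af}\x{3c3}\x{3c5}\x{3c6}\x{3bf}\x{3c2}";
my $hash     = sha1_hex( encode_utf8( $password ) );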

The super nice thing about standards is what they don't mention about the assumptions they make, and how they leave those assumptions up to implementations, and how when people try to do the right thing and run right up against those assumptions, sometimes they find out the difficult way that competing implementations have chosen very different approaches.

I spent the rest of the afternoon chasing down every place in the source code which hashed passwords in the Perl layer and changed them all to hash passwords in the database layer. All tests passed.
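
In other words, every write path now looks like the login query, with the database doing all of the hashing. A sketch using DBI and the table and column names from the query above:

# let the database hash the password so that both paths agree
my $sth = $dbh->prepare(
    'INSERT INTO person (primary_email, passphrase) VALUES (?, sha1(?))'
);
$sth->execute( $email, $password );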

This bothers me for two reasons. First, I don't know which of Digest::SHA or PostgreSQL is doing the right thing, because I don't know what the right thing is. I can make a case for both behaviors, depending on whether I care more about doing what the user intends or being strict about the data at the interface. I've argued it both ways explaining it to co-workers.

Second, I went to all of this work to prevent bugs from occurring and to do the right thing for people who'll probably never notice that our code does the right thing—and I'm sure almost every website I've ever used in my life gets this wrong, including (especially?) banks.

That's only slightly horrifying.

Most of my work lately has me developing and supporting a suite of tools for a group of users spread out all over the world, especially Australia, Taiwan, and London. Our goal is to deploy new versions of our software every week, though sometimes (yesterday!) something comes up that requires immediate attention and other times (December!) there's nothing pressing and our usual maintenance and cleanup takes priority.

Having to ship and support software used by end users—people who wouldn't know what to do with the source code if they had it and who don't care about that anyhow—offers a different perspective from the world of free software, where source code talks and often the best way to figure out how to use a library is to read the code itself. I do that regularly.

Having the success or failure of a business which employs multiple people depend on your ability to deploy and maintain working software offers a valuable perspective, too. (I've never worried about my own future, but the livelihoods of other people weigh more heavily on me and the decisions I make.)

That pressure to ship and maintain working software is valuable. Not only does it help me and my team take the quality of our work and our culture and our software very seriously, but it guides us with an eye to the long term stability and sustainability of our work.

One of my first tasks was to turn a test suite of about 60 assertions, most of which failed, into something usable and useful. In four months, we've grown that test suite by two orders of magnitude and have prevented countless bugs from reaching end users. We've also used the test suite as a guide while refactoring and making more disruptive changes, to demonstrate that cleaning up parts of the internals has no deleterious effects on our users. The combination of the will to make these changes, the time in the schedule to perform them, and a test suite to verify them has improved our code measurably.

One ongoing topic of refactoring is the naming of similar methods within our data access layer. A previous reorganization extracted a distinct layer and moved a lot of code to that area. Unfortunately, it also exposed inconsistencies in how we refer to things. Naming is important within that section of the code because names are a big part of the API. With good names and useful abstractions, you can skim code and understand what it's doing because the names give you semantic hints as to the intent of the code. Inconsistencies make you slow down while reading and really slow down while writing because you have to memorize special cases. (Lest any Lisp fans think I'm praising homoiconicity, I'm not—syntax also helps you skim. That's why we don't write telegrams to each other anymore.)

That naming is hugely important for the maintainability of our product, but it's only relevant to our users insofar as if we fail to provide good software, the business suffers.

That's how I see the "Yeah, we should probably consider doing some marketing sometime" discussion in the Perl community, especially the part which manifests itself as "This version number thing is really a mess, isn't it?" every six to eight months. (Alternately, maybe the Perl community suffers from SAD: Sub-version Affectation Disorder.)

Project management should be a familiar concept to most of the people reading this (many of you write software, right?), but the discussion has become so bizarre this time that it's almost as if project management principles have defenestrated themselves. If I were trying to manage this project, I'd have a lot of questions about a name change:

  • What is this trying to address?
  • Does it meet the needs of real users?
  • If so, who are those users? Have you talked to them?
  • When can the users take advantage of the change?
  • Who is available to do the work? Are these people willing to do the work?
  • What will it cost to do the work?
  • What are the opportunity costs of doing the work?
  • When will we start seeing the benefits of doing the work?

All of the suggestions I've seen so far to change Perl's naming or numbering scheme fare poorly on this checklist. (One imagines the mental gyrations necessary to argue that adding yet another way to enable a feature set in Perl 5—besides use feature; or use 5.016;—will go over quite well with novices, who just want to get something done, not beg a compiler to help them write decent code.)
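
For reference, the two existing mechanisms that parenthetical mentions look like this; note that the version form also enables strict (for versions 5.12 and later):

use feature ':5.16';   # enable the 5.16 feature bundle only
use 5.016;             # require Perl 5.16, enable its feature bundle plus strict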

To my mind, these proposals make the problem worse, not better.

Remember Ponie? Way back in 2003 (I think), Perl 6 seemed like it was really taking a while. No one was sure whether they'd be able to use their Perl 5 code, or what kind of future Perl 5 would have if Perl 6 came out before 5.10 or 5.12. At OSCON that year, the Perl 6 developers announced a serious effort to port Perl 5 to run on Parrot, even going as far as to give grant money to work on the code.

The announcement was big and popular. It did get attention, and much of it positive. The admission a few years later that Ponie had gone nowhere was somewhat more subdued. Tomorrow—16 February 2013—is apparently an auspicious anniversary. Tomorrow, the Perl 6 effort is as old as Perl 5 was when Perl 6 was announced.

I can forgive you for suspecting, as I do, that the last thing Perl needs these days is another big announcement without working code to back it up. (I would gripe much less about Perl 6 if any implementation met my goals of "I can use this to build code I want to deploy to paying customers".)

The announcement of Moe was different, because it's clearly labeled as an experiment. If it fails, it doesn't hurt Perl. How could it? It's an experiment. It's okay if it breaks code, because it's an experiment. It's okay if no one uses it, because it's an experiment. The same goes for other flights of fancy and Perl 5 forks, such as Topaz and Kurila. (I haven't decided how "Ponie" fits into this neat little box.)

Once you get past all of the questions about "Is this useful?" and "Will this help real users?" and "Is this really the most valuable thing to do now?" you get to practical questions such as "Who's going to do the work?"

Some readers might remember that I tried to make Perl 5 enable strictures by default in 2009, with something called strictperl. That patch created a new Makefile target which built a binary named strictperl with that simple change.

The patch has two problems. First, it's not necessarily the cleanest way to do things (and it may not even work anymore). That's fixable. The second problem is bigger. Almost none of the core libraries work with it enabled.

To make that patch work for a new shiny Perl fork, with a new version number or codename or whatever, someone or someones will have to go through the core libraries line by line to fix them up in a new world of strictures, or else you won't be able to use the CPAN. At all. (And even then, you'll probably have to patch quite a lot of the CPAN to get modules even to pass their tests, even if all of the work is adding no strict; at the top of all code, at which point... yeah. Some progress.)

Who's going to do that work?

Who's lining up to do that work?

Will all of the volunteers please take a step forward?

(I wish I'd learned this a long time ago, before I said some really stupid stuff to Rafael, for example. Wishing for something awesome doesn't work. Telling other people how awesome it would be if it happened doesn't make it happen. Even rolling up your sleeves and hacking on something for most of a decade doesn't necessarily make it happen, but that's a different story.)

You can change names and numbers and issue press releases and pat yourself on the back for how wonderful this brave new world you've created in your head is, but without people who are actually willing to do the hard work of making that vision usable and useful, you're not going to fool anyone.

I understand the desire to blow off steam, and I understand that people are trying to find real solutions to perceived problems, but it would be nice for people to keep the "practical" part of Perl's preferred backronym in mind.

If that hasn't convinced you, consider that you're trying to solve the problem of "People haven't heard the serious marketing message of the past couple of years that Perl 5 and Perl 6 are separate languages, and Perl 6 isn't going to supplant Perl 5 any time soon, if ever" by introducing a new marketing message targeted at people who don't really pay attention to Perl's marketing messages in the first place. I get even less enthusiastic about its chances of success when I consider that the point of the proposed marketing message depends on the recipients understanding several nuanced points about it. Remember, the apparent target audience for this message is people who see the name "Perl" and think that "5" or "6" is a version number. You want a two-by-four, not nuance.

Goodnight, Parrot


I stopped working on Parrot in January 2011. I'd started sometime in late 2001, probably around September, around the same time I submitted my first patches to p5p. Nine years is a long time.

I didn't have the fortune to attend the infamous mug-throwing meeting which launched P6 in 2000, but by 2002 I found myself attending P6 design meetings and taking copious notes and even contributing ideas here and there. (It's not easy to speak up when Larry and Damian get going. Throw in people like Allison Randal and Dan Sugalski, and it can be downright intimidating.)

I started working on Parrot because it sounded like an interesting project, because I wanted to play around and learn, and ultimately because I wanted to contribute to something that other people would use. That was the same combination of feelings I got when I extracted Test::Builder and made it a real thing. Countless people use that code millions of times every day; it makes their jobs easier and it makes the experiences of immeasurably more people better.

That is a powerful thing.

Of course, I'm a pragmatist at heart. Software has to be used. That's why making Test::Builder felt more satisfying than getting a C patch into the Perl core: the former gets used all the time, while the latter fixed a very simple bug in a very unlikely code path in a Unicode system that's since been rewritten, and very few people ever ran that code. (I have a few patches in the Linux kernel too, which ought to feel really cool, except that they're not very interesting patches in a not very interesting driver for a piece of very uninteresting hardware that not many people have. Still, both owners and the business manager of Onyx Neon have code in the Linux kernel, so that's something.)

I kept working on Parrot because I believed it would be the official-ish, primary-ish platform for P6. (It took quite a while before Larry gently convinced me that having multiple P6 implementations is a benefit, in some ways. I mostly believe him now, though that's a complicated thought for another time. I'll leave it at this: developer interest is not fungible. You can make suggestions and you can offer game theory and economic analyses to suggest that people focus one direction or another, but volunteers will do what they want and won't do what they don't want and you'll have a better time in life if you can accept that, or at least not waste your time fighting that, because you will lose that battle every time.)

Parrot had its problems from the start. You can explain away many of those problems as "People don't always make the right choices" and you can explain some of those problems as "People want different things and don't always communicate effectively with each other" and you can explain still more as "A little forgiveness goes a long way, but a lack of forgiveness stings for even longer."

I won't address the personal side. (I made plenty of mistakes. They're fairly well documented.)

On the technical side, Parrot suffered from the start from its initial design. I'm not the only one who's referred to it as "A great VM to run Perl 5.6." If you took what Perl 5.6 needed and cleaned up the internals, you'd end up with something that looks a fair amount like Parrot at the high level. That made a lot of sense in 2000, when Perl 5.6 was new and there would be no distinct 5.10 because it would have become a P6 slang and both would interoperate on Parrot and then everyone would either keep their old 5.6 or slowly migrate to P6.

Things didn't work that way. (In retrospect, that would be easy to predict, but remember yourself a decade ago and see how well your predictions turned out. Then expand that to include dozens of other people. It's enough to make you a quantum physicist.)

Parrot also suffered a lot from the idea of merging experimental code and having to maintain it. (In retrospect, that describes a lot of the mess around the Perl implementation, from XS to the bytecode backend, to undocumented behavior ossified thanks to the CPAN and the DarkPAN. The Perl world isn't always great about enforcing boundaries and it's definitely not great about sticking to the consequences of violating those boundaries.) I don't remember the exact details, but I do remember at one point there were at least three or four competing implementations of at least one subsystem, all at various stages of incompleteness, all messy code.

Everyone's least favorite part of Parrot—the IMCC system which implements the PIR language—was one of those experiments crammed into the core system and left to stagnate and spread its rot over the objections of its developer, whose reaction as I recall was both "Wait, that was just an experiment! I didn't intend for this to happen!" and "I was going to clean up a lot of stuff. It's not ready!"

After the great shakeup in 2007, the pendulum went the other way: a rush to finish specifications and implement them to the letter without a practical vetting of whether those features actually worked either in theory or in practice for users of Parrot.

That was also the time the deprecation policy (the other most hated part of Parrot) came about. The motivation seemed sensible, from one point of view. Parrot was always intended to be more than just "The virtual machine for Perl 6". Even as far back as the Programming Parrot April Fools hoax, the notion of cross-language interoperability was on the minds of the developers. Forcing Parrot to act like a mature project which didn't pull the rug out from under the feet of languages developed atop it would make the platform more attractive to developers of other languages.

That was the hope.

As things turned out, this wasn't enough to attract other language developers. Rather, it wasn't sufficient in practice when combined with the state of Parrot to attract and retain developers of other languages. Parrot had never offered great tools to build good languages, and Parrot's features didn't quite work well enough together for most purposes.

The Rakudo developers, to their credit, did build decent tools, but two problems worked against that. First, they did iterate on those tools several times, but those changes weren't always backwards compatible. In other words, to use those tools to build your own language on top of Parrot, you had to keep up with the needs of Rakudo as well as changes in Parrot. Second, because Parrot had its deprecation policy in place, Parrot had to maintain a stable enough interface that Rakudo would keep working as much as possible, which meant providing a stable interface for Rakudo's compiler tools, which meant not updating those compiler tools regularly in the face of breaking changes.

Granted, in the desire to rebrand Parrot as something not completely tied to the fortunes of a P6 (which by then seemed to have been in development forever, had delivered very little, and which not even Pugs had saved), Parrot had strongly suggested that the Rakudo project find its own repository, thank you very much, without letting the door hit it in its little backside on the way out. (The door did hit it in the backside on the way out.)

Prior to that point, any language within the Parrot repository which had a working test suite would automagically get updated for any Parrot incompatibility which occurred. That is to say, if I added a new parameter to a C function (often for performance or correctness reasons), or if I renamed a function (for cleanliness reasons), I could change the code in Rakudo itself to match that change and no one would notice that any incompatibility had occurred. These deprecations and changes were atomic. In one commit you had one version of Rakudo and Parrot which worked together nicely and in the next commit you had another version of Rakudo and Parrot which worked together nicely.

Hooray!

Because volunteer time and interest and skills are not fungible, some of the people working on Parrot had goals very different from mine. I wanted a useful and usable P6 which allowed me to use (for example) PyGame and NLTK from Python and (if it had existed at the time) a fast CSS traversal engine from a JavaScript implementation. Other people wanted other things which had nothing to do with P6.

I won't speak for anyone else, but I suspect that the combination of a deliberate distancing of Parrot from P6, including separate repositories, the arm's length of a six month deprecation policy, and an attempt to broaden Parrot's focus beyond just Rakudo created rifts that have only widened by now.

I offered repeatedly to perform optimizations for Rakudo, to add features specific to Rakudo's needs, and to fix bugs that were holding back Rakudo. For the most part, these offers went unheeded. (I can think of a few cases where I was told repeatedly "Rakudo doesn't need that feature"—see Patrick Michaud's admission that Rakudo had told me not to make several improvements to Parrot—and then after I stopped working on Parrot, someone would sigh loudly and publicly, write a flurry of patches to Parrot to implement exactly what I had offered, and then someone would post a congratulatory blog suggesting that memory use or speed had improved, and isn't it all wonderful. Goodness knows I spent a lot of time fixing bugs in Rakudo's own C code which used Parrot underneath.)

Why didn't I fix those myself? Two reasons. First, the idea of opportunity cost. I didn't want to spend time writing code that Rakudo didn't need in the hope that it would eventually be useful, because I wanted to stay available to fix immediate problems. Second, because many of the things I would have had to do to make improvements would have violated the deprecation policy, if not by the letter of the policy then in its spirit. Do that a couple of times and get yelled at and called a horrible person and you move on to other things.

My primary goal was always to get a usable and useful P6 implementation as soon as possible.

By the end of 2010, the Rakudo developers had decided to redesign their compiler tools again. I implied earlier that PIR wasn't a great language in which to write a compiler. Let me state that more strongly. PIR is a mostly terrible language in which to write a compiler. It's better than C in many ways. That's not high praise in the 21st century.

Almost no one in the Parrot world was satisfied with PIR then either, but it was a language under Parrot's control (and under Parrot's deprecation policy) and not a language developed somewhere else under separate design goals for separate purposes. Parrot did include a copy of Rakudo's compiler tools for historical reasons, but Rakudo used its own copy because Parrot would only use old versions due to deprecation policy and... oh, you see the madness.

Anyhow, in late 2010, Rakudo's developers decided to work on another version of the compiler tools. (The third version? Fourth? I forget—at least the third version.) This version would have the new goal of eventually being completely agnostic about the underlying virtual machine.

I thought this was a goal not worth pursuing at the time. My goal in working on Parrot was always to help realize a useful and usable P6 implementation as soon as possible.

My reaction at the time was a combination of "You're rewriting your compiler tools yet again?" and "That's going to take a lot of time that you could otherwise spend making Rakudo more usable and useful now" and "There's no benefit to Parrot in supporting those tools as the primary language for developing compilers on Parrot." I may have been a little more blunt in expressing my opinion then, and certainly reasonable people could have disagreed about the second point (but the so-called "nom rewrite" of 2011 didn't take only a couple of months as promised—it took most of a year before Rakudo had reached feature parity from before the rewrite).

I decided I wasn't having fun and silently stopped contributing to Parrot and P6 by the end of January. (Okay, someone had removed my P6 commit access too, but that's a different story.)

(I've mentioned before that my company was developing a product to sell to actual customers: a product based on Rakudo Star. The intent was to have it ready by April 2010, Rakudo Star's originally announced release date, but that slipped along with the official release of Rakudo Star. Unfortunately, Rakudo wasn't stable enough that I could in good conscience ship a product based on it, so I waited until January 2011 for the project to mature. Then I decided to re-evaluate through the year to see if the nom rewrite would help things. By January 2012, I scuttled the project altogether. Trying to keep it up to date was no longer worth the costs.)

After YAPC 2011, I did spend a little time working on a prototype of a smaller, faster core for Parrot, but that went nowhere. I couldn't do the project on my own and had zero help, so I dropped it after a couple of days. Parrot limped along for a while, slowly losing contributors. (I don't mean to suggest that because I left, everyone else did, but it certainly seems like Parrot stopped being fun for everyone around the same time period. Funny how being told that everything you're doing is wrong and that you'd better not fix things is demotivating.)

Over three years ago I suggested that building on a foundation under corporate control makes you a sharecropper. You see it with web services—do you really want Twitter or Google or Facebook to have the power to pull the plug on your business? I know that's not an argument which convinces everyone, though.

I still have grave technical concerns about the direction of Rakudo, but I've expressed them often enough by now.

By the end of 2010, the Rakudo developers had decided that Parrot wasn't going to meet their needs such that any eventually useful and usable implementation of P6 called Rakudo would have to run on some other VM. Three years later, they're making steps toward seeing what's technically possible, and there's apparently an explosion of interest in porting Rakudo's compiler tools to all sorts of backends.

Of course, if you want to run the most mature and most useful Rakudo implementation, you're still tied to Parrot for the foreseeable future...

... which is kind of a problem, because there's an active question as to whether anyone's even interested in maintaining Parrot as a project anymore. The Parrot Foundation will shut down and transfer what it holds (copyright, trademark, legal stuff) to an interested entity, if one exists, and then... well, as Allison says "[We] had a good run."

The obvious idea ("Wait, because Parrot's still the foundation for the only Rakudo implementation even close to anything useful and usable, why not let Rakudo take it over and mold it to Rakudo's needs?") didn't convince Rakudo hackers Moritz Lenz and Carl Mäsak. They decided to lecture everyone who was still interested in Parrot that Rakudo doesn't want to take on Parrot until Parrot can demonstrate that it's not a dead project and that it has real benefits for Rakudo.

(After years of Rakudo developers telling Parrot developers not to work on performance improvements, that scolding scoured away any interest I might have in contributing to P6 ever again. Open source development—the meritocracy always works so much better with passive aggression.)

I know volunteer time and interest and talent aren't fungible, but what I wanted was a P6 implementation that was useful and usable sooner rather than later. Because I thought I wouldn't get that in the foreseeable future, in 2011 I stopped contributing to P6 and Parrot altogether in favor of things I enjoyed more. Everything I've seen from P6 since then has demonstrated that that decision was correct.

Parrot 5.0 came out last month, and there'll probably be a Parrot 5.1, but there's an irony to realizing that Parrot will peter out as of Parrot 5 and that there will probably never be a Parrot 6.

Update: Okay, I was wrong. Parrot 6.0.0 came out.

Project "Facepalm"

Oh boy, the version number debate has popped up again. The only sensible words on this subject come from RJBS:

Re: Perl 7 or Perl 2013?

... and from Dave Rolsky:

[It's] hard not to be frustrated when it feels like people with a significant interest in the future of Perl 5-like languages are told that all future version numbers belong to a project that has significantly fewer users, developers, and mindshare than the existing Perl 5 language (and community).

Re: Perl 7 or Perl 2013?

... and from John Napiorkowski:

When I talk to recruiters and CTOs and Directors, or to venture capitalists and related investors they have heard of Perl. Perl, period. Version 5 to 6 is not particularly relevant. Changing the version number is not going to impact how people outside our community see Perl.

Who Cares What Version Number Perl Is?

I normally try my best (such as it is) to avoid putting out unnecessary Stop Energy, but in this case my opinion is clear and proud:

Changing the name of Perl or changing Perl's version number to something which isn't 5 to solve Perl's marketing problem will not work.

Rakudo isn't the answer. Every year that Rakudo produces only toy programs on Rosetta Code, microbenchmarks, and annual programming puzzles that get broken within a few months makes Rakudo less and less relevant. Sometimes not every part of writing software you want people to use is fun. I certainly didn't spend countless hours fixing segfaults and memory leaks in some of the worst code I'd ever seen because it was the most amusing thing I could think of doing.

Perl's Real Problems

Perl has some real problems:

  • Lots of people think you can only write terrible code in it. (I wrote Modern Perl to help counter this perception. Ovid wrote Beginning Perl for the same reason.)
  • The internals are hard to hack on. (Getting rid of XS may help, but that's a herculean task. So is reimplementing Perl.)
  • The defaults are wrong. (The feature pragma is evolving to be much better about this, which heartens me. I didn't expect it to work out.)
  • Some features are missing, and they're difficult to add for both technical and non-technical reasons.
  • For all practical purposes, Perl 6 is not worth anyone's time.

Perhaps the most important reasons are the ones no one ever talks about: Perl first made it big with system administrators before Perl 4 in the late '80s, and Perl made it big again with web developers and Perl 4/early Perl 5 in the late '90s.

Even though Python and Ruby both came out shortly after that, and even though people were already using JavaScript on the server side in the '90s, Perl seems old because it had already hit its peak of popularity and its greatest share of the total market before 2000.

If you're not exceedingly confident that you understand every word in that final independent clause in precisely the way I meant it, read it again very carefully. (Spoiler: when you start with 90% of the mindshare of a new market, it's easy for people to think you're falling apart even if you're growing. (If that doesn't make sense to you, take an economics class or two or work it out with a pencil and paper.))

I suspect the solution to that is twofold:

  • Get used to a polyglot world. Competitors exist. Some are better in some ways. Some are worse in some ways. (I have trouble with languages which can't get lexical scoping right just as I have trouble with languages which can't get well-tested library ecosystems or first-class functions right.)
  • Spend more time building software people can and want to use than worrying about marketing concerns like version numbers.

Nobody used Rails back in 2006 because it had great documentation or a good version number or a coherent language design. They used it because it was a lot easier to build a decent website than most of the other technologies out there.

Note well that this is a beginner concern, the same way that it is easier to slap a few PHP tags in a web page and FTP it to a server than to muck about with Unix permissions and /cgi-bin directories.

I feel the same way about Moose and Plack and DBIx::Class and the Perl testing ecosystem and the CPAN and, lately, the relative ease of working with Unicode. I'd love to have sane function signatures, of course, and core multidispatch and a core MOP, but I can solve a lot of problems with not much code. (Lightweight parallelism and immutability and laziness too, but at some point Haskell has to be better than Perl for certain problems.)

(One might also examine the timeframe for Python 3 replacing Python 2, especially the dominant sentiment which seems to be "I'll migrate when the projects I depend on migrate." Some people might like to believe that the Python community is composed entirely of humorless grownups of the type who see a larger version number and gravitate toward that, but if bigger numbers aren't obviously better even in the Python world, I think we can discard that notion in the Perl world as well.)

In Summary

Posting grand pronouncements about what Perl has to become or the new name it absolutely must adopt won't do anything. That's irrelevant.

The only relevant tasks I see are doing the hard work of:

  • Building cool and useful things with Perl and showing them off
  • Helping clean up the core of Perl in some fashion such that it stops holding back future development
  • Pointing people away from Dreadful Perl to more modern offerings. (Bonus points if you can drag RHEL kicking and screaming into the 21st century. I'll be over here, not holding my breath.)
  • Continuing to make CPAN an amazing ecosystem
  • Explaining, with patience, good humor, and a relentless focus on measurable facts, that Rakudo is irrelevant to people who want to get things done. It's a toy project unburdened by any desire to ship working software and untroubled by adult supervision. It may someday produce something usable, but no one can predict that, so don't rely on it.
  • Laughing in the face of employers who refuse to pay more than $20/hour for programmers

That's it. Look, there's no shame in being a polyglot. Goodness knows it's difficult enough to find a decent tech job. If you can't use Perl at work, don't fret. It's just a tool in your toolbox.

Metaprogramming with Moose

Even though Moose is pretty much the right way to do object oriented programming in modern Perl these days (and for the time being, Moo is the preferred lightweight alternative to start with and upgrade to Moose transparently when you need a full set of antlers), Moose isn't only an object system.

That is, Moose isn't just a way to hide the syntax of defining classes and attributes. Moose is an object system built on top of a metaobject system. That's a fancy way of saying "You know how you have classes and objects? And you can manipulate objects by creating them, giving their attributes values, and calling methods on them? Yeah, you can do the same thing with classes."

When you write:

package MyClass;

use Moose;

has attribute => ( is => 'ro', default => 'my value' );

__PACKAGE__->meta->make_immutable;

... Moose creates a new object of the class Moose::Meta::Class. That object represents your new class. Moose adds an attribute named attribute by calling the method add_attribute() on the metaclass instance. Moose also performs some bookkeeping and optimizations when you call make_immutable() on the metaclass. (That's the final line of code in the example.)

Moose hides all of this behavior behind the has() function it exports when you write use Moose;. It does something similar for extends() (use a superclass) and with() (apply a role). These exported functions are syntactic sugar around manipulating metaclass information directly.
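
Roughly speaking, and only as a sketch (Moose's real implementation does more bookkeeping than this), the has() call above corresponds to a direct metaclass call:

# approximately what has() does behind the scenes
__PACKAGE__->meta->add_attribute(
    'attribute',
    is      => 'ro',
    default => 'my value',
);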

Because Moose makes its metaobject system (or MOP, metaobject protocol) available, you can create your own metaclass directly and manipulate its attributes and methods and roles yourself. Just as Moose makes object oriented programming in Perl easier, Moose's MOP makes metaprogramming—creating your own object system—easier.
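
For a taste of that, here's a minimal sketch which builds a small class entirely at runtime; the class, attribute, and method names are hypothetical:

use Moose::Meta::Class;

# create a brand new class at runtime with the MOP
my $meta = Moose::Meta::Class->create(
    'My::Dynamic::Class',
    superclasses => ['Moose::Object'],
);

$meta->add_attribute( name  => ( is => 'ro', default => 'anonymous' ) );
$meta->add_method(    greet => sub { 'Hello, ' . shift->name } );

my $object = My::Dynamic::Class->new( name => 'Moose' );
print $object->greet, "\n";    # Hello, Moose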

I'll show some examples in upcoming articles.
