June 2012 Archives

Success Criteria


Programmers who want to succeed in their careers need to learn about the domains of their businesses. Working in health care in the US? You'd better understand HIPAA and insurance billing. Working in finance? Better understand cash flow and the time value of money. Working in biology? Better understand a little bit of chemistry and genetics.

Entrepreneurs who succeed in business eventually learn that running a business requires far more than understanding the technical aspects of plumbing or construction or programming. Growing a business means understanding business as business.

Programmers who really want to succeed in their careers ought to learn a little bit about business too. That means cash flow. That means return on investment. That means the time value of money.

Here then is the source of my unease with both academic computer science and programmer blog culture: they seem to exhibit a deliberate lack of interest in understanding the realities of business.

Why aren't people using formal methods to verify their software? For the most part, the expense of using formal methods in both time and resources is greater than the expected return of additional correctness. (Yes, the perception can be wrong on both sides, but the decision has rationality to it.)

Why is it often more popular to build a shared-nothing web site in a dynamic language, one that burns extra cycles on conveniences you could work around in C or C++, even if you later have to spend more resources to make things scale to handle larger popularity? (Yes, I know you can make a list of people who've made foolish technical decisions. Does that prove the inverse of that set does not exist?)

I write tests for all of the code I care about, but I don't test absolutely every part of the stack in exhaustive detail. I don't always write the same code in the same way everyone else does. One part of my business is a shell script written in bash and tested only by the fact that if it ever stops working, I'll find out immediately.

The purity of an ideal or a unified language stack or a hipster-compatible architecture is one thing. Maybe it's your preferred success criterion.

Meanwhile, the relentless pragmatism of Perl and the CPAN means that the return on my investment (especially relentless automation!) is greater on most of my projects than it would be for any other language or ecosystem.

I'm not telling you not to use Clojure or Node.js or PHP. I assume you're a rational person who can see the benefits and drawbacks of each. (If not, at least try to be lucky.) But before you look down your nose at me for not using a language with dependent types and a modern Hindley-Milner algorithm or for not writing a single-page JavaScript application that invents its own relational algebra on a distributed key-value store in the cloud, let me tell you that your success criteria aren't mine.

It's not that I wouldn't do those things if they made sense, but for me, Getting Stuff Done in a way that delights clients and customers and helps my business's numbers continue to improve is my primary success criterion.

I take pride in my craft. I program and design to the best of my ability and continue to strive to improve my skills. When I write code, I write code with care and discipline.

Yet when I must choose between meeting your idea of purity and satisfying the constraints of my business, I'm happy to tell you to learn to cope with the fact that I don't care about your new shiny.

Good Enough to Learn


Most programmers would be more effective if they understood business better.

(When programmers talk about MBAs as "empty suits" and decry marketing as "convincing stupid sheeple to buy things they don't need", we can all imagine unscrupulous and ineffective people doing bad things, but we ought to acknowledge that building things only works if we build the right things and get them into the hands of people who value them.)

My YAPC::NA 2012 and OS Bridge 2012 talk is all about doing things wrong. Sometimes getting things done on time, or under budget, or within any of the other constraints you might face, means doing things differently.

I don't advise cutting corners on quality or security. I do mean figuring out your priorities and relentlessly getting rid of things that stand in the way.

For example, I don't use strict on most of my one-liners. I don't write robust test suites for little system administration scripts. Some code I never refactor because it just works as it is.

The more time I spend building and managing my business, the more cognizant I am of simplicity. It might be a fun technical exercise to build a big all-singing, all-dancing framework for all of my applications and spin out everything possible in a generic form to the CPAN, and I could spend the next six to nine months doing that, but that's a lower priority than defining my market, finding customers, and delighting them.

Something similar seems to apply when finding and teaching novice programmers.

We're not going to convince them to ditch Windows (or the Mac) for the one true Unix way before they're worthy of learning from us. We're probably not going to indoctrinate them into whichever branch of the Vim/Emacs debate we care about, nor should we. They won't be registering for PAUSE accounts within the first couple of days (or weeks or months) of typing perl -E "say 'Hello, world!'".

That's fine and that should be a beautiful thing.

Yes, they need to know about perldoc as soon as possible.

Yes, they should get in the habit of using a version declaration and strictures and warnings.

Yes, they ought to begin to explore abstraction and decomposition and how to break a problem into discrete steps on the way to a solution.

Yes, they must know that CPAN exists and is the first place to look when facing a new problem.

Yes, they will one day need comprehensive test suites and refactorings and good tools. They may one day need to understand XS and memory management and shared libraries and linkers and compilation.

Tell me they need that before they write "Hello, world!" or move a little frog around on the screen with Perl SDL or make a stateful little counter with raw PSGI—because they don't.

Perl has a learning curve. (Programming has a learning curve.) We can smooth out the start, maybe flatten it a little to encourage more people to take the first few steps, but we must be careful not to make it so steep that they get exhausted looking at it.

All of our tools and techniques and patterns and disciplines are guiderails. They're not ends in and of themselves. They can keep you from accidental trouble, but they can also block your path (and if you're determined to drive off the road, they will at best slow you down).

Not everyone should be a professional programmer, and that's fine. Not everyone will be and not everyone wants to be.

With that said, we really ought to celebrate the fact that typing perl -E "say 'Hello, world!'" is an accomplishment for someone who's never programmed before. They still have much to learn, but writing baby Perl is good enough for now. By all means, we should encourage them to continue learning, but we owe it to them to keep practicality in mind.

First, solve problems. Then clean it up. Always do both, but always do both in that order.

Do you remember web programming in the late '90s? I started in 1998, after mod_perl had just come out and people were excited that something better than "drop this program in a specially configured cgi-bin/ directory and set the permissions and check your error logs" had come about.

Getting even a simple dynamic web page working meant either learning enough system administration on your own to set up your environment or being fortunate enough to have a system administrator willing to set things up for you.

This all presupposes you were using something sufficiently Unix-like that all of the instructions you could find all over the Internet in those days would work for you; woe to you if you were a Windows user without access to a Unix-like machine.

(How things have changed; see Promoting Perl to Dissimilar Users.)

In these enlightened times, we have options besides the cgi-bin/ approach and mod_perl. We have Plack and Plack::App::CGIBin. We have PSGI-aware servers which require only Perl to run—no Apache or nginx or IIS, just Perl. We have Padre and Strawberry Perl and perlbrew.

We have all of the tools necessary to bundle into one or two downloads. We can give novices—especially novices on Windows—a good version of a modern Perl 5 with a good IDE which includes a plugin (I couldn't find an existing one, but I found a couple of close options, including Padre::Plugin::Plack) to make writing and testing little Perl web programs trivial.

I know if you're reading this, you probably feel like I do: developing on Windows is like doing scrimshaw blindfolded while wearing oven mitts. You probably feel like I do: the CGI approach is a crazy mishmash of ad hoc protocols ripe for making messes. You probably feel like I do: Plack is such an improvement for Perl web programming that we should have stolen it from Python and Ruby before they invented the inspirations.

You may even feel like I do: IDEs have advantages, but my fingers know Vim (or Emacs) far too well to feel comfortable switching.

None of that matters. What matters is that we have all of the tools and a couple of people could spend a couple of afternoons putting things together to smooth the onramp for novices.

The Padre CGI compatibility plugin needs to be able to:

  • start, stop, and restart a Plack server
  • serve static resources from one directory
  • serve .pl or .cgi files from a cgi-bin/ directory
  • publish access and error logs to a specific location

That's it. The code to add a menu to Padre is longer than the Plack-specific code. It's 90% convention, and almost all of the tedious setup at the start of even extant Perl CGI tutorials goes away.
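To show how little Plack-specific code that is, here's a minimal sketch of the app the plugin would manage, assuming Plack::App::File and Plack::App::CGIBin from CPAN (the module names come from the text; the directory names and mount points are illustrative):

```perl
# app.psgi -- serve static files and cgi-bin/ scripts, with an access log
use strict;
use warnings;

use Plack::Builder;
use Plack::App::File;
use Plack::App::CGIBin;

# publish access logs to a specific location
open my $access_log, '>>', 'logs/access.log'
    or die "Cannot open access log: $!";

builder {
    enable 'AccessLog', logger => sub { print {$access_log} @_ };

    # serve .pl and .cgi files from a cgi-bin/ directory
    mount '/cgi-bin' => Plack::App::CGIBin->new( root => 'cgi-bin' )->to_app;

    # serve static resources from one directory
    mount '/' => Plack::App::File->new( root => 'static' )->to_app;
};
```

Starting, stopping, and restarting the server is then a matter of running plackup app.psgi (or wrapping the equivalent process management in the plugin's menu items).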

Yes, these novices should eventually learn about Plack and CPAN. They should probably eventually learn about Unix and Unix-like hosting. They deserve to understand enough of Unix culture to get some of the parts of Perl that codify and clean up said culture.

Before that, though, let's let them get a few successes under their belts so they feel like they're welcome and wonderful and capable and interested in learning more because they can get results without first having to beat their heads against a culture and a learning wall that is only necessary because we haven't removed it for them yet.

... because it hurts me none if someone learns how to write code using Windows and it hurts me plenty if someone gives up on writing code because we've driven her away with trivial bugs it would take an afternoon or two to fix.

If you're like me (and if you're reading this, chances are you're like me), you're of Caucasian ethnicity, college educated, over 25, and male. It's likely you were raised in a solidly middle class household with access to computers in the house or at school before the age of 12. You had encouragement from family, peers, and teachers to study subjects including math and science.

You're probably (though somewhat less likely) also comfortable using a Unix-like operating system, even if your job demands that you use a Windows machine if only to connect to the Exchange server through Outlook to send spreadsheets and .doc files back and forth.

While it's an even bet that you spend a lot of time on a mobile computing device such as a smart phone or a tablet, you probably have access to at least one unfettered computing platform (being able to root your phone doesn't count) where you can install any software you want, including a compiler and development environment. Maybe your system administrator handles it for you, but the principle is nearly the same.

You probably have broadband at home.

When I say "Install XCode" or "Run aptitude to install build-essentials" or "Download mingw or Visual Studio Express", you know what I mean and have no trouble doing it.

I won't call you privileged (even though we are).

I will say this: if we want to attract more users and contributors to the worlds of Perl and free software and ethical computing, we ought to consider how to make the onramps easier for people who don't look so much like me. Yes, they will eventually need to learn how to use some sort of CPAN tool, but does that mean they have to learn how to use the command line before they can do anything productive with Perl? Yes, they ought to understand some of the better pieces of Unix that Perl puts together, but does that mean they have to install their own Linux distribution before they can do anything with Perl?

Jacinta Richardson from Perl Training Australia and John Napiorkowski made this point quite clear at YAPC::NA last week: while we're great at making software for ourselves and people like ourselves, we could do much better making software easier for people not quite like ourselves.

If we're serious about promoting Perl and free software, and if we're serious about making communities which include people by default and take advantage of the different perspectives and talents of different types of people, how can we make it easier for new people to join us? (Or do other people who resemble me even want this sort of thing?)

I have some ideas that I'll share over the next few posts. I'd like to hear yours.

Bringing Together Perl Writers


At the YAPC::NA 2012 Perl Books BOF, five of us came up with two action items to improve the ecosystem of Perl books, articles, and writers:

  • Set up a mailing list for writers, editors, contributors, and reviewers to discuss books and articles in process or under development
  • Make and maintain a list of people interested in contributing to books and articles, along with their areas of expertise

For example, I might sign up on a wiki page as an expert interested in testing, such that someone writing an article or a book or training material about testing in Perl might contact me about reviewing the work.

Curating the mailing list might be interesting in trying to avoid a rush of publishers begging for books (or to keep one company or another from trying to dominate the conversation—I'm aware that my interest in this could appear like a conflict of interest), but with caution it could work well.

What do you think? Would you participate in either or both?

Perl without IRC


By the time I stopped working on Parrot and Perl 6 last year, I'd already cut back on my time on IRC. I find it a distraction and an impediment to the kind of clear focus I need to be productive.

After a year and a half of almost no IRC use (I may have used it half a dozen times when I had an immediate question I knew someone could answer immediately), I've come to realize a few things about my relationship with the greater Perl community.

Technical Disconnection

I'm not sure what the Perl QA or toolchain people are up to these days. I see occasional blog posts and emails, and I commiserate with Eric Wilhelm in person every couple of months, but only big announcements (or big disagreements) cross my desk.

Similarly, while I read most of the Perl 5 Porters mailing list, it's clear that the #p5p channel has longer discussions than make it to the list. I miss out on some of these discussions by not idling or backlogging.

The ecosystems of large and active projects such as Moose and Catalyst evolve and change much more frequently than you might expect if you only read mailing lists or blogs. I don't browse CPAN Recent Uploads frequently, so I don't hear about new extensions and plugins and components and techniques until someone writes about them in public.

Social Disconnection

My social contact with other Perl users tends to be with people who write and comment on blogs, participate on PerlMonks, and read and react on various social fora. The latter tend to divide themselves into either "Here's some interesting news" or "Why doesn't this code work?".

Face to face interaction is nice, but even some sort of real time chatter about a problem or interest is useful. (Because most of my work is solo these days, I miss waking up to see that someone else has checked in code that fixes a problem we talked about the day before.)

Github Isn't Enough

It doesn't have to be Github, but the combination of easy forking and pull requests with easy module updates (thank you, Dist::Zilla!) makes collaboration and quick bug and typo fixes much easier. This is also more fulfilling than the fire-off-a-patch-and-forget-it approach of only a few years ago.

It's not a substitute for real communication, though. It's not even a substitute for emailing an author. (Bug reports and feature requests are qualitatively different for some reason a better sociologist might identify.)

Twitter Isn't It Either

I'd tell you why, but I hate typing things on phones and it takes too long to compress my pithy wit into 140 characters.

The Iceberg

Of course I do some things because I'm stubborn. One reason I stay away from IRC is the time sink. Another is that I don't need to argue with a few bozos who helped make a couple of projects not fun anymore. Yet I also wanted to know what it was like for the 95% of people who write Perl but who aren't on IRC or mailing lists or community forums.

It can be a vast wasteland.

Sure, there's finally a new Camel out to help them write code like it's the 21st century, and some of them are hearing about things like Modern Perl: the book, which attempts at least to suggest community resources full of experienced and helpful people, but people who go it alone are in for a bumpy road. They will write awful code because they don't know any better and most of them won't fix it because they don't learn any better. They will copy and paste really awful code because that's what shows up in search results.

They will accidentally use a Catalyst plugin or technique or component that seemed like a great idea two years ago for a period of about six weeks before the flaws showed, and while everyone in the know has moved on, there's no documentation about the new approach because it's institutional knowledge which infects everyone via the vector of IRC and it hasn't ever jumped the air gap to the rest of the world.

(Not to pick on Catalyst: I don't actually know if this is true. I do know that it took several hours to figure out the recommended form processing module and I'm still not sure I have it right.)

The Empath

The solution isn't simple, but I see a relatively simple path to helping us figure out solutions. Imagine that you're a novice with regard to your project. Try to figure out the right approach, or at least a right approach, to a problem. Imagine you do not have you sitting by your side to offer suggestions. Imagine you don't know exactly which code to read to figure it out. (Imagine that you're not confident enough reading code to figure things out.)

What do you find? How long does it take? How right or wrong is it?

Yes, it would be nice if we were all in the same room and could tap each other on the shoulder whenever we had a question. (Maybe those agile folks are onto something!) Unfortunately not every programmer has that opportunity.

O(1) Is Bad for Sharing

Maybe this is only a problem for me because I'm so very stubborn, but maybe we have a chance to reinvent Perl and reinvigorate a community and reinvite a lot of disenfranchised users by spreading this institutional knowledge much further and wider than we do now.

(I assure you, there's little more professionally satisfying than knowing that a new programmer or a frazzled administrator or a student in India or Belarus or South Africa or Chile can stumble across my book and find a good solution to a tough problem even while I'm asleep, or walking in the park, or baking, or playing with my family. Sure, it's less direct than typing glowing green words on a black screen on an IRC session in screen, but it multiplies knowledge.)

The Reluctant Perl Programmer


Before you can solve a problem in a repeatable way, you must first understand the problem.

If you want to recommend a programming language, tool, library, or technique to someone, you must understand what your friend wants to accomplish.

The first wave of Perl programmers adopted Perl because it occupied a powerful and large niche somewhere between the Unix command line and C. It was easier to write small programs (especially for text-munging) than C and it scaled better from one-liners than shell scripts with sed and awk piped together with Unix utilities.

The second wave of Perl programmers adopted Perl because the system administrators had already installed it everywhere that mattered (this was before Windows realized the Internet came on computers) and because deploying a program that understood CGI was as easy as copying and pasting some code and plopping it in a cgi-bin/ directory with the execute bit set. Text munging was still easy. You didn't have to wrangle a compiler on the server. You didn't have Unix pipes to string together, at least not very easily.

Today's reluctant Perl programmer:

  • Just wants to get something done
  • Has some sort of data-munging task, whether extracting data from a biological database, text from an XML file, or prices from multiple spreadsheets
  • Doesn't know that the documentation exists, or how to read it
  • Probably doesn't know how the CPAN works, or how to configure a client
  • May know about the strict pragma
  • Doesn't want to read the error messages
  • Just wants to get something done

I hesitate to characterize a third wave of Perl adoption, because there's no obvious single third driver. The difference between reluctant Perl programmers and enthusiastic Perl programmers is that the latter sub-group embraces Perl as a whole. We take advantage of the CPAN ecosystem. We adopt new features and techniques. We refine them, keeping what works and discarding the rest. We revel in the social and engineering achievement of creating an unparalleled distributed testing and verification system which gives Perl its reliability.

We also get things done.

Perhaps the scope of the problem has changed. While it's easy to see that providing access to a few vital POSIX functions in Perls 1 through 4 was sufficient to solve the problems that most of the target audience needed to address and that adding a CGI module was sufficient to simplify web programming in the early days of Perl 5, how far can you go to address the divergent needs of reluctant programmers who are biologists, statisticians, linguists, financial analysts, attorneys, marketers, automation specialists, and testers?

I don't know if a biologist needs Moose in her base installation, but I know I'd rather have that than BioPerl. We are both likely to benefit from the presence of DBI, while a statistician may not.

Not every reluctant Perl programmer will go to a YAPC or register for a CPAN author account or even ever read a programming book. Yet some will, if we encourage them.

We're great at building tools, and we're great at handing out free copies of our toolboxes. Sometimes we're pretty good at including instructions with our toolboxes.

Now how do we hand the right tool to a reluctant programmer such that he can just get the job done, understand that the toolbox exists, and learn just enough to make his next foray into problem solving even easier? Yes, new programmers will make messes. Yes, that's okay—they're likely to be so small that we can help them clean them up, such that their second programs are a little bit better and a little bit more ambitious and help them become stronger programmers.

It's not a question of education or deployment or bundling or installation. It's not a single question, anyhow. Maybe that means our best option is a guided tour of individual success using Perl. We can't find every individual reluctant Perl programmer, but for those we can find, we can do much better at helping them solve problems in the small and in the large.

At its heart, science is a way of discovering the world by making small, controlled, testable hypotheses, testing them, and seeing what happens.

At its heart, a lot of programming is the same way. (So is running a small business selling a new product or service.)

At its heart, test-driven development is a scientific process. When done well, it makes and verifies assertions about the reality of the software and the needs it meets. You might even call some of these assertions axioms. If every substantive program writes its own rules of physics and reality, our tests exercise and demonstrate these rules just as the well understood experiments of gravity and light and motion demonstrate our understanding of the physical world.

Unlike the nature of reality (unless you're a solipsist, in which case you already know what I'm going to write next, so close this browser window and go outside), we control both sides of the experiment in our software. We write the tests and we write the code. This gives us a disadvantage, in that we're all optimists and rarely expect things to go wrong, but it also gives us an advantage, in that if we let the tests drive the low-level design and implementation, we get very quick feedback on accuracy, utility, and usability.

We also have the tremendous advantage that we can change the world to meet our expectations just as we change our expectations to reflect the world.
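In Perl, those assertions are ordinary Test::More statements. A hypothetical example of pinning down the "physics" of a tiny routine, one rule at a time:

```perl
use strict;
use warnings;
use Test::More;

# a small piece of the program's universe (the routine is illustrative)
sub total_price {
    my ($unit_price, $quantity) = @_;
    return $unit_price * $quantity;
}

# each assertion states one rule of that universe and verifies it
is total_price( 10, 0 ), 0,  'zero items cost nothing';
is total_price( 10, 3 ), 30, 'cost scales linearly with quantity';

done_testing();
```

Run it with prove and you get quick feedback on whether the universe you built is the universe you described.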

Many people claim one of the benefits of a comprehensive test suite is the confidence it gives you that you can change the design of your software without inadvertently changing behavior. It's a safety net. However you want to meet your interface and behavior promises, you do that. If anything changes for the worse, the tests will warn you.

This often works very well.

I like the original idea of refactoring for two reasons. First, it made clear the distinction between changing behavior and changing design. It's like switching out one stable sorting algorithm for another. If they both produce the same results, you can choose which one you prefer based on other concerns, such as performance or memory use or clarity or maintainability. The other refactoring characteristic is that all refactorings are small and reversible. Just as you can extract a method, so you can inline a method. If you haven't changed the public interface, switch back and forth between the two until you find the right design for your software.
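A small, reversible refactoring in practice: extract method and its inverse, inline method. Behavior stays constant; only the design changes (the routines here are illustrative):

```perl
use strict;
use warnings;

# before: the calculation lives inline in the reporting code
sub report_inline {
    my @prices = @_;
    my $total  = 0;
    $total    += $_ for @prices;
    return "Total: $total";
}

# after extract method: the calculation has a name of its own;
# inline method reverses this step exactly, restoring the version above
sub sum_prices {
    my $total = 0;
    $total   += $_ for @_;
    return $total;
}

sub report_extracted {
    return 'Total: ' . sum_prices( @_ );
}
```

Both produce identical results for identical inputs, so you can switch between them freely based on clarity or other concerns.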

(Some days I feel the temptation to become a linguistic prescriptivist, and today is one of those days, because the perfectly good word "refactoring" has come to be a snooty synonym for "rewriting a chunk of code and probably its interface" among people who are in the know.)

If your tests break during a refactoring, either you weren't really refactoring (you were rewriting) or you made an assumption somewhere in your reality model and need to rethink that part of your testing to make it more robust.

While it's important to keep the definition of "refactoring" pure (lest we lose the notions of design change with retained behavior and reversibility), it's also useful to give ourselves permission to change our tests in small ways to force us to change our code in small ways.

I repeat the word "small" for good reason.

My rule is this: as long as all of my tests pass (every one, no exceptions), I can check in anything I like on a branch in my repository. The corollary is that I can't check in anything if tests are failing. No matter how small the change, if my tests all pass, I can check it in. If it's a one-line change and the tests pass, I can check it in.

Giving myself the freedom to change reality and my conception of reality in as many small steps as possible lets me work in the same style as refactoring while changing behavior. I don't steal the name, but I do borrow the idea.

This means that I have to keep giving myself the permission to introduce intermediate steps from where I am to where I want to be. I know the code I check in now won't be the code I merge back in an hour, but that's okay. The code I check in now passes the tests I check in now. With every step of the process I change the universe, but every change in the universe comes with a verification that that's the universe I wanted at that point in time.

It's not quite refactoring. Perhaps it needs a better name. It's an immensely powerful trick: start with a universe, change your experiments slightly, then change the universe to match. It's the tortoise versus the hare: small steps, independently verified, which add up to big changes.

All you have to do is give yourself permission to work in stages so small that you can verify them in a couple of seconds, then use that discipline as a lever with which to move the world.

Want to derail any serious discussion of programming language tools or techniques? Ask "Yeah, but does it scale?"

Sure, it's not science. It's alchemy and astrology, but you can demonstrate your world-weary superiority. Better yet, you can distract people from getting real things done.

Sometime between when I learned to program (when we counted processor speeds in megahertz and fractions thereof) and today, the question flipped. Back in the day, when BASIC didn't have a SLEEP keyword and cooperative multitasking still made sense (invoke a callback to your own joke here, but please don't block other people from moving to the next sentence), you could insert a do-nothing counting loop to delay things, because even then computers were faster than human beings. We counted cycles.

We cheated.

Maybe we could have solved bigger problems if we were more clever, but we spent our time trying to cram as much program as possible into as few clock cycles as possible. If that meant rewriting a loop in assembly to shrink the memory footprint and to take advantage of internal details of the processor we'd read about in one of the copious manuals, we'd do it.

Features were important, but the rule of the day seemed to be to use limited resources to their fullest. If that meant skipping a builtin feature because you wanted to unmap the memory it took and use it for something else, that's what you did. If you could save a few bytes by taking advantage of a builtin timer instead of writing your own, you let the screen refresh rate dictate what happened.

I don't lament that loss. (I liked the challenges, but there are always challenges.) I do find the switch fascinating, though. Perhaps because I'm not writing silly little games or demos anymore but programs that are supposed to help real users manage their information and be more productive, the switch flipped in me rather than in the world.

(Then again, I did learn to program by the osmosis of typing a lot of code, changing it, and eventually learning what worked and what didn't. As above, so below.)

The programs I write now care more about dealing with lots of data than they do about fitting in limited computing resources. (Sometimes resource limits are still important: I've had to change algorithms more than once to make the working set of at least one project fit in available memory.) In fact, the resources I have at my disposal are so embarrassingly large compared to thirty years ago that I can waste a lot of processor time and memory to avoid waiting for things like speed-of-light latency accessing remote resources.

I didn't see that coming.

This all comes to mind when I see discussions of programming languages, techniques, and tools. The criticism most often flung, and intended to sting, is "But does it scale to large projects?"

... as if the skills needed to manage a project intended to deploy to an 8-bit microcontroller with 32 KB of RAM were so similar to a CRUD application running in a web browser used at most by 35 people within a 500 person company? (As if other skills are so different!)

Put another way, I don't care if you can't figure out how to make (for the sake of argument) agile development with coding standards and a relentless focus on refactoring and simplicity work with a team of 80 programmers distributed across four time zones and six teams.

I don't care if you think Java or PHP is the only language in which you can hire enough warm bodies to fill your open programming reqs because you think the problem is so large you have to throw more people at it.

I don't care if you think PostgreSQL is inappropriate because it's a relational database and they're slower than NoSQL if you have to scale to 50 million hits during the Olympics when I'm profitable with a few orders of magnitude fewer users.

Your large isn't my large isn't everyone's large, and the way you scale isn't the way I scale isn't the way everyone scales.

You're not doing science. You're not measuring what works and doesn't work. You're not accounting for control variables (could you even list all of the control variables necessary to produce a valid, reproducible experiment related to software development tools and techniques?).

Conventional wisdom says "Don't optimize until after you profile and find a valid target to optimize and a coherent way to measure the effects of your optimizations." Is it too much to ask to come up with ways to measure the ineffable second-order artifacts of software development like bug likelihood, user satisfaction, safety, reliability, and maintainability so we can measure the effects of things like static typing, automated refactoring tools, the presence and lack of higher order programming techniques, and incremental design?

Otherwise we're stuck in a world of alchemy, before the natural philosophers clawed their way to the point where a unified theory of energy and matter and motion and interaction made any sense. Maybe someday soon the smartest person in the room will answer the question "How does this work?" with "Let's try and find out!" rather than donning wizard robes and hat and waving some sort of mystical wand about wildly.




About this Archive

This page is an archive of entries from June 2012 listed from newest to oldest.
