December 2013 Archives

My buddy and sometimes collaborator Ovid has been publishing a series of articles about test organization. In particular, see today's Accidentally duplicating tests with Test::Class. I agree with 90% of what he wrote, and many of the organizations I know of would be better off if they followed his advice. (I won't tell you which 10% I disagree with, because it's subjective.)

It's always helpful to keep in mind someone's experience when judging his or her advice. At the BBC, Ovid worked on a large application with a large test suite which took far too long to run—multiple minutes. On a recent contract, he and I worked with a few other really good developers on a smaller application with a test suite which took about 90 seconds to run.

That's longer than I wanted. 30 seconds is about as much time as I want to spend waiting for a parallel test suite to run, but 90 seconds was workable. It wasn't a crisis.
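(For concreteness, "parallel" here just means letting the harness run several test files at once, which is what prove's -j switch does; the job count below is only an example, not what we actually used.)

$ prove -lr -j9 t/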

We did things wrong, according to Ovid's article; we used Test::Class with inheritance and we used one driver file per module file being tested. Like most test suites, the organization of our tests grew over time. Unlike many test suites, our organization improved gradually as we spent time refining and revising it.

We had two major causes of slowness in the test suite. One was the use of PostgreSQL transactions and savepoints to ensure data isolation at the database level. Some of the prevailing so-called wisdom in the test community is to use pervasive mock objects to avoid using a real database. That's not just bad advice, it's stupid advice. This application depended on real data with real characteristics and we unashamedly took advantage of real database features like transactions and triggers and foreign keys to ensure consistency in the database. Emulating all of those features in mock objects is silly busywork that does no one any good.
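Here's a minimal sketch of that isolation strategy, assuming a plain DBI handle to a PostgreSQL test database; the database and table names are invented for the example and aren't from the client's code. A savepoint does the same job when a test needs to nest inside an existing transaction.

    use strict;
    use warnings;
    use Test::More;
    use DBI;

    # Connect to a throwaway test database (the name is illustrative).
    my $dbh = DBI->connect( 'dbi:Pg:dbname=myapp_test', '', '',
        { RaiseError => 1, AutoCommit => 1 } );

    # Open a transaction; everything this test does stays inside it.
    $dbh->begin_work;

    $dbh->do( 'INSERT INTO pastry_shops (name) VALUES (?)', undef, 'Eclair' );
    my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM pastry_shops');
    ok $count >= 1, 'inserted a pastry shop inside the transaction';

    # Real triggers, constraints, and foreign keys all ran, but nothing
    # persists past the rollback, so the next test starts from clean data.
    $dbh->rollback;

    done_testing;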

The other cause of slowness was in the application's design. The initial implementation of one critical business feature made one specific database usage pattern very slow. It was only a problem in practice in the test suite, and it wasn't that bad. (The contract ended before it was worth addressing in general.)

Not every application follows this model. Not every development team has multiple experienced testers who deliberately make time to refine and revise test suites to make them effective and fast. (I always find that there's a balance between making the most beautiful and speedy test suite possible and getting enough coverage of the most important features to allow development to continue at its necessary pace. The article What "Worse is Better vs The Right Thing" is really about is a good summary of my thinking.)

With that all said, I'm not sure Ovid's approach would have helped this application very much. (It would have made a fine experiment for a couple of hours one afternoon, and if it worked, so much the better!)

I like using Test::Class to test reusable OO components. (Test::Routine is another good approach.) I don't always like the syntax, at least in the modern world of Moose, but it does the job. I don't like the explosion of little test runner files.

However, I do like two features of this approach:

  • The ability to run an individual class's tests apart from the entire suite
  • The knowledge that each test's environment is isolated at the process level

Both are laziness on my part, and that may be where I disagree with Ovid. I'm far too lazy to want to write:

$ prove -l t/test_class_runner.t -- --class=MyApp::Collection::PastryShops

... even though it's really not all that much to type.
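For contrast, the one-driver-file-per-class layout we used looks something like this; the class name and file path are hypothetical, but the shape is the point. Each little .t file does nothing more than load one test class and run it:

    # t/collection/pastry_shops.t (hypothetical path)
    use strict;
    use warnings;

    # an invented Test::Class subclass holding the actual test methods
    use Test::MyApp::Collection::PastryShops;

    Test::MyApp::Collection::PastryShops->runtests;

That layout is what makes prove -l t/collection/pastry_shops.t enough to run a single class's tests on its own.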

Second—and this is more important—I like the laziness of knowing that each individual test class I write will run in its own process. No failing test in another class will modify the environment of my test class. No garbage left in the database will be around in my process's view of the database. Maybe that's laziness on my part for not writing copious amounts of cleanup code to recover from possible failures in tests, but it is what it is.

Like I said at the start, however, I agree with Ovid 90% of the time. You'll get a lot of value out of reading and thinking about and adopting what he says. Yet don't assume that the right answer to speeding up your tests is always to switch to a new test runner or test execution model. Sometimes a slow test suite is a sign that you need to profile your application.

Keep in mind that most of my experience writing and maintaining and optimizing test suites has shown that startup time—loading the application and compiling all of the relevant Moose classes—is insignificant compared to the real work the test suites do. If you're working with an application that's poorly factored or that does a lot more work to start up than it probably should, you'll see different results. (I'm not above preventing the loading of Chart::Clicker, for example, when I don't need it to run any tests.)
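One hedged way to pull off that last trick is to defer loading until the code path that really needs the module; the environment variable and function name here are invented for the example:

    use strict;
    use warnings;

    sub render_chart {
        my ($data) = @_;

        # hypothetical escape hatch so the test suite never compiles the module
        return if $ENV{MYAPP_SKIP_CHARTS};

        require Chart::Clicker;          # compiled only when a chart is wanted
        my $chart = Chart::Clicker->new;
        # ... add data series and render here ...
        return $chart;
    }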

Testing is difficult, but worthwhile. Testing in Perl offers you the benefit of great test tools written by a community of smart, experienced people who are willing to experiment to make better tools. Take advantage of that.

Perl is 26 Today

26 years ago today, on December 18, 1987, Larry Wall released Perl to an unsuspecting world. The earliest reference to that posting to Usenet's comp.sources.unix is titled Perl, a "replacement" for awk and sed.

Perl did make awk and sed semi-obsolete. (This is Unix, where you can never get rid of anything.) In the process, it gave system administrators more power than they had from stringing together shell and Unix commands, with more ease than they had from writing C.

Then came the web and interactive web sites, and Perl found its way into a similar ecological niche: shells and Unix commands were a little too low level and a lot too clunky to string together, while C's string handling was too painful and its whipupitude too low.

Perl has been a pioneer in a lot of computing niches. It's excelled at zagging where other languages would have zigged, and it's demonstrated that many problems have good enough solutions if you're willing to look at them in a different way. Along its journey, Perl has suffered somewhat from its success—not only has it allowed non-programmers to commit atrocities against maintainability and cleanliness while getting their jobs done on time and under budget, but it's been around and successful and ubiquitous for so long that it's easy to take for granted.

Perl's stolen from other languages and communities with glee and without shame (CTAN => CPAN, Henry Spencer's regular expressions => Perl regular expressions). It's also encouraged other languages to improve just to compete. (What would PHP look like without Perl? Ruby? Python? Go? JavaScript? Even Haskell?)

As Perl grew up into adulthood, the Perl community rallied around a new style for Perl programs and Perl programming. At various times it's called Modern Perl or Enlightened Perl. This style is descriptive, not prescriptive. It retains Perl's flexibility and malleability to let you solve problems your way. Yet it also embraces Perl's essential nature.

Perl may not be the only tool in your toolbox. It may not be in your toolbox at all. Yet its legacy after 26 years is so great that it's influenced not only the tools you use every day but the way technologists think about the world.

Happy birthday, Perl. Here's to 26 more.

I dislike administering systems. If all I ever had to do were to type apt-get update and have all of my system administration done for me, that would be fine. Unfortunately, I have to administer systems now and then.

Fortunately, the free software world has a lot of people in the same situation, and a lot of smart people have written useful software to manage their systems. As a case in point, consider fail2ban, which I'd have had to invent if it didn't already exist. fail2ban watches log files for suspicious patterns and sends traffic from the offending IP addresses to a blackhole. For example, if some malicious remote machine in a botnet comes knocking at your SSH server with a dictionary full of usernames, fail2ban will let the kernel silently drop all network traffic from that machine for an hour after the third failed login.

That's all configurable. In fact, you can configure all of the existing rules and add new rules yourself.
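For example, the SSH behavior described above comes from a jail stanza along these lines. This is a sketch in the style of /etc/fail2ban/jail.local; jail and filter names vary by distribution and version, and the numbers merely restate the example above:

    [sshd]
    enabled  = true
    port     = ssh
    filter   = sshd
    logpath  = /var/log/auth.log
    # ban after the third failed login within ten minutes...
    maxretry = 3
    findtime = 600
    # ...and drop that host's traffic for an hour
    bantime  = 3600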

I did that the other day on a client's server. Somehow, the Internet at large had decided that a web-based database administration tool called phpMyAdmin was running on the server. That meant thousands of attempts to find dozens of versions of phpMyAdmin. (I assure you—there is no PHP running on that machine. phpMyAdmin has security holes? Who would have guessed?) That meant a lot of wasted resources and a lot of useless entries in the log files. (We hadn't yet gotten around to monitoring log files for reporting, so it was worse than it should have been.)

"Self," I told myself. "You should add a fail2ban rule to detect phpMyAdmin scans and drop that traffic."

I did. It was more difficult than it should have been.

fail2ban uses regular expressions to find individual entries in log files which represent suspicious access patterns. One line in a log file represents one event. This is the Unix way. This has been the Unix way for 40 years. It's been the Unix way for 40 years for one reason: it works pretty well, for the most part. (I like Unix, but I see its flaws sometimes.)

The web application I intended to secure has an administrative interface available at /admin. This makes sense. One of the places you can install phpMyAdmin is also /admin. This also makes a certain amount of sense.

The routing system in the client's web application redirects all requests under the Admin controller (the code counterpart to /admin) to a catchall action so as not to expose internal details of what is and isn't available with or without specific authentication credentials. This makes sense when I think about it one way and doesn't necessarily make sense another way. (It's not entirely what someone might call RESTful and it's almost certainly a violation of the HATEOAS concordat. Then again, it's an administrative interface hidden from the Internet at large behind authentication credentials.)

The first version of my regular expression looked for all attempts to access /admin, /phpmyadmin, /PhpMyAdmin, et al. that resulted in a redirection.

Of course, /admin also redirects real users with real web browsers to /admin/login to give them a chance to use a login mechanism that's not nearly as hateful as the basic authentication dialog that's been largely unchanged in web browsers since 1994. (You remember 1994. That's before PHP existed and before Windows machines were on the Internet in such droves that it made sense to gather a huge botnet of poorly secured Windows machines to search for phpMyAdmin vulnerabilities. Also you could have bought AAPL at a deep discount compared to now.)

Unfortunately, my first regular expression matched users going to /admin and getting redirected to /admin/login just as well as it matched bots going to /phpMyAdmin and getting redirected to an error page.

I changed the regular expression. We could also have made /admin display a login form to an unauthorized user. We could have done a lot of things. I changed the regular expression.
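In fail2ban filter terms, the change amounted to narrowing the pattern so that the application's own /admin redirect no longer matched. Roughly, and these patterns are reconstructions for illustration rather than the client's actual configuration:

    [Definition]
    # too broad: matches a real user redirected from /admin to /admin/login
    # just as readily as a bot probing for phpMyAdmin
    #failregex = ^<HOST> .*"(GET|POST) /+(admin|phpmyadmin|PhpMyAdmin|pma)\S* HTTP[^"]*" 30\d

    # narrower: only the phpMyAdmin-style probes trigger a ban
    failregex   = ^<HOST> .*"(GET|POST) /+(phpmyadmin|PhpMyAdmin|phpMyAdmin|pma|myadmin)\S* HTTP[^"]*" 30\d
    ignoreregex =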

The next day, I realized the problem: the standard Unix mechanism of logging plain text in a well-understood format and parsing it with regular expressions (or even a grammar) throws away information and then reconstructs it badly. At the point where the web application's router receives a remote request and redirects it, the router knows exactly why it is redirecting. It knows that /phpMyAdmin is an invalid route. It knows that an authenticated user requesting /admin should get redirected to the administrative dashboard. It knows that an unauthenticated user requesting /admin should get redirected to /admin/login.

Unfortunately, none of that reasoning makes it into the Apache httpd-style log file. The log gets a datestamp, an IP address, the requested URL path, and an HTTP status code. From there, fail2ban and the regular expression have to guess at why that log entry is there.

Guessing what semi-structured data means is unreliable.

Fortunately, fail2ban is a good Unix program and is flexible about which log file it scans. I could add another log file to the web application and write entries only when something requests a path that's completely unknown; if there's no controller mapped to the request path prefix /phpmyadmin, write to the log. That's only slightly more difficult to create and configure than it is to explain. You probably know how to do it already.
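A framework-agnostic sketch of that idea follows; the path prefixes, log location, and function name are all invented for the example. The catchall action would call something like this before redirecting:

    use strict;
    use warnings;
    use POSIX 'strftime';

    # request path prefixes the application actually serves (illustrative list)
    my %known_prefix = map { $_ => 1 } qw( / /admin /api /login );

    sub log_unknown_path {
        my ($ip, $path) = @_;

        my ($prefix) = $path =~ m{^(/[^/]*)};
        return if $known_prefix{ $prefix // '' };

        # append one line per unknown request; fail2ban watches only this file
        open my $fh, '>>', '/var/log/myapp/unknown_paths.log' or return;
        printf {$fh} "%s %s %s\n",
            strftime( '%Y-%m-%dT%H:%M:%S', localtime ), $ip, $path;
        close $fh;
    }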

Unfortunately, writing a separate log file only works around the problem. I still have to write a regular expression to parse lines in that log file so that fail2ban will handle them appropriately. That's the Unix philosophy at work. It works pretty well and it's worked pretty well for decades. Sure, there are ambiguities, but you can work around them pretty well too.

Sometimes, though, I tell myself that what I really want is the ability to send structured data as events to a centralized event system to which other processes can connect as listeners. I know there are things like systemd and D-Bus from the freedesktop.org world, but I rewrote the regex because "pretty well" gets the job done now and I don't expect this system to last 40 years.

(In fact, that sums up Unix pretty well too.)

Several years ago, my friend and co-worker Rael wrote Blosxom. He had two goals: to write a very simple blogging platform and to write it in the fewest lines possible.

Rael's one of the best dabblers I've ever met; he's really good at finding an interesting new technology or pattern, writing something small but clever in it, and then moving on to something else. He asked me for Perl advice a couple of times with Blosxom, mostly to find golfing techniques to keep the software under 300 lines.

That might not have been my most proud moment in writing maintainable software, but I appreciated it as an artistic exercise. (I've experimented in writing very short fiction, some of it 500 words or fewer and some of it only about 100 words long. It's difficult, but constraints help me make better decisions about what's really necessary.)

A couple of years later, Rael started a virtual private assistant as a service company, where his goal wasn't to write the least amount of code possible. His goal was to parse as much arbitrary human-generated text as possible to find useful information, such as times, dates, addresses, and contact information. (I'm sure his ultimate goal was to build a successful company with users who adored his software, and he did build up a group of users who adored his software.)

In my most recent client contract, the system had a few constraints:

  • import as much data from the data sources as possible to achieve a 100% accuracy rating while throwing out no more than 1.5% of the data as unimportable
  • process all user data change requests within 48 hours
  • never violate data sharing rules while allowing those rules to change within 48 hours
  • respect the native languages of users and their data (in other words, handle Unicode properly)

Blosxom uses plain text files with minimal formatting. It doesn't use a database. The last time I looked at its searching mechanism, it shelled out to grep. (When Rael first told me that, my reaction was "That's a waste of... wait, that's very clever. Spawning a new process is relatively cheap, and if you have the disk cache to spare on the searchable files—and there aren't going to be multi-megabytes of those—then it's a reasonable decision.") Blosxom itself is a single-file program you can drop anywhere and run from a web server, from inetd, from an SSH session manually, or even from cron or incron.
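If you've never seen that trick, a stripped-down version of the idea looks something like this. The query and directory are placeholders, and this is my sketch of the approach rather than Blosxom's actual code:

    use strict;
    use warnings;

    my $query   = 'perl';
    my $datadir = '/var/www/blog/entries';

    # the list form of open avoids the shell entirely, so the query needs no quoting
    open my $grep, '-|', 'grep', '-r', '-l', '-i', '--', $query, $datadir
        or die "Cannot run grep: $!";
    chomp( my @matching_files = <$grep> );
    close $grep;

    print "$_\n" for @matching_files;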

The client project uses multiple servers with PostgreSQL, a database change management system, a job queue, a copious test suite, an indexing system, a configuration management system, a geolocation service, and a reporting system. If something goes wrong, it should be easy to revert changes and redeploy the right thing.

My team spent a lot of time understanding what that system had to do, where the business wanted to go, how the software helped the business meet its goals, and what was likely to go wrong technically. Even though we didn't have the time or resources to do everything perfectly from the start (who does?), we took time to refine our processes and our code and our design as we and the business discovered more about what the whole system needed to do.

That's what I expect of any good team that's building something new. (If you're building the same software you've built many times before, good luck to you.)

One of the biggest risks we faced was getting Unicode right. If we hadn't been very careful (and known what we were doing, both with Unicode in general and with Unicode in Perl specifically), we'd have wasted a lot of time and effort going in the wrong direction and caused lots of bugs for users. Code written for dabbling doesn't have to go to the lengths we went to. (When was the last time a one-off script you wrote had to worry about Unicode normalization and collation order? Unless you're Tom Christiansen, Nick Patch, or a contributor to p5p, probably not that often.)
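To make that concrete, here's a small sketch of the two problems just named, normalization and collation, using the core Unicode::Normalize and Unicode::Collate modules; the strings are examples, not client data:

    use strict;
    use warnings;
    use utf8;
    use Unicode::Normalize 'NFC';
    use Unicode::Collate;

    # the same visible text can arrive precomposed or decomposed
    my $precomposed = "r\x{E9}sum\x{E9}";        # é as U+00E9
    my $decomposed  = "re\x{301}sume\x{301}";    # e followed by a combining acute

    print "equal only after normalization\n"
        if $precomposed ne $decomposed
        && NFC($precomposed) eq NFC($decomposed);

    # sorting by raw codepoint puts 'Åberg' after 'Zeta'; a real collation does not
    my $collator = Unicode::Collate->new;
    my @sorted   = $collator->sort( 'Zeta', "\x{C5}berg", 'Andersson' );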

That all might explain my immediate reaction to Herd Thither, Me Hither. If you read that post as sincere (and I don't), you might take it as arguing that dabbling in new technologies and techniques is so important that you should stake your career on it.

That's probably overstating the argument, but then again, you've probably heard the bleating of the thunderous herds questioning why you're not using their most favorite, shiniest of all technologies. After all, Node.js's cooperative multitasking event-driven callback system is so fast and Haskell's type system makes it impossible to compile incorrect code and Clojure is basically Common Lisp for the 21st century and Python is the new BASIC and Rust will make D look like C++ and Ruby on Rails invented web programming. No, what you are using is so ooooooooold and outdated! Get with the times!

I wonder how many startups fail because they mistake what they can get out of dabbling in technology for what it takes to develop software for a business. (That's a spectrum, by the way. A couple of days of exploring database deployment systems here and there paid off later by reducing the amount of work we had to do to support the system.) I've been doing this long enough that I no longer believe that technical superiority in and of itself guarantees success. Instead I believe that the ability to deliver working software that gets better over time and doesn't regress in quality, maintainability, and stability is a stronger indicator of success.

That is not a feature of technology newness. No, if success is an iceberg, the superficial technology choice is above the waterline. Unless you're aware of the rest of the constraints of the business, judging the development team's choices on that little part is silly and shortsighted.
