Speeding Up My Test Suite by 25%


One of my work projects has a test suite that's grown a little too slow for my liking. With only a few hundred tests, I've seen our test runs take over 30 seconds in a couple of cases. We're diligent about separating tests into individual logical files, so that when we work on a feature we can run only its tests during development and save the full suite for a final verification before checkin. Even so, 30 seconds is too long.

(Before you ask, our test suite is fully parallelizable, so that 30 seconds keeps all of the cores on a development box at full usage.)

I realize that one fad in software of the last few years is continuous integration, where you submit your commits to a box somewhere that runs the full test suite and emails you if something goes wrong, sometime in the future, hopefully less than an hour from now. Hopefully.

We can't afford to wait that long to know if our code is right. (I like to push to our production box as soon as we have well-tested fixes that we know work reliably. That's often more than once per day. I'm the only one working on the site today, and I've already pushed three changes, with at least one more coming.)

When I worked on Parrot, I spent a lot of time profiling the VM to sneak out small performance improvements here and there, using less memory and far fewer CPU cycles. I'm at home in a profiler, so I picked a very small test program and ran it through Devel::NYTProf.
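For reference, a typical Devel::NYTProf run over a single test file looks like this (`t/slow.t` is a placeholder name, not a file from the project):

```shell
# Profile one test file; writes nytprof.out in the current directory
perl -d:NYTProf t/slow.t

# Turn the raw profile into a browsable HTML report (nytprof/index.html)
nytprofhtml
```

The HTML report breaks down inclusive and exclusive time per subroutine and per line, which makes startup costs like module loading easy to spot.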

I've mentioned before that this application has a Catalyst frontend which takes full advantage of Moose throughout. It uses DBIx::Class as its persistence layer.

As you might expect, startup time dominates most of the tests. We have 21 individual test files, and startup took close to two seconds for each of them. No matter how much we parallelized, we were always going to spend no less than 40 total seconds running tests. (The non-parallel test suite runs in ~67 seconds, so parallelization already gives a 55% improvement.)

When I looked at the profile, I saw that most of the startup time came from a couple of dependencies we didn't need.

In fact, even though I use HTML::FormHandler quite happily on another project, we don't use it on this project at all. We did use a Catalyst plugin which brought it in as a dependency, but we never used that feature.

I spent an hour revising that part of our application to remove that plugin (which we used to use, then customized our code so that the plugin was almost vestigial) and the parallel test run went down to 23 seconds.

We trimmed 10% of our dependencies, just by removing one plugin and the code it brought in.

(Tip: use Devel::TraceUse to find out which modules load which modules. It's a good way to get the conceptual weight of a dependency.)
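Devel::TraceUse is easy to try from the command line; here `MyApp` stands in for your application's top-level module:

```shell
# Dump the tree of module loads (who loaded whom) to STDERR
perl -d:TraceUse -MMyApp -e 1
```

The indented tree shows each module along with the module that pulled it in, so a single plugin dragging in a large dependency graph stands out immediately.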

I don't shy away from dependencies when they're useful, but in this case I revised only one file and ended up with less code overall than we had before I started.

With that said, there's clearly room for improvement. I traced through some Moose code which handles initialization (especially of attributes) and it seems like there should be some efficiency gains there. Similarly, both Catalyst and DBIC go through some contortions to load plugins and components that in my case were unnecessary. (I have a patch to consider for the former, but not the latter.)

Two things I learned do bother me somewhat. If you use Devel::TraceUse on a project of any size, you'll see a lot of inconsistency and duplication, thanks to years of TIMTOWTDI. Catalyst brings in Moose, but DBIC eventually ends up bringing in Moo. No one can decide whether to use @ISA or use base '...'; or use parent '...';. A lot of larger projects (DateTime, Moose extensions) have lots of helper modules spread out in lots of tiny .pm files which each need to be located, loaded, and compiled.

Worse, all of the nice declarative syntax you get with some of these modern libraries has a cost behind the scenes every time you run the program even if nothing has changed.

If I were to pick an optimization target for future versions of Perl 5, I'd want a good way for Moose or Catalyst or whatever to precompile my nice sugared code into a cached representation that could then load faster on subsequent invocations. (I know you're not getting an order of magnitude improvement here: you still have to set up stubs for metaclasses and the like, but even getting attributes and methods in place would be an improvement!)

I'd settle today for a preforking test suite that doesn't require me to rewrite all of my tests.

Then again, that ~25% improvement in test execution from removing one dependency isn't bad for the week either.

7 Comments

I found a similar speed-up today. I noticed a core base class was taking > 1 second to load, when it should have been "light":

time -p perl -MBase::Class -e 1

I found that it was "use'ing" a plugin which was "use'ing" Net::Amazon::S3, which uses Moose. I set the plugin to lazy-load Net::Amazon::S3 when it was actually needed, using 'require'. I made sure Net::Amazon::S3 was in our mod_perl start-up script, so it would still be pre-loaded under mod_perl.

The base class now loads in 0.14 seconds, almost a 10x speed-up, and a win for our unit tests.
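The change described above is roughly this shape (the package and method names here are illustrative, not the commenter's actual code):

```perl
package Base::Class::Plugin;
use strict;
use warnings;

# Before: this pulled in Net::Amazon::S3 (and therefore Moose) at
# compile time for every process that loaded the base class.
# use Net::Amazon::S3;

sub upload_file {
    my ($self, $bucket, $file) = @_;

    # After: defer loading until the S3 code path actually runs.
    # require consults %INC, so it's a cheap no-op if the module is
    # already loaded -- as it will be under mod_perl, thanks to the
    # start-up script pre-loading it.
    require Net::Amazon::S3;

    ...;    # remainder of the method elided
}

1;
```

Processes that never touch the S3 code path never pay for loading it, while pre-forked mod_perl workers still share the pre-loaded copy.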

Odd that you mention this now; I set my MX::Types plugins to dynamically load their dependencies just this week. I also wrote a patch for MXT::NetAddr::IP to do the same.

btw, I just want to say that I prefer Class::Load's load_class because it checks to see whether the module is already loaded; I believe require will compile every time at runtime (could be horribly wrong).
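For what it's worth, plain require also consults %INC, so an already-loaded module isn't recompiled; the practical difference is ergonomics when the class name lives in a variable. A small sketch (File::Spec is used purely as a handy core module):

```perl
use strict;
use warnings;

# require checks %INC, so a module compiles only once per process.
# The catch: with a class name in a variable, you must convert it to
# a file path yourself before calling require.
my $class = 'File::Spec';                # core module, for illustration
(my $path = "$class.pm") =~ s{::}{/}g;   # File::Spec -> File/Spec.pm
require $path;                           # no-op if already loaded

# Class::Load (if installed) packages this up, with clearer errors
# and an explicit is_class_loaded check:
if (eval { require Class::Load; 1 }) {
    Class::Load::load_class($class)
        unless Class::Load::is_class_loaded($class);
}
```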

Yuval Kogman, mst, and I have all looked at various Moose compiler options. It's kind of tricky, but it's probably possible.

Basically, there are a few things it should do ...

1. Take all the generated code (attribute accessors, constructors, etc.) and inline it in the file.

2. Resolve the roles a class uses and stick those methods in the file too. This is really kind of hard with Perl 5 since there's no built-in decompiler for subs. We can use B::Deparse, sort of, but it breaks in the presence of closures and such. So maybe we compile the roles too and stick the methods in at load time, but in a more efficient way.

3. Avoid loading any Moose modules or generating any metaclasses until ->meta is called.
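To make step 1 concrete, here's a simplified sketch of the kind of output such a compiler might emit for `has 'x' => (is => 'rw');` -- the generated constructor and accessor written out as plain Perl, minus the metaclass bookkeeping, type constraints, and triggers that real Moose accessors include:

```perl
package Point;
use strict;
use warnings;

# Hand-inlined equivalent of what Moose generates at load time for a
# single read-write attribute named 'x' (greatly simplified).
sub new {
    my ($class, %args) = @_;
    my $self = bless {}, $class;
    $self->{x} = $args{x} if exists $args{x};
    return $self;
}

sub x {
    my $self = shift;
    $self->{x} = shift if @_;    # writer when called with an argument
    return $self->{x};           # reader otherwise
}

1;
```

Loading that costs one file read and compile, with no metaclass construction at all, which is the whole point of caching the generated form.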

Your contribution of 10,000 round tuits will be gladly accepted ;)

#3 makes sense. How difficult is it?

I thought about #1 and #2 today and almost had a workable system until I tried to figure out how advice would work in a reasonably correct way. Then I gave up. Weaving together closures from different lexical environments is far too difficult for one day.

It's possible that sticking to a simple, declarative subset of Moose would be reasonable enough to compile, but as soon as you write arbitrary procedural code, all bets are off.

It's not really possible to do #3 without doing #1 and #2. It's the Moose metaclasses that do the code generation and role application. The only way to avoid loading them is to create a compiled form of the module that includes the result of the metaclasses doing their thing.

About this Entry

This page contains a single entry by chromatic published on August 31, 2012 2:48 PM.
