Testing Templates (or User Experience and the Seams of Architecture)


One of the drawbacks of little languages for things like configuration and database management and state transitions and templating is that they don't often avail themselves of the tooling and ecosystem that full languages boast. (Does your little language have a debugger? Does it have a working module system? How do you test things written in that language?)

When I work with templating in Perl, I often use Template::Toolkit, because it's well established, well tested, well documented, and popular. I often play fast and loose with the output of the templates, in part because I try to keep my templates very simple but also because I've never found it easy to test templates and their output.

I haven't found the ultimate answer, but some code I wrote in the past couple of days works well enough that I have a lot more confidence in the quality of my output now.

This project performs financial analysis of publicly traded stocks. It looks at certain facts and figures and compares them and categorizes each of them based on their quality. For example, a company which pays no dividend gets in the "Does not pay a dividend" category, while a company with a dividend yield better than the aggregate dividend yield of the S&P 500 index gets a high dividend score. (Of course there's a disclaimer that chasing only a high dividend yield gets you to some risky stocks with very low prices or weird financial tricks trying to raise the price, but here I write about testing, not financial analysis.)

The method which rates a dividend yield is part of a role which performs these types of analyses. It looks something like this simplified version:

sub dividend_yield_rating {
    my $self  = shift;
    my $yield = $self->dividend_yield;

    return 'NONE'        if $yield == 0;
    return 'SPECTACULAR' if $yield >= 3.0;
    return 'HIGH'        if $yield >= 2.5;
    return 'MEDIUM'      if $yield >  1.74;
    return 'LOW';
}
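The boundary between LOW and MEDIUM (1.74 versus 1.75) is exactly where off-by-one bugs hide in code like this. Here's a standalone copy of the same branching, stripped of the object, handy for quick boundary checks:

```perl
use strict;
use warnings;

# Standalone copy of the branching above; takes a yield, returns a rating.
sub rate_yield {
    my $yield = shift;
    return 'NONE'        if $yield == 0;
    return 'SPECTACULAR' if $yield >= 3.0;
    return 'HIGH'        if $yield >= 2.5;
    return 'MEDIUM'      if $yield >  1.74;
    return 'LOW';
}

print rate_yield(1.75), "\n";   # MEDIUM: just over the cutoff
print rate_yield(1.74), "\n";   # LOW: the boundary itself stays LOW
```

Note that the order of the tests matters: each `return` only fires after every higher threshold has failed, so a yield of 2.99 falls through SPECTACULAR and lands on HIGH.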

All of this textual analysis is present in a special template component devoted to displaying this analysis for a stock. It looks something like:

[%- dividend_yield_rating = stock.dividend_yield_rating -%]

[%- IF dividend_yield_rating == 'SPECTACULAR' -%]
This company has a large dividend yield!
[%- ELSIF dividend_yield_rating == 'HIGH' -%]
This company has a high dividend yield.
[%- ELSIF dividend_yield_rating == 'MEDIUM' -%]
This company has an average dividend yield.
[%- ELSIF dividend_yield_rating == 'LOW' -%]
This company has a low dividend yield.
[%- ELSIF dividend_yield_rating == 'NONE' -%]
This company pays no dividend.
[%- ELSE -%]
Oops! We haven't figured out this company's dividend_yield yet.
[%- END -%]

How would you test this?

The design here is deliberate; data drives the behavior. A stock has a dividend yield (another part of the system, verified with model tests). My Catalyst stack knows how to find stocks by name or symbol and display their analysis pages. I have separate model tests for the method in the display role:

sub test_dividend_yield_rating {
    my $stock  = shift;
    my %values = (
        map( { ( $_ => 'NONE'        ) } ( 0,    0.00        ) ),
        map( { ( $_ => 'LOW'         ) } ( 1.74, 1.00, 0.01  ) ),
        map( { ( $_ => 'MEDIUM'      ) } ( 2.00, 2.49, 1.75  ) ),
        map( { ( $_ => 'HIGH'        ) } ( 2.50, 2.99        ) ),
        map( { ( $_ => 'SPECTACULAR' ) } ( 3.0,  5.00, 12.00 ) ),
    );

    while (my ($amount, $rating) = each %values) {
        $stock->update({ dividend_yield => $amount });
        is $stock->dividend_yield_rating, $rating,
            "dividend_yield_rating for $amount should be $rating";
    }
}

... where $stock is fixture data I don't mind modifying in place (every test file gets its own in-memory SQLite database, thanks to DBICx::TestDatabase). That tests one part of the system.
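The database setup behind that is worth sketching. DBICx::TestDatabase deploys a schema into a fresh in-memory SQLite database; the schema class name and the fixture columns below are hypothetical stand-ins for this project's real ones:

```perl
# Sketch: one throwaway in-memory SQLite database per test file.
# MyApp::Schema and the Stock columns are hypothetical names here;
# DBICx::TestDatabase->new is the module's documented interface.
use DBICx::TestDatabase;

my $schema = DBICx::TestDatabase->new( 'MyApp::Schema' );

# Create the fixture row the tests then mutate in place.
my $stock = $schema->resultset( 'Stock' )->create({
    symbol         => 'AA',
    dividend_yield => 0,
});
```

Because the database vanishes when the test file exits, destructive updates to fixture data cost nothing.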

Testing the template is as easy as testing the whole stack. Effectively this is an end-to-end test (or an integration test or a customer test), because if this works, I know everything fits together correctly:

sub test_dividend_yield_rating {
    my ($schema, $ua) = @_;
    my $stock         = $schema->resultset( 'Stock' )->find({ symbol => 'AA' });
    my $spectacular   = 'This company has a large dividend yield!';
    my $great         = 'This company has a high dividend yield.';
    my $medium        = 'This company has an average dividend yield.';
    my $modest        = 'This company has a low dividend yield.';
    my $none          = 'This company pays no dividend.';

    my %rates = (
        12.00 => $spectacular,
         2.50 => $great,
         1.75 => $medium,
         1.00 => $modest,
         0.00 => $none,
    );

    while (my ($rate, $description) = each %rates) {
        $stock->update({ dividend_yield => $rate });
        $ua->get( '/stocks/AA/view' );
        $ua->content_contains( "<p>$description</p>" );
    }
}
Again, I don't mind manipulating the fixture data in place. It may feel a little dirty, but it's a lot simpler than working up some sort of mock object framework that doesn't actually tell me anything interesting about the system as a whole.
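For context, the $ua in that test comes from Test::WWW::Mechanize::Catalyst, which drives the Catalyst application in-process, so get() and content_contains() never touch a real network socket. A minimal sketch, assuming the application class is called MyApp (a hypothetical name):

```perl
# Hook a Mechanize-style user agent directly to the Catalyst app.
# 'MyApp' is a hypothetical application class name.
use Test::WWW::Mechanize::Catalyst;

my $ua = Test::WWW::Mechanize::Catalyst->new( catalyst_app => 'MyApp' );
```

Every content_contains() call emits its own TAP assertion, so these template checks count and report like any other test.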

I can be less rigorous about the test data values I use for the dividend yield because I've been more exhaustive about the corner cases in the model tests. (That's what the model tests are for, after all.)

This code could be more robust, though. In particular, it'd be nice to specify an XPath or CSS selector to say "the textual contents of this DOM fragment should contain this literal string or should match this regex", but I haven't needed anything more than this yet. (Test failures would also be much easier to diagnose that way.)
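A hedged sketch of what that selector-based assertion might look like, using Test::XPath (the div id here is invented; the real page structure would differ):

```perl
# Hypothetical sketch: assert against a specific DOM fragment rather
# than the whole page. The 'analysis' id is an invented example.
use Test::XPath;

my $tx = Test::XPath->new( xml => $ua->content, is_html => 1 );

$tx->is( '//div[@id="analysis"]/p', $description,
    'analysis paragraph should describe the dividend yield' );
```

Scoping the assertion to one fragment means a failure names the exact element that went wrong, instead of reporting that a string was missing from an entire page.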

I wrote this rating code in full-on TDD style with this approach, and it helped me catch and fix two real bugs I would otherwise have deployed. I didn't spend any time flipping between my code and a web browser, refreshing things, to make sure that they work. I know this template will behave correctly to the degree I've tested it.

It wasn't even as much work as I feared it might be.

Sure, I could test templates in isolation, on the theory that individual templates are separate entities on disk and thus deserve separate and unique tests, but what would that gain me? I really care that when my father looks at this page I've made for him, he can see a textual description of some of the numbers my program has produced, and that they make more sense to him this way.

In other words, I've tested the behavior of the code from the point of view of the user experience, because that's what matters most. I still verify correctness, but it's correctness of the aggregate of use, not the seams of architecture.

That's subtle, but it seems important.



This is why Mojolicious comes with a UserAgent and a DOM parser (with CSS selectors): to promote testing! I have to get better at using it, of course, but it's there! See Test::Mojo for more.

Mojolicious was on my mind as I thought about how to make this code more robust. I'm using Test::WWW::Mechanize::Catalyst for its niceness, so it's not quite as easy to wedge things in, but it's an inspiration.

I wrote Test::XPath to test template output. Works great.


About this Entry

This page contains a single entry by chromatic published on July 5, 2012 10:33 AM.
