What makes a good test?

I’m currently putting the finishing touches to a seminar on building simple automated tests for webpages with Selenium. As part of the session, I’ve got a section which talks about the elements of a good test. I’d actually finished writing it and was pretty pleased with the result when I suddenly realised that I’d pulled the whole thing out of my arse.

I’m pretty opinionated on QA matters. One of the biggest reasons I ended up leaving my last job was because I couldn’t get them to accept that I knew more than they did about testing and they ought to let me trash all their crappy existing infrastructure and spend lots of happy hours building them something newer and shinier. I’ve learnt a lot about tact, respect and incremental change since then, but I’m still a very outspoken advocate of continuous process improvement.

I’ve actually ended up in a fairly similar situation to last time, in that I’m surrounded by people who have been doing things in a certain way for a very long time and aren’t particularly interested in changing. It’s easy for me to see the huge glaring inefficiencies and unnecessary risks in the way that we work, but I’ve learnt that other people won’t take my word for it that things could be better, and that to all intents and purposes, if I can’t prove that the problems exist, they don’t.

Which is probably why I’ve become a huge fan of evidence-based debate instead of opinions. If I’m saying “I think we should have automated testing”, then there’s no reason why that’s more valid than another developer saying “I don’t think I want to”. But if I can send round a paper from ThoughtWorks on the subject of process maturity and demonstrate that we’re on the bottom rung of the model, that carries some weight.

So anyway, when I realised that I had no basis for my testing wisdom other than my own experience and (gasp!) opinions, it was obviously time to fire up Google and see what other people thought.

What I learnt is that there isn’t really any consensus on what makes a good test, and there’s very little research in this area. I found a few papers, most of which seemed to be just more opinions but from people with lots of letters after their name (and some of which I found myself violently disagreeing with, so obviously they were wrong anyway).

The acronym “SEARCH” (Setup, Execution, Analysis, Reporting, Cleanup, Help) is popular in some circles, although I personally feel that it sacrifices coherence for memorability (do you really need to separate analysis and reporting? And is the help system really part of the test, or even necessary if you wrote the thing properly in the first place?). I did find one description of a good test which I really loved:

The essential value of any test case lies in its ability to provide information (i.e. to reduce uncertainty).

Context-Driven Testing

Which is poignant, meaningful and inspiring, but of little practical help. However, with determination, pluck and a certain amount of fudging, I’ve put together a list of criteria which is not entirely just stuff I came up with myself. It’s my opinion, backed up by a couple of other people on the Internet, that a good test case should contain the following elements:

Setup / Initial State

Whether your test is manual or automated, it ought to be clear what state the system is expected to be in before the test starts. This helps to prevent differing results when the test is run under different conditions or by different people. This could include information about the tools to be used (e.g. which browser) as well as about the system itself (e.g. which version). I’m also including any required data for the test in this section.
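To make this concrete, here's a minimal sketch of an explicit initial state in a Python/Selenium test, roughly the shape of thing the seminar covers. The base URL is a made-up placeholder and the data-seeding helper is hypothetical; substitute whatever your system actually needs.

    import unittest
    from selenium import webdriver

    class AddCustomerTest(unittest.TestCase):
        def setUp(self):
            # Tools: which browser the test expects (needs geckodriver installed).
            self.driver = webdriver.Firefox()
            # System: which deployment/version the test runs against
            # (hypothetical URL; point it at your own test server).
            self.base_url = "http://test.example.com/crm"
            # Data: seed any records the test relies on here, e.g. via a
            # hypothetical seed_test_customers() helper.

        def tearDown(self):
            # Leave the system in a known state for the next test.
            self.driver.quit()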

Steps

These are the actions to be taken to carry out the test. For a manual test, they should be clear, unambiguous and repeatable. For automated tests, the steps should be well documented and clearly written. No matter how much you love your clever one-liner, when someone else has to figure out what the test does in 6 months' time, they'll hate it. Each step should have a clear purpose, and be necessary to get from the initial state to the expected state.
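As an illustration, here's what documented steps might look like as a method on the AddCustomerTest sketched above. The element IDs are invented, and each step gets a comment saying what it's for:

    from selenium.webdriver.common.by import By

    # (This method belongs on the AddCustomerTest class sketched earlier.)
    def test_add_new_customer_to_contact_list(self):
        driver = self.driver
        # Step 1: start from the contact list page (the initial state).
        driver.get(self.base_url + "/contacts")
        # Step 2: open the "add customer" form.
        driver.find_element(By.ID, "add-customer").click()
        # Step 3: enter the one piece of data this test cares about.
        driver.find_element(By.ID, "customer-name").send_keys("Jane Example")
        # Step 4: save, which should move the system to the expected state.
        driver.find_element(By.ID, "save").click()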

Expected Results

Without this, the test is pretty much pointless. There should be a simple way of telling whether the test has passed or failed, based on how the system responds to the test steps. Some of the worst tests I’ve ever seen simply printed out pages of numbers, and you had to look at the numbers and figure out whether they were ok or not. That sort of thing is a terrible idea – always go for pass/fail. In the event of a failure, your test should give a clear explanation of why the result is wrong and what it means, so that someone looking at the result can figure out whether it’s a showstopper or just an inconvenience.
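Continuing the sketch above, the expected result boils down to one unambiguous assertion, with a failure message that explains what went wrong rather than leaving the reader to eyeball output (the CSS selector is, again, invented):

    # Expected result: the new customer appears in the contact list.
    names = [el.text for el in
             driver.find_elements(By.CSS_SELECTOR, ".contact-name")]
    self.assertIn(
        "Jane Example", names,
        "New customer missing from the contact list: the form accepted "
        "the input but the record was apparently never saved.")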

Other Things

After scouring the Internet, I was forced to agree that there are a couple of other hallmarks of a good test:

  1. Purpose – a really good test has a single purpose, and will reveal some information about the system under test. If you can’t describe what your test is for within a non-waffly sentence, you may not have this.
  2. Meaningful Title – this is more applicable to automated tests, but a well-designed test will have a title that means something, so that when it pops up in a list of failures, you can tell at a glance what it’s testing and what the failure is likely to mean. test_customer_3 is a useless title. test_add_customer is a bit better. But test_add_new_customer_to_contact_list gives you a good clue as to exactly what functionality is being investigated, as the sketch after this list shows.
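For what it's worth, here's how those three names compare when the bodies are stripped away, which is all you see in a list of failures (a sketch; the bodies are elided on purpose):

    def test_customer_3(self): ...                        # tells you nothing
    def test_add_customer(self): ...                      # better, but add it where?
    def test_add_new_customer_to_contact_list(self): ...  # the failure explains itself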
