Want efficient automation? Get over the over-checking!

The following article is a guest post to Zephyr from LogiGear. LogiGear is a partner of Zephyr that provides leading-edge software testing technologies and expertise, along with software development services that enable our customers to accelerate business growth while having confidence in the software they deliver.

Efficient test automation is often seen as a technical challenge. Organizations believe that selecting a tool is the most important step toward making automation successful. Some also understand that adding an experienced automation engineer is a good idea. And in some cases keywords are introduced in the hope that they will make test automation achievable for non-technical people.

These factors help, but in my experience they are not the key elements of automation success. I don't consider successful automation a technical challenge; it is a test design challenge. Tests need to be well organized and have a clear scope. Their description, using keywords, should be at the right level of abstraction, with unneeded details hidden inside those keywords.

One aspect of this way of reasoning is the checks. Verifying outcomes is obviously a key reason to have tests in the first place. The tradeoff is that checks can also be a major source of maintenance sensitivity: tests can be unnecessarily impacted by changes in the application under test. Avoiding unnecessary checks is therefore a good idea, even though that is not always readily accepted.

Apart from being harmful to automation, unneeded checks can also spoil the metrics coming from a test execution. Let's say a test has 100 checks, of which only 5 fit the scope of the test. If those 5 fail but the other 95 still pass, the result looks like 95% success, while in reality the success rate is 0%.
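
As a back-of-the-envelope illustration, a few lines of Python restate those numbers and show how the raw pass rate hides the real result. The idea of tallying "in scope" checks separately is an assumption made for this example, not a feature of any particular tool:

    # Hypothetical tally: 100 checks, of which only 5 are in the scope of the test.
    total_checks = 100
    in_scope_checks = 5
    in_scope_passes = 0        # all 5 in-scope checks fail
    out_of_scope_passes = 95   # the remaining 95 checks still pass

    raw_pass_rate = (in_scope_passes + out_of_scope_passes) / total_checks
    in_scope_pass_rate = in_scope_passes / in_scope_checks

    print(f"raw pass rate:      {raw_pass_rate:.0%}")       # 95%, looks healthy
    print(f"in-scope pass rate: {in_scope_pass_rate:.0%}")  # 0%, the real story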

Two typical sources of over-checking are "Expected Result" columns in management tools, and "on the fly checking".

"Expected Result" columns

Many tools, including even our own TestArchitect, have a mechanism to define tests as a sequence of test steps. The test developer puts the steps in a table, which includes a column for the expected result of each step. A diligent test developer will typically populate those cells ("why not?"), and the automation engineer will have no choice but to implement them as checks. A majority of these checks, however, may not be needed in the scope of the test, and they become a form of over-checking. Use steps sparingly and only specify checks that are really needed.

We prefer to specify tests as sequences of keyworded lines that we call "actions". In this approach a check gets its own line, with its own keyword as the action name. This way checks are more clearly visible as the result of design decisions, following the scope of the test.
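
To make that concrete, here is a minimal sketch of action lines in Python. The action names, the order-handling functions, and the runner are all hypothetical, invented for this example; they are not part of TestArchitect or any other tool. The point is simply that the check is its own line, driven by its own keyword:

    # Hypothetical application hooks (stand-ins for driving the real application).
    ORDER_STATUS = {}

    def enter_order(customer, product):
        ORDER_STATUS[customer] = "accepted"

    def check_order_status(customer, expected):
        actual = ORDER_STATUS.get(customer, "unknown")
        assert actual == expected, f"order status: expected {expected}, got {actual}"

    # Each keyword maps to an implementation; checks are actions in their own right.
    ACTIONS = {
        "enter order": enter_order,
        "check order status": check_order_status,
    }

    # The test: a sequence of action lines. The check is a deliberate,
    # visible design decision, not a side effect of a navigation step.
    TEST = [
        ("enter order", "Smith", "widget"),
        ("check order status", "Smith", "accepted"),
    ]

    for keyword, *args in TEST:
        ACTIONS[keyword](*args)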

On the fly checking

The "on the fly" checking is the habit of adding detailed checks as part of the navigation of higher level tests. Something like "since we're here, let's do a check". For example the implementation of a "login" action might include a check whether the main window is visible after the log in.  This sounds like a meaningful check, but in my view it isn't. Whether the log in works should be the target of a separate tests, and should be executed separately.  Additionally, checking if a log in is successful as part of another test can have a negative impact by exposing a test to more impacts of application changes than is needed. Test automation is very much an exercise in the art of minimization-- the "less is more" approach is key.

The essence, in my view, is that each check in a test should carry a burden of proof: does the check fit the scope and objectives of the test? If not, the check needs to be reconsidered. In other words, get over the over-checking.

Hans Buwalda is an internationally recognized expert in test development and testing technology management and a pioneer of keyword-driven test automation. He was the first to present this approach, which is now widely used throughout the testing industry. Originally from The Netherlands, Hans now lives and works in California as CTO of LogiGear Corporation, directing the development of what has become the successful Action Based Testing™ methodology for test automation and its supporting TestArchitect™ toolset. Prior to joining LogiGear, Hans served as project director at CMG (now CGI) in the Netherlands. He is co-author of Integrated Test Design and Automation and a frequent speaker at international conferences.