Guest Blog: Combining Automated and Manual Testing to Maximize Visibility


Guest Author: Matthew Baker, Software Developer, Rough Stone Software.

Testing has historically been a heavily manual process, but, to the benefit of software quality everywhere, this is becoming less acceptable to organizations and less feasible overall. Today’s systems tend to have a great deal of functionality, and with that comes more testing. Continuing to add value to these systems while ensuring that the existing value stays intact is a balancing act for management and a challenge for test teams. Balancing the desire for constant improvement against the need to keep existing functionality working indefinitely takes test automation from a stretch goal to an essential skill for your Quality Assurance teams.

The reasons for this can be articulated with a simple thought exercise. Consider that we have a new software development effort. We begin this effort by implementing and testing Features A, B, & C. In this first iteration QA only has to test the features that the development team implemented. Next, Features D & E are implemented. Now the development organization is faced with a dilemma. Do we:

  1. Try to keep our pace by only testing the new functionality?
  2. Hold up everything (including new development) until all new features are tested and all existing features are regression tested?
  3. Begin to stagger our development and testing cycles by moving to a gated or “over the fence” type of approach?

I can say with confidence that the only sustainable answer is #2. While it sounds extreme, it is the only way for teams to maintain a consistent delivery pace with predictable quality. However desirable, it is difficult in practice: while development effort over a short time horizon (say 2-4 weeks) stays fairly consistent, test effort over the same period (including regression testing) continues to expand. The water just keeps piling up against the dam. In a QA effort that relies only on manual testing, executing on #2 is not achievable without either longer iterations or adding more testers to the team with each iteration. This is illustrated in the table below; note how test effort balloons with each iteration while development effort stays essentially the same.

Phase | Development Effort | Test Effort
------|--------------------|---------------------
1     | A, B, C            | A, B, C
2     | D, E               | A, B, C, D, E
3     | F, G               | A, B, C, D, E, F, G

Enter test automation. If your QA teams can offload some of this preexisting test effort to machines, which excel at repetitive tasks with known good outcomes, your team can continue to add value indefinitely while the machines let you know if something goes wrong with old functionality. Combine this test automation with a Continuous Integration system and you discover problems almost as soon as they are introduced.
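To make that concrete, here is a minimal sketch (in Python with pytest; the feature under test and its function name are invented for illustration, not taken from any real project) of the kind of regression suite a CI server can rerun on every commit, so Features A, B, and C keep getting checked long after the team has moved on to D and E:

```python
# test_regression.py -- a hypothetical regression suite that CI runs on every commit.
# The "feature" under test is a stand-in; in a real project it would live in production code.
import pytest


def calculate_invoice_total(line_items: list[float]) -> float:
    """Stand-in for a production feature: sum the line items, rejecting negatives."""
    if any(price < 0 for price in line_items):
        raise ValueError("line items must be non-negative")
    return sum(line_items)


def test_feature_a_totals_line_items():
    # Feature A: the total is the sum of the line item prices.
    assert calculate_invoice_total([10.00, 5.50]) == 15.50


def test_feature_b_empty_invoice_is_zero():
    # Feature B: an empty invoice totals to zero rather than failing.
    assert calculate_invoice_total([]) == 0


def test_feature_c_rejects_negative_prices():
    # Feature C: negative line items are invalid input.
    with pytest.raises(ValueError):
        calculate_invoice_total([-1.00])
```

Once a suite like this runs automatically on every change, the “test effort” column for A, B, and C effectively drops out of the manual workload.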

That’s all well and good in a perfect world, but what about those tests that can’t be automated? In these cases you still have the same classic problem of expanding test effort described above, but the solution lies in how you manage it. One of the greatest advantages of automated testing, particularly automation that is triggered every time your system changes, is that you get feedback from all that prior knowledge of your system in the context of a specific change and against a specific version of the software.

This is where good configuration management becomes essential. If you want your manual tests to give you the kind of visibility you get nearly for free from automated tests and continuous integration, you must be able to point to a specific version of your software and say: “tests X, Y, and Z have been run against this, but Q, R, and S haven’t”. Sadly, this also means accepting the reality that every single time your software changes (for me this usually means the MD5 hash has changed, since it’s no longer the same system) all prior results have been invalidated.
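As a rough sketch of that bookkeeping (the file layout, paths, and field names below are my own assumptions, not anything prescribed here), you can fingerprint the build artifact with MD5 and key every result, manual or automated, against that fingerprint:

```python
# test_ledger.py -- a hypothetical ledger of test results keyed by build fingerprint.
# Paths, field names, and the JSON layout are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LEDGER_PATH = Path("test_ledger.json")


def fingerprint(artifact: Path) -> str:
    """MD5 of the built artifact; if this changes, it is not the same system."""
    return hashlib.md5(artifact.read_bytes()).hexdigest()


def record_result(artifact: Path, test_id: str, passed: bool) -> None:
    """Record a manual (or automated) test result against this exact build."""
    ledger = json.loads(LEDGER_PATH.read_text()) if LEDGER_PATH.exists() else {}
    build = fingerprint(artifact)
    ledger.setdefault(build, {})[test_id] = {
        "passed": passed,
        "run_at": datetime.now(timezone.utc).isoformat(),
    }
    LEDGER_PATH.write_text(json.dumps(ledger, indent=2))


def untested(artifact: Path, all_tests: set[str]) -> set[str]:
    """Which tests have no recorded result against the current build?"""
    ledger = json.loads(LEDGER_PATH.read_text()) if LEDGER_PATH.exists() else {}
    return all_tests - set(ledger.get(fingerprint(artifact), {}))
```

Because results are keyed by the hash rather than by a version label, a rebuilt binary automatically starts with an empty slate, which is exactly the “all prior results are invalidated” rule made mechanical.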

However, this doesn’t have to be terrible. If you have good decomposition of components and capture precisely which version of the software each manual test was run against, you now have a powerful set of data. Combine the results of the automation with the most recent run of each of your manual tests and you can perform a risk analysis prior to your next deployment: is it worth running the missed or stale manual tests, or do you just go for it? The choice is yours, but now you’ll have data to back your decision.
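Continuing the hypothetical ledger sketch above, that pre-deployment check can be as simple as a gap report: list which manual tests have never been run against the candidate build and let a human weigh the risk. The suite names and artifact path here are, again, just placeholders:

```python
# deploy_check.py -- a pre-deployment gap report built on the hypothetical ledger above.
from pathlib import Path

from test_ledger import untested  # hypothetical module from the previous sketch

MANUAL_SUITE = {"exploratory-checkout", "accessibility-sweep", "print-preview"}

candidate = Path("dist/app-release.bin")  # illustrative artifact path
gaps = untested(candidate, MANUAL_SUITE)

if gaps:
    print(f"Not yet run against this build: {sorted(gaps)}")
else:
    print("Every manual test in the suite has a result against this exact build.")
```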

Bio:
Matt is a Software Developer at Rough Stone Software. But don’t hold that against him: he has spent his entire career focused on quality and testing, and he strives to help his teams turn out their highest quality work from beginning to end. Read his blog.
