AI and test management

There is a revolution underway in how industries make decisions based on data mining. It is happening everywhere, from retail and education to healthcare and banking. When it comes to software quality, however, organizations have not yet been able to take advantage of all the data they're currently collecting to make intelligent decisions about the quality of their software.

The terms Software Quality Management (SQM) and Test Management are often used almost interchangeably in the IT industry, but they mean different things. Quality Management refers to quality activities that support the entire software development life cycle: from collecting requirements, through designing and implementing the solution, to testing and change management. Test Management has traditionally focused on reactive measurement, concerned with either the quality of a product after it has been built or the "in-flight" progress of each release. Questions a Test Manager usually asks include:

  • Are the sprints on track?
  • Are the defects being resolved in time?
  • Will the upcoming release be on time?
  • What are the burndown charts telling me?

New software technology that simplifies how large data sets are collected and analyzed now makes it possible to mine software defect data from several past releases and exploit that data to improve your organization's testing practices. This approach employs artificial intelligence (AI) techniques, including statistical methods and machine learning, which allow for better and more intelligent testing decisions right from the start.
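
To make this concrete, here is a minimal sketch of what mining defect history with machine learning can look like. It is an illustration only, not Zephyr's implementation: the CSV export, column names, and features are hypothetical stand-ins for whatever your defect tracker actually records.

```python
# Minimal sketch: train a classifier on defect data from past releases to
# flag modules likely to be defect-prone in the next release. The CSV layout
# and feature names are hypothetical, not Zephyr's schema.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Each row: one module in one past release, with simple history-based features.
history = pd.read_csv("defect_history.csv")  # hypothetical export
features = ["loc", "churn", "past_defects", "test_coverage"]
X, y = history[features], history["had_defect"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Modules ranked by predicted defect risk can drive test prioritization
# before the test cycle even starts.
print(classification_report(y_test, model.predict(X_test)))
```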

Rather than asking ad hoc, 'in-flight' questions, you can pose deeper, more intelligent queries, such as:

  • Have we run the most important test cases at the start of the cycle?
  • How critical are the open defects in the software we're about to ship?
  • When QA teams say they're 70% done, have they covered the most important test cases, the ones that caused problems over the past several releases?
  • Are we proactively adjusting course?
  • How many defects do we have? How many test cases have been written and executed? What’s our defect closure rate? What does our complexity look like?

Machine learning helps your organization answer questions like these because it allows you to examine data from a much wider range of scenarios that impact the quality of your deliverables.

The Zephyr Platform

As mentioned earlier, software test management has traditionally adopted a reactive approach to measurement and analysis because it is the easiest to implement. This usually means relying on common testing metrics, such as:

  • What’s our defect count?
  • How fast are we moving?
  • What’s the complexity of our codebase?

The Zephyr platform expands on these common questions with a more proactive approach that breaks the usual boundaries of data analysis. The goal of this type of analysis is to maximize the efficiency and minimize the cost and risk involved during recurring software lifecycles. It does this by developing prediction models based on extensive data mining and iterative learning techniques.

Process Stability Prediction (PSP) - This model provides a bird's-eye view of defect management across multiple releases, which gives a sense of product stability over different lifecycles. Mapping defect trends across releases helps you better manage resources in order to keep your deliveries on schedule. It also reports on the stability of your point releases: the minor releases of a software project intended to fix bugs or make small cleanups rather than add significant features.
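
As a rough illustration of the kind of trend analysis PSP performs (the release labels and defect counts below are made up), you can chart open-defect counts per release and fit a simple trend line to gauge whether stability is improving:

```python
# Hedged sketch of release-level trend analysis: fit a linear trend to
# open-defect counts across releases. All numbers are illustration data.
import numpy as np

releases = ["4.0", "4.1", "4.2", "5.0", "5.1"]
open_defects = np.array([120, 95, 88, 140, 97])  # defects open at release cut

x = np.arange(len(releases))
slope, intercept = np.polyfit(x, open_defects, 1)

# A negative slope suggests stability is improving release over release;
# a spike at a major release followed by a drop at the point release (5.0
# then 5.1 above) is the pattern PSP-style views make visible.
trend = "improving" if slope < 0 else "degrading"
print(f"Defect trend across releases is {trend} ({slope:+.1f} defects/release)")
```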

Process Improvement Predictions (PIP) - This is a dynamic model that provides data on the number of test cases that have been automated for every project release. Higher test automation coverage helps reduce costs and improve efficiency. The model also helps ensure that you're getting the maximum return on the automation you attempt. For example, a team can use it to choose areas to automate based on the defects and coverage across projects, which keeps the team from sinking time and energy into an isolated area and ultimately helps maximize its efficiency and productivity.
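
A simplified sketch of that prioritization idea (illustrative only; the areas, counts, and scoring formula are assumptions, not the platform's model) might weight each area's automation gap by its defect history:

```python
# Rank product areas for automation investment: a big automation gap in an
# area with many recent defects scores highest. Purely illustrative data.
areas = {
    # area: (total test cases, automated test cases, defects last release)
    "checkout": (200, 40, 35),
    "search":   (150, 120, 5),
    "profile":  (80, 10, 12),
}

for area, (total, automated, defects) in areas.items():
    coverage = automated / total
    # Weight the automation gap by defect history so high-risk, low-coverage
    # areas come first.
    priority = (1 - coverage) * defects
    print(f"{area:10s} coverage={coverage:5.1%} priority={priority:6.1f}")
```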

Efficiency Improvement Predictions (EIP) - This is another dynamic model that looks at the pattern of relationships across all test cases in the system. By mining historical data and analyzing similarities between tests, it keeps customers from wasting time running duplicate test cases. It continuously looks for tests similar to those used in the current scenario, which helps teams optimize their execution strategies by reprioritizing tests based on the model's output. A team can then save time by running an optimized, pruned test list that still provides full coverage, instead of the entire test suite.
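
One common way to detect near-duplicate tests, shown here purely as an assumed illustration rather than Zephyr's actual algorithm, is to vectorize test-case descriptions with TF-IDF and compare them with cosine similarity:

```python
# Flag pairs of test cases whose descriptions are highly similar as
# candidates for pruning or deprioritization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tests = [
    "Verify login with valid credentials",
    "Verify login succeeds with a valid username and password",
    "Verify checkout total updates when a coupon is applied",
]

vectors = TfidfVectorizer().fit_transform(tests)
sim = cosine_similarity(vectors)

for i in range(len(tests)):
    for j in range(i + 1, len(tests)):
        if sim[i, j] > 0.5:  # threshold is arbitrary, for illustration
            print(f"Possible duplicates: #{i} and #{j} "
                  f"(similarity {sim[i, j]:.2f})")
```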

These three predictive engines that power the Zephyr Platform are all designed to help you automate the risk analysis and test management that are at the heart of software quality management.