ITHAKA is a not-for-profit organization that helps the academic community use digital technologies to preserve the scholarly record and to advance research and teaching in sustainable ways. ITHAKA manages two large digital library and preservation services – JSTOR and Portico – in addition to a research and consulting service called Ithaka S+R. ITHAKA works with a wide range of organizations in the academic community: foundations, universities, libraries, colleges, scholarly and learned societies, publishers, and others, including individual researchers.
Today, I had a chance to speak with Alison Enright, ITHAKA’s QA Manager for JSTOR and Portico.
Q: Can you tell us a little about what your organization does?
We preserve digital content reliably for the long term, which includes converting millions of pages of print materials to digital format, and we create trustworthy, affordable web-based access platforms so people around the world can use this content in their research and teaching.
Q: Can you tell us about the test management challenges you faced prior to Zephyr?
We had been using another product for years and had a lot of challenges with it. For example, any time we tried to perform bulk actions like creating a large test set, we got errors. This was a recurring theme. It became so problematic that our internal users lost faith in the system and turned to other means, like Excel, for test management. We also have a mixed environment where some testers use Macs and some use Windows machines, which was difficult to manage with the old system. Other limitations included poor integration with non-proprietary tools, insufficient reporting, and unresponsive customer support.
Q: What effect did these problems have on your project team?
Each time there was an issue, it caused hours of delay. Basically, when things errored out, we didn't know what was complete and what was incomplete. Hours were lost, people were frustrated, and we risked missing our deadlines because we had to start our testing over. We have over 14,000 test cases that we run quite often, so you can see how starting over would cause delays. We also spent time exporting and compiling reports in Excel, work that we believed should be automated within a system.
Q: Out of all the options in the marketplace, what made you select Zephyr?
The system seemed very easy to use, the cost was right, and the fact that you could run it in multiple browsers was appealing to our users. Even while trialing the product, creating test cases and executing bulk actions was very simple. The reports were intuitive, and we could customize dashboards, reducing the number of reports we had to export to Excel. Zephyr offers a lot of reports that we use on a regular basis, whereas with our last product we had to create graphs and charts for the different projects we were working on.
Q: With all the challenges you had previously been facing, did Zephyr alleviate them?
Absolutely! We don’t have any of those concerns anymore, and Zephyr has done a great job responding to our needs. One of the features we really liked about our previous product was the ease of filtering. This was a key feature and a make-or-break requirement that the new vendor had to meet if we were going to consider switching. Zephyr addressed it in the 4.5 release. The new enhancements make setting up test cycles easy: we can add a subset of test cases from anywhere in the repository, as well as from search results and previous test cycles. Zephyr also offered the best integration with JIRA, our bug tracking system, along with APIs for integrating the automation tools we use, such as Cucumber, Ruby in a Git repository, and Jenkins. And Zephyr’s support not only compared favorably to other test management vendors; it is some of the best we have experienced from any software provider.
Q: Can you elaborate on what was on your checklist of required functionality?
Bulk actions, sorting, filtering, and reporting, plus being able to create test sets so that we can execute them and show progress. We work in an Agile environment, so when we finish a sprint, there is a regression cycle that needs to be completed: here’s our test suite, here’s how much we have executed, here’s what we’ve got left. We need reports to help us increase transparency and visibility with all stakeholders on the project team. Now everyone uses one system, and data is not stored in different places.