Companies have heard the saying “test early and test often” more times than they can count, but in a DevOps environment, testing becomes an integral part of the entire software development process rather than a phase to schedule. SD Times spoke with some experts about how much testing can be done in a Dev-Test-Ops environment, and how companies can determine how much testing is enough.

Matthew Brayley-Berger, worldwide product marketing manager at HPE
This is a tough question, because like all things software, it depends. Ultimately in most IT environments, the technology is designed to support a business function, so the criticality of that function should be the driving factor behind any decisions.

The majority of organizations undertake a continuous release strategy to provide faster support for the business, so it makes sense that quality needs to be an important consideration. It doesn’t take many high-severity defects or late-stage integration issues to undo any speed gained, and that’s kind of the point. Teams need a holistic view of how quality is measured and what vulnerabilities are likely to exist so that they can deliberately plan a remediation strategy. Such strategies could include planning higher levels of automated testing around the core architecture, or ensuring that end users are available earlier in the process.

The key in any Continuous Delivery environment is to shrink the gap between integrations and tests, to ensure that any corrective action is minimal and consumable within the team’s velocity. Many larger organizations fall back on what they call “a hardening sprint,” and if that works for the organization’s timelines and business needs, there isn’t an issue. But for many organizations, this is an early warning sign that the team is taking on too much work or isn’t sufficiently validating quality.

So how much testing is enough? It does depend, but I always like to recommend that organizations have a solid life-cycle management platform to help them better scope what needs to be tested, and deliberately have people on the team focused on architecting and validating “quality” throughout the release. Testers, I argue, absolutely do exist in an agile world, and they play a critical part on any team by helping everyone catch errors and validate core business capability as early as possible.

In fact, it’s these testers—QA professionals—working as part of the core team who can help ensure that we are testing enough, drive the thinking about and building of automation, and help course-correct early in the life cycle when warning signs begin to emerge.

The other secret that I’ve seen really help teams, with respect to quality, is using service virtualization to ensure that we can actually test earlier and more frequently. I’ve seen far too many high-velocity teams fall prey to broken dependent services or infrastructure issues (e.g., access to a mainframe). In this day and age, virtualization is such a no-brainer that there really isn’t an excuse for not using it.
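
HPE’s own virtualization tooling isn’t shown in the article, but the idea can be sketched with nothing more than the Python standard library: stand up a stub that impersonates an unavailable dependency (here, a hypothetical mainframe-backed account service) so the rest of the suite can keep running. The endpoint and payload below are illustrative assumptions, not anyone’s real API.

```python
# Minimal stand-in for a dependent service (e.g., a mainframe-backed
# account lookup) so integration tests can run when the real system is
# slow, broken, or unavailable. The path and payload are made up for
# illustration; real service-virtualization tools record and replay far
# richer behavior (latency, faults, stateful flows).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class VirtualAccountService(BaseHTTPRequestHandler):
    CANNED = {"/accounts/12345": {"id": "12345", "balance": 250.75, "status": "active"}}

    def do_GET(self):
        body = self.CANNED.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

    def log_message(self, *args):
        pass  # keep test output quiet


if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 instead of
    # the real dependency, then run the integration suite as usual.
    HTTPServer(("localhost", 8080), VirtualAccountService).serve_forever()
```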

Dan McFall, vice president of mobility solutions at Mobile Labs
The trickiest part of a Dev-Test-Ops environment is that too much testing can be as “dangerous” as too little testing. The challenge for teams leaving waterfall for Continuous Delivery/Deployment is that delivering updated code to clients no longer takes months or even weeks. As a result, it can be easy to get caught up in trying to make the code perfect and never release anything. Automation is key in the Dev-Test-Ops environment because more automation allows you to perform more rapid testing.

On the other hand, customers have functionality expectations you must meet, and you cannot always get a second chance. As a minimum in a Dev-Test-Ops world, you should have a full regression suite ready to run before each release. Then you need to understand the interoperability and non-functional components of the application as well. Performance and UX are also crucial, but if you have solid monitoring and feedback processes, these can be handled in production if you are committed to rapid responses.

Ultimately, more testing is always better than less. The more of your testing, results analysis, and correction processes you can automate, the better off you will be. Just don’t forget that part of the reason for Dev-Test-Ops is to get new code out the door. It can still be tempting to navel-gaze, but then you might as well go back to waterfall development.

Jason Hammon, director of product management at TechExcel
In a Dev-Test-Ops environment, it’s often more difficult to ever have “enough testing” because implementation cycles are shorter and QA has less time to prepare and test the code before it’s deployed. Test teams need to test smarter in a Dev-Test-Ops environment by utilizing tools that allow them to prioritize test areas that may be new or have increased risk.

Test-management solutions make it easier to determine what coverage has been completed and what areas still need to be tested. When test management includes, or is integrated with, requirements management, test teams can also ensure that all of the requirements have been successfully tested, ensuring that each delivery matches expectations. While a Dev-Test-Ops environment can present challenges for testing teams, careful planning and execution tracking will still lead to successful deliveries.
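
As a rough illustration of the traceability Hammon describes, the toy check below links hypothetical requirement IDs to test results and reports which requirements are covered, failing, or untested. Real test-management suites maintain these links and roll them up into dashboards; the data here is invented.

```python
# Toy requirements-to-test traceability check: the requirement IDs, test
# names, and results are hypothetical, purely to show how coverage gaps
# become visible once requirements and tests are linked.
traceability = {
    "REQ-101 login":        [("test_login_ok", "pass"), ("test_login_lockout", "pass")],
    "REQ-102 checkout":     [("test_checkout_card", "fail")],
    "REQ-103 order export": [],  # no tests linked yet
}

for requirement, runs in traceability.items():
    if not runs:
        status = "NOT COVERED"
    elif all(result == "pass" for _, result in runs):
        status = "covered and passing"
    else:
        status = "covered but failing"
    print(f"{requirement}: {status}")
```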

Tom Lounibos, CEO of SOASTA
At SOASTA, we believe that testing is never complete—it’s continuous. And it has to include more than in the past, when functional testing was sufficient to sign off a release. Today, with desktop and mobile customers accessing your website or mobile application, great performance is an imperative. Poor performance creates a bad user experience that will drive your customers away, likely to your competitor, and tarnish your brand. Performance issues are often caught in load testing, typically on pre-production staging servers set up for load testing. Whatever scale that system is capable of reaching is then extrapolated to estimate whether the desired production load can be met.

When the production system is not an exact match of the pre-production system, the load tests aren’t a reliable measure of production capacity or of the performance that can be expected. This is due to the differences between production and pre-production, including the network, load balancers, and firewalls, plus any server configuration differences. Testing the production system is the only reliable way to measure the performance it will deliver under load.

Waiting to test in production is too late. A performance baseline must be set in development, where poor-performing code is not promoted into the release. Some engineering teams already use tools like JMeter to load test in development. Adding performance testing into the Continuous Integration system institutionalizes these tests.
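
The article doesn’t show what such a CI gate looks like, so here is a minimal sketch using only the Python standard library: fire a batch of concurrent requests at one endpoint and fail the build if 95th-percentile latency blows past an agreed baseline. The URL, request counts, and budget are placeholder assumptions; a team standardized on JMeter would run its test plan in this step and assert on its results instead.

```python
# A very small performance gate of the kind a CI job might run after the
# build: hammer one endpoint with concurrent requests and fail the stage
# if p95 latency regresses past a baseline agreed with the team.
import statistics
import sys
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/accounts/12345"  # assumed test endpoint
REQUESTS = 200
CONCURRENCY = 20
P95_BUDGET_SECONDS = 0.5  # assumed baseline


def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
        resp.read()
    return time.perf_counter() - start


with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_request, range(REQUESTS)))

p95 = latencies[int(len(latencies) * 0.95) - 1]
print(f"p95 latency: {p95:.3f}s (budget {P95_BUDGET_SECONDS:.3f}s), "
      f"median {statistics.median(latencies):.3f}s")
sys.exit(0 if p95 <= P95_BUDGET_SECONDS else 1)  # non-zero exit fails the CI stage
```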

While performance testing moves left, functional testing also moves right into production, where the application is verified to deliver not only great performance under load but also the correct user experience and functionality. Only in production, with all the integrations in place from your own team and from third-party suppliers (Content Delivery Network partners, plug-in code partners such as social networks, tracking codes, and the myriad third-party programmatic advertisers), can the true performance of the full, integrated app be tested and measured.

SOASTA was founded alongside the rise of the public cloud, and we have always advocated for testing in production from the public cloud. This is where large-scale load testing is available at low cost through cloud service partners, including Amazon Web Services, Google Cloud Platform, Microsoft Azure, and others.

When developers build tests that run from development through to production, performance engineers build tests that run in development, pre-production, and production, and automation executes these tests continuously, the whole team can be confident that there is enough testing to deliver the solution to market, avoiding the risk of missing serious defects while delivering great performance.

Rod Cope, CTO of Rogue Wave
Here’s the most overused and disliked answer in the book: It depends. As with any effort in development, there’s a tradeoff between what you’re willing to invest and what you’re willing to risk. Some bugs wreak havoc on a few customers, and some bugs affect many customers, but not severely. This is where development and product management discuss the possible impacts and determine whether it’s necessary to avoid, correct, or ignore the bugs.

So who owns this? In DevOps, everyone is responsible for testing, from the developer to QA to the IT director. This doesn’t mean more tests to reach some impossible-to-reach goal of “enough testing”; rather, it means embedding the tools, processes, and training across all teams to support rapidly shifting requirements, features, and release cycles. QA is no longer the gatekeeper; it acts as an enabler for automated testing and reporting that needs little to no human intervention.

Naturally, relevant standards and compliance come into play to ensure that both internal and external mandates are met. These are typically brought into the process early via user stories and the various acceptance criteria or definitions of done, and can be addressed with specifically tuned checkers during static code analysis.
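
What a “specifically tuned checker” might look like is not spelled out in the article; as a toy sketch, the script below walks Python source with the standard-library ast module and flags calls that a hypothetical internal mandate forbids. Real teams would encode rules like this in their static-analysis tool of choice rather than hand-rolling them.

```python
# Toy example of a tuned checker: flag calls an internal mandate forbids
# (eval/exec stand in for the banned construct) by walking the AST of the
# files passed on the command line.
import ast
import sys

BANNED_CALLS = {"eval", "exec"}  # hypothetical compliance rule


def check(path):
    findings = []
    with open(path, encoding="utf-8") as handle:
        tree = ast.parse(handle.read(), filename=path)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in BANNED_CALLS):
            findings.append(f"{path}:{node.lineno}: banned call '{node.func.id}'")
    return findings


if __name__ == "__main__":
    problems = [finding for path in sys.argv[1:] for finding in check(path)]
    print("\n".join(problems) or "clean")
    sys.exit(1 if problems else 0)  # non-zero exit blocks the pipeline
```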

Delivery dates, even in the Agile Age, are often immutable, so testing may simply reflect the amount of time or effort that’s been put into it, rather than the desired end state. Business decisions and market forces dictate when products are released. When does development alone get to specify a launch date? Rarely, if ever.

Back to the fundamental question: There’s no way to determine the right amount of testing. But there are ways to ensure development teams have a consistent and realistic process, supporting tools that don’t slow it down, and a shared understanding of what’s expected.

Tim Hinds, product marketing manager at Neotys
Many people make the mistake of measuring test coverage by the number of test cases rather than by risk coverage. Testing is like insurance: You want to have as little as you can without exposing yourself to major risks. The truth for performance testing is that if your app isn’t business-critical, you can get away with not testing much.
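
One hypothetical way to put risk coverage ahead of case counting is to score each functional area by likelihood of failure and business impact, then spend the limited test budget on the highest scores first. The areas and numbers below are invented purely to illustrate the idea.

```python
# Risk coverage rather than test-case counting: rank areas by
# likelihood-of-failure x business-impact and deep-test the riskiest ones.
areas = [
    # (area, likelihood of failure 1-5, business impact 1-5) -- illustrative
    ("payment processing", 4, 5),
    ("search autocomplete", 3, 2),
    ("admin report export", 2, 2),
    ("login / session handling", 3, 5),
]

TEST_BUDGET = 2  # how many areas we can cover deeply this cycle

ranked = sorted(areas, key=lambda a: a[1] * a[2], reverse=True)
for area, likelihood, impact in ranked[:TEST_BUDGET]:
    print(f"deep-test: {area} (risk score {likelihood * impact})")
for area, likelihood, impact in ranked[TEST_BUDGET:]:
    print(f"smoke-test only: {area} (risk score {likelihood * impact})")
```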

On the other hand, if the app performance is critical to your business, you want to test scenarios that are as close to real life as possible, and use tools that can do this in an automatic way. Otherwise, you’re forced to make the compromise between speed of delivery and reliability of app performance.

Alon Girmonsky, CEO of BlazeMeter
I would say that 120% test coverage is about enough. Why 120%? Although it’s challenging, testing all anticipated usage scenarios gets you to 100% coverage.

The problem is that even with 100% test coverage you are only covering the predefined tests. In production, you are likely to find additional scenarios and user flows that the planned testing didn’t account for. These flows that real users travel may be the very operations that lead to unexpected system behavior.

By testing these common user flows and scenarios on top of the 100% test coverage, you arrive at the 120% figure. Without the “What do users actually do in production?” component, you can’t be sure you are ready to deliver without surprises.
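
The “extra 20%” has to come from somewhere; one assumed approach is to mine production traffic for flows the planned suite never covered. The sketch below compares paths from a made-up access log against a made-up planned-coverage list and surfaces the popular gaps as candidate scenarios.

```python
# Finding the "extra 20%": compare what real users hit in production (paths
# pulled from an access log) with what the planned suite already covers, and
# surface the popular flows the plan missed. Log and coverage data are
# illustrative assumptions.
from collections import Counter

planned_coverage = {"/", "/login", "/search", "/checkout"}

access_log = [
    "/", "/login", "/search", "/search", "/compare", "/compare",
    "/checkout", "/checkout/apply-coupon", "/checkout/apply-coupon",
]

hits = Counter(access_log)
uncovered = {path: count for path, count in hits.items() if path not in planned_coverage}

for path, count in sorted(uncovered.items(), key=lambda kv: kv[1], reverse=True):
    print(f"add scenario for {path} (seen {count} times in production)")
```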

See here for more information on the growth of Dev-Test-Ops, and here for a roundup of Dev-Test-Ops offerings.