Static Analysis


By Jason Schadewald (Product Manager, Parasoft) and Adam Trujillo (Technical Writer, Parasoft)

According to technology research firm Gartner, the SaaS market was worth $14.5 billion in 2012 and was expected to reach $22.1 billion by 2015. The rapid growth Gartner identifies should come as no surprise, considering SaaS's potential to sharply reduce time to market and the overhead associated with tracking and managing myriad versions and configurations of packaged software.

By rolling out a single version of the software, SaaS also enables organizations to centralize control. Many enterprise consumers prefer SaaS because it ensures consistency across the business, which reduces the cost of installation, maintenance, and hardware. Organizations also avoid the inefficiencies and hidden costs of installing incompatible versions of the same software on multiple machines.

A New Set of Business Risks

Despite these benefits, SaaS's promise of a shortened time to market and a reduced overall cost of ownership comes saddled with its own set of risks, such as a limited ability to recover from a failed deployment and the possibility of organization-wide disruptions to business operations. To be clear, these are not the types of risk that can be managed by cloud and IaaS strategies, which primarily protect against hardware failure. SaaS introduces the risk of delivering incorrect or incomplete business functionality on the scale of an entire business or enterprise, a risk that affects vendors and consumers alike. Because SaaS deployments are managed primarily by the vendor, consumers have few options for mitigating the consequences of a failed deployment, such as downgrading to a stable version or allocating additional resources to develop workarounds.

SaaS goes live to all users at once (the entire world in a public cloud, or an entire company in a private cloud), meaning that the consequences of both significant and insignificant defects are amplified. A simple logical flaw in a feature, function, or architecture can be exercised thousands of times each second, causing a cascade of errors, a mountain of corrupted data, or a series of successful security attacks shortly after the release of a new upgrade or patch. SaaS makes failure an all-or-nothing proposition and demands more diligent testing and prevention practices to address these business risks.

Introducing Change-Based Testing

Testing SaaS presents a unique challenge because the work of ensuring the security, reliability, and performance of the application can easily wipe out the shortened time to market and reduced cost of ownership that a SaaS strategy affords. Change-based testing, a method of verifying the business requirements affected by changes to code or interfaces, can help organizations retain the benefits of delivering SaaS while ensuring software quality. In a change-based testing environment, changes to the code automatically trigger processes that streamline the prioritization and execution of automated and manual tests.

Rather than requiring development teams to sift through thousands of test failures, many of which are the result of upstream failures, a change-based testing strategy prioritizes test results based on business value and delivers them directly to the relevant tester or developer for immediate resolution. By cutting through the mountain of results and delivering only those of highest relevance, change-based testing ensures SaaS quality while maintaining time-to-market sensitivity through measurable decreases in time spent reviewing and resolving test failures.
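The selection step can be sketched in a few lines. This is a minimal illustration, not Parasoft's implementation: the coverage map and the business-value scores are hypothetical inputs that a real platform would derive from instrumentation and stakeholder input.

```python
# Hypothetical coverage map: which tests exercise which source modules.
TEST_COVERAGE = {
    "billing.py": ["test_invoice_total", "test_tax_rounding"],
    "auth.py": ["test_login", "test_session_expiry"],
    "ui_theme.py": ["test_dark_mode"],
}

# Hypothetical business value assigned to each test's requirement.
BUSINESS_VALUE = {
    "test_invoice_total": 10,
    "test_tax_rounding": 9,
    "test_login": 8,
    "test_session_expiry": 6,
    "test_dark_mode": 2,
}

def select_tests(changed_files):
    """Return the tests affected by a change set, highest business value first."""
    affected = set()
    for path in changed_files:
        affected.update(TEST_COVERAGE.get(path, []))
    return sorted(affected, key=lambda t: BUSINESS_VALUE.get(t, 0), reverse=True)

print(select_tests(["billing.py", "ui_theme.py"]))
# → ['test_invoice_total', 'test_tax_rounding', 'test_dark_mode']
```

Only tests touching the changed modules run, and the highest-value ones surface first; everything else in the suite is deferred or skipped for that change.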

The key to maximizing the benefits of a change-based testing strategy is to implement it on a development testing platform. Development testing is the continuous application of software testing activities, such as unit testing, static analysis, and peer review, throughout the development lifecycle. A development testing platform, such as the Parasoft Development Testing Platform, consistently integrates and automates defect prevention and detection practices while accurately and objectively measuring productivity and application quality.
The intelligence engine at the core of the Parasoft Development Testing Platform determines test relevance in change-based testing, streamlining the development team's workflow by quickly providing actionable data about the risks of releasing the software in its current state. The process intelligence enabled by change-based testing on a development testing platform helps stakeholders and managers make informed decisions based on predictable outcomes and a high-level view of the business risks of release at any given moment. Stakeholders can use these insights to manage risk, whether by reallocating resources, delaying a release to make room for a fix, or moving a feature into the next release.

Automated Regression Testing

Any implementation of change-based testing must rely on the foundation of a strong suite of regression tests. In the strictest view, development teams must test every unit of every application after every change to guard against unintended consequences; anything less represents a calculated risk. Until recently, thoroughly managing and maintaining regression tests has been cost-prohibitive due to the well-known brittleness of regression test suites and the time it takes to run them.

The first step to reducing these costs was automation. Automation, combined with IaaS, allows tests to be executed within any defined time window. In addition, automation is often used to generate many types of tests, further reducing costs and time investments. But until now, development teams still had to manually review test results, virtually negating the productivity gains made by automating tests.

Change-based testing allows for prioritization of test results so that the most important results (those from tests closest to the changes and representing the greatest business risk) are automatically prioritized and delivered to the relevant developer or tester. By incorporating change-based testing into an existing automated regression testing practice, SaaS development can achieve its low-risk, high-reward promise.
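The triage side of this, routing ranked failures to the person who owns the affected code, can also be sketched briefly. The fields and ranking key below are illustrative assumptions, not a description of any vendor's actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class TestFailure:
    test_name: str
    owner: str            # developer or tester responsible for the covered code
    change_distance: int  # hops between the failing test and the changed code
    business_risk: int    # stakeholder-assigned risk of the covered requirement

def triage(failures):
    """Order failures so the closest, riskiest ones reach their owners first."""
    ranked = sorted(failures, key=lambda f: (f.change_distance, -f.business_risk))
    inbox = {}
    for f in ranked:
        inbox.setdefault(f.owner, []).append(f.test_name)
    return inbox

failures = [
    TestFailure("test_report_export", "dana", change_distance=3, business_risk=4),
    TestFailure("test_checkout", "alice", change_distance=0, business_risk=9),
    TestFailure("test_cart_total", "alice", change_distance=1, business_risk=7),
]

print(triage(failures))
# → {'alice': ['test_checkout', 'test_cart_total'], 'dana': ['test_report_export']}
```

Each developer sees only their own failures, pre-sorted so the result nearest the change and carrying the most business risk is resolved first.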

Simulating Dependencies to Test Highly-Dependent and Error-Handling Code

In order to regression test, development teams need to be able to simulate dependencies. Simulating dependencies, either through application virtualization or code-level function stubbing, is a critical aspect of testing highly-dependent code as well as error-handling code. Development teams often determine that testing highly-dependent code requires too much effort to be timely and cost-effective. Many developers also underestimate the probability of failure associated with error-handling code, leaving aspects of the application untested. If left unchecked, highly-dependent and error-handling code poses significant risk to the business. What organizations may not realize is that with modern dependency simulation technology, the effort required to test it is actually quite low.

One example of highly-dependent code is the business logic. Testing business logic is essential because it drives revenue and customer satisfaction. But because verifying the business logic requires components from across the entire application, organizations have traditionally waited until the integration phase to test it. As a result, developers and testing teams may find defects much later in the process, when the cost and time to remediate are high. Being able to simulate dependencies throughout the SDLC enables development teams to test business logic and other highly-dependent code before the integration phase, which ensures that organizations realize the benefits that a SaaS strategy affords.
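At the code level, function stubbing can be as simple as replacing a remote dependency with a canned stand-in. A minimal sketch using Python's standard `unittest.mock` library, with a hypothetical pricing function and tax service:

```python
from unittest import mock

# Hypothetical business-logic function whose pricing rule we want to verify.
# In production it calls a remote tax service we do not want to hit in tests.
def quote_total(items, tax_service):
    subtotal = sum(price for _, price in items)
    rate = tax_service.rate_for("US-WA")  # network call in production
    return round(subtotal * (1 + rate), 2)

# Stub the dependency so the business rule is testable before integration.
stub = mock.Mock()
stub.rate_for.return_value = 0.10

total = quote_total([("widget", 20.0), ("gadget", 30.0)], stub)
print(total)  # → 55.0

# The stub also records how it was used, verifying the interaction itself.
stub.rate_for.assert_called_once_with("US-WA")
```

The pricing rule is exercised in isolation, long before a live tax service exists in any test environment.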

Testing error-handling code is also critical because it functions as the recovery mechanism for failures caused by external dependencies. When implemented incompletely or without regard for business needs, error-handling code can cause unintended consequences ranging from minor service interruption to severe systemic failure. For this reason, error-handling code is also one of the attack surfaces most commonly targeted by attackers.
Developers and testers must be able to easily simulate dependencies to address this gap in the testing process. A state-of-the-art development testing platform enables dependency simulation either through application virtualization or code-level function stubbing.
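For error-handling code specifically, the same stubbing technique supports error injection: the simulated dependency is made to fail on demand so the recovery path can be exercised deterministically. A small sketch, again using `unittest.mock` with hypothetical names:

```python
from unittest import mock

# Hypothetical sync routine; we want to verify its recovery path without
# waiting for the real dependency to actually fail.
def sync_orders(store, remote):
    try:
        orders = remote.fetch_orders()
    except ConnectionError:
        return ("degraded", store.pending_count())  # graceful fallback
    store.save(orders)
    return ("ok", len(orders))

# Inject the failure: the stubbed remote raises on every call.
remote = mock.Mock()
remote.fetch_orders.side_effect = ConnectionError("upstream down")
store = mock.Mock()
store.pending_count.return_value = 7

status, count = sync_orders(store, remote)
print(status, count)  # → degraded 7

store.save.assert_not_called()  # no partial writes on failure
```

Because the failure is injected rather than awaited, the error-handling branch runs on every commit instead of remaining the untested path that attackers probe first.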

Furthermore, a development testing platform reduces the time and effort required to complete development and testing by

  • Automatically generating tests;

  • Automatically generating simulated dependencies and error injection;

  • Facilitating association of tests with requirements;

  • Tying tests and dependencies into automated regression testing runs; and

  • Prioritizing results according to risk and relevance.


Of course, there is no silver bullet. Ensuring the quality of any application, SaaS or otherwise, requires a combination of testing and prevention technologies, and the process driving the use of those technologies must align with business goals. Automated change-based regression testing with dependency simulation addresses the unique risks of delivering and consuming SaaS applications, and it must be included alongside the more common practices of static analysis, peer review, functional testing, and performance testing.



For Development Testing white papers, articles, and videos/webinars, visit our Development Testing Resource Center.



More Stories By Cynthia Dunlop

Cynthia Dunlop, Lead Content Strategist/Writer at Tricentis, writes about software testing and the SDLC, specializing in continuous testing, functional/API testing, DevOps, Agile, and service virtualization. She has written articles for publications including SD Times, Stickyminds, InfoQ, ComputerWorld, IEEE Computer, and Dr. Dobb's Journal. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.