In the abstract, it’s easy to think of testing a piece of software as a single set of actions. Within the industry, however, it has become common practice to look upon performance testing as a multifaceted task. The process includes:
- Load testing
- Stress testing
- Endurance testing
- Scalability testing
- Volume testing
- Spike testing
Each phase has its distinct requirements and goals, and it’s important to be aware of them before moving ahead with a project. Likewise, it’s prudent to know which tools and processes are most suited to the job.
Load testing is intended to look at performance under two sets of conditions: normal loads and peak loads. An organization needs to model what it considers likely to be normal usage of its software. For example, a cloud-based photo storage system might expect to handle a certain baseline load for most of the year. Likewise, predictable annual increases, such as during the holidays, also need to be anticipated.
The aim of load testing is not to overload the system but to see what performance looks like when an expected load is regularly hitting it. This is typically done by using software to create virtual users and have them interact with the application. Bottlenecks should be identified, and the findings passed along to developers to see what can be done.
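The virtual-user approach can be sketched in a few lines. This is a minimal illustration, not a real harness: the `handle_request` function is a stand-in for what would, in practice, be an HTTP call to the system under test, and the user and request counts are arbitrary.

```python
import threading
import time
import random
from statistics import mean

def handle_request():
    """Stand-in for a real request to the system under test.
    (Assumption: in practice this would be an HTTP call to your app.)"""
    time.sleep(random.uniform(0.01, 0.03))  # simulated service time

def virtual_user(results, requests_per_user):
    """Each virtual user issues a fixed number of requests and records latency."""
    for _ in range(requests_per_user):
        start = time.perf_counter()
        handle_request()
        results.append(time.perf_counter() - start)

def run_load_test(num_users=20, requests_per_user=5):
    """Run all virtual users concurrently and collect their latencies."""
    results = []
    threads = [threading.Thread(target=virtual_user, args=(results, requests_per_user))
               for _ in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

latencies = run_load_test()
print(f"{len(latencies)} requests, mean latency {mean(latencies) * 1000:.1f} ms")
```

Tools such as JMeter do essentially this at much larger scale, with ramp-up schedules, result aggregation, and distributed load generation built in.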
Taking things to the next logical step, we arrive at stress testing. This is a deliberately intense process intended to find the breaking points of the system's operational capacity. It should only be conducted once reasonable load testing efforts have been made and the remedies identified during that stage have been implemented.
The objective is to identify safe usage limits, and it’s particularly important to spot vulnerabilities that may be triggered when the system is operating under stress. If a database implementation suffers a buffer overrun during excessive loads, it’s good to know that in advance.
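The core of a stress test is a ramp: keep increasing the load until the first failures appear. The sketch below models the system under test as a toy function with a fixed capacity; in a real stress test the load would be directed at the live system and "failure" would be observed, not computed. The capacity, step size, and limits here are all illustrative assumptions.

```python
def service(concurrent_requests, capacity=50):
    """Toy model of the system under test: requests beyond capacity fail.
    (Assumption: a real stress test would hit the live system instead.)"""
    succeeded = min(concurrent_requests, capacity)
    return succeeded, concurrent_requests - succeeded

def find_breaking_point(start=10, step=10, max_load=200):
    """Ramp the load upward until the first failures appear."""
    load = start
    while load <= max_load:
        _, failed = service(load)
        if failed > 0:
            return load  # first load level at which the system sheds requests
        load += step
    return None  # no breaking point found within the tested range

print("Breaking point:", find_breaking_point())
```

Knowing this number in advance lets you set safe usage limits well below it and investigate what actually gives out first, whether that's connections, memory, or a buffer somewhere in the database layer.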
It may seem a fine distinction to make, but how a piece of software holds up over a long period of sustained load is important, and that is the question endurance testing answers. Anyone who has watched a desktop program's memory usage balloon over the course of several hours of normal use can appreciate the difference. Just as issues often occur when a system is overwhelmed during a peak test, similar problems may begin to appear only after a prolonged run of normal usage.
Maintaining any project over the course of years will present issues as the user base grows, which is where scalability testing comes in. This calls for a degree of guesswork, as you'll often find yourself trying to determine how 1,000 users today might grow over five years. Unanticipated failures can result if the question isn't addressed early on in a non-production environment. No one wants to see a production database run out of space for entries because the key column was defined as INT(11) and the system ran out of assignable unique IDs.
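The INT(11) scenario is easy to project in advance. A MySQL INT(11) column is still a signed 32-bit integer underneath (the 11 is only a display width), so it tops out at 2,147,483,647. A back-of-the-envelope check like the one below, with an assumed insert rate, shows how quickly that ceiling can arrive:

```python
INT32_MAX = 2**31 - 1  # ceiling for a signed 32-bit column such as MySQL INT

def years_until_exhaustion(rows_per_day, current_rows=0):
    """Rough capacity projection: years until an auto-increment key runs out.
    (The insert rate is an assumption you would pull from growth modeling.)"""
    remaining = INT32_MAX - current_rows
    return remaining / rows_per_day / 365

# e.g. an assumed 5 million new rows per day:
print(f"{years_until_exhaustion(5_000_000):.1f} years of headroom")
```

At 5 million rows per day, the key space lasts barely over a year. Running this kind of projection during scalability testing is far cheaper than an emergency schema migration in production.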
The amount of data any user base generates is likely to grow as a product's popularity increases. To get ahead of these problems, it's also wise to perform volume testing. The goal in this case is to identify where problems might arise based on the sheer quantity of data being handled. For example, read-write issues with critical interface files, such as settings stored in XML, may impose volume limits that minor tweaks can lift.
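A simple way to probe that kind of limit is to generate the artifact at several sizes and measure how handling time scales. The sketch below does this for a hypothetical XML settings file using Python's standard library; the document structure and sizes are assumptions for illustration.

```python
import time
import xml.etree.ElementTree as ET

def build_settings_xml(num_entries):
    """Generate a settings document of a given size (assumed format)."""
    root = ET.Element("settings")
    for i in range(num_entries):
        ET.SubElement(root, "option", name=f"key{i}", value=str(i))
    return ET.tostring(root)

def time_parse(data):
    """Measure how long a full parse of the document takes."""
    start = time.perf_counter()
    ET.fromstring(data)
    return time.perf_counter() - start

for size in (1_000, 10_000, 100_000):
    elapsed = time_parse(build_settings_xml(size))
    print(f"{size:>7} entries: parse took {elapsed:.3f}s")
```

If parse time grows faster than the data does, or memory spikes at a particular size, you've found a volume limit worth fixing before customers find it.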
Sudden increases and drops in usage can lead to issues that are difficult to predict. If an entire block of internet addresses loses connectivity, a high-volume site might experience a dropoff that's both massive and instantaneous. These interruptions may even occur mid-operation. Spike testing allows you to identify specific potential issues and confirm that the system fails gracefully.
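A spike test drives the system with a load profile that jumps and collapses abruptly rather than ramping smoothly. The sketch below builds such a profile and checks the graceful-failure property: a well-behaved system serves up to its capacity and cleanly rejects the excess instead of crashing. All the numbers here are illustrative assumptions.

```python
def spike_profile(baseline=100, spike=1000, duration=10, spike_at=4, spike_len=2):
    """Requests-per-second schedule: steady baseline with a sudden spike
    and an equally sudden drop (values chosen for illustration)."""
    return [spike if spike_at <= t < spike_at + spike_len else baseline
            for t in range(duration)]

def apply_load(profile, capacity=500):
    """Model of graceful degradation: serve up to capacity each second,
    reject the rest cleanly rather than failing outright."""
    served = [min(rps, capacity) for rps in profile]
    rejected = [max(0, rps - capacity) for rps in profile]
    return served, rejected

profile = spike_profile()
served, rejected = apply_load(profile)
print("profile: ", profile)
print("rejected:", rejected)
```

During the spike seconds the system sheds the overflow and nothing else; outside them, rejections drop back to zero. A real spike test verifies that the production system behaves the same way, including mid-operation.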
Moving to Performance Testing
Devising a way to engage in testing while developers are still working on a specific generation of software takes a lot of planning. Many companies are turning to Agile methodologies to handle their testing needs. The goal with Agile processes is to ensure orderly efforts are made to advance products into testing, note issues, implement changes, and confirm completion of work.
Software performance testing tends to call for a large degree of automation, and it's wise to keep this in mind when choosing what to use. Many software development environments, such as the Enterprise editions of Microsoft Visual Studio, come with their own performance testing components. Those looking for an open source solution designed for web applications might wish to check out Apache JMeter. IBM Rational Performance Tester and HP LoadRunner are also popular licensed choices.
There are several trade-offs to weigh. For example, JMeter, by virtue of being open source, doesn't offer the same sort of scalability that the Visual Studio tools do, especially in terms of being able to purchase additional virtual user instances to keep increasing the load. If you're looking for a system that offers cloud-based solutions and simple Agile integration, IBM Rational Performance Tester is a solid option.
If you have questions about getting started with performance testing or want to push the toolset further, give us a call. We're always happy to answer any questions.