10 Considerations for Mobile App Testing (Part 2)

The following is Part 2 of our article on the top 10 Considerations for Mobile App Testing.

6 – Device Constraints

In general, mobile devices, whether phones, tablets or wearables, have severe constraints in terms of power, processing and storage compared to an average PC. Furthermore, their form factors, screen sizes, layouts, platform capabilities and connectivity options vary widely, even within a single manufacturer's product line.

Testing any mobile app to its fullest requires pushing the limits of how few platform resources it can run on. An application that adapts to the richness or paucity of its platform fares far better in user acceptance than one that simply throws up an error message when resources are inadequate to support nominal usage expectations.
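
To make that concrete, here is a minimal Python sketch, using the psutil library, of how an app might degrade gracefully instead of failing outright; the quality tiers and memory thresholds are illustrative assumptions, not prescriptions:

    import psutil

    def choose_image_quality() -> str:
        """Pick an asset tier from free memory instead of erroring out."""
        available_mb = psutil.virtual_memory().available / (1024 * 1024)
        if available_mb > 1024:
            return "high"    # full-resolution assets
        if available_mb > 256:
            return "medium"  # downscaled assets
        return "low"         # minimal assets: degraded, but still usable

    print(choose_image_quality())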

7 – Network Constraints

Other constraints on device performance are less predictable than the device's physical configuration. The most valuable apps are network-aware and respond consistently whether the user is connected via 3G, 4G or WiFi, even in the face of network jitter and packet loss. Where it makes sense, these apps should offer offline functionality as well.

Besides testing connectivity channels, tests that evaluate app functionality and usability across different carrier networks may expose subtle differences that impact performance. Furthermore, the effects of network congestion, interruptions from text or voice messages, incoming calls and connectivity shared with other apps must all be assessed.

How well a mobile application responds and preserves state across networks and under varying network constraints is a vital part of the user experience. You can be certain that users will first blame the app for any deficiencies rather than the network.
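
As a rough illustration of network-aware behaviour, the following Python sketch retries with exponential backoff and falls back to a local cache when the network is unavailable; the function name and cache structure are hypothetical:

    import time
    import requests

    def fetch_with_fallback(url: str, cache: dict, retries: int = 3) -> dict:
        """Retry with exponential backoff; serve cached data when offline."""
        for attempt in range(retries):
            try:
                resp = requests.get(url, timeout=5)
                resp.raise_for_status()
                cache[url] = resp.json()   # refresh the offline cache
                return cache[url]
            except requests.RequestException:
                time.sleep(2 ** attempt)   # back off: 1s, 2s, 4s
        if url in cache:
            return cache[url]              # degraded but consistent answer
        raise ConnectionError("no network and no cached copy")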

8 – Back-end Constraints

Any mobile app that has a server-side component must take latency into account during testing. A series of graded test cases can evaluate the effects of degradation in client-server communication. For hybrid and web-based apps, varying browser loads should also be imposed.

If the web server or DB server is implemented by your organization, then separate instrumentation and testing on that side is a requirement. If these are not under your control, it is still important to test for unacceptable latency and handle it gracefully.

To calibrate how impatient your app's users can be, take a page from Google's experience: an added latency of half a second to one second results in a 20 percent drop in traffic. Latency can often be improved by reformatting data, reducing the amount of data sent over the connection or removing intervening agents and proxies.
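
A simple way to keep such a latency budget honest in testing is to measure percentiles directly. The sketch below, written in Python with the requests library against a hypothetical health endpoint, fails the run if 95th-percentile latency exceeds the half-second threshold cited above:

    import statistics
    import time
    import requests

    def latency_p95_ms(url: str, samples: int = 50) -> float:
        """Measure round-trip latency and return the 95th percentile in ms."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            requests.get(url, timeout=10)
            timings.append((time.perf_counter() - start) * 1000)
        return statistics.quantiles(timings, n=100)[94]  # 95th percentile

    # Fail the test run if p95 exceeds the half-second budget.
    assert latency_p95_ms("https://api.example.com/health") < 500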

9 – Security and Privacy

Gartner projects that 75% of mobile applications available in 2015 will fail fundamental security tests, and an even greater share carry privacy risks. Most of these issues arise from weak access management, cross-site scripting and data storage leakage.

Because security and privacy matter so much to your end users, you owe it to them to incorporate security testing end-to-end in the development process rather than saving it for the final phases or, worse yet, leaving it to real users post-release.

Test cases for five common security defects must be covered; a minimal sketch of the first two appears after this list:

  • User authentication and the constraints put on usernames, passwords or PINs
  • Ensuring data transfers are never done without encryption
  • Exposure of user identifying information to other apps or 3rd-party ad networks
  • Avoidance of storing unencrypted data on the client device
  • The robustness of server-side security including verifying API calls
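
As promised above, here is a minimal pytest-style sketch covering the first two defects; the host api.example.com and its /set-pin and /login endpoints are hypothetical stand-ins for your own API:

    import requests

    API = "https://api.example.com"  # hypothetical host for illustration

    def test_weak_pins_are_rejected():
        """Defect 1: trivial PIN values must be refused."""
        for pin in ("0000", "1234", "1111"):
            resp = requests.post(f"{API}/set-pin", json={"pin": pin}, timeout=10)
            assert resp.status_code == 400

    def test_plain_http_is_refused():
        """Defect 2: transfers must never happen without encryption."""
        resp = requests.get("http://api.example.com/login",
                            allow_redirects=False, timeout=10)
        assert resp.status_code in (301, 308, 403)  # redirect to HTTPS or refuse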

10 – Regional Differences

The impressive diversity of the mobile app marketplace along the dimensions of OS types, device capabilities and network configurations should not lead you to de-prioritize the diversity of your end users. Even for enterprise workforce apps, attention must be given to testing for regional differences in language, culture, currencies and social network preferences if your app is to gain international appeal.

Many aspects of localization can be contracted to third parties, but time must be allocated to verify the results. If possible, have native speakers review interface labels and error messages to avoid unintentional misunderstandings. In locales where text runs right-to-left, the effects on the UI must be evaluated as well.
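
One cheap automated check is verifying that every UI string in the base locale has a counterpart in each translated resource file. A minimal Python sketch, assuming JSON string tables with hypothetical names such as strings_en.json and strings_ar.json:

    import json

    def missing_translations(base_file: str, target_file: str) -> set:
        """Keys present in the base locale but absent from the target locale."""
        with open(base_file, encoding="utf-8") as b, \
             open(target_file, encoding="utf-8") as t:
            base, target = json.load(b), json.load(t)
        return set(base) - set(target)

    # e.g. missing_translations("strings_en.json", "strings_ar.json")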

Business Benefits

Taking the above considerations to heart, and applying them before the next mobile application development project begins, pays immediate and long-term dividends to a business's competitive posture:

  • With agile methodologies, designers, developers, testers and IT all improve their abilities to collaborate and produce the highest-quality products at lower cost
  • Realistically managing the testing scope within the ever-increasing complexity of the mobile device ecosystem is an essential organizational skill for focused effort and profitability
  • A greater emphasis on automation lowers costs and increases the business’ responsiveness to customers and changing market conditions
  • Carefully evaluating all aspects of mobile app testing including functional, performance, compatibility, network, UI/UX and security tests improves the organization’s decision-making dexterity while reducing risk
  • Thorough and efficient testing processes free up resources that can be used for increased product innovation
  • Ensuring end-user satisfaction by providing a functional, responsive and safe app experience is invaluable to application adoption and to the reputation of the company's products


Summary

Mobile device testing presents an array of challenges well beyond those for traditional desktop application testing due to the unparalleled diversity of products, operating environments, networks and inherent device constraints.

Despite these formidable challenges to producing and testing high-quality products, businesses must accept that mobile app users have little tolerance for apps whose functionality, performance or safety do not meet their heightened expectations. The mobile app world is a highly competitive one in which end users always have alternative products available to them.

Businesses up to the challenge of producing the best, most usable mobile apps and earning the highest customer loyalty must adopt a mindset in which app production is an end-to-end, highly collaborative process that addresses these 10 key considerations at every step of the way. That is a tall order, especially for enterprises that have until now used a manual, waterfall approach to development and accepted gaps in testing coverage.

The benefits to organizations that are able to adopt highly collaborative, continuous delivery models of product development and can embrace test automation to the fullest extent are difficult to overestimate.

Besides producing higher-quality software with faster time-to-market and improved customer responsiveness, they will see overall costs decrease. Additionally, they can free resources to advance the most innovative initiatives within the enterprise, further increasing its reputation and profits.

10 Considerations for Mobile App Testing (Part 1)

As the world decidedly transitions from desktops and laptops to mobile devices, application developers and testers face enormous challenges to ensure that their products meet basic metrics of functionality, performance and usability. These are requirements for desktop apps too, but they are compounded for mobile apps due to the immense variance in platform resources, input methods, screen sizes, operating systems, operating system versions, networks and application architectures on mobile devices.

Furthermore, end users, who now have access to more platform and application choices than ever before, are completely unforgiving of faults in mobile applications. Failures result in almost instantaneous loss of customers according to most studies. A disappointing mobile application experience causes a quarter of users to immediately delete the app. A third will instead use a competitor’s application and nearly half will be skeptical of trying other apps from that supplier.

Getting it right for your customers, whether they are the captive audience for your enterprise workforce apps or the general public, requires a particular mindset with regard to planning, development, testing and support that may be unfamiliar to teams oriented toward PC software.

Approaches to mobile application testing require bearing in mind several considerations that ensure your company’s mobile app testing is on target and successful.

10 Key Considerations for Your Mobile Application Testing

1 – The Potential for Combinatorial Test Case Explosion

It is easy to get carried away with the idea that your next great app will conquer the mobile device world. To accomplish that, you must go live on at least Android and iOS, and probably Windows 10 Mobile.

Supporting more than one OS presents a significant cross-platform development challenge, but the reality for testing is even more onerous. The testers must assess the application across devices from a dozen manufacturers, each with a portfolio of products with different screen sizes, platform capabilities and resource configurations.
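
A quick back-of-the-envelope calculation shows how fast the matrix grows. This Python sketch, with illustrative counts rather than a real device inventory, enumerates the full combination space:

    from itertools import product

    manufacturers = [f"vendor_{i}" for i in range(12)]
    os_versions = ["9", "10", "11", "12", "13"]
    screens = ["small", "medium", "large", "xlarge"]
    networks = ["3G", "4G", "WiFi"]

    full_matrix = list(product(manufacturers, os_versions, screens, networks))
    print(len(full_matrix))  # 12 * 5 * 4 * 3 = 720 configurations, before
                             # multiplying by the app's own feature matrix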

If the app is HTML5-based, testing can use a standardized browser, but hybrid and native apps require platform-specific testing, especially with regard to the UI and on-board sensors. There will be additional test cases for network interoperability and security as well.

Unless your organization already has deep experience in developing cross-platform mobile apps, you are better off reducing the scope of both development and testing to something manageable: choose a single OS and the subset of manufacturers and devices supporting it that produces the greatest value to your business.

2 – Your Current Design-Code-Test-Support Methodology

Testing within an organizational structure that utilizes a throw-it-over-the-wall approach between developers and testers is not going to cut it in the fast-paced mobile app ecosystem. Agile development methods greatly reduce lag times between coding, testing, delivery, defect detection and defect remedy. Not only do such methods increase team efficiency and allow for faster time-to-market but they improve customer support and team cohesion too.

If your development processes are not agile, then your efforts to deal with the challenges of mobile testing can be easily doubled. Though the introduction of agile methodologies within ongoing projects is likely to be counter-productive, pilot projects should be started sooner rather than later to begin a gradual switchover to methods proven to be more effective. Be careful not to fall into the trap of simply accelerating waterfall processes and thinking you have improved agility.

3 – Real versus Virtual Device Testing

The bottom line when choosing between real and emulated devices for testing is that all of your end users are going to be using real devices. Although it is impractical to use only real devices for all test phases, no application should be released without full testing on the actual devices on which it is intended to run.

Especially during earlier development phases, emulator testing is a must for efficiency reasons. Some IDEs include emulation capabilities, which are invaluable for unit testing and UI evaluation. In general, it is easier to automate test suites using emulated devices, though that is changing with the advent of cloud services aimed specifically at mobile device testing.

Cloud testing services allow both developers and testers to interact in real-time on live networks with nearly any real mobile device on the market. Tests are accomplished via online dashboards that realistically display the device UI. Tests can be developed manually and saved to a script for regression tests or to demonstrate a bug to the developers. As this technology continues to improve, the need for an organization to acquire, maintain and plug into a test harness a multitude of actual devices diminishes.

4 – Automation

Whether your teams use real or emulated devices, the need for test automation should be paramount. It is not possible to automate everything, of course; most automated tests begin life as manual tests, either because they originated from a defect found by a user or because a new test suite is being developed. Still, to whatever extent possible, broad and deep automation is essential for an agile, continuous delivery organization that wants to keep up with the mobile app market.

There are increasingly sophisticated tools for test automation available. The best support end-to-end testing from requirements to test cases to regression testing and defect testing. These can streamline the entire testing and support process. However, simply finding the right framework is only half the challenge.

Automation requires increased cultural awareness of its benefits and a willingness to seek out and implement automation wherever it is practically possible. An organization supports this mentality via team incentives and by designating an Automation Engineer. The occupant of that role drives automation strategy, ensures automation tools and initiatives are used correctly and measures the effectiveness of automated processes toward producing frequent, timely, high-quality app releases.

5 – User Experience

The ultimate testing challenge may be testing User Experience, or UX. While the functionality of most mobile application UIs can be tested thoroughly by the best automation frameworks, even those fall short in the face of sophisticated interactions such as gestures, hand waves and eye movements. An even greater challenge is automated testing of interactions via intelligent agents such as Siri or Google Now. Throw in contextual information such as tilt, acceleration, geolocation and ambient sound, and even the best test automation efforts will struggle.

Plan for UX testing with real users under lab conditions or by field beta testers. In addition to testing for smooth operation, the app's workflow, responsiveness and aesthetics should be reviewed. Application instrumentation that tracks how users interact with the app often reveals surprising insights for the designers and developers.
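
Such instrumentation can start as simply as timing and counting interactions. A minimal Python sketch using a logging decorator; the event name and the checkout function are hypothetical:

    import functools
    import logging
    import time

    log = logging.getLogger("ux")

    def track(event_name: str):
        """Log how often, and how long, users spend in an interaction."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    log.info("%s took %.0f ms", event_name,
                             (time.perf_counter() - start) * 1000)
            return wrapper
        return decorator

    @track("checkout_flow")
    def checkout():
        ...  # the app workflow under observation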

Keep in mind overall that the test with the most value is to ensure that the user receives intended and valuable results from the application.

5 Common API Testing Mistakes

APIs may not appear as end products to clients, but they play a vital role internally in an enterprise's products and workforce applications. They often represent valuable assets around which several top-level products are built. As such, they deserve respect, and the best way to show that respect is to test them thoroughly. Unfortunately, through either neglect or inexperience, organizations often make several common mistakes in this regard.

Lack of Developer Responsibility for Tested Code

Many software organizations realize that testing starts even before the first line of code is written. The wise ones include thorough, early testing in the scope of developers' responsibilities. Otherwise, sparse, happy-path unit tests are produced that leave testers to fend for themselves, which leads to longer defect report-repair cycles.

The most enlightened companies use agile methodologies that include a leftward shift of testing that virtually joins developers and testers at the hip. This move ensures many bugs are caught when their rectification costs less. Simply put, when developers take on more testing responsibility, API quality rises.

Insufficient Code Instrumentation

There are testing tools for instrumenting API code, but often it is more cost-effective for developers to include basic instrumentation themselves. This practice significantly reduces the defect location effort during testing. It also contributes to measuring API coverage and performance bottlenecks.

Such instrumentation may simply be composed of inline assertions, progress indicators, API call echoes or offline logs. To reduce runtime overhead and possible masking of bugs or performance issues, build-time flags can selectively strip the instrumentation from production builds.
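
Python offers a close analogue of such flags: assert statements and if __debug__: blocks are stripped entirely when the interpreter runs with -O. A minimal sketch, with a hypothetical withdraw function:

    import logging

    log = logging.getLogger("api")

    def withdraw(amount: float, balance: float) -> float:
        # Both the assertion and the if-block below are removed by the
        # compiler when Python runs with -O: zero overhead in production.
        assert amount > 0, "amount must be positive"   # inline assertion
        if __debug__:
            log.debug("withdraw(amount=%s, balance=%s)", amount, balance)  # call echo
        return balance - amount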

Not Testing API Coverage and Usage

The collection of endpoints within an API often encompasses a complex hierarchy of relationships that is difficult to get right. There may be duplication of functionality, awkward, error-prone sequences of related calls, overly fine call granularity or too many paths to accomplishing a single high-level task.

Passing an ill-designed API all the way through to production not only ignores the cognitive load that testers must deal with, but risks creating an API whose frequent design changes create headaches for its consumers. Such pitfalls are avoided by tracking and analyzing API call patterns, which may reveal the contortions apps must perform to utilize the interface.
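
Call-pattern tracking can begin as simply as tallying endpoints from an access log. A minimal Python sketch, assuming log lines of the form "METHOD /path ...":

    from collections import Counter

    def call_pattern(log_lines):
        """Tally endpoint hits to spot duplication and awkward sequences."""
        return Counter(line.split()[1] for line in log_lines if line).most_common()

    sample = ["GET /users/1/orders", "GET /users/1", "GET /users/1/orders"]
    print(call_pattern(sample))  # [('/users/1/orders', 2), ('/users/1', 1)]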

Using the GUI to Test the API

Testing APIs via an application GUI presents many problems:

  • First of all, this approach is the least efficient way to test APIs. Because no GUI test automation tool can completely overcome the problem of consistently locating GUI artifacts in test scripts, script maintenance costs run high.
  • GUI testing also adds a layer of test indirection, which masks API errors or creates false defect leads suggesting the error is in the API when it is actually in the GUI.
  • Additionally, the range of data inputs and outputs possible when using the API directly is typically severely restricted when employing a GUI as the API test framework.

For all these reasons, it is more effective to test the API directly with programmatic scripts, which can be written in a language different from the API itself, one that matches the skills of the testers.
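
For instance, a direct API test in Python with requests and pytest needs no GUI locators at all; the base URL and payload here are hypothetical:

    import requests

    BASE = "https://api.example.com"  # hypothetical endpoint

    def test_create_order_directly():
        """No GUI locators, no indirection, full control over the payload."""
        resp = requests.post(f"{BASE}/orders",
                             json={"sku": "A-100", "qty": 3}, timeout=10)
        assert resp.status_code == 201
        body = resp.json()
        assert body["qty"] == 3 and "id" in body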

Overlooking the Effects of Environmental Failures

Since APIs are increasingly important layers affecting an application’s response and reliability, it is vital that performance, load and stress testing be part of their test plan. Such testing should be done directly whenever possible. However, even with increased testing rigour, many organizations overlook testing APIs in the context of fundamental environmental failures within the infrastructure upon which the API operates.

Even relatively trivial applications have dependencies on platform capabilities, networking resources, back-end databases and so on. If the API is never tested in situations where these components are inadequate, slow or missing, then ways to compensate for such failures will never be implemented. Unfortunately, API developers cannot count on end users drawing the correct conclusion about where the blame lies for such an API or application failure.
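
Such failures are easy to simulate in tests. The following Python sketch, built around a hypothetical get_profile function, patches the network layer to fail and asserts that the code compensates gracefully:

    from unittest import mock
    import requests

    def get_profile(user_id: int) -> dict:
        try:
            resp = requests.get(f"https://api.example.com/users/{user_id}",
                                timeout=2)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            return {"id": user_id, "name": "unknown"}  # compensating default

    def test_profile_survives_backend_outage():
        """The back end is unreachable; the API must degrade, not crash."""
        with mock.patch("requests.get", side_effect=requests.ConnectionError):
            assert get_profile(7)["name"] == "unknown"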

APIs are at the core of most software applications and services. In fact, one API is often used in multiple products. Getting their functionality, performance and maintenance requirements right is a significant challenge, but such efforts are vital to the creation of well-performing and responsive apps and services.

Due to their importance but indirect nature, it is easy for software developers, testers and operations staff to diminish the importance of thorough, direct API testing. That leads to mistakes of omission that cost companies dearly in development, testing and maintenance, or even in lost customers. Avoiding even the most common pitfalls in API development is therefore crucial.

Using Testing to Improve API Performance

For application programmers, APIs are the user-interfaces upon which they build their own APIs, services and applications. They have been in use almost since software programming began. Until recently, their main acceptance criteria were their ease of use and functionality. However, in today’s world, they are on the critical path for determining application end-user performance, usually measured by how promptly they respond to user actions and requests.

Thus, API testing must encompass more than correct functioning of the API in terms of inputs and outputs or whether it fails gracefully in the face of errors. It hardly matters to end-users if the functionality is right or wrong if they cannot access it in a reasonable amount of time. API performance under load, therefore, must be measured and fine-tuned to remove processing bottlenecks.

Ensure Functional Stability First

API functionality must be verified before meaningful performance testing can proceed. This includes evaluating the API and its documentation to ensure that API calls and their descriptions line up and are self-explanatory. Start with happy-path test scripts that take the documentation literally. These tests might be enhanced versions of developer unit tests.

Next, stress the functionality by exploring border conditions, passing random or missing parameter values, large amounts of data, non-ASCII character sets and so on. Working with code that has compile-time instrumentation built-in greatly reduces the amount of time spent tracking down bugs that appear as a result of these tests. Be sure to log all inputs and outputs during test runs also.
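
pytest's parametrize decorator makes this kind of boundary exploration cheap to express. A minimal sketch against a hypothetical endpoint; note the missing, oversized and non-ASCII payloads:

    import pytest
    import requests

    @pytest.mark.parametrize("payload", [
        {},                              # missing parameters
        {"name": ""},                    # empty value
        {"name": "x" * 100_000},         # oversized data
        {"name": "名前\u202etest"},       # non-ASCII and control characters
    ])
    def test_create_user_rejects_bad_input(payload):
        resp = requests.post("https://api.example.com/users",
                             json=payload, timeout=10)
        assert resp.status_code == 400   # fail loudly and cleanly, never 500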

Performance Testing Preparation

Once functionality is stable, and before designing performance, load or stress tests, be sure you know what you are testing for. Examine the software requirements to determine important real-life performance metrics and the expectations placed on the API. Metrics to look for include request throughput, peak throughput, the distribution of throughput per API endpoint and the maximum number of concurrent users supported.

With these data in hand, start general load tests that progressively increase demands on the API. These may shake out bugs in the API as well as the test environment. They will provide early baselines of the maximum load the API can serve without breaking.
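
A graded load test can be as simple as stepping up concurrency and recording achieved throughput. A rough Python sketch using a thread pool against a hypothetical endpoint; the dedicated load tools covered below do this far more precisely:

    from concurrent.futures import ThreadPoolExecutor
    import time
    import requests

    def load_step(url: str, workers: int, reqs_each: int = 20) -> float:
        """Return achieved requests/second at one concurrency level."""
        def worker():
            for _ in range(reqs_each):
                requests.get(url, timeout=10)
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(worker) for _ in range(workers)]
            for f in futures:
                f.result()  # surface any request failures
        return workers * reqs_each / (time.perf_counter() - start)

    for level in (1, 2, 4, 8, 16):  # progressively increase demand
        print(level, load_step("https://api.example.com/health", level))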

At this point, apply tests that more accurately reflect the expected real-world usage of the API. If an existing version of the API is already in use, API call frequency distributions are invaluable in determining the most effective use of your testing resources. Alternatively, utilize previous production logs for the API and feed these to automated testing tools to recreate realistic scenarios.

Popular API Testing Tools

It is often useful to take advantage of cloud services, such as AWS, to provide a testing infrastructure that can expand and shrink according to the demands of your performance tests. Alongside that, there are many testing tools available for quickly creating an effective testing environment, of which we cover a few below.

Vegeta

This is an open-source command-line tool whose main focus is producing a steady requests-per-second rate specified by the tester. It is simpler to use than many of the other testing tools mentioned here and is ideal for establishing performance baselines.

Loader.io

The free version of this cloud testing service from SendGrid permits up to 10K requests per second, which is a meaningful load for most APIs. It is simple to set up and produces informative reports.

Wrk

This tool configures all tests via a command-line interface. It is relatively easy to use for generating any target rate of HTTP requests you need. It is multi-threaded, so it achieves higher throughput than single-threaded tools, but its reporting is non-graphical.

JMeter

JMeter is probably the best-known open-source performance testing tool. It uses a full-featured GUI for creating detailed test plans. The number of execution threads, parameters for HTTP requests and listeners used to display results are some of the parameters that can be specified. Its downside is its complexity and steep learning curve.

API performance as an acceptance criterion has become equal in importance to API functionality and usability. This is due to the increased expectations of end users, who are intolerant of delays and have many similar applications to choose from in a growing market.

Thus, it is critical that testing gives API performance its due as part of the overall test plan. Fortunately, there are clear best practices and many tools to assist in performance evaluation and to help developers and testers balance both performance and functionality of the APIs being produced.

Test-Driven Development (TDD) vs Behaviour-Driven Development (BDD)

What is TDD?

As its name implies, Test-Driven Development puts the development of test scripts ahead of the actual code that will be tested. Using TDD, developers must ponder the end goals of their code first in a way that is more tangible than simply reading a specification. They are actually implementing the end behaviours in the form of tests, which is a process requiring much stronger cognitive involvement.

A developer starts with TDD by writing test scripts that cover the functional happy path in the code to be written. That is, the first pass of these scripts exercises the core functionality of the code not yet written without attempting to explore border conditions, performance, security issues and the like. Those tests might be included in the test scripts as the code develops, but they are more likely to be handled in later test phases.

The next step in TDD is executing the test scripts, which obviously fail, to ensure they fail gracefully and to verify that each test step is executed. At that point, code is written to fulfill the requirements of the test, which was itself written to fulfill the software specification. Once the code and its test run successfully, the developer may refactor the code and verify it again with the test script until he or she is satisfied with the results.
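
A minimal red-green example in Python: the test for a hypothetical slugify function is written first and initially fails, then just enough code is written to make it pass:

    # Step 1 (red): the test exists before the code and initially fails.
    def test_slugify_happy_path():
        assert slugify("Hello, World!") == "hello-world"

    # Step 2 (green): write just enough code to pass, then refactor.
    import re

    def slugify(text: str) -> str:
        return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")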

As code development progresses, developers will be tempted to get ahead of the test script by adding more code before expanding the test script or creating additional scripts. This behaviour is the natural proclivity of developers who want to get on with the project, but it breaks the virtuous cycle of TDD and its cognitive benefit in driving development of the correct code.

What is BDD?

Behaviour-Driven Development can be thought of as an enhancement of TDD in that it, too, puts the implementation of the goals of the software, as represented by test scripts or scenarios, ahead of the actual code implementation. The overall purpose is much the same as TDD, which is to provide a powerful shift in point of view. This often results in producing the right code the first time or revealing defects in the requirements even before the first line of code is created.

The difference between TDD and BDD is subtle but significant. The main distinction of BDD is that the behavioral tests are written through collaboration by as many of the stakeholders in the software development process as possible including business, planning, marketing, sales, Ops and QA.

Everyone involved should be able to understand the test code/scenario as well as contribute to it. Thus, BDD tests express their actions in a natural-language style, similar to what a tester might put in the comments of a traditional test script written in a programming language. Tools exist to assist this process, of which Cucumber is probably the best known; it also automates the process of scenario creation and testing.
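
For teams outside the Cucumber/Ruby world, behave is a common Python counterpart. A minimal sketch of step definitions; the plain-language scenario (shown here as a comment, normally kept in a .feature file) is the shared artifact that all stakeholders read and edit:

    # Scenario: Returning customer sees saved cart
    #   Given a customer with one saved item
    #   When she signs in
    #   Then her cart shows 1 item
    from behave import given, when, then

    @given("a customer with one saved item")
    def step_saved_item(context):
        context.cart = ["coffee mug"]

    @when("she signs in")
    def step_sign_in(context):
        context.signed_in = True

    @then("her cart shows 1 item")
    def step_cart_count(context):
        assert context.signed_in and len(context.cart) == 1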

When Each Approach Is Most Useful

When deciding to use either TDD or BDD, consider the makeup of the team charged with turning the software requirements into a coding specification. If your team is composed solely of programmers, then using TDD makes good sense. However, if your project requires communication across organizational units, then BDD supports that higher level of collaboration better.

In fact, BDD almost demands that everyone from business analysts to production teams become involved in the earliest phases of software development. Bear in mind that starting with BDD may not be an easy process, since participants will initially speak in different terms and express concerns of varying relevance to one another. The result, however, is the removal of ambiguities and the creation of common understandings at the earliest stages of development, which is invaluable to ongoing project stability.

In short, both Test-Driven Development and Behaviour-Driven Development are “outside-in” processes utilized to reveal and test assumptions about the functional and behavioural aspects of software even before the first lines of code are typed out. The main distinction in their use is the level of collaboration present or desired in the organization utilizing them.

Both approaches provide opportunities for early defect detection in software design and implementation when the removal of such defects is typically less costly. Both result in creating requirements and code that are more testable with more stable interfaces. They also enhance the original software specification by documenting implicit assumptions made by various stakeholders including the end users.

The Tester’s Role in Continuous Integration

At first glance, it may appear that a Continuous Integration model of software development leaves testers out in the cold. One reason is the increased responsibility for defect detection falling into the laps of developers. This so-called "left shift" of testing is intended to find bugs sooner and faster and to facilitate their repair.

CI also significantly reduces the time between code being written and that code showing up in production systems. Ideally, a well-implemented CI pipeline compresses this cycle from months down to weeks or even days. The release frequency that CI demands would be all but impossible to maintain with conventional, waterfall-style testing roles.

How QA Maintains the Pace

Even given that more left-end testing is handled by developers, at the end of the day they are still developers and not testers. Furthermore, that testing is mainly going to be of a functional flavor, with perhaps some regression, performance, exploratory and UI testing thrown in. Loads of other testing, such as stress, compatibility, UX, BVT, API, security and scalability, is unlikely to be taken on by developers lest they stop writing code altogether.

On the other hand, if any of those testing domains were to remain as sequential or manual processes, they risk the persistence of bottlenecks that undermine the fundamental benefits of Continuous Integration.

What changes for QA in the context of CI is not so much testing domains, but an increase in responsibility for creating and managing a more effective testing pipeline via various means:

  • Automation – Testers must become experts in automation from every angle. They do this under the direction of one or more automation engineers/architects. Opportunities for automation must be sought out aggressively.
  • Concurrency – This refers not only to running test suites in parallel, but also to aligning testing processes with the timelines development uses: test scripts or scenarios are developed from requirements before code is written; risk planning takes place as early as possible; testers participate in scrums, planning and business meetings; all test code is maintained in the same revision control system as the code; and testers are paired with developers as code is written.
  • Virtualization – In order to get the biggest bang for the buck via automation and concurrency, QA needs to increase their expertise in the virtualization of development, testing and production systems. They are in a perfect position to anticipate the need for development/test assets. In fact, it makes sense for QA to take on a partial Operations role in developing instant acquisition of correctly provisioned and configured systems for developers and anyone else in the CI pipeline.

Potential Pitfalls for QA in CI

CI itself, let alone QA's enhanced role within it, does not come about overnight; most organizations need several months or more to get the hang of it. Meanwhile, software production cannot idle while CI is put in place. In this sort of intense, highly focused effort, it is easy to overlook hard numbers regarding ROI. If the upfront and maintenance costs for QA to keep pace with CI are not accurately weighed against the long-term benefits, the organization risks playing a zero-sum game or worse.

Furthermore, QA must recognize that automation, concurrency and virtualization all have their limits. They are not appropriate solutions for every testing problem that arises. There will always be a need for manual testing such as exploratory tests, UI shakeouts or live user test sessions. Automation also produces increased overhead, such as test script maintenance or the acquisition, processing and scrubbing of live user data at the production end of the pipeline.

CI adoption has proven time and again that it can have an enormous positive impact in terms of reducing the time to market of new software features while cutting down on production overhead. To make it work right, however, the test organization must become a concurrent partner alongside development, up their use of automation and take charge of producing effective, efficient solutions to slow-downs in the CI testing pipeline wherever they occur. Having done so, QA will continue to provide a vital function in ensuring that an enterprise’s software is delivered fast and to the highest quality standards.

QA’s Strategic Role in DevOps

Fundamentally, DevOps adoption is about streamlining an organization’s conventional Software Development Life Cycle to achieve faster time-to-market for company products. Other factors held equal, faster TTM imparts increased competitive value due to the ability to meet customer requirements more accurately and more frequently.

In the quest for speed, however, many organizations transforming to a DevOps delivery model are inadvertently squeezing out essential QA functions. A narrow focus on improving SDLC speed causes them to miss the point that DevOps is about improving software quality too. In a perfect DevOps world, everyone is responsible for QA, but not everyone has the tools, knowledge and expertise to do QA right.

QA’s Central Role in DevOps

Those companies who have successfully deployed DevOps realize it actually should be called Dev-QA-Ops, since quality processes must play a central role in attaining the ultimate goals of cost reduction, more frequent releases and improved customer satisfaction.

They discovered that a conventionally centralized, time-discrete QA function needed to convert to an environment where quality processes permeate the entire SDLC from architecture to production. To accomplish that, their QA moved from direct testing responsibility to providing oversight, consultation and education leftward toward developers and rightward toward Ops.

Raising QA’s Value Proposition

DevOps shifts some early defect discovery to developers, which allows QA more time to evaluate new testing tools and frameworks and to strive for complete end-to-end automation. Furthermore, QA professionals become the logical agents for assessing the organization's continuous development/integration/delivery pipeline for obstacles to producing quality code in a timely manner. Additionally, they act on development's behalf to test and qualify the infrastructure acquired from operations.

In other words, QA’s value in a DevOps house reaches beyond direct testing of software under development toward responsibility for ensuring utmost quality in the processes surrounding that development. This transition is not without difficulties, but it is necessary in order to make DevOps a meaningful success.

Steps to Achieving Dev-QA-Ops

  • Everyone must make the mental adjustment that QA is a strategic partner. They must discard the viewpoint of QA as a separate, transactional center and replace it with the perception of QA as a distributed facilitator and equal partner.
  • QA professionals must be added to planning and technical teams to provide insights and guidance regarding how early decisions affect the quality of development and the final product in the eyes of the customer. They also identify areas where automation is most effective and can quantify time and resource costs of automation implementation and support.
  • QA should be present at business planning meetings, where their expertise in evaluating how product decisions affect testing costs, both in development and production, can be put to use. Attendance at these meetings also helps QA calibrate their own expectations against business requirements.
  • QA must be proactive about educating developers and others about smart testing methods that are most effective and efficient at catching defects early such as TDD and BDD.
  • QA members must take personal responsibility to educate themselves as necessary to deploy, support and use the best automation tools available. Furthermore, they must improve their communication skills to educate others about how best to utilize test automation and instill quality principles in their work.

QA as a Center of Excellence

QA should strive to be a center of excellence for testing and quality principles from which everyone in the organization can benefit. They become evangelists and advocates for quality in all aspects of the enterprise mission. The enterprise will come to realize that QA's expertise is applicable to processes beyond software development as well. Throughout this transition, it is essential for QA to act proactively to maintain relevance to the business.

It is all too easy for organizations transitioning to DevOps to diminish the role of QA in the new world order. It may appear that QA can be “naturally” distributed between Dev and Ops functions and that QA personnel can be reduced.

The reality is that without strong QA expertise, DevOps is an incomplete solution that fails to achieve increased release velocity without costly defects at the wrong end of the process. Software organizations that leverage their present QA expertise into a higher-level, more broadly defined role also raise their business’ ability to deliver the best software possible that satisfactorily meets customer requirements.

Ultimately, the QA organization is responsible for incorporating key DevOps principles of shared responsibility and continuous software improvement, while raising the level of their present expertise. In doing so, they will secure their place in any DevOps transition.

6 Test Automation Trends for 2016

Although the number of test organizations utilizing test automation has increased only incrementally over the last year, the demands for automation overall are rising significantly. As a result of new technologies, new approaches to software development, the increasing sophistication of cloud testing services and an enormous increase in the complexity of deployment environments, those using automation today are looking to it more than ever to deliver productivity and quality.

1. Mobile Testing Remains a Top Priority

Gartner predicts close to 300 billion mobile app downloads next year. Thus, testing gravity continues to center on mobile app testing for both consumer and business applications. Every testing area from functional to performance to compatibility and usability is going to demand increased automation to deal with this deluge. The skills and innovation of test teams must also increase as mobile apps intersect with other technologies and new ways to make software.

2. New Framework Capabilities Supporting Automation

Supporting the onward rise of mobile app testing, the industry will see increased use of both paid and open-source testing frameworks, such as Selenium and Appium, plus specialized tools in the quest to lower costs while increasing efficiency and quality. Throughout 2016, many tools and frameworks will increase their capability to cover more of the software development process from inception to production.
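
As a flavour of what such frameworks look like in practice, here is a minimal Selenium sketch in Python (assuming a locally installed Chrome driver and a hypothetical login page); Appium exposes a near-identical WebDriver API for native mobile apps:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.NAME, "username").send_keys("tester")
        driver.find_element(By.NAME, "password").send_keys("secret")
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        assert "dashboard" in driver.current_url
    finally:
        driver.quit()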

3. Increased Reliance on Cloud Testing

Hand-in-hand with the increased demand for mobile app testing, more QA departments will further their use of cloud-based, automated testing services. These offer test organizations relief from upfront capital costs and ongoing maintenance and decommissioning outlays.

Other features, such as on-demand scalability and complete cross-platform test suites that can be accessed by geographically dispersed development and test teams, are now necessities. Furthermore, as mundane tasks, such as configuration and provisioning, shift to cloud services, test teams will enjoy the opportunity to work on higher-value tasks.

4. Testing’s Left-Shift Continues

Businesses in 2016 are continuing to increase the frequency of product releases. This situation has brought about an increased appreciation of early defect discovery using such techniques as TDD and BDD. It also leads to closer collaboration between developers and testers in an overall leftward shift in testing scope.

This provides testers opportunities to deliver more value to new projects, but it challenges them to automate early development tests, which, up to now, have been perceived as incompatible with the requirements for efficient automation in other development phases.

5. New Software Architectures Using Micro-Services

Software design is trending toward micro-architectures that divide functionality into smaller components. These message-based “micro-services” are developed, tested and delivered as part of larger applications or systems without disturbing enclosing or associated apps or services.

Micro-services increase a business’ ability to deliver products to market in even shorter cycles and accurately meet rapidly changing customer requirements. Testing organizations benefit from a smaller testing scope and reduced inter-dependencies. Automated API testing will become a primary technique to ensure inter-service compatibility even before code completion.

6. Complexity in Deployment Scenarios

With the rapid rise of smartphones, the increasing popularity of wearable devices and the growth of IoT, cross-platform testing is challenging enough. Add to that the need to test apps in relation to their Social, Mobile, Analytics and Cloud (SMAC) environment.

SMAC testing is adding yet another dimension to mobile app testing and its automation. This new reality is already claiming additional resources and raising the ante on automation tools to deal with it effectively. SMAC, especially, calls for innovations in how to acquire and process sensitive user data that can help drive automated testing strategies.

The world of testing automation is facing an array of new opportunities in 2016 arising from fresh approaches to harnessing additional efficiency and agility from software development and deployment processes, especially in the mobile app domain. One effect of this has been testing’s left-shift to the earliest stages of development and the consequent difficulty in applying automation at this point.

Existing automation tools, frameworks and cloud-based testing services are also faced with novel challenges in how they will support an increasingly complex world of mobile/IoT interaction, micro-service/API testing and effective SMAC testing.

Taken together, these converging trends are spawning new ideas and approaches to exactly how, what, when and where software is tested let alone how such testing is to be automated in ways that increase efficiency and product quality.

Clearly, 2016 and beyond promise to provide a vibrant, vigorous and exciting atmosphere for the world of testing and test automation in particular.

The Impact of DevOps on Testing

A fundamental principle of DevOps is close collaboration between all the stakeholders in an end-to-end software production process, from requirements to development to deployment and delivery.

However, because portrayals of DevOps often exclusively emphasize melding development and IT Operations, other stakeholders may seem peripheral to the process. In particular, many outside observers assume that the continuous development, continuous integration and continuous delivery practices intrinsic to DevOps somehow reduce the need for testers.

In fact, DevOps testing is a primary contributor to assuring that the software delivered is of high quality and meets customers’ actual needs. It is also a fact that testing teams need to increase their contributions in the world of DevOps as well.

The Effect on DevOps without Integrated Testing

If an organization fails to recognize the crucial role QA plays within a successful DevOps transformation, it will be unable to achieve smooth operation of a continuous delivery model. Developers will continue to unit test with functional-only tunnel vision and hand over the remainder of testing to QA at their own pace. This situation creates an almost invisible disconnect between development and operations that creates undue friction toward reaching continuous delivery goals.

DevOps Testing Shifts Leftward …

A DevOps shop that meets its goals involves testers at all phases of the development process. The first step must be removing legacy boundaries between QA and development organizations so that their processes merge and operate in parallel. Testers must integrate themselves tightly within development teams by working hand-in-hand as code is produced instead of waiting at the other end of an artificial pipeline. Embedding dedicated test resources in scrum meetings is a good, though insufficient, starting point.

A leftward shift in testing supports the DevOps philosophy of fail early, fail fast and fail often with respect to functional defects. It also enables verification of customer requirements at the head of the development cycle by the test team, in collaboration with the production end of the CI/CD cycle.
To be completely effective, however, early testing responsibility must be shared by both testers and developers. QA's role in this is to research and acquire the latest, most effective techniques, tools and technical expertise to make effective early testing happen with minimal disruption.

… and Rightward

Test teams in DevOps have a clear charter to increasingly use automation to streamline their own testing efforts. Additionally, test teams can play a logical role of integration between development and IT Operations to proactively increase system stability in the later phases of continuous delivery.

Beyond responsibility for automating builds and integrations, however, they should extend their automation skillsets to include automated tools for development infrastructure acquisition, configuration and provisioning. They are in the best position to evaluate time and resource requirements from both Dev and Ops, and how to serve these needs in the most efficient manner.

Furthermore, since their new role in DevOps extends beyond defect discovery into evaluating deployment and customer requirements, they can facilitate a leftward shift of Ops by taking on other processes as well, such as requirements management and production monitoring.

Continuous Testing Is Paramount

Regardless of how far left or right testing teams expand their responsibilities, their overarching obligation is creating a continuous testing approach that spans the entire CI/CD cycle. This means testers must be cognizant of early design and requirements decisions and provide feedback based on their expertise and experience, plus respond to later-phase testing and production requirements.

Early involvement enables testers to prepare their own requirements and implement test scripts even as the first lines of code are written. As development proceeds, involved, knowledgeable testers are key to ensuring that developers receive immediate feedback on defects and understand the impacts of their implementation decisions on later stages of a CI/CD pipeline, such as performance testing, infrastructure scaling and disaster recovery.

Clearly, to effectively complete a realistic, productive DevOps transition, testing plays an enormous role. In fact, its role expands and tester technology expertise, especially with respect to automation, significantly increases. Additionally, testing is uniquely positioned to further increase its value by stretching itself into areas traditionally allocated to Ops in a general leftward shift of shared responsibilities.

Far from being diminished, in a DevOps context testing becomes a ubiquitous function, tightly integrated into all the myriad "continuous" processes central to a successful DevOps conversion.

Some may argue that extending the testing organization’s reach both leftward into Development and rightward into Ops blurs the distinction of testing objectives and may lead to a dilution of its core value.

To view it as “blurring,” however, is a glass half-empty stance that overlooks key tenets of DevOps, which are reducing specialization, increasing collaboration and broader sharing of responsibility across artificial process boundaries. Without those principles in place and extended to test teams, the real benefits of DevOps are impossible to achieve.

5 Software Testing Trends for 2016

As long-term transitions continue toward the market dominance of mobile and cloud apps and the adoption of faster software development life cycles, testers in 2016 are taking on an increasingly critical role in software organizations. Several trends this year are building on the contributions of testing to enhance both quality and efficiency.

Test Automation Continues to Increase

Test automation has been on the rise for years as enterprises realized the benefits of finding more defects earlier and faster. Although over half of testing is still accomplished manually, that fraction is dropping as companies expand their test teams into both development and operations in the quest for a more efficient, continuous delivery model of software production.

The most mature testing environments achieve up to 80 percent automation, but many organizations still struggle to reach that lofty level even as automation frameworks and practices continue to advance the state-of-the-art:

  • Increasing use of automation even when code is in a high state of flux
  • Automation tools that support full end-to-end testing
  • Increased penetration of automation into both development and operations
  • A rise in the technical skill levels of testing organizations in support of automation engineering

Increasing Use of Mobile Device Test Farms

App stores are setting higher acceptance thresholds, which places greater demand on companies to increase the breadth and depth of their mobile app testing, especially with regard to device and platform compatibility. “Winging it” by passing on post-production testing to clients and end-users is no longer tolerable.

Therefore, a key trend in mobile app compatibility testing is the rise of mobile device farms in-house or in the cloud. The latter is increasingly popular as the matrix of models, OS versions, carriers and platform configurations continues to expand exponentially. Maintaining a highly dynamic infrastructure in-house is simply untenable for all but the largest enterprises.

Furthermore, cloud-based mobile device test farms typically have supporting hardware and software for instant configuration, provisioning and collaboration between testers and development, which makes their use far more efficient than in-house setups. This trend is confirmed by the recent arrival of Google’s Cloud Test Lab and Amazon’s AWS Device Farm.

Proactive Software Testing

Test organizations are undergoing a left-shift of testing activities into early development as well as a right-shift into the operations side to support stability at the production end of continuous delivery models. As such, in 2016 watch for major moves by test departments away from reactive testing toward proactive test practices in the pursuit of a fail-fast/fail-often philosophy.

In 2016, testing increasingly adds value to business requirements, design and the earliest development phases by offering expertise that reduces the jolts to production cycles that occur when software is created and evaluated by distinct groups in distinct time phases. As a result, user experience, scalability and performance issues will surface sooner and result in the delivery of software more closely reflecting customer requirements.

Increasing Focus on Security Testing

In 2016, surveys reveal that security testing is taking top priority among software organizations even over functional, compatibility and performance testing as the cost of security vulnerabilities continues to rise. These tests are critical to companies seeking to avoid the risks of breaches or service interruptions.

The approaches to security testing in 2016 will focus on these areas in order of decreasing priority:

  • Dynamic, runtime test cases to expose vulnerabilities
  • Static tests using automated code analysis tools during development
  • Manual code scrubs to expose security issues that automated tools may miss
  • Penetration testing, which uses a combination of manual and automated techniques to compromise apps and hardware and then further expose inner points of vulnerability

Increasing Need for Test Data Management

Before DevOps, when testing was its own silo, little thought was given to synchronizing code, test scripts and test data. In a continuous development/integration/testing/delivery methodology, that omission spells time-consuming snipe hunts for hard-to-replicate defects on systems that inaccurately reflect the platform on which the defect was discovered.

Thus, as 2016 progresses, look for the rise of the Test Data Manager role in leading software organizations to enable data-driven testing, manage data set/configuration/code revision control, manage performance test setup and provide production-level service virtualization. The rise in use of test data management best practices is already reversing a multi-year rise in test data challenges.

Testing’s value to any enterprise serious about achieving high quality and efficiency in their software production continues to rise.

This year will see further significant improvements in the application of security testing, broader use of test automation, cloud-based testing services and an elevation of the role that test data management plays. As such, testing’s presence will span the entire SDLC using new approaches, tools and technology that will make the role of tester an exciting place to occupy.