
5 Ways to Improve Mobile App Testing Quality and Efficiency

Mobile application development faces significant challenges, and those challenges carry straight over into testing. Though the obstacles appear daunting, there are ways to mitigate them, improve your team’s testing effectiveness and raise app quality.

Mobile App Challenges

  • Mobile device fragmentation is rising. There are a myriad of hardware platforms, OS versions and network connection options across devices. Trying to maximize market coverage for a single app requires spanning this ever-increasing matrix of combinations with a concomitant increase in testing complexity.
  • Testing budgets are not expanding, which means doing more with less, which in turn leads to deciding between in-house or outsourced testing. Outsourcing infrastructure to the cloud often reduces capital and maintenance expenditures, but outsourcing test personnel and processes is a riskier proposition that is usually hard to unwind later. Outsourcing pressure is also increasing because leading-edge test frameworks and tools demand new expertise and sophistication.
  • Adding insult to injury, mobile app development requires ever faster development/release cycles, driven by heightened competition and end users’ progressively shorter attention spans. Users increasingly expect defect fixes and improvements to be available in near-real time, which means testing has to constantly play catch-up so as not to become the bottleneck in the next upgrade.

Dealing Effectively with Modern Mobile App Testing

Before deciding you might be in the wrong line of work in the face of these looming challenges, let us consider five approaches that will improve your organization’s ability to operate more efficiently while improving the quality of released apps.

1. Embrace Agile Development

This step is global to the organization. Fundamentally, it means involving all the stakeholders in the project, from business managers to architects to production personnel. The delta between what you do today and how far you want to take agile development determines the size of the steps you take, but every step gets you closer to a more successful testing organization. Agile methodologies are proven to improve the ability to adapt to change, increase customer and stakeholder engagement, and produce more useful documentation and higher-quality deliverables.

2. Value Your Automation Properly

Are you accurately measuring the value you derive from test automation? Without ongoing evaluation of the tests contained within a framework, more automation often simply bloats maintenance tasks and increases test run times. For instance, effort is commonly wasted automating tests that are better done manually, such as exploratory testing. The correct approach, customizable to your particular environment, is to measure your tests along a value spectrum, as sketched below. Tests such as build smoke tests are relatively easy to automate and provide high value. On the other end of the spectrum might be compatibility tests, which are necessary but probably should be lower priority.
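As a rough illustration of the value-spectrum idea, the sketch below (in Java, with entirely hypothetical test names, value scores and cost weights) ranks candidate tests by estimated value per unit of automation and maintenance cost; build smoke tests float to the top while exploratory work falls to the bottom.

```java
import java.util.*;

// A minimal sketch (not from the article) of ranking candidate tests along a
// value spectrum: estimated value divided by estimated automation and
// maintenance cost. All names and weights are hypothetical.
public class TestValueRanker {

    record Candidate(String name, double value, double cost) {
        double score() { return value / cost; }
    }

    public static void main(String[] args) {
        List<Candidate> candidates = List.of(
            new Candidate("build smoke test",     9.0, 2.0),   // easy to automate, high value
            new Candidate("login regression",     8.0, 3.0),
            new Candidate("device compatibility", 5.0, 8.0),   // necessary, but lower priority
            new Candidate("exploratory UI tour",  7.0, 20.0)   // better left manual
        );

        candidates.stream()
                  .sorted(Comparator.comparingDouble(Candidate::score).reversed())
                  .forEach(c -> System.out.printf("%-22s score=%.2f%n", c.name(), c.score()));
    }
}
```

However you weight the numbers, the point is to make the prioritization explicit rather than automating whatever happens to be easiest.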

3. Virtualize Your Testing Resources

Virtualized services and virtualized platforms are both relatively inexpensive in terms of infrastructure, setup and ongoing cost. They can be used on both the client and server sides of mobile applications. You will use less costly hardware and gain the ability to scale almost linearly, which is especially valuable for performance, load and stress testing.

4. Improve Your Ability to Test on Both Emulated and Real Devices

Depending on the complexity of the matrix combining devices, platform capabilities, OS versions and network connectivity options, testing a multi-platform mobile app entirely on in-house real devices is probably a non-starter. On the other hand, you cannot achieve a sufficient level of confidence in quality with emulators alone, though emulators should be deployed as early as possible in the development cycle.

The increasingly obvious answer to this situation is to employ cloud testing services that provide real-time access to a wide collection of new, sometimes unreleased, mobile devices and network operators along with built-in test and collaboration frameworks.

5. Gain Deeper Insights into the End User Experience

Simply reviewing comments and ratings in app stores is not enough anymore to evaluate how apps are performing in the field. Consider compiling in one of today’s sophisticated crash/analytics SDKs, which have vanishingly small run-time overhead. These provide real-time insights into how users are interacting with your app plus detailed reports on crashes that pinpoint problems immediately, especially if used in combination with a hotfix insertion tool.
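To make the mechanism concrete, here is a minimal, vendor-neutral sketch (not any particular SDK’s API) of how such a crash hook typically works on the JVM side: a default uncaught-exception handler captures the stack trace and posts it to a collection endpoint before deferring to the previous handler. The endpoint URL is a placeholder, and real SDKs add batching, device metadata, symbolication and analytics on top of this.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Minimal sketch of how a crash-reporting hook works under the hood: install a
// default uncaught-exception handler that forwards the stack trace to a
// collection endpoint, then re-raises to the previous handler.
// The endpoint URL is hypothetical.
public final class CrashReporter {

    public static void install(String endpoint) {
        Thread.UncaughtExceptionHandler previous = Thread.getDefaultUncaughtExceptionHandler();
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
            try {
                send(endpoint, thread.getName() + ": " + stackTraceOf(throwable));
            } catch (Exception ignored) {
                // Never let the reporter itself crash the crash path.
            }
            if (previous != null) previous.uncaughtException(thread, throwable);
        });
    }

    private static String stackTraceOf(Throwable t) {
        StringBuilder sb = new StringBuilder(t.toString());
        for (StackTraceElement e : t.getStackTrace()) sb.append("\n  at ").append(e);
        return sb.toString();
    }

    private static void send(String endpoint, String body) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        conn.getResponseCode();   // force the request to be sent
    }
}
```

An app would typically call something like CrashReporter.install("https://crashes.example.com/report") once at startup and let the hook do the rest.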

Conclusion

Testing mobile apps gets harder every day, although new tools and techniques are always coming along to mitigate the difficulties. Improving development/test turnaround times, optimizing test automation, and employing test resource virtualization and cloud-based device testing services are all effective ways to maintain the pace. Above all, keep your app’s user experience squarely in your sights and use new tools to evaluate users’ behaviors and problems.

10 Considerations for Mobile App Testing (Part 2)

The following is Part 2 of our article on the top 10 Considerations for Mobile App Testing. Click here to read Part 1.

6 – Device Constraints

In general, mobile devices, whether phones, tablets or wearables, have severe constraints in terms of power, processing and storage compared to an average PC. Furthermore, their form factors, screen sizes, layouts, platform capabilities and connectivity options vary widely even within a single manufacturer’s product line.

Testing any mobile app to its fullest requires pushing the limits of how low you can go in terms of platform resources. Creating an application that can adapt to the richness or paucity of its platform is far better in terms of user acceptance than simply throwing up an error message when resources are inadequate to support nominal usage expectations.

7 – Network Constraints

Other constraints on device performance are less predictable than its physical configuration. The most valuable apps are network-aware and respond consistently regardless of whether the user is connected via 3G, 4G or WiFi even in the face of network jitter and packet loss. If it makes sense, these apps should offer offline functionality as well.

Besides testing connectivity channels, tests that evaluate app functionality and usability across different carrier networks may expose subtle differences that impact performance. Furthermore, the effects of network congestion, interruptions from text or voice messages, incoming calls and connectivity shared with other apps must be assessed.

How well a mobile application responds and preserves state across networks and under varying network constraints is a vital part of the user experience. You can be certain that users will first blame the app for any deficiencies rather than the network.

8 – Back-end Constraints

Any mobile app that has a server-side component must take latency into account when testing. A series of graded test cases can evaluate the effects of degradation in client-server communication. For hybrid and web-based apps, various browser loads should also be imposed.

If the web server or DB server is implemented by your organization, then separate instrumentation and testing on that side is a requirement. If these are not under your control, then it is still important to test for unacceptable latency and deal with it.

To calibrate how impatient your app’s users can be, take a page from Google’s experience: an added latency of half a second to one second results in a 20 percent drop in traffic. Latency can often be improved by reformatting data, reducing the amount of data sent over the connection or removing intervening agents or proxies.
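As a starting point for such graded tests, the sketch below (plain Java; the endpoint URL and thresholds are illustrative) measures a single round trip and grades it against the half-second and one-second marks mentioned above. A real test would repeat the measurement, discard warm-up calls and report percentiles rather than a single sample.

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch of a graded latency check against a backend endpoint.
// The URL and thresholds are illustrative placeholders.
public class LatencyProbe {

    static long roundTripMillis(String endpoint) throws Exception {
        long start = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setConnectTimeout(5_000);
        conn.setReadTimeout(5_000);
        conn.getResponseCode();               // complete the request/response cycle
        conn.disconnect();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        long millis = roundTripMillis("https://api.example.com/health");
        if (millis < 500)        System.out.println("OK       " + millis + " ms");
        else if (millis < 1000)  System.out.println("DEGRADED " + millis + " ms (expect traffic loss)");
        else                     System.out.println("FAIL     " + millis + " ms");
    }
}
```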

9 – Security and Privacy

Gartner projects that 75% of mobile applications available in 2015 will fail fundamental security tests. An even greater number carry privacy risks as well. Most of these issues arise from access management, cross-site scripting and data storage leakage.

Due to the importance of security and privacy to your end users, you owe it to them to incorporate security testing end-to-end in the development process rather than saving it for the final phases or, worse yet, for real users post-release.

Test cases for five common security defects must be covered (a minimal sketch of one such check follows the list):

  • User authentication and the constraints put on usernames, passwords or PINs
  • Ensuring data transfers are never done without encryption
  • Exposure of user identifying information to other apps or 3rd-party ad networks
  • Avoidance of storing unencrypted data on the client device
  • The robustness of server-side security including verifying API calls
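To give a flavor of how the second item on the list can be checked automatically, here is a minimal Java sketch; the endpoint list is hypothetical and would in practice come from the app’s configuration or from traffic captured by a proxy during a test run.

```java
import java.util.List;

// Minimal sketch for one item on the list above: verifying that every endpoint
// the app is configured to call uses an encrypted transport.
// The endpoint list is a hypothetical placeholder.
public class TransportEncryptionCheck {

    public static void main(String[] args) {
        List<String> configuredEndpoints = List.of(
            "https://api.example.com/login",
            "https://api.example.com/orders",
            "http://cdn.example.com/banner.png"   // should fail the check
        );

        boolean allEncrypted = true;
        for (String url : configuredEndpoints) {
            if (!url.startsWith("https://")) {
                System.out.println("UNENCRYPTED endpoint: " + url);
                allEncrypted = false;
            }
        }
        System.out.println(allEncrypted ? "PASS: all transfers use TLS"
                                        : "FAIL: plaintext endpoints found");
    }
}
```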

10 – Regional Differences

The impressive diversity of the mobile app marketplace along the dimensions of OS types, device capabilities and network configurations will hopefully not lead you to de-prioritize supporting the diversity of your end users. Even for enterprise workforce business apps, attention must be given to testing for regional differences in language, culture, currencies and social network preferences in order for your app to gain international appeal.

Many aspects of localization can be contracted to third parties, but time must be allocated to verify the results. If possible, have native speakers review interface labels and error messages to avoid unintentional misunderstandings. In locales where the language is written right-to-left, the effects on the UI must be evaluated as well.

Business Benefits

Taking the above considerations to heart and applying them even before the next mobile application development project begins pays immediate and long-term dividends to a business’ competitive posture:

  • With agile methodologies, designers, developers, testers and IT all improve their abilities to collaborate and produce the highest-quality products at lower cost
  • Realistically managing the testing scope within the ever-increasing complexity of the mobile device ecosystem is an essential organizational skill for focused effort and profitability
  • A greater emphasis on automation lowers costs and increases the business’ responsiveness to customers and changing market conditions
  • Carefully evaluating all aspects of mobile app testing including functional, performance, compatibility, network, UI/UX and security tests improves the organization’s decision-making dexterity while reducing risk
  • Thorough and efficient testing processes free up resources that can be used for increased product innovation
  • Ensuring end user satisfaction by providing a functional, responsive and safe app experience is invaluable to application adoption and the reputation of the company’s products

 

Summary

Mobile device testing presents an array of challenges well beyond those for traditional desktop application testing due to the unparalleled diversity of products, operating environments, networks and inherent device constraints.

Despite these formidable challenges to producing and testing high-quality products, businesses must accept that mobile app users have little tolerance for apps whose functionality, performance or safety do not meet their heightened expectations. The mobile app world is a highly competitive one in which end users always have alternative products available to them.

Businesses up to the challenge of producing the best, most usable mobile apps and earning the highest customer loyalty must adopt a mindset in which app production is an end-to-end, highly collaborative process that addresses these 10 key considerations and incorporates solutions to them every step of the way. That is a tall order, especially for enterprises that have until now used a manual, waterfall approach to development and accepted gaps in testing coverage.

The benefits to organizations that are able to adopt highly collaborative, continuous delivery models of product development and can embrace test automation to the fullest extent are difficult to overestimate.

Besides producing higher-quality software with faster time-to-market and improved customer responsiveness, they will see overall costs decrease. Additionally, such companies are able to free resources to advance the most innovative initiatives within the enterprise, further increasing its reputation and profits.

10 Considerations for Mobile App Testing (Part 1)

As the world decidedly transitions from desktops and laptops to mobile devices, application developers and testers face enormous challenges to ensure that their products meet basic metrics of functionality, performance and usability. These are requirements for desktop apps too, but they are compounded for mobile apps due to the immense variance in platform resources, input methods, screen sizes, operating systems, operating system versions, networks and application architectures on mobile devices.

Furthermore, end users, who now have access to more platform and application choices than ever before, are completely unforgiving of faults in mobile applications. Failures result in almost instantaneous loss of customers according to most studies. A disappointing mobile application experience causes a quarter of users to immediately delete the app. A third will instead use a competitor’s application and nearly half will be skeptical of trying other apps from that supplier.

Getting it right for your customers, whether they be the captive audience for your enterprise workforce apps or the public, requires a particular mindset in regards to planning, development, testing and support that may be unfamiliar to teams oriented toward PC software.

Approaches to mobile application testing require bearing in mind several considerations that ensure your company’s mobile app testing is on target and successful.

10 Key Considerations for Your Mobile Application Testing

1 – The Potential for Combinatorial Test Case Explosion

It is easy to get carried away with the idea that your next great app will conquer the mobile device world. To accomplish that, you must go live on at least Android and iOS, and probably Windows 10 Mobile.

Supporting more than one OS presents a significant cross-platform development challenge, but the reality for testing is even more onerous. The testers must assess the application across devices from a dozen manufacturers, each with a portfolio of products with different screen sizes, platform capabilities and resource configurations.

If the app is HTML5 based, testing can use a standardized browser, but hybrid and native apps require platform-specific testing especially with regard to the UI and on-board sensors. There will be additional test cases for network interoperability and security as well.

Unless your organization already has deep experience in developing cross-platform mobile apps, you are better off reducing the scope of both development and testing to something manageable by choosing a single OS and a subset of manufacturers and devices supporting that OS in order to produce the greatest value to your business.

2 – Your Current Design-Code-Test-Support Methodology

Testing within an organizational structure that utilizes a throw-it-over-the-wall approach between developers and testers is not going to cut it in the fast-paced mobile app ecosystem. Agile development methods greatly reduce lag times between coding, testing, delivery, defect detection and defect remedy. Not only do such methods increase team efficiency and allow for faster time-to-market but they improve customer support and team cohesion too.

If your development processes are not agile, then your efforts to deal with the challenges of mobile testing can be easily doubled. Though the introduction of agile methodologies within ongoing projects is likely to be counter-productive, pilot projects should be started sooner rather than later to begin a gradual switchover to methods proven to be more effective. Be careful not to fall into the trap of simply accelerating waterfall processes and thinking you have improved agility.

3 – Real versus Virtual Device Testing

The bottom line when choosing whether to use real or emulated devices for testing is that all of your end users are going to be using real devices. Although it is impractical to use only real devices for all test phases, no application should be released without full testing on the actual devices on which it is intended to run.

Especially during earlier development phases, emulator testing is a must for efficiency reasons. Some IDEs include emulation capabilities, which are critical for unit testing and UI evaluation. In general, it is easier to automate test suites using emulated devices, though that is changing with the advent of cloud services aimed specifically at mobile device testing.

Cloud testing services allow both developers and testers to interact in real-time on live networks with nearly any real mobile device on the market. Tests are accomplished via online dashboards that realistically display the device UI. Tests can be developed manually and saved to a script for regression tests or to demonstrate a bug to the developers. As this technology continues to improve, the need for an organization to acquire, maintain and plug into a test harness a multitude of actual devices diminishes.

4 – Automation

Whether your teams use real or emulated devices, the need for test automation should be paramount. It is not possible to automate everything, of course. Most automated tests will have first been run manually, either because they originated from a defect found by a user or because they came from newly developed test case suites. To whatever extent possible, broad and deep automation is essential for an agile, continuous delivery organization if it is to keep up with the mobile app markets.

There are increasingly sophisticated tools for test automation available. The best support end-to-end testing from requirements to test cases to regression testing and defect testing. These can streamline the entire testing and support process. However, simply finding the right framework is only half the challenge.

Automation requires increased cultural awareness of its benefits and a willingness to seek out and implement automation wherever it is practically possible. An organization supports this mentality via team incentives and by designating an Automation Engineer. The occupant of that role drives automation strategy, ensures automation tools and initiatives are used correctly and measures the effectiveness of automated processes toward producing frequent, timely, high-quality app releases.

5 – User Experience

The ultimate testing challenge may be testing the User Experience, or UX. While the functionality of most mobile application UIs can be tested thoroughly by the best automation frameworks, even those fall short in the face of sophisticated interactions such as gestures, hand waves and eye movements. An even greater challenge is automated testing of interactions via intelligent agents such as Siri or Google Now. Throw in contextual information such as tilt, acceleration, geolocation and ambient sound, and even the best test automation efforts will struggle.

Plan for UX testing with real users under lab conditions or by field beta testers. In addition to testing for smooth operation, the app’s workflow, responsiveness and aesthetics should be reviewed. Application instrumentation that tracks how users interact with the app often reveals surprising insights for the designers and developers.

Keep in mind overall that the test with the most value is to ensure that the user receives intended and valuable results from the application.

Localization Testing Best Practices for Mobile and Web

Software localization is a process of translating and adapting a product such that it can be marketed to users whose cultures and languages differ from that of the original authors. It is a tedious and laborious job that is difficult to completely automate. Fortunately, best practices exist to ensure thorough and effective localization.

Plan for Localization Early

The effort to achieve excellence in localization is not as simple as hiring a native-speaking translator to rewrite the text within an app or web site. Consider carefully the effort to adapt your software to these additional factors:

  • Over 100,000 characters exist across the world’s writing systems, and fonts for different languages differ significantly in size. Properly localized, your software must account for this on both output and input.
  • Word lengths in various languages differ significantly, which your GUI must handle gracefully.
  • Some languages read right-to-left, which sets expectations for the layout of elements and pages.
  • Only the U.S., Liberia and Burma do not use the metric system.
  • Dates, time, currency and other numerical representations differ among regions.
  • Cultural differences can create misperceptions or embarrassments with regard to the content of your app or website.

Hire One or More Localization Consultants

End-to-end localization firms with native speakers/residents in your software’s targeted regions are expensive but invaluable. If you are new to localization, try performing a pass-through with automated tools with the help of one or two freelance translators. The experience will be enlightening. If it goes well, then hire a local consultant as a reviewer and editor.

The ideal localization consultant does more than simply translate. They should provide sound marketing advice to ensure that emotional or rational marketing messages in your content are expressed appropriately but with equal power in the target language and culture.

Take Care with Strings

Every string displayed by the software should reside in property files kept separate from program logic files. Assign each string a symbolic name, e.g. UserName = “User Name”, and reference only that name within the program. Translated strings can then replace the originals without changing the program code.

Once strings have been converted, be sure their character encoding is Unicode/UTF-8. This makes subsequent localization steps and processing by tools, such as debuggers, less time-consuming.
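A minimal sketch of this pattern using Java resource bundles is shown below; the bundle name, keys and translations are hypothetical, and since Java 9 the property files are read as UTF-8 by default.

```java
import java.util.Locale;
import java.util.ResourceBundle;

// Minimal sketch of string externalization with Java resource bundles.
// Assumes hypothetical property files on the classpath, e.g.:
//   messages.properties      ->  UserName=User Name
//   messages_de.properties   ->  UserName=Benutzername
public class Messages {

    public static String text(String key, Locale locale) {
        return ResourceBundle.getBundle("messages", locale).getString(key);
    }

    public static void main(String[] args) {
        // The program refers only to the symbolic name "UserName";
        // translators edit the property files, never the code.
        System.out.println(text("UserName", Locale.ENGLISH));
        System.out.println(text("UserName", Locale.GERMAN));
    }
}
```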

There are other best practices for text localization:

  • Strings must never be concatenated, as concatenation produces grammatical errors once the pieces are translated.
  • Avoid the use of idioms, which can be difficult, if not impossible, to translate.
  • Include with string definitions any necessary punctuation. This might require duplicate strings with and without punctuation.
  • Avoid using the same word or phrase in different contexts as that often leads to unintended meanings. For example, in English alone there are dozens of contronyms, which are words that have opposite meanings in different contexts such as overlook, sanction, trim or screen.

Internationalizing Dates, Times and Currencies

Date, time and currency representations have many regional differences. Even for the same currency, some countries place the currency symbol in front of the amount and others behind it. In these cases, the static indirection used for strings will fail. Often, such differences can be captured as macros or subroutines, which other parts of the software use inline or at runtime. These can be segregated in separate files to accommodate localization.
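The sketch below illustrates the idea in Java by delegating date and currency rendering to the runtime’s locale data rather than hard-coding formats; the locales and values are purely illustrative.

```java
import java.math.BigDecimal;
import java.text.NumberFormat;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.FormatStyle;
import java.util.Locale;

// Minimal sketch of locale-aware date and currency formatting driven by the
// runtime's locale data instead of hard-coded patterns.
public class RegionalFormats {

    public static void main(String[] args) {
        LocalDate date = LocalDate.of(2015, 3, 31);
        BigDecimal amount = new BigDecimal("1234.56");

        for (Locale locale : new Locale[] { Locale.US, Locale.GERMANY, Locale.JAPAN }) {
            String d = date.format(
                DateTimeFormatter.ofLocalizedDate(FormatStyle.MEDIUM).withLocale(locale));
            String c = NumberFormat.getCurrencyInstance(locale).format(amount);
            System.out.printf("%-8s %-14s %s%n", locale, d, c);
        }
    }
}
```

Running it shows, among other differences, the currency symbol ahead of the amount for the US locale and after it for the German locale.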

Avoiding GUI and Layout Issues

There are numerous ways a GUI can go wrong from one language to the next. For example, a GUI designed for English speakers, when internationalized for Arabic, must flip the entire layout of a page so the content appears in a natural organization for Arabic users. Buttons, text fields and pull-downs arranged for a left-to-right language may likewise flow unnaturally when presented right-to-left.

Text fields designed for the average length of English words are too short for many languages such as German. This can be a problem for visible labels on buttons or other widgets as well. If the interface is designed to re-size/re-order controls to accommodate labels or screen size, the entire layout may be morphed into something rather awkward looking.

Modern interfaces routinely provide context-sensitive help on controls via a hover action. If you do not localize both the GUI and Help system simultaneously, these may diverge significantly.

Test It Again, Sam

Once localization has been thoroughly applied, you need to re-test the software in much the same manner as the original language version to ensure no mistakes have occurred that would affect the user’s experience negatively.

Many organizations underestimate the amount of planning, development and testing required for quality localization. The effort and cost must be carefully scoped and weighed against the advantages of marketing to a new region. Once the investment has been made and experience gained, however, subsequent localization for other countries and regions normally requires less effort.

Mobile Application Level Performance Testing Advice

Even within the walled garden of enterprise workforce apps, top performance is essential for app adoption and increased user productivity. If you produce mobile apps for public consumption, performance expectations increase by an order of magnitude. Many an app has suffered premature death due to inadequate performance regardless of how well it met users’ functional needs.

Thus, application level performance testing is essential. Such testing must evaluate availability, response times and the timely and accurate completion of business logic transactions. Furthermore, performance must be evaluated within diverse combinations of platforms, OSs, connectivity options and back-end services.

Five Pieces of Advice for Mobile Performance Testing

1) Measure Early, Accurately and Often

Well before actual performance testing begins, some planning needs to happen. Stakeholders must develop a list of Key Performance Indicators against which performance test results will be measured. Start with metrics important from a user’s point of view and work downward to metrics within the app that support the user-level performance metrics.

Determine how metrics will be collected. Code instrumentation, measurement libraries, external tools or scripts are a few of the options. Next, plan on when and where data are to be collected. Finally, plan for how a performance baseline will be created, maintained and upgraded at all test stages up to production. It is wise to continue to update the performance baseline even after deployment.
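As one concrete shape this can take, the sketch below (Java; the KPI names, baseline numbers, tolerance and the timed operation are all hypothetical placeholders) measures a user-level operation and flags a regression if it exceeds the stored baseline by more than a set tolerance.

```java
import java.util.Map;

// Minimal sketch of measuring a KPI and comparing it against a stored baseline
// with a tolerance. KPI names, baselines and the timed operation are hypothetical.
public class KpiCheck {

    // Baseline values in milliseconds, e.g. carried over from a previous test stage.
    static final Map<String, Long> BASELINE_MS = Map.of(
        "cold_start", 1200L,
        "search_results", 800L
    );
    static final double TOLERANCE = 1.15;   // allow a 15% regression before failing

    static long measureMillis(Runnable operation) {
        long start = System.nanoTime();
        operation.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long measured = measureMillis(() -> {
            // Stand-in for the real user-level operation being timed,
            // e.g. driving a search through a UI automation framework.
            try { Thread.sleep(900); } catch (InterruptedException ignored) { }
        });

        long baseline = BASELINE_MS.get("search_results");
        boolean withinBudget = measured <= baseline * TOLERANCE;
        System.out.printf("search_results: %d ms (baseline %d ms) -> %s%n",
                measured, baseline, withinBudget ? "PASS" : "REGRESSION");
    }
}
```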

2) Test under Diverse Connectivity Scenarios

Connectivity issues are a prime generator of performance-impacting issues such as hangs, lag, degradation or failed transactions, especially with regard to cellular carrier networks. You cannot count on the user correctly attributing such performance issues to the network and not your app, however.

Therefore, it is vital to perform connectivity testing over as many types and instances of network infrastructures as practically possible. Design tests such that elements including platform loading are isolated from connectivity factors to develop an accurate picture of app performance over any connection. Augment testing by using a variety of carriers and Internet backbones. To achieve this degree of variation may require employing cloud testing services.

3) Test the Backend with and without the App

If your mobile app relies on backend services, understanding their performance is as important as testing for connectivity issues. Fortunately, it is likely to be of smaller scope and less complex. Backend instrumentation should support standalone stress tests on the backend infrastructure as well as tests of the mobile device and backend interacting. The backend test strategy must include load and stress tests with unrelated services present that compete for server and network resources.
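A bare-bones standalone load probe might look like the Java sketch below; the endpoint and load parameters are hypothetical, and a real backend test would ramp load gradually and record latency percentiles rather than just a failure count.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of a standalone backend load test that exercises the server
// without the mobile app in the loop. Endpoint and load figures are placeholders.
public class BackendLoadTest {

    public static void main(String[] args) throws Exception {
        String endpoint = "https://api.example.com/orders";
        int concurrentUsers = 50;
        int requestsPerUser = 20;

        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        AtomicInteger failures = new AtomicInteger();

        for (int u = 0; u < concurrentUsers; u++) {
            pool.submit(() -> {
                for (int r = 0; r < requestsPerUser; r++) {
                    try {
                        HttpURLConnection conn =
                            (HttpURLConnection) new URL(endpoint).openConnection();
                        conn.setConnectTimeout(5_000);
                        conn.setReadTimeout(5_000);
                        if (conn.getResponseCode() >= 500) failures.incrementAndGet();
                        conn.disconnect();
                    } catch (Exception e) {
                        failures.incrementAndGet();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);

        int total = concurrentUsers * requestsPerUser;
        System.out.printf("%d/%d requests failed under load%n", failures.get(), total);
    }
}
```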

4) Test for Extraordinary Success

Although your app’s customer requirements or backend SLAs may not demand it, it is wise to probe what happens on both the mobile app and the backend should user demand exceed expected maximum capacity. It does not happen frequently, but “viral” success of an app could end up leaving egg on the face of the organization should too much success lead to disappointment for the excess users. They will not appreciate the irony either.

Testing for double or triple maximum capacity reveals the degree to which your app’s performance degradation is graceful. Often, this level of testing reveals defects that went undetected during less extreme conditions. This result alone could make such testing worthwhile.

5) Use an Actual Production Environment

Initial app performance testing necessarily takes place under constrained conditions as code development is in its early stages. Although initial tests are valuable in revealing flaws in assumptions or architecture and in contributing to the ongoing performance baseline, their results should be assessed cautiously. It is especially important not to extrapolate the figures to expected deployments.

At some point, however, app performance including functional, speed and reliability evaluations must be performed in an environment as nearly like production/deployment as possible. Variables to account for include hardware and OS configurations, platform resources, network connectivity and the use of production databases.

Take prudent steps to reduce the possibility of mangling your production system during such testing:

  • Test during maintenance periods
  • Utilize a fraction of the production framework
  • Test in secret before the release’s public debut
  • Test during off-peak periods

A survey by Equation Research revealed that well over half of mobile app users expect their apps to exhibit performance equal to that encountered on desktop PCs. Other surveys have consistently demonstrated that mobile users’ tolerance for sub-par performance is unquestionably low. Glitches and long or inconsistent wait times, for instance, quickly lead to deletion of the app and an increased reluctance to try other apps from the same vendor.

Therefore, accurate performance testing is crucial. Thorough planning, execution and measurement all have critical roles in mobile app performance testing, and increased effort in this area leads to improved quality and heightened user satisfaction.

5 Mobile Testing Challenges and Solutions

Forget for a moment the nascent Big Bang of the Internet of Things and focus on today’s explosion of Mobile Devices and Apps. Every day, thousands of new apps are uploaded, billions downloaded and they must run successfully on tablets, smartphones, vehicles, watches, kitchen appliances and a slew of new devices arriving at “ludicrous speed.”

What then of the plight of testing professionals who must cope with this incredibly diverse universe of devices, OSs, configurations, malleable user interfaces, scarce platform resources and convoluted connectivity options? Keep in mind, too, that over half of companies report having inadequate mobile testing tools and insufficient device coverage, which leaves them little reason to feel sanguine about their chances of satisfying users’ high standards.

Before hands are thrown in the air and everyone scrambles for the exits, consider these solutions to five major challenges in mobile testing.


1) Device and Platform Diversity

There are today more mobile phones than people on the planet. iOS and Android dominate the OSs and five mobile device manufacturers cover half of the market with dozens of lesser device makers following close behind.

The easy, but expensive, way to confront platform diversity is off-loading it to an external lab, but most organizations instead choose a mix of internal mobile lab and cloud-based mobile testing services. Depending on shop size and the number of platforms supported, the ratio of in-house to cloud-based lab facilities will vary.

Cloud-based mobile testing is typically more efficient during late development stages when code is stable, performance and compatibility testing are the main areas of focus and access to a large mix of actual devices without the capital and maintenance expenses of in-house acquisition is required.

2) Network Diversity

Network carriers are even more fragmented than OS and device providers with a dozen companies sharing half of all connections and dozens more vying for the remainder. Testing must also account for this aspect of mobile testing diversity.

Network emulators can simulate cell network bandwidth and connectivity variances over Wi-Fi. These provide excellent mileage, but at some point testing on real networks becomes necessary. An interim solution is the use of device emulators plus an operator’s web or test proxy, which avoids airtime charges and lets testing proceed with an instrumented test stack.

Short of acquiring in-house accounts with target networks, cloud-based network test services provide the most realistic testing scenario. Many cloud-based device testing services include remote carrier coverage.

3) Mobile App Types

Another factor in mobile testing is the choice among three approaches to mobile app architecture: native, Web/HTML5 and hybrid apps. Test case scenarios for each differ, especially regarding performance, stress, conformance and compatibility testing. Native apps have a reduced testing scope overall, whereas Web and hybrid apps require both on- and off-platform test cases and must account for complex connectivity and back-end issues.

Native and hybrid apps must be tested for successful download, execution, platform interaction and update behavior. Web-only apps depend less on the platform OS and more on the choice of browser and its versions, every instance of which must be tested.

Since native, hybrid and web-based apps are all subject to other mobile testing challenges, the common solution for avoiding the additional complexity of supporting multiple app architectures is to eliminate at least one or both of the alternatives by executive decision.

4) Applying Test Automation

There are trade-offs between efficiency and coverage during various development and test phases when applying test automation. Early development functional and UI testing benefits most from frameworks that let developers emulate devices and facilitate their unit testing. Automation at later development phases must be selectively applied to achieve the highest ROI, since automation has substantial upfront costs of time and money.

Mobile testing automation typically has the highest payoff for data-driven, repetitive, frequent tests that require minimal human intervention and are relatively insensitive to code changes. If the tests chosen for automation can scale easily as platform diversity expands and be managed by less skilled testers, higher-skilled resources can be re-deployed to higher-valued test roles.
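The kind of test that fits this description is sketched below in plain Java: a single test body driven by a table of inputs and expected results. The validation rule and the data rows are hypothetical stand-ins; in practice the rows would come from a CSV file or spreadsheet and the runner would be your test framework.

```java
import java.util.List;

// Minimal sketch of a data-driven, repetitive check that pays off when
// automated: one test body driven by a table of inputs and expected results.
public class DataDrivenLoginTest {

    record Row(String username, String password, boolean expectedValid) { }

    // Stand-in for the unit under test.
    static boolean isValidLogin(String username, String password) {
        return !username.isBlank() && password.length() >= 8;
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row("alice", "correct-horse", true),
            new Row("bob",   "short",         false),
            new Row("",      "whatever123",   false)
        );

        int failures = 0;
        for (Row row : rows) {
            boolean actual = isValidLogin(row.username(), row.password());
            if (actual != row.expectedValid()) {
                failures++;
                System.out.printf("FAIL: %s/%s expected %s but got %s%n",
                        row.username(), row.password(), row.expectedValid(), actual);
            }
        }
        System.out.println(failures == 0 ? "All rows passed" : failures + " row(s) failed");
    }
}
```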

5) Finding Useful Test Tools

Mobile testing organizations face enormous diversity in test automation tools as well. Tool capabilities must be matched to the skills of test resources, the existing development tool set and the organization’s budget. Key features include the ability to build scripts from manual tests, which increases automated test coverage. That ability also increases collaboration between testers and developers during detect-verify-fix cycles for user-reported bugs.

These solutions should mitigate the fears arising from the many and varied challenges that confront mobile testing teams and keep them moving forward.

Testing Strategies for Mobile Applications

Even if your organization makes heavy use of test automation, testing mobile application software consumes significant time and resources. The paybacks are delighted customers and an elevation of your company’s reputation. Therefore, it makes sense to continue improving your testing methodologies and strategies to meet the demands of the fast-moving mobile app market.

Naturally, there is no single tool or approach that guarantees success. You may already utilize a mix of homegrown test apps, cloud-based test services and 3rd-party test frameworks. Regardless of how well that all plays together today, your current strategy will feel constant pressure to become ever more potent while saving time and money.

Analyzing and modifying your organization’s current mobile app test strategy, therefore, is essential to ensure you are achieving superb product quality in the most cost-effective manner.


Key Testing Challenges

The main challenges in testing mobile applications relate to the unique characteristics of the mobile market:

  • To obtain significant market coverage, apps must run on at least iOS or Android, and probably both.
  • Apps can run natively, via abstraction libraries, within a web browser or as a combination of these. Each approach has similar but distinct testing requirements and techniques.
  • It is extremely difficult to test an app on every single targeted mobile device due to the multitude of manufacturers, models, screen densities, UIs and platform versions.
  • Any network-aware mobile app requires performance testing under various protocol, bandwidth and load conditions. This is especially important for apps that depend on a backend server.

Grasp the Big Picture

Mobile software development processes are still behind the curve in adopting the agile methodologies that have proven successful in web development for years. For instance, continuous integration, in which development and testing processes are coordinated via shared code repositories, has been proven to improve quality and speed time to market, yet remains unused by most mobile app development organizations.

Agile methodologies truly thrive when supported by automation. Thus, as your organization moves to tackle the challenges of mobile app development and testing, evaluate your strategy, tactics and tools relative to their ability to increase the amount of test automation you use.

Reduce Manual Testing

Manual testing is typically orders of magnitude slower than automated testing even accounting for the initial costs and effort for putting automation in place. It can never be entirely eliminated, nor should it be, but replacing manual tests whenever possible is simply low-hanging fruit in terms of increasing QA efficiency.

Plan for Scale

Always evaluate tools and frameworks with regard to how well they scale up and scale down to meet testing demands. Cloud-based services offer the most flexible scaling capabilities, but many 3rd-party test frameworks intended for in-house use are designed for substantial scalability as well. Look for free or free trial versions that allow you to fully investigate their scalability.

Embrace Cross-Platform Now

For the foreseeable future, market shares among iOS, Android and Windows Phone are unlikely to change drastically. Even if development within your enterprise is fixated on a single OS now, that situation may change at any time. Therefore, it behooves QA to give extra weight to test frameworks that build in cross-platform support. Otherwise, you may find yourself maintaining two or more different sets of test tools simultaneously.

Choose Comprehensive Test Frameworks

Ideally, in the quest for full automation, you should concentrate on acquiring test frameworks that support the entire testing process of requirements, test case and script generation, test execution, multimedia logging and a discover/report/fix defect cycle that integrates well with the developers’ tools.

Some frameworks include or are closely associated with cloud-based infrastructure services that include access to hundreds of real devices. These services significantly reduce the costs of acquiring and maintaining large numbers of mobile devices within your labs. Many also include live network testing, which could be difficult to adequately represent in your labs.

Frameworks that support both emulation and real devices assist in staging testing as development progresses from requirements to final release. The costs and benefits of scaling by testing on virtual versus real devices can be efficiently allocated according to the development level of application code.

 

Mobile testing will become more complex over time. Fortunately, there are coping strategies to ensure app quality and time-to-market are not compromised. The newest tools and services can be used to streamline and automate test case generation, scripting, test runs, logging and fix verification. In the bargain, test coverage increases, which also leads to improved quality of the final product release.

Continuing to analyze and adjust your testing strategies, tactics and methodologies is necessary to place your organization ahead of the competition by increasing your testing capabilities at reduced costs.

Top 10 Mobile Testing Tools

Apple’s recent release of a new, modern mobile programming language for iOS, Swift, has taken the mobile app development world by storm. Its adoption by developers has been record-breaking as it continues to climb the charts of most used coding languages. Swift replaces the aging Objective-C, which has been in use for three decades.

As mobile device usage continues to skyrocket and mobile app downloads reach well into the gazillions, other languages and tools will emerge to enable developers to churn out more cool apps and help testers improve app quality. Mobile app testers can already take advantage of more capable testing frameworks to streamline their work, support CI and improve test coverage.


1. Appium for Android and iOS – Appium is an open source project for cross-platform test automation. Essentially, it is an HTTP server managing WebDriver sessions. It supports tests in any framework and in any language that can create an HTTP request, and no app code needs to be modified for testing. Tests can run on either iOS or Android, on real devices or emulators, and native, hybrid and web apps are all supported. (A minimal client-side sketch follows this list.)

2. Calabash for Android and iOS – Maintained by Xamarin, Calabash consists of two open source libraries, one for iOS and another for Android, which automate testing for native or hybrid mobile apps. Used with Cucumber, test cases are written in natural language then translated to test scripts that run within the framework. It works well with Ruby, Java, .NET, Flex and many other programming languages.

3. MonkeyTalk for Android and iOS – Both testers and developers utilize this complete functional test platform for iOS and Android apps. It consists of three components: an IDE, an Agent and scripts. The IDE creates test scripts using record and playback. The Agent is a test instrumentation library to which the app links. MonkeyTalk scripts use simple keyword syntax and Ant or Java execution engines. Tests can be data-driven from a spreadsheet in CSV format.

4. Robotium for Android – Robotium is an open source library aimed solely at Android UI testing. It is used for automated black-box testing of web, native or hybrid mobile applications. Used in conjunction with TestDroid Recorder, test scripts are created as the tester traverses the UI of the mobile application under test. A free extension library called ExtSolo adds multi-path dragging, auto-scaling for different display resolutions and other abilities.

5. Selendroid for Android – No app code modification is required to use Selendroid, which is essentially Selenium for Android apps. Selenium 2 and the WebDriver API are the basis for test code. The framework interacts with multiple devices or device emulators simultaneously. It even supports device hot-swapping. There is an inspection component for recording device UI state for test case creation.

6. UIAutomator for Android – UIAutomator creates functional Android UI test cases, written in Java. The uiautomatorviewer tool is used to inspect the device UI when building test cases. Complex sets of user actions can be reproduced, and it can access native device buttons too.

7. UIAutomation for iOS – This is Apple’s test automation framework for iOS apps. JavaScript is used to operate the device UI. As a proprietary tool, it does not play well with other tools or methodologies such as CI. Nor does it support managing test cases and suites as other frameworks do.

8. Frank for iOS – Frank is an iOS-only test framework combining Cucumber and JSON. A statically linked server inside the mobile app under test interprets JSON and uses UISpec for execution. Although it has the advantage of not requiring app code changes, it is difficult to run directly on devices. It is most suited for emulators and web-based apps.

9. KIF for iOS – KIF stands for Keep It Functional. It is an open source framework developed for iOS mobile app UI testing. It utilizes the Accessibility APIs built into iOS in order to simulate real user interactions. Tests are written in Objective-C, which is already familiar to iOS developers, but not test teams. Apple’s switch to Swift makes its use of Objective-C a disadvantage going forward.

10. iOS Driver for iOS – iOS Driver utilizes Selenium and the WebDriver API for testing iOS mobile apps. Its default is to run on emulators, where execution is faster and scalable. The current version works with devices, but actually executes more slowly in that case. No app source code requires modification and no additional apps are loaded on the device under test. iOS Driver is designed to run as a Selenium grid node, which improves test speed as it enables parallel GUI testing.
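For reference, here is a minimal sketch of the kind of client-side code the Appium entry above implies, using the Java client bindings. Capability names vary across Appium and client versions, and the server URL, .apk path and element IDs below are placeholders.

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;
import java.net.URL;

// Minimal sketch of an Appium session using the Java client bindings.
// Server URL, .apk path and element IDs are hypothetical placeholders.
public class AppiumSmokeTest {

    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("automationName", "UiAutomator2");
        caps.setCapability("deviceName", "Android Emulator");
        caps.setCapability("app", "/path/to/app-under-test.apk");

        // Appium is "an HTTP server managing WebDriver sessions": the client
        // simply sends WebDriver commands to it over HTTP.
        AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            driver.findElement(By.id("com.example:id/login_button")).click();
            System.out.println("Current activity: " + driver.currentActivity());
        } finally {
            driver.quit();
        }
    }
}
```

Because the client merely issues WebDriver commands over HTTP, the same test shape works against emulators, locally attached devices or a cloud device farm.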

 

At the moment, there appear to be many test framework solutions looking for problems, but that is to be expected as mobile app development and testing tools continue to be developed at a rapid pace. Every framework has its pros and cons, each of which should be weighed relative to the needs of the testing organization and the software products being delivered.

Although a cross-platform test framework probably makes the most sense in most cases, particular features of an iOS or Android-only test tool could make it a better choice. The most important criteria are to use tools that increase automation, have excellent support and appear to have staying power for the long haul.