Best Practices in Testing Mobile Applications

Like all QA processes, mobile development projects benefit from a well-designed defect tracking system, regularly scheduled builds, and planned, systematic testing. There are also plentiful opportunities for white box (or gray box) testing and some limited opportunities for automation.

Designing a Mobile Application Defect Tracking System

You can customize most defect tracking systems to work for mobile application testing. The defect tracking system must be able to track issues tied to specific devices as well as problems related to any centralized application servers (if applicable).

Logging Important Defect Information

A good mobile defect tracking system includes the following information about a typical device defect:

  • Build version information, language, and so on.
  • Device configuration and state information, including device type, platform version, screen orientation, network state, and carrier information (a brief sketch of gathering these values programmatically follows this list).
  • Steps to reproduce the problem, with specific details about exactly which input methods were used (touch versus click).
  • Device screenshots that can be taken using DDMS or the Hierarchy Viewer tool provided with the Android SDK.
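
Much of the configuration and state information listed above can be collected programmatically and attached to defect reports. The following is a minimal sketch of a hypothetical helper (not part of the Android SDK) that gathers these details; it assumes the application holds the ACCESS_NETWORK_STATE permission.

    // Hypothetical helper (not part of the Android SDK) that gathers device configuration
    // and state details for attachment to a defect report. Requires the
    // ACCESS_NETWORK_STATE permission for the network state query.
    import android.content.Context;
    import android.content.res.Configuration;
    import android.net.ConnectivityManager;
    import android.net.NetworkInfo;
    import android.os.Build;
    import android.telephony.TelephonyManager;

    public class DefectReportInfo {

        public static String collect(Context context) {
            StringBuilder info = new StringBuilder();

            // Device type and platform version
            info.append("Device: ").append(Build.MANUFACTURER).append(" ").append(Build.MODEL);
            info.append("\nAndroid version: ").append(Build.VERSION.RELEASE)
                .append(" (API ").append(Build.VERSION.SDK_INT).append(")");

            // Screen orientation
            int orientation = context.getResources().getConfiguration().orientation;
            info.append("\nOrientation: ").append(
                orientation == Configuration.ORIENTATION_LANDSCAPE ? "landscape" : "portrait");

            // Network state
            ConnectivityManager connectivity =
                (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
            NetworkInfo network = connectivity.getActiveNetworkInfo();
            info.append("\nNetwork: ").append(
                network != null && network.isConnected() ? network.getTypeName() : "none");

            // Carrier information
            TelephonyManager telephony =
                (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
            info.append("\nCarrier: ").append(telephony.getNetworkOperatorName());

            return info.toString();
        }
    }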

Redefining the Term Defect for Mobile Applications

It’s also important to consider the larger definition of the term defect. Defects might occur on all devices or on only some devices. Defects might also occur in other parts of the application environment, such as on a remote application server. Some types of defects typical of mobile applications include the following:

  • Crashing and unexpected terminations.
  • Features not functioning correctly (improper implementation).
  • Using too much disk space/memory on the device.
  • Inadequate input validation (typically, button mashing).
  • State management problems (startup, shutdown, suspend, resume, power off).
  • Responsiveness problems (slow startup, shutdown, suspend, resume).
  • State transition failures (such as an unexpected interruption while resuming).
  • Usability issues related to input methods, font sizes, and cluttered screen real estate.
  • Cosmetic problems that cause the screen to display incorrectly.
  • Pausing or “freezing” on the main UI thread (failure to implement asynchronous threading).
  • Feedback indicators missing (failure to indicate progress).
  • Integration with other applications on the device causing problems.
  • Application “not playing nicely” on the device (draining battery, disabling power saving mode, overusing networking resources, incurring extensive user charges, obnoxious notifications).
  • Using too much memory, not freeing memory or releasing resources appropriately, and not stopping worker threads when tasks are finished.
  • Not conforming to third-party agreements, such as Android SDK License Agreement, Google Maps API terms, marketplace terms, or any other terms that apply to the application.
  • Application client or server not handling protected/private data securely. This includes ensuring that remote servers or services have adequate uptime and security measures in place.

Managing the Testing Environment

Testing mobile applications poses a unique challenge to the QA team, especially in terms of configuration management. The difficulty of such testing is often underestimated. Don’t make the mistake of thinking that mobile applications are easier to test because they have fewer features than desktop applications and are, therefore, simpler to validate. The vast variety of Android devices available on the market today makes testing different installation environments tricky.

Managing Device Configurations

Device fragmentation is perhaps the biggest challenge the mobile tester faces. Android devices come in various form-factors with different screens, platform versions, and underlying hardware. They come with a variety of input methods such as buttons and touch screens. They come with optional features, such as cameras and enhanced graphics support. Many Android devices are smart phones, but not all. Keeping track of all the devices, their abilities, and so on is a big job, and much of the work falls on the test team.

QA personnel must have a detailed understanding of the functionality available on each target device, including familiarity with its features and any device-specific idiosyncrasies. Whenever possible, testers should test each device as it is used in the field, which might not be the device’s default configuration or language. This means changing input modes, screen orientation, and locale settings. It also means testing on battery power, not just plugged in while sitting at a desk.

One hundred percent testing coverage is impossible, so QA must develop priorities thoughtfully. Developing a device database can greatly reduce the confusion of mobile configuration management, help determine testing priorities, and keep track of the physical hardware available for testing. With appropriate AVD configurations, the emulator is also an effective tool for extending coverage, simulating devices and situations that would not otherwise be covered.

Determining Clean Starting State on a Device

There is currently no good way to “image” a device so that you can return to the same starting state again and again. The QA test team needs to define what a “clean” device is for the purposes of test cases. This can involve a specific uninstall process, some manual clean-up, or sometimes a factory reset.

Mimicking Real-World Activities

It is nearly impossible (and certainly not cost-effective for most companies) to set up a complete, isolated environment for mobile application testing. It’s fairly common for networked applications to be tested against test (mock) application servers and then go “live” on production servers with similar configurations. However, in terms of device configuration, mobile software testers must use real devices with real service to test mobile applications properly. If the device is a phone, it needs to be able to make and receive phone calls, send and receive text messages, determine location using location-based services (LBS), and generally do anything a phone would normally do.

Testing a mobile application involves more than just making sure the application works properly. In the real world, your application does not exist in a vacuum but is one of many installed on the device. Testing a mobile application involves ensuring that the software integrates well with other device functions and applications. For example, say you are developing a game. Testers must verify that a call received while playing the game causes the game to pause automatically (keeping its state) and allows the call to be answered or ignored without issue.

This also means testers must install other applications onto the device. A good place to start is with the most popular applications for the device. Testing your application not only with these applications installed, but also while they are actually being used, can reveal integration issues or usage patterns that don’t mesh well with the rest of the device.

Sometimes testers need to be creative when it comes to reproducing certain types of events. For example, testers must ensure that their application behaves appropriately when mobile handsets lose network or phone coverage.

Maximizing Testing Coverage

All test teams strive for 100 percent testing coverage, but most also realize such a goal is not reasonable or cost-effective (especially with dozens of Android devices available around the world). Testers must do their best to cover a wide range of scenarios, the depth and breadth of which can be daunting—especially for those new to mobile. Let’s look at several specific types of testing and how QA teams have found ways—some tried-and-true and others new and innovative—to maximize coverage.

Validating Builds and Designing Smoke Tests

In addition to a regular build process, it can be helpful to institute a build acceptance test policy (also sometimes called build validation, smoke testing, or sanity testing). Build acceptance tests are short and targeted at key functionality to determine if the build is good enough for more thorough testing to be completed. This is also an opportunity to quickly verify bug fixes expected to be in the build before a complete retesting cycle occurs. Consider developing build acceptance tests for multiple Android platform versions to run simultaneously.

Automating Functional Testing for Build Acceptance

Mobile build acceptance testing is typically done manually on the highest-priority target device; however, this is also an ideal situation for an automated “sanity” test. Because the emulator runs as desktop software, a bare-bones functional test targeting the emulator can be driven by typical QA automation platforms such as Borland SilkTest. This increases the team’s confidence that a build is worth further testing and minimizes the number of bad builds delivered to QA.
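
As an illustration, here is a minimal sketch of such a bare-bones smoke test using the Android instrumentation framework covered in the previous chapter. The package name and activity class are hypothetical placeholders; the test simply verifies that the main activity launches on the emulator or device.

    // Bare-bones build acceptance ("smoke") test. MainActivity and the package name
    // "com.example.app" are hypothetical placeholders for the application under test.
    import android.test.ActivityInstrumentationTestCase2;

    import com.example.app.MainActivity; // hypothetical activity under test

    public class SmokeTest extends ActivityInstrumentationTestCase2<MainActivity> {

        public SmokeTest() {
            super("com.example.app", MainActivity.class);
        }

        public void testMainActivityLaunches() {
            // getActivity() launches the activity under instrumentation and blocks until
            // it is ready; a null result or an exception here means the build is not
            // worth further testing.
            MainActivity activity = getActivity();
            assertNotNull("Main activity failed to launch", activity);
            // A fuller smoke test would also exercise a few pieces of key functionality here.
        }
    }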

Testing on the Emulator Versus the Device

When you can get your hands on the actual devices your users have, focus your testing there. However, devices and the service contracts that generally come with them can be expensive. Your test team cannot be expected to set up test environments on every carrier or in every country where users run your application. There are times when the Android emulator can reduce costs and improve testing coverage. Some of the benefits of using the emulator include the following:

  • Ability to simulate devices when they are not available or in short supply
  • Ability to test difficult test scenarios not feasible on live devices
  • Ability to be automated like any other desktop software

Testing Before Devices Are Available Using the Emulator

Developers often target up-and-coming devices or platform versions not yet available to the general public. These devices are often highly anticipated and developers who are ready with applications for these devices on Day 1 of release often experience a sales bump because fewer applications are available to these users—less competition, more sales.

The latest version of the Android SDK is usually released to developers several months before the general public receives over-the-air updates. Also, developers can sometimes gain access to preproduction phones through carrier and manufacturer developer programs. However, developers and testers should be aware of the dangers of testing on preproduction phones: These phones are beta quality. The final technical specifications and firmware can change without notice. Release dates can slip, and the phone might never reach production.

When preproduction phones cannot be acquired, testers can perform some functional testing using emulator configurations that closely match the target platform. This lessens the risk of a compressed testing cycle when these devices go live and allows developers to release applications faster.

Leveraging Automated Testing Opportunities Using the Emulator

Android testers have a number of different automation options to choose from. It’s certainly possible to rig up automated testing software to exercise the software emulator, and there are a number of testing tools (the Exerciser Monkey, for example) that can help with the testing process. Unfortunately, there are not many options for automated hardware testing beyond those used with the unit testing framework. We can certainly imagine someone coming up with a hardware testing solution—in our minds, the device looks a lot like the automated signature machine U.S. presidents use to sign pictures and Christmas cards. The catch is that every device looks and acts differently, so any animatronic hand would need to be recalibrated for each device. The other problem is how to determine when the application has failed or succeeded. If anyone is developing such automated testing tools, it’s likely a mobile software testing consultancy; for the typical mobile software developer, the costs are likely prohibitive.

Understanding the Dangers of Relying on the Emulator

Unfortunately, despite all the options available within the AVD configuration, the emulator is a “generic” Android device that merely approximates many of the device internals.

The emulator does not represent the specific implementation of the Android platform that is unique to a given device. It does not use the same hardware to determine signal, networking, or location information. It can only pretend to make and receive calls and messages, or to take pictures and video. At the end of the day, it doesn’t matter if the application works on the emulator if it doesn’t work on the actual device.

Testing Strategies: White Box Testing

The Android SDK provides ample tools for both black box and white box testing:

Black box testers might require only testing devices and test documentation. For black box testing, it is even more important that the testers have a working knowledge of the specific devices, so providing device manuals and technical specifications also aids in more thorough testing. In addition to such details, knowing device nuances as well as device standards can greatly help with usability testing. For example, if a dock is available for the device, knowing whether it holds the device in landscape or portrait mode is useful.

White box testing has never been easier on mobile. White box testers can leverage many affordable tools, including the free Eclipse development environment and the many debugging tools available as part of the Android SDK. White box testers rely especially on the Android emulator, DDMS, and ADB. They can also take advantage of the powerful unit testing framework, which we discussed in detail in the previous chapter. For these tasks, testers require a computer with a development environment similar to the developer’s.

Testing Mobile Application Servers and Services

Although testers often focus on the client portion of the application, they sometimes neglect to thoroughly test the server portion. Many mobile applications rely on networking or “the cloud.” If your application depends on a server or remote service to operate, testing the server side of your application is vital. Even if the service is not your own, you need to test thoroughly against it so you know it behaves as the application expects it to behave.

Here are some guidelines for testing remote servers or services:

  • Version your server builds. You should manage server rollouts like any other part of the build process. The server should be versioned and rolled out in a reproducible way.
  • Use test servers. Often, QA tests against a mock server in a controlled environment. This is especially true if the live server is already operational with real users.
  • Verify scalability. Test the server or service under load, including stress testing (many users, simulated clients); a minimal load-test sketch follows this list.
  • Test the server security (hacking, SQL injection, and such).
  • Ensure that your application handles remote server maintenance or service interruptions gracefully—scheduled or otherwise.
  • Test server upgrades and rollbacks and develop a plan for how you are going to inform users if and when services are down.
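
As a starting point for the scalability item above, the following is a minimal load-test sketch in plain Java that can run on a desktop machine. The test server URL and client count are hypothetical; a fuller stress test would also vary request payloads and record response times and error rates.

    // Minimal load-test sketch: many simulated clients hitting a hypothetical test server.
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    public class SimpleLoadTest {

        private static final String TEST_SERVER_URL = "http://test.example.com/api/status"; // hypothetical
        private static final int SIMULATED_CLIENTS = 100;

        public static void main(String[] args) throws InterruptedException {
            final AtomicInteger failures = new AtomicInteger();
            ExecutorService pool = Executors.newFixedThreadPool(SIMULATED_CLIENTS);
            for (int i = 0; i < SIMULATED_CLIENTS; i++) {
                pool.execute(new Runnable() {
                    public void run() {
                        try {
                            HttpURLConnection connection =
                                (HttpURLConnection) new URL(TEST_SERVER_URL).openConnection();
                            connection.setConnectTimeout(5000);
                            connection.setReadTimeout(5000);
                            if (connection.getResponseCode() != HttpURLConnection.HTTP_OK) {
                                failures.incrementAndGet();
                            }
                            connection.disconnect();
                        } catch (Exception e) {
                            failures.incrementAndGet();
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(60, TimeUnit.SECONDS);
            System.out.println("Failed requests: " + failures.get() + " of " + SIMULATED_CLIENTS);
        }
    }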

These types of testing offer yet another opportunity for automated testing to be employed.

Testing Application Visual Appeal and Usability

Testing a mobile application is not only about finding dysfunctional features, but also about evaluating the usability of the application. Report areas of the application that lack visual appeal or are difficult to navigate or use. We like to use the walking-and-chewing-gum analogy when it comes to mobile user interfaces. Mobile users frequently do not give the application their full attention. Instead, they walk or do something else while they use it. Applications should be as easy for the user as chewing gum.

Leveraging Third-Party Standards for Android Testing

Make a habit of adapting traditional software testing principles to mobile. Encourage quality assurance personnel to develop and share these practices within your company.

No certification programs are specifically designed for Android applications at this time; however, nothing is stopping the mobile marketplaces from developing them. Consider looking over the certification programs available on other mobile platforms, such as the extensive testing scripts and acceptance guidelines used by the Apple iPhone and BREW platforms, and adjusting them for your Android applications. Whether or not you plan to apply for a specific certification, attempting to conform to well-recognized quality guidelines can improve your application’s quality.

Handling Specialized Test Scenarios

In addition to functional testing, there are a few other specialized testing scenarios that any QA team should consider.

Testing Application Integration Points

It’s necessary to test how the application behaves with other parts of the Android operating system. For example:

  • Ensuring that interruptions from the operating system are handled properly (incoming messages, calls, and powering off)
  • Validating Content Provider data exposed by your application, including such uses as through a Live Folder (a minimal provider test sketch follows this list)
  • Validating functionality triggered in other applications via an Intent
  • Validating any known functionality triggered in your application via an Intent
  • Validating any secondary entry points to your application as defined in the AndroidManifest.xml, such as application shortcuts
  • Validating alternate forms of your application, such as App Widgets
  • Validating service-related features, if applicable
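
For the Content Provider item above, here is a minimal sketch using the ProviderTestCase2 class from the Android testing framework. The provider class, authority, content URI, and column name are hypothetical placeholders for the application under test.

    // Minimal Content Provider validation using ProviderTestCase2. MyContentProvider,
    // the authority, the content URI, and the "title" column are hypothetical placeholders.
    import android.database.Cursor;
    import android.net.Uri;
    import android.test.ProviderTestCase2;

    public class MyProviderTest extends ProviderTestCase2<MyContentProvider> {

        private static final Uri CONTENT_URI =
            Uri.parse("content://com.example.app.provider/items"); // hypothetical

        public MyProviderTest() {
            super(MyContentProvider.class, "com.example.app.provider");
        }

        public void testQueryReturnsExpectedColumns() {
            // The mock content resolver talks to an isolated instance of the provider,
            // so the test does not disturb real user data on the device.
            Cursor cursor = getMockContentResolver().query(CONTENT_URI, null, null, null, null);
            assertNotNull("Provider returned no cursor", cursor);
            assertTrue("Expected column is missing", cursor.getColumnIndex("title") >= 0);
            cursor.close();
        }
    }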

Testing Upgrades

When possible, perform upgrade tests of both the client and the server or service side of things. If upgrade support is planned, have development create a mock upgraded Android application so that QA can validate that data migration occurs properly, even if the upgraded application does nothing with the data.
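
A sketch of the kind of mock upgrade development might hand to QA follows, assuming the application stores data in a SQLite database. The database version is bumped and onUpgrade() migrates existing data in place rather than dropping it, so testers can install the old build, create data, install this build, and verify that the data survives. All class, table, and column names are hypothetical.

    // Hypothetical mock upgrade: version bumped from 1 to 2, existing rows migrated in place.
    import android.content.Context;
    import android.database.sqlite.SQLiteDatabase;
    import android.database.sqlite.SQLiteOpenHelper;

    public class NotesDatabaseHelper extends SQLiteOpenHelper {

        private static final String DATABASE_NAME = "notes.db";
        private static final int DATABASE_VERSION = 2; // bumped from 1 for the mock upgrade

        public NotesDatabaseHelper(Context context) {
            super(context, DATABASE_NAME, null, DATABASE_VERSION);
        }

        @Override
        public void onCreate(SQLiteDatabase db) {
            // Fresh installs get the latest schema directly.
            db.execSQL("CREATE TABLE notes (_id INTEGER PRIMARY KEY, body TEXT, created INTEGER)");
        }

        @Override
        public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
            if (oldVersion < 2) {
                // Migrate existing rows instead of dropping the table, so QA can verify
                // that user data created with the old build survives the upgrade.
                db.execSQL("ALTER TABLE notes ADD COLUMN created INTEGER DEFAULT 0");
            }
        }
    }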

Testing Product Internationalization

It’s a good idea to test internationalization support early in the development process, on both the client and the server or services. You’re likely to run into problems related to screen real estate as well as issues with strings, dates, times, and formatting.
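
The following small sketch (plain Java) illustrates why locale coverage matters: the same date and amount produce very different strings, and string lengths, across locales, which is exactly where screen real estate and formatting defects tend to appear. The locale list is just an example.

    // Prints the same date and amount formatted for several locales so testers can compare
    // the output (and its length) against what the UI actually displays.
    import java.text.DateFormat;
    import java.text.NumberFormat;
    import java.util.Date;
    import java.util.Locale;

    public class LocaleFormattingCheck {

        public static void main(String[] args) {
            Locale[] locales = { Locale.US, Locale.GERMANY, Locale.JAPAN, Locale.FRANCE };
            Date now = new Date();
            for (Locale locale : locales) {
                String date = DateFormat.getDateInstance(DateFormat.FULL, locale).format(now);
                String amount = NumberFormat.getCurrencyInstance(locale).format(1234567.89);
                System.out.println(locale + ": " + date + " | " + amount);
            }
        }
    }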

Testing for Conformance

Make sure to review any policies, agreements, and terms to which your application must conform, and make sure your application complies. For example, Android applications must conform to the Android Developer Agreement and, if applicable, the Google Maps terms of service.

Installation Testing

Generally speaking, installation of Android applications is straightforward; however, you need to test installations on devices with low resources and low memory, and test installation from the specific marketplaces when your application “goes live.” If the manifest install location allows external media, be sure to test scenarios in which that media is low on space or missing.

Backup Testing

Don’t forget to test features that are not readily apparent to the user, such as the backup and restore services and the sync features discussed in the chapter “Managing User Accounts and Synchronizing User Data.”

Performance Testing

Application performance matters in the mobile world. The Android SDK has support for calculating performance benchmarks within an application and for monitoring memory and resource usage. Testers should familiarize themselves with these utilities and use them often to help identify performance bottlenecks, dangerous memory leaks, and misused resources.
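
As one example, the sketch below wraps a suspected hot spot with the Android SDK’s method tracing and memory utilities; the resulting trace file can be inspected in Traceview, and the memory figure logged alongside it. The method being measured is a hypothetical placeholder.

    // Wraps a suspected hot spot with method tracing and logs a native heap figure.
    // The trace file is written to the device's external storage for analysis in Traceview.
    import android.os.Debug;
    import android.util.Log;

    public class PerformanceProbe {

        private static final String TAG = "PerformanceProbe";

        public static void measureExpensiveOperation() {
            Debug.startMethodTracing("expensive_operation"); // produces expensive_operation.trace
            performExpensiveOperation(); // hypothetical code path under measurement
            Debug.stopMethodTracing();

            Log.d(TAG, "Native heap allocated: " + Debug.getNativeHeapAllocatedSize() + " bytes");
        }

        private static void performExpensiveOperation() {
            // Placeholder for the code path being measured.
        }
    }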

Testing Application Billing

Billing is too important to leave to guesswork: Test it. You will notice a lot of test applications on the Android Market; remember to specify that your application is a test app.

Testing for the Unexpected

Regardless of the workflow you design, understand that users do random, unexpected things—on purpose and by accident. Some users are “button mashers,” whereas others forget to set the keypad lock before putting the phone in their pocket, resulting in a weird series of key presses. A phone call or text message inevitably comes in during the farthest, most remote edge cases. Your application must be robust enough to handle this. The Exerciser Monkey command-line tool is a good way to test for this type of event.
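
In addition to the Exerciser Monkey, one in-framework alternative is to fire a burst of random key events from an instrumentation test, as sketched below. The package name, activity class, and event count are hypothetical placeholders.

    // Hypothetical "button masher" instrumentation test: sends a burst of random letter
    // key events at the main activity and checks that it survives.
    import java.util.Random;

    import android.test.ActivityInstrumentationTestCase2;
    import android.view.KeyEvent;

    import com.example.app.MainActivity; // hypothetical activity under test

    public class ButtonMasherTest extends ActivityInstrumentationTestCase2<MainActivity> {

        public ButtonMasherTest() {
            super("com.example.app", MainActivity.class);
        }

        public void testRandomKeyPresses() {
            getActivity(); // make sure the activity is up and focused before sending events
            Random random = new Random();
            for (int i = 0; i < 250; i++) {
                // Restrict to letter keys so system keys (power, home) are not injected.
                int keyCode = KeyEvent.KEYCODE_A
                        + random.nextInt(KeyEvent.KEYCODE_Z - KeyEvent.KEYCODE_A + 1);
                getInstrumentation().sendKeyDownUpSync(keyCode);
            }
            // If the test reaches this point without a crash, the activity survived the burst.
            assertNotNull(getActivity());
        }
    }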

Testing to Increase Your Chances of Being a “Killer App”

Every mobile developer wants to develop a “killer app”—those applications that go viral, rocket to the top of the charts, and make millions a month. Most people think that if they just find the right idea, they’ll have a killer app on their hands. Developers are always scouring the top-ten lists, trying to figure out how to develop the next big thing. But let us tell you a little secret: If there’s one thing that all “killer apps” share, it’s a higher-than-average quality standard. No clunky, slow, obnoxious, or difficult-to-use application ever makes it to the big leagues. Testing and enforcing quality standards can mean the difference between a mediocre application and a killer app.

If you spend any time examining the mobile marketplace, you notice that a number of larger mobile development companies publish a variety of high-quality applications with a shared look and feel. These companies leverage consistent user interfaces and shared, above-average quality standards to build brand loyalty and increase market share, while hedging their bets that perhaps just one of their many applications will have that magical combination of a great idea and quality design. Other, smaller companies often have the great ideas but struggle with the quality aspects of mobile software development. The inevitable result is that the mobile marketplace is full of fantastic application ideas badly executed, with poor user interfaces and crippling defects.

Leveraging Android Tools for Android Application Testing

The Android SDK and developer community provide a number of useful tools and resources for application testing and quality assurance. You might want to leverage these tools during this phase of your development project:

  • The physical devices for testing and bug reproduction
  • The Android emulator for automated testing and testing of builds when devices are not available
  • The Android DDMS tool for debugging and interaction with the emulator or device, as well as taking screenshots
  • The ADB tool for logging, debugging, and shell access
  • The Exerciser Monkey command-line tool for stress testing of input (available via ADB shell)
  • The sqlite3 command-line tool for application database access (available via ADB shell)
  • The Hierarchy Viewer for user interface navigation and verification and for pixel-perfect screenshots of the device
  • The Eclipse development environment with the ADT and related logging and debugging tools for white box testing

It should be noted that although we have used Android tools such as the Android emulator and the DDMS debugging tool with Eclipse, these are stand-alone tools that quality assurance personnel can use without source code or a development environment.

Avoiding Silly Mistakes in Android Application Testing

Here are some of the frustrating and silly mistakes and pitfalls that Android testers should try to avoid:

  • Not testing the server or service components used by an application as thoroughly as the client side.
  • Not testing with the appropriate version of the Android SDK (device versus development build versions).
  • Not testing on the device and assuming the emulator is enough.
  • Not testing the live application using the same system that users use (billing, installation, and such). Buy your own app.
  • Neglecting to test all entry points to the application.
  • Neglecting to test in different coverage areas and network speeds.
  • Neglecting to test using battery power. Don’t always have the device plugged in.

Outsourcing Testing Responsibilities

Mobile quality assurance can be outsourced. Remember, though, that the success of outsourcing your QA responsibilities depends on the quality and detail of the documentation you can provide. Outsourcing makes it more difficult to form the close relationships between QA and developers that help ensure thorough and comprehensive testing.