5 Common API Testing Mistakes

APIs may not appear as end products to clients, but they play a vital role inside an enterprise’s products and workforce applications, and they often serve as valuable assets around which several top-level products can be built. As such, they deserve respect, and the best way to show that respect is to test them thoroughly. Unfortunately, through neglect or inexperience, organizations often make several common mistakes in this regard.

Lack of Developer Responsibility for Tested Code

Many software organizations realize that testing starts even before the first line of code is written. The wise ones include thorough, early testing within the scope of developers’ responsibilities. Otherwise, only sparse, happy-path unit tests are produced, leaving testers to fend for themselves and lengthening the defect report-and-repair cycle.

The most enlightened companies use agile methodologies that shift testing left, virtually joining developers and testers at the hip. This move ensures many bugs are caught while they are still cheap to fix. Simply put, when developers take on more testing responsibility, API quality rises, as illustrated by the sketch below.
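As a minimal sketch of what developer-owned testing can look like, the pytest example below exercises error paths as well as the happy path. The parse_limit helper and its documented 1–100 range are hypothetical, purely for illustration:

```python
import pytest

def parse_limit(raw: str) -> int:
    """Parse a ?limit= query parameter, enforcing a documented 1-100 range."""
    value = int(raw)  # raises ValueError on non-numeric input
    if not 1 <= value <= 100:
        raise ValueError(f"limit out of range: {value}")
    return value

def test_parse_limit_happy_path():
    assert parse_limit("25") == 25

@pytest.mark.parametrize("raw", ["0", "101", "abc"])
def test_parse_limit_rejects_bad_input(raw):
    # The negative cases a happy-path-only suite would leave for testers to find.
    with pytest.raises(ValueError):
        parse_limit(raw)
```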

Insufficient Code Instrumentation

There are testing tools for instrumenting API code, but it is often more cost-effective for developers to include basic instrumentation themselves. This practice significantly reduces the effort of locating defects during testing, and it also helps measure API coverage and pinpoint performance bottlenecks.

Such instrumentation may be as simple as inline assertions, progress indicators, API call echoes or offline logs. To avoid runtime overhead that could mask bugs or performance issues, compile-time flags can selectively strip the instrumentation from production builds, as sketched below.
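A minimal sketch of this idea in Python, where assert statements are stripped under the -O flag and an environment variable stands in for a build-time switch; the traced decorator and get_user function are hypothetical:

```python
import functools
import logging
import os
import time

API_TRACE = os.getenv("API_TRACE") == "1"   # stand-in for a compile-time flag
log = logging.getLogger("api.trace")

def traced(fn):
    """Echo API calls and timings only when tracing is enabled."""
    if not API_TRACE:
        return fn                            # zero overhead when disabled
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        log.info("call %s args=%r kwargs=%r", fn.__name__, args, kwargs)
        result = fn(*args, **kwargs)
        log.info("done %s in %.3f ms", fn.__name__,
                 (time.perf_counter() - start) * 1000)
        return result
    return wrapper

@traced
def get_user(user_id: int) -> dict:
    assert user_id > 0, "user_id must be positive"   # stripped under `python -O`
    return {"id": user_id}                           # placeholder implementation
```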

Not Testing API Coverage and Usage

The collection of endpoints within an API often encompasses a complex hierarchy of relationships that is difficult to get right. There may be duplicated functionality; awkward, error-prone sequences of related calls; overly fine call granularity; or too many paths to accomplish a single high-level task.

Pushing an ill-designed API all the way through to production not only ignores the cognitive load it places on testers but also risks frequent design changes that create headaches for its consumers. Such pitfalls can be avoided by tracking and analyzing API call patterns, which may reveal the contortions applications must perform to use the interface, as in the sketch below.
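One possible way to gather such call patterns, assuming a WSGI-based service (the class name and reporting format here are illustrative), is a small middleware that tallies (method, path) pairs for later analysis:

```python
from collections import Counter

class CallPatternMiddleware:
    """WSGI middleware that tallies (method, path) pairs for later analysis."""

    def __init__(self, app):
        self.app = app
        self.calls = Counter()

    def __call__(self, environ, start_response):
        key = (environ["REQUEST_METHOD"], environ["PATH_INFO"])
        self.calls[key] += 1
        return self.app(environ, start_response)

    def report(self, top: int = 10):
        # e.g. [(("GET", "/users/1/orders"), 5321), ...]
        return self.calls.most_common(top)
```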

Using the GUI to Test the API

Testing APIs via an application GUI presents many problems:

  • First of all, this approach is the least efficient way to test APIs. Because no GUI test-automation tool can completely solve the problem of reliably locating GUI elements from test scripts, script maintenance costs are high.
  • GUI testing also adds another layer of test indirection, which can mask API errors or create false leads suggesting that a defect lies in the API when it is actually in the GUI.
  • Additionally, the range of data inputs and outputs available when exercising the API directly is severely restricted when a GUI is used as the API test harness.

For all these reasons, it is more effective to test the API directly with programmatic scripts, which can be written in a language other than the API’s own, one that matches the testers’ skills, as in the sketch below.
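As an illustration, a parameterized test that calls the API directly can cover a range of inputs that a GUI form would never let a tester submit. The /orders endpoint, payload fields and expected status codes below are hypothetical:

```python
import pytest
import requests

BASE_URL = "https://api.example.com"        # hypothetical service under test

@pytest.mark.parametrize("quantity, expected_status", [
    (1, 201),        # smallest valid order
    (10_000, 201),   # large but still valid
    (0, 400),        # boundary value: rejected
    (-5, 400),       # an input a GUI form would likely refuse to send at all
])
def test_create_order_quantity_range(quantity, expected_status):
    resp = requests.post(f"{BASE_URL}/orders",
                         json={"sku": "ABC-1", "quantity": quantity},
                         timeout=5)
    assert resp.status_code == expected_status
```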

Overlooking the Effects of Environmental Failures

Since APIs are increasingly important layers affecting an application’s responsiveness and reliability, performance, load and stress testing must be part of their test plan, and such testing should be done directly whenever possible. However, even with increased testing rigour, many organizations overlook testing APIs in the context of fundamental environmental failures in the infrastructure on which the API runs.

Even relatively trivial applications depend on platform capabilities, networking resources, back-end databases and so on. If the API is never tested in situations where these components are inadequate, slow or missing, then ways of compensating for such failures will never be implemented. Unfortunately, API developers cannot count on end users drawing the correct conclusion about where the blame lies when the API or application fails.
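A minimal sketch of such a test, assuming a hypothetical get_profile call to an internal downstream service, simulates a dependency timeout and checks that the API degrades gracefully rather than hanging or crashing:

```python
from unittest import mock

import requests

def get_profile(user_id: int) -> dict:
    """Fetch a user profile from a downstream service, degrading on failure."""
    try:
        resp = requests.get(f"https://users.internal/{user_id}", timeout=2)
        resp.raise_for_status()
        return resp.json()
    except requests.exceptions.RequestException:
        return {"id": user_id, "profile": None, "degraded": True}   # fallback

def test_get_profile_survives_backend_timeout():
    # Simulate the back-end being unreachable or timing out.
    with mock.patch("requests.get", side_effect=requests.exceptions.Timeout):
        result = get_profile(42)
    assert result["degraded"] is True
```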

APIs are at the core of most software applications and services. In fact, one API is often used in multiple products. Getting their functionality, performance and maintenance requirements right is a significant challenge, but such efforts are vital to the creation of well-performing and responsive apps and services.

Because APIs are important yet indirect, it is easy for software developers, testers and operations staff to downplay thorough, direct testing, which leads to mistakes of omission that cost companies dearly in development, testing and maintenance, or even in lost customers. Avoiding even the most common pitfalls in API testing is therefore crucial.