What to Look for in an Outsourcing Partner

Bad experiences with outsourcing providers are often traceable to badly designed selection processes or the use of deficient selection criteria. While you can find ample guidance online on how to build a robust selection framework, here we share the most important vendor attributes for comparing and contrasting outsourcing companies so you can select the one that is right for you.

Sizing a Vendor to Your Project

When matching the capabilities of an IT provider to your project, size matters. If your organization can fund nine-figure deals, only a handful of companies can field such a deal, whereas for smaller projects you have many more choices.

The key is to find a provider of a size that will consider your deal to be a big deal. This significantly increases the odds that your project receives the attention it deserves, with their most talented staff assigned to it. Right-sizing also usually brings meaningful accommodation in contract terms and attentive treatment from the executive staff.

The risk of choosing an outsourcing company that is too small, however, is that they may not have a sufficient level of technical capabilities, skilled staff, certifications or experience to deliver what you hope to accomplish.

Local Presence with Global Delivery

If your company is based in North America, then choose an outsourcing vendor headquartered there. They will better understand your industry, business model, goals and processes, since you are working within a similar cultural context.

Your company also benefits from local contractual protections should your project hit a serious speed bump. There is also a good chance the vendor can place staff onsite, which improves communication and the timely escalation of critical issues.

However, vendors that also deliver from offshore will save you money. Furthermore, vendors with a global presence can interact directly with your own global sites and can add shifts in other time zones that work collaboratively with your local staff, giving you around-the-clock development.

Consistency in Quality and Delivery

Until recently, the majority of IT outsourcing firms sold themselves mainly on cost and based contracts on hourly rates. These days, more companies compete on their ability to produce results. Those results should include both timely delivery and measurably high-quality products or services.

During your due diligence, evaluate the vendor’s past work and pursue references to gauge how well the vendor has delivered on their promises. Have a detailed discussion with their senior staff about how their corporate culture reinforces the importance of on-time delivery and high quality throughout the ranks.

When you are convinced they will deliver what they say they will, it is still prudent to start the relationship with one or more smaller projects of a few months in duration to validate their work and timeliness for yourself.

Communication Capabilities

Well-planned, thorough and frequent communication is critical when using an IT outsourcing vendor. This goes double if the company you select has offshore resources, since both time and language may present communication barriers.

How much, when and how each of you communicates with the other should be driven by the client. Both sides must identify primary contacts for specific areas, and each primary must have a designated backup for when they are unavailable. Daily meetings with program and development managers are not unreasonable, nor are weekly meetings with BDMs or department managers. To gauge frequency, ask yourself how much time you can afford to lose should a process go astray.

Vendors uncomfortable with your communication plan should raise a red flag with you, since this is such an essential element in your business relationship.

Their Range of Skill Sets

Except for the largest IT organizations, most companies do not have all the personnel with all the right skill sets for every project. When evaluating vendors’ technical and process capabilities, strike a balance between broad and deep skills that align with your business and project needs.

If you hope for the vendor to work on more than one type of project, or you wish to establish a long-term relationship, then one with a broader range of skills may work out better in the long run. A possible drawback is that a project may come along that mismatches the vendor’s skills, and quality suffers.

Many enterprises today recognize that one size does not fit all, especially when working with small to mid-size IT outsourcing companies, so they choose to multi-source these services. This can complicate internal management of vendors, but often the point solutions that smaller vendors provide are of higher quality, with faster delivery and at the same or lesser cost.


Proper selection of an IT outsourcing vendor will significantly augment your company’s strategy and operations. Lack of due diligence, however, often leads to negative consequences, including lost time and money.

Use the selection criteria above along with a robust process comparing business requirements against each company’s pros and cons. This will lead to asking the right questions and building a seamless working relationship with a talented development provider.

The Optimus Information model is designed to allocate the right mix of local and offshore resources in order to optimize expertise, speed and cost. We provide the ability to quickly add specialty skills to a development team without incurring long-term costs. Our successful track record speaks for itself, and we love to share past work we’ve done. Our global team is made up of a diverse range of experienced professionals, allowing us to work on complex solutions requiring a wide variety of expertise. The result for our customers is the capability to far better manage resource capacities and outcomes.

Contact us for your next IT project. We’re always happy to help.


Top Ten Software Development Outsourcing Trends for 2016

Originally, the primary motivation to outsource software development was to achieve lower labor costs, but continuing and emerging business and technology trends in 2016 are leading to new client requirements on outsourcers. When choosing an outsourcing partner, more and more businesses are looking for closer alignment with their business goals, flexibility demands and quality requirements.

Thus, clients are evaluating outsourcing companies via increasingly sophisticated criteria. The smartest software providers are reciprocating by developing new service models while taking advantage of many of the same technologies driving these current trends.

1) Moving from Hours to Results

In order to ensure that enterprises are getting what they need for their money, most are now seeking out providers who operate on a results-driven model versus rates based on time. Furthermore, clients are demanding that payment schedules be based on satisfactory achievement of those results versus upfront fees or retainers.

2) Greater Flexibility

Clients are looking for providers who offer on-demand services without locking them into long-term contracts or volume commitments. This enables client companies to respond more efficiently to rapidly changing market demands. In response, development providers who are moving operations to cloud resources are the ones most likely to adapt to the increased demand for flexibility.

3) Utilization of DevOps Practices Continues Apace

DevOps continues to attract adherents as it goes mainstream in up to 25 percent of companies this year, according to Gartner. Most of the IT departments in these organizations are transitioning to a service center model. Service providers who already operate in this manner will more easily blend into these organizations’ processes and decision-making apparatus.

4) Security Risk Perception Increases

A key concern within any outsourcing strategy is security. With the growing presence of the Internet of Things and the potential for an exponentially larger attack surface, software development outsourcing companies must ensure that their own security vulnerabilities are addressed in a manner that will win the confidence of client decision makers. Demonstrating a solid track record and established policies is of high importance when selecting a vendor.

5) Managing Infrastructure as Code

Amazon’s AWS has enabled the application of software development change management practices to development and deployment infrastructure. AWS is dedicated to making this paradigm ever easier with new APIs and services. Outsourcers who adopt this practice are reaping large benefits in software support, testing and deployment efficiency by syncing servers, storage and networking infrastructure to precise versions of the source code.
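As a sketch of the principle rather than any real AWS API, the snippet below treats an infrastructure definition as plain data that lives in version control and is pinned to the exact source revision it was tested against. The template shape and the `pin_infrastructure` helper are illustrative assumptions:

```python
# Sketch: version infrastructure definitions alongside application code.
# The template structure and helper name are illustrative, not an AWS API.

def pin_infrastructure(template: dict, commit: str) -> dict:
    """Return a copy of an infrastructure template tagged with the exact
    source revision it was tested and deployed against."""
    pinned = dict(template)
    pinned["tags"] = {**template.get("tags", {}), "source_commit": commit}
    return pinned

web_tier = {
    "servers": {"type": "t3.medium", "count": 4},
    "storage": {"volume_gb": 100},
    "network": {"subnets": ["a", "b"]},
}

release = pin_infrastructure(web_tier, commit="9f3c2ab")
print(release["tags"]["source_commit"])  # every deployment is traceable to a commit
```

Because the definition is just data under source control, rolling infrastructure back to the version that matches an older build becomes a checkout rather than a manual reconstruction.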

6) Multi-Sourcing Technologies Impacts Integration

Client companies are utilizing a more complex mix of software products and services this year. This multi-sourcing of technologies presents in-house management challenges and has given rise to new vendor management offices (VMOs). The challenge for software providers is meeting new performance and integration standards from VMOs. Compliance failure may result in the outsourcer being dropped in the interests of streamlining operations.

7) Business Process Outsourcing Being Replaced by Robotic Process Automation

The software outsourcing industry in 2016 will continue to feel the influence of the rise of RPA. In fact, one of RPA’s touted benefits is the reduction of outsourcing, especially via cloud-based RPA services. Those outsourcers who can adapt by offering relevant automated services in the most responsive, scalable and efficient manner are the ones who can survive and profit from this trend.

8) Outsourcing Selection is Speeding Up

Along with the adoption of agile methodologies within software development, business decisions are also being made with more agility and higher velocity. Outsourcers will increasingly recognize this trend as more clients endeavor to close smaller deals faster in order to stay ahead of their competition.

9) Adept Companies Are Being More Selective with What They Outsource

Many organizations who originally turned to outsourcing to compensate for a lack of internal expertise and resources have grown more sophisticated over time. They are progressively learning to be more selective regarding what to do in-house versus handing off to an outsourcing provider. Organizations are looking deeper into what their core competencies are and what they can outsource to make themselves more efficient in-house. Their motivations are usually the desires for greater flexibility, responsiveness or cost reductions, all of which software providers need to be sensitive to in contract negotiations.

10) Outsourcing Company Accommodation Increasing

It is no longer the case that companies seek out only the lowest cost provider. Sophisticated outsourcing companies will respond tactically and strategically to all the trends discussed here to grow or to survive. This trend can be seen in the greater tendency for outsourcers to adapt and adjust terms or offer new services in an effort to deliver the best product and service.


The outsourcing industry is more fluid than ever this year with clients focusing less on price per se and more on results, quality, integration, security and agility from software development providers. As you adapt to your own fast-moving markets and the rise of paradigm-shaking technologies such as IoT and on-demand infrastructure, so do we. Optimus stays two steps ahead in order to support your business in all your software and IT requirements.

At Optimus, we consistently stay on top of these trends while leveraging the forces driving them to bring you the solutions you need. Contact us to help with your next development, testing, cloud, BI or mobile project.


Using Context Driven Testing

The premise behind context-driven testing is that software should be viewed first and foremost as providing solutions to real problems. In order to effectively test software, the problems it solves, the end-user needs and usage contexts must be taken into consideration when creating a test plan and applying tests. It recognizes that test processes must adapt to the testing environment itself, which includes the methodologies, skills and resources available within a particular test organization.

Testing if the Problem Is Solved

Functional testing ensures that each feature works as stated in the requirements. However, that only partially addresses how successfully the features taken together provide a complete answer to customer needs. Context-driven testing also takes those needs into account when designing test cases.

For example, say a doctor’s office acquires an application that assists the doctor in making a correct diagnosis. It may have a high accuracy rate and every button and widget in the application works perfectly. If it offers a diagnosis in a non-transparent way, however, the doctor may lose confidence in its diagnostic abilities, which leads to disuse. If the app fails to interface with other office programs, its usefulness is also diminished.

Furthermore, pre-release testing must be extremely rigorous in order to avoid the possibility of a lawsuit for an incorrect diagnosis. These are usage, environmental and perception contexts that must be considered thoroughly when designing the test plan.

Context-Driven Testing Principles

Beyond the factors considered above with regard to software use and its operating environment, “true believers” of the context-driven school of thought consider other, more subjective factors as essential characteristics of context-driven testing. This is evident in the commonly accepted principles of context-driven testing paraphrased here:

  • The end product must solve the intended problem in the intended context. If it does not, then it does not work.
  • The value of any “best” practice depends on the context in which it is applied. The phrase “best practice” should never be considered an absolute statement.
• The application of judgment and skill in a cooperative manner is essential to product testing success.
  • The people cooperating in the testing activity are the most important aspect of a project’s context. This especially applies to the project’s stakeholders.
  • Projects naturally evolve over time in unpredictable ways and thus, flexibility is critical.
  • Software testing, when performed correctly, is a challenging intellectual process.

Several of these principles do not apply directly to the context of software in the field, but to the context of the testing environment including the methodologies, standards, so-called best practices, abilities of the testers, test tool capabilities and stakeholder expectations with regard to what, when and how to test the software under development.

The Practices of Context-Driven Testing

To follow context-driven testing precepts, apply a holistic approach that wraps up testing constraints and goals into processes and practices most appropriate for the problem being solved by the software within its testing context.

Flexibility is a key attribute of context-driven testing, which leads many to believe that agile methodologies and context-driven testing significantly overlap in how they operate and get results. This is true to some extent.

Regardless of the methodologies used by your organization, however, context-driven approaches hold value. In practice, they lead to several useful practices that enhance any testing process:

  • Sharing information and asking lots of questions significantly improves consensus on what precisely is the context of the project from various viewpoints. It increases learning and transparency as well.
  • The test plan should be reviewed by as many stakeholders as possible supplemented by experts in the application or testing domain who are not directly tied to the project. Obviously, this should be done as early as possible in order to minimize the impact of changes.
  • The test plan should be considered adjustable as the project moves ahead because knowledge of the entire project/product context is unlikely to be known a priori regardless of how many questions were asked.
  • Carefully consider the appropriateness of any test tool, testing practice or paradigm, especially if it is being used out of convenience versus applicability to the test problems at hand.
  • Defer the decision of when testing is complete to the project stakeholders. This increases their attention to the project and allows the testers to concentrate on their job without making judgments about product readiness.


In the end, context-driven testing boils down to doing the best job possible for product stakeholders, especially end-users, with the resources you have available. By definition, it is an approach that highly values problem solving, collaboration, intelligent testing and the most efficient use of testers’ skills.

Test Automation in Agile

Although both agile development and automated testing have more or less independently come into more widespread use with each passing year, it seems they were made for each other from the start. The advantages of both are derived from the same desire to increase software production efficiency overall.

Both work best when there is a high degree of collaboration between development and testing although they offer significant benefits to all project stakeholders. You often see a high usage of each approach wherever organizations are growing their continuous integration and continuous delivery capabilities.

Test Automation in Agile versus Traditional Waterfall

There certainly is no reason that teams employing waterfall methods cannot take advantage of test automation too. Typically, however, automation tends to be of lower priority in those scenarios due to the ingrained “throw it over the wall” mentality. Such a back-end loaded scenario is unfortunate because automation, especially test-driven coding, can be used quite effectively during development phases.

In agile, however, the questionable “luxury” of developers shedding testing responsibility is simply anathema to a process of two-week sprints in which planning, designing, coding, testing and deployment all happen nearly at once. Without thorough development and testing automation, the productivity and quality of agile cycles would consistently fail to meet project goals.

When Automating Tests, Choose Wisely

Despite the brevity of agile development cycles, there can be a tendency to over-automate testing. This is where careful planning and in-cycle flexibility are both necessary. Tests will naturally be categorized by type, whether functional, white box, black box, performance and so on. Within categories, however, individual test tasks need to be sorted by how likely they are to be repeated in order to optimize automation.

Any test that will be repeated more than twice, especially if it appears it will carry over to the next scrum, is a prime candidate for automation. That pretty much eliminates automation of exploratory testing and perhaps some boundary testing if the boundaries are still in flux and are not easily parameterized. On the other hand, even one-off tests might be amenable to automation in the sense that they are ongoing placeholders within an automation framework test suite for future software revisions.
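That rule of thumb can be captured in a small helper; the function name and inputs below are illustrative, not part of any framework:

```python
def should_automate(expected_runs: int, carries_over: bool, exploratory: bool) -> bool:
    """Apply the rule of thumb from above: automate a test that will run
    more than twice or will carry over to the next sprint, but never
    automate exploratory testing."""
    if exploratory:
        return False
    return expected_runs > 2 or carries_over

# A regression check run every build is a prime candidate...
print(should_automate(expected_runs=20, carries_over=True, exploratory=False))  # True
# ...while a one-off exploratory session is not.
print(should_automate(expected_runs=1, carries_over=False, exploratory=True))   # False
```

Even a trivial gate like this, applied consistently during sprint planning, keeps the automation suite focused on tests that repay their maintenance cost.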

Avoid Automating for Automation’s Sake

With more sophisticated test automation tools and frameworks, it is uncannily easy to become lost in the forest without seeing the trees. This is especially true of homegrown test frameworks, where the framework development effort might rival that of the applications it is meant to test. It pays at all times to bear in mind that meaningful, reusable tests must be the top priority.

An additional potential trap is not paying enough attention to the ebb and flow of value within your test suites. They naturally become cluttered with marginally valuable test scripts that may have lost relevance and eat up undue execution time. That is why it is important to set clear criteria for what does and does not get automated and to review the test code base regularly.

Staying One Step Ahead

Due to the rapid pace of agile development, anything to gain a leg up will pay dividends. Crisp, detailed planning is one key to getting ahead of the game, but in addition you should implement testing and its automation as early in the development cycle as possible.

The use of pair programming with one developer and one tester simultaneously working on the same code is an innovative practice in this regard. It is especially effective when combined with test-driven development in which the tester first writes the unit tests based on the specification and the developer follows with application code that turns each test “green.”

Actually, the underlying concept of first creating acceptance criteria as test code and then programming the software to make that test succeed can be applied at most points throughout the development cycle. These “live” requirements should be captured in an automated test environment that can be accessed and run by all stakeholders in the project to further increase collaboration.
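A minimal sketch of the pattern, assuming a hypothetical discount rule as the acceptance criterion: the tester writes the unit tests first, and the developer writes just enough code to turn them green:

```python
import unittest

# Hypothetical acceptance criterion, captured by the tester as test code
# before implementation: orders of $100 or more receive a 10% discount.

def apply_discount(total: float) -> float:
    """Developer's implementation, written after the tests below existed."""
    return total * 0.9 if total >= 100 else total

class TestDiscountRule(unittest.TestCase):
    def test_large_order_discounted(self):
        self.assertAlmostEqual(apply_discount(200.0), 180.0)

    def test_small_order_unchanged(self):
        self.assertAlmostEqual(apply_discount(50.0), 50.0)

# Run under a test runner, e.g.: python -m unittest this_module
```

Because the tests exist before the code, they double as executable requirements that any stakeholder can run against the current build.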


Test automation is a key tool in agile development project success. As development organizations naturally evolve their entire production team from agile development to continuous integration and continuous deployment, the use of build and test automation becomes ever more critical.

Test automation in an agile development environment, however, has to be finely tuned to a greater degree than when employed in waterfall environments. Precise planning, a shift-left of testing responsibilities, keen maintenance policies and a sharp weather eye for when and when not to use automation are the most important elements for extracting the highest value from test automation.

5 Ways to Improve Mobile App Testing Quality and Efficiency

Mobile device application development faces significant challenges, which carry through to testing. Though the obstacles appear daunting, there are ways to mitigate these complications, improve your team’s testing effectiveness and raise app quality.

Mobile App Challenges

  • Mobile device fragmentation is rising. There are myriad hardware platforms, OS versions and network connection options across devices. Trying to maximize market coverage for a single app requires spanning this ever-increasing matrix of combinations, with a concomitant increase in testing complexity.
  • Testing budgets are not expanding, which means doing more with less, which in turn leads to deciding between in-house or outsourced testing. Outsourcing infrastructure to the cloud often reduces capital and maintenance expenditures, but outsourcing test personnel and processes is a riskier proposition that is usually hard to unwind later. Outsourcing pressure is also increasing because new expertise and sophistication is required by leading edge test frameworks and tools.
  • Adding insult to injury, mobile app development requires ever faster development/release cycles, due to heightened competition and end-users’ progressively shorter attention spans. Users increasingly expect fixes and improvements to be available in near-real time. That means testing has to constantly play catch-up so as not to become the bottleneck in the next upgrade.
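To see how quickly the fragmentation matrix grows, consider a deliberately tiny, illustrative set of axes with just three values each:

```python
from itertools import product

# Illustrative fragments of a real device matrix; actual axes are far larger.
devices = ["Pixel", "Galaxy S6", "iPhone 6"]
os_versions = ["current", "current-1", "current-2"]
networks = ["wifi", "4g", "3g"]

matrix = list(product(devices, os_versions, networks))
print(len(matrix))  # 27 combinations from only three values per axis
```

With realistic numbers of devices, OS versions and network conditions, the combinations quickly reach the hundreds, which is why full coverage on real in-house hardware is rarely practical.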

Dealing Effectively with Modern Mobile App Testing

Before deciding you might be in the wrong line of work in the face of these looming challenges, consider five approaches that will help your organization operate more efficiently while improving the quality of released apps.

1. Embrace Agile Development

This step is global to the organization. Fundamentally, it means involving all stakeholders in the project, from business managers to architects to production personnel. The delta between what you do today and how far you want to take agile development determines the size of the steps you take, but every step gets you closer to a more successful testing organization. Agile methodologies are proven to improve adaptability to change, customer and stakeholder engagement, the usefulness of documentation and the quality of deliverables.

2. Value Your Automation Properly

Are you accurately measuring the value you derive from test automation? Often, more automation simply bloats maintenance tasks and increases test run times in the absence of ongoing evaluation of the tests contained within a framework. For instance, effort is commonly wasted automating tests that are better done manually, such as exploratory testing. The correct approach, customizable to your particular environment, is to measure your tests along a value spectrum. Tests such as build smoke tests are relatively easy to automate and provide high value. On the other end of the spectrum might be compatibility tests, which are necessary but probably should be lower priority.
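One simple, illustrative way to put tests on a value spectrum is to score each candidate’s value against its automation and maintenance cost, then rank by the ratio. The tests and scores below are assumptions made for the sake of the sketch:

```python
# Sketch: rank candidate automated tests by value relative to cost.
# The candidates and their scores are illustrative assumptions.
candidates = [
    {"name": "build smoke test",     "value": 9, "cost": 2},
    {"name": "login regression",     "value": 7, "cost": 3},
    {"name": "compatibility matrix", "value": 5, "cost": 8},
    {"name": "exploratory session",  "value": 8, "cost": 10},  # better done manually
]

ranked = sorted(candidates, key=lambda t: t["value"] / t["cost"], reverse=True)
for t in ranked:
    print(f'{t["name"]}: {t["value"] / t["cost"]:.2f}')
```

Even a crude ratio like this makes the conversation concrete: the smoke test earns its automation many times over, while the compatibility matrix, though necessary, sits at the low-priority end.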

3. Virtualize Your Testing Resources

Virtualized services and virtualized platforms are both relatively inexpensive in terms of infrastructure, setup and ongoing cost. They can be used on both the client and server sides of mobile applications. You will use less costly hardware and gain the ability to scale almost linearly, which is especially valuable for performance, load and stress testing.

4. Improve Your Ability to Test on Both Emulated and Real Devices

Testing a multi-platform mobile app on real devices is probably a non-starter for in-house deployment, depending on the complexity of the instance matrix combining devices, platform capabilities, OS versions and network connectivity options. On the other hand, you cannot achieve a sufficient level of quality confidence with emulators alone, which should, incidentally, be deployed as early as possible in the development cycle.

The increasingly obvious answer to this situation is to employ cloud testing services that provide real-time access to a wide collection of new, sometimes unreleased, mobile devices and network operators along with built-in test and collaboration frameworks.

5. Gain Deeper Insights into the End User Experience

Simply reviewing comments and ratings in app stores is not enough anymore to evaluate how apps are performing in the field. Consider compiling in one of today’s sophisticated crash/analytics SDKs, which have vanishingly small run-time overhead. These provide real-time insights into how users are interacting with your app plus detailed reports on crashes that pinpoint problems immediately, especially if used in combination with a hotfix insertion tool.


Testing mobile apps gets harder every day, although new tools and techniques are always coming along to mitigate the difficulties. Shortening development/test turnaround times, optimizing test automation capabilities, employing test resource virtualization and using cloud-based device testing services are effective methods for maintaining the pace. Above all, keep your app’s user experience squarely in your sights and utilize new tools for evaluating user behaviors and problems.

How to Select a Test Automation Language

There are times when the choice of test automation language is essentially made for you, such as when you must rely on a single test developer whose programming proficiency is limited to one or two languages. If you can make space for an evaluation, however, it often pays to consider several options.

Factors Affecting Your Choices

Besides current expertise in a particular language among your team members, there are several other aspects to consider when choosing the language or languages on which to base your test automation going forward:

  • Assess how much you already have invested in whatever language you now use. This investment might involve training, IDE licensing, existing code and possibly support systems linked to the running of test scripts or programs. However, holding onto an investment that is not performing well is an unwise long-term approach.
  • Moving to a new language and hoping to leverage existing test code might be unrealistic. It will, at the least, require a porting and re-testing effort, which will be hampered by developers who are unfamiliar with the previous language and will push to re-write tests from scratch.
  • Adopting a new language means new training costs. If the organization is planning to expand, then choosing a language that is not widely used may limit your recruiting choices.
  • Although it is not a requirement, choosing the same language as the software being tested might pay dividends, especially if there is a well-developed collaboration between developer and test teams already.

The Case for a Technological Decision

As you can see, some of the factors above are clearly non-technical and deal mainly with managerial issues, such as existing language expertise within the test team. Often, managerial issues trump the technical issues with respect to language choice. However, technical advantages should not be overly discounted as these often have a significant impact on the long-term efficacy of the test organization.

In general, there is a clear difference in programming capabilities between scripting languages, such as Perl, Python and Ruby, and compiled OO languages such as Java, C++ or C#. The latter can do everything the scripting languages can, plus offer compile-time type checking and generally stronger support for synchronization and multithreading. It is also typically easier to interface to other libraries, data sources or cooperating applications using a compiled language.

If your enterprise is or will be engaged in large, complex software projects, then these more capable languages are likely worth the additional effort and cost to utilize.

Whichever language you choose, be sure to also evaluate the editing and debugging tools that come along with it. Typically, you have a choice of IDEs and choosing the right one can make a meaningful difference in the time it takes to create, test and rectify errant test scripts or programs.

Requirements of Your Test Automation Framework

Of course, the final decision regarding testing language may be highly restricted by your choice of test automation tools, since some of them only work with specific languages. Hopefully, your automation requirements analysis led you to choose a framework that supports most, if not all, of the development tools currently under your roof.

You may find that choosing the right test automation framework more or less obviates the need to choose any particular test code language. For instance, some tools use visual test editors that require almost no programming experience.

Other frameworks use scripts but automate their creation by using keyword tests that simulate user actions in a portable way that does not require re-creating a GUI-driven test when the underlying code changes. Still other automated test tools employ record-playback to create and run tests. Overall, however, these types of frameworks are severely limited relative to what can be accomplished with scripts developed in a programming language.


We have covered many of the important issues relative to choosing an optimal testing language for your enterprise. There will always be a mix of technical and managerial aspects feeding the final analysis. You should try to balance the pros and cons, but realize that there may not be a perfectly optimal solution. The most important thing is often to drive the process to a clear and timely decision rather than attempt to satisfy all parties and constraints.

Performance Testing Fallacies

When testing software performance, there are several erroneous assumptions commonly made about when and how to go about it, what is to be measured, and how to make improvements based on the results of such testing.

Performance, Load and Stress Testing Are Not Equivalent

Thinking that load and stress testing are the same as performance testing is a common fallacy, especially among developers. Not understanding the distinctions leads to inefficient approaches to measuring a software system’s responsiveness, infrastructure needs and fragility.

  • Performance testing evaluates software runtime efficiency under moderate loads and should be performed as early as possible in the development cycle.
  • Load testing takes place after performance testing and tuning are complete. It simulates real-world conditions to measure the software’s endurance and its response to high volumes of I/O or concurrent users. It is most effectively performed using test automation tools.
  • Stress testing comes last. It applies extreme pressure in order to find the software’s breaking points and measure its ability to degrade gracefully and preserve data under crash conditions.
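Performance testing in this sense centers on measuring responsiveness, and tail latencies usually reveal more than averages do. A minimal sketch of a percentile helper (the class name and sample values are our own illustration):

```java
import java.util.Arrays;

public class LatencyStats {
    // Return the p-th percentile (0-100) of latency samples, in milliseconds,
    // using the nearest-rank method
    static double percentile(double[] samples, double p) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    public static void main(String[] args) {
        double[] latencies = {12.0, 15.0, 11.0, 14.0, 250.0, 13.0, 12.5, 13.5, 14.5, 12.2};
        // The median looks healthy, but the 95th percentile exposes the
        // outlier that a mean or median would hide
        System.out.println("p50: " + percentile(latencies, 50)); // 13.0
        System.out.println("p95: " + percentile(latencies, 95)); // 250.0
    }
}
```

A load test would then track how these percentiles drift as the volume of concurrent users rises.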

The Fallacy That Performance Testing Comes Later

The best time to apply true performance testing is during development. It should begin as soon as unit/regression tests are in place and full functionality of the software is nearing completion.

It should be performed by the developers with close guidance from testers. At this stage you still have the developers’ full mind share, so any rebalancing or tuning required as a result of performance testing can be accomplished quickly and efficiently. Throwing the task of performance testing over a virtual wall to the test team creates an unnecessary and expensive disconnect that increases the defect detect-repair cycle time.

The One-For-All Testing Fallacy

Performance tests are expensive when they require testing a broad swath of functionality, many alternative flows and a full end-to-end infrastructure including client devices, servers and databases. Because of this, there is a temptation to limit the number of these tests and hope that a subset is able to shake out all the problems.

The better approach is to create a set of tests that exercise as much of the system as possible and prioritize these in terms of the cost or risk of defects in the parts of the system under test. Secondarily, take into account the cost of the tests themselves.

Then, create the highest priority tests, run them and find solutions to defects as you go along. In this manner, you are more likely to uncover the most serious defects that, once fixed, are likely to improve software performance in subsequent tests.
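The prioritization described above amounts to a simple ordering rule: rank candidate tests by the defect risk of what they cover, then break ties by the cost of the test itself. A sketch of that rule (class, field and test names are illustrative assumptions):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class TestPrioritizer {
    static class PerfTest {
        final String name;
        final int defectRisk; // cost/risk of defects in the area under test
        final int testCost;   // cost of building and running the test itself
        PerfTest(String name, int defectRisk, int testCost) {
            this.name = name; this.defectRisk = defectRisk; this.testCost = testCost;
        }
    }

    // Highest defect risk first; among equal risks, cheapest test first
    static List<PerfTest> prioritize(List<PerfTest> tests) {
        List<PerfTest> ordered = new ArrayList<>(tests);
        ordered.sort(Comparator.<PerfTest>comparingInt(t -> t.defectRisk).reversed()
                .thenComparingInt(t -> t.testCost));
        return ordered;
    }

    public static void main(String[] args) {
        List<PerfTest> ordered = prioritize(Arrays.asList(
                new PerfTest("report-generation", 3, 2),
                new PerfTest("checkout-flow", 9, 5),
                new PerfTest("login", 9, 1)));
        // login runs first: same risk as checkout-flow but cheaper to test
        for (PerfTest t : ordered) {
            System.out.println(t.name);
        }
    }
}
```

Running the highest-ranked tests first front-loads the defects whose fixes are most likely to improve later test runs.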

Assuming Your Test Environment Is “Good Enough”

The larger the delta between your in-house performance testing environment and the actual production or deployment environment, the more likely you will fail to uncover showstopper defects. The extreme cases of this fallacy are using a different OS or OS version than that used where the software is deployed.

There are many other differences possible between the test and production environments including the type or version of hardware or database in use. The exact provisioning and configuration of the runtime systems including resident applications and library versions must also be accounted for. The most advanced testing environments keep all hardware and software elements under a single version control system.

The Extrapolation Fallacy

Drawing conclusions about how a software system’s performance will scale is fraught with risk. For one thing, even software that theoretically scales linearly may fall prey to non-linearity in the underlying infrastructure on which it runs.

For example, as the load on a particular hardware platform’s memory or disk usage increases there comes a point where swapping and thrashing create a bottleneck. Furthermore, as such limits are reached, the pressure these exert on one part of the software may lead to unexpected breakdowns in performance for other components.

For software systems with a prominent networking component, Metcalfe’s law is particularly applicable: the number of potential network connections grows in proportion to the square of the number of networking endpoints. In such a case, extrapolating from a test that uses an unrealistically small set of users could catastrophically miss defects.
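To see the scale of the error, the potential connection count, n(n − 1)/2, can be computed directly (the user counts below are our own illustration):

```java
public class ConnectionGrowth {
    // Potential pairwise connections among n endpoints: n * (n - 1) / 2,
    // which grows quadratically, not linearly
    static long connections(long n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        long atTest = connections(10);    // 45 connections during a small test
        long linearGuess = atTest * 100;  // naive 100x extrapolation: 4,500
        long actual = connections(1000);  // 499,500 connections in production
        System.out.println(linearGuess + " vs " + actual);
    }
}
```

A linear extrapolation from the 10-user test underestimates the production connection load by more than a factor of 100.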

The corollary to scaling assumptions is the additional fallacy that such problems can be solved by simply adding more hardware or networking.


Testing resources are always at a premium. To take effective advantage of them for performance testing, it is critical to understand the performance testing types and when and where to apply each. Consistency and completeness in test environments are essential, as is accounting for real-world software and hardware scaling issues. Finally, whenever possible, you should take full advantage of test automation to increase testing efficacy.

How to Test a Project with Bad Requirements

Testing software with poor requirements is clearly undesirable. It is often the sign of an immature software development process, especially with small start-ups. If it is an ongoing situation, it represents a serious risk to the organization’s viability. Testers, however, are in a position to improve this situation by applying best practices, working to understand what the requirements lack, and seeing that test cases and documentation fill in the gaps.

When Requirements Go Bad

If there is still time for review of the requirements, use the following criteria to analyze possible defects:

  • Missing requirements are the hardest to nail down because there is nothing on paper to work with. Missing functionality may be obvious, but missing error conditions, data types or allowances for extensibility are typically overlooked.
  • Incomplete or ambiguous requirements are a common occurrence. Most of these happen for two reasons:
    • Expressing precise requirements in any natural language is difficult at best.
    • The author makes unshared assumptions about the details behind the requirement.
  • Conflicting requirements often occur because the requirements were written by multiple authors who failed to fully communicate with one another. For example, this situation can lead to related software components reacting to a single user action, such as updating a record, in ways that produce inconsistent results.

Doubling Down on Testing Best Practices

Naturally, you should always apply appropriate best practices to any testing task, but in the context of ill-formed requirements, key tasks are even more important:

  • Shift testing leftward in the development process as much as possible. Doing so places you closer to the people who wrote the requirements, or who at least understand the unwritten requirements better than you do. This is especially true if the project adds functionality to an existing product, where the domain knowledge is fresh and based on experience.
  • Especially if there is an existing version of the system being built, lean heavily on exploratory testing to understand how the system works and to assist you in developing a test strategy. This strategy should include both what to test and what not to test. Document your effort thoroughly and share the findings often with other staff.
  • Write detailed test cases and an overall test plan. Done well, these documents can enhance the actual requirements, if any, by specifying expected functionality and behavior. Wire-framing is an excellent way to both collect data from others on expected behavior and to document how things should work. Wire-frames can elicit many “Aha!” moments from architects, designers, developers and other stakeholders.
  • Seek out domain experts within the company, or outside if you must, who can shed light on performance and functional requirements that would be expected in the market for your particular application or system under test. Your business analysts may already have a good handle on these requirements but have yet to document them thoroughly.

Staying in the Loop

As early as practical, proactively insert yourself into face-to-face, e-mail and any other electronic discussions related to the product’s development. This step is probably already happening for you if the development is employing an agile methodology.

Even in that situation, however, you should not hesitate to ask the “dumb questions” of others in the scrum, as these often turn up unstated assumptions that lead to additional detail or flaws that you, as a tester, need to know about. Look out especially for what appear to be conflicting or ambiguous requirements on application behavior.

Avoid a Double Whammy

Organizations that are producing poor requirements often exhibit other bad software development behaviors. Perhaps the most important of these is failing to implement competent change management.

If your organization’s change management practices or CM software are inadequate, take up the banner to implement a more useful system. Once you have change management in place, it will be far easier to grasp the project requirements as the development progresses and to tie specific test documents and scripts to specific revisions. Without good CM, your testing effort may insert even more ambiguity and conflict into the understanding of the project requirements.


Working within a project or organization that spends too little effort developing sound software requirements can be stressful, but it need not cause you to throw in the towel. While it is a situation best rectified earlier in the development process, making improvements at the testing phase is possible and can be quite effective. View it as an opportunity to exercise your analytical and problem-solving skills as well as a chance to finesse your communication talents.

Popular Java Testing Tools and Frameworks

There are a wide variety of testing tools and test frameworks for automating the testing of Java/J2EE applications and server components. Many are aimed at unit or functional testing, while others are utilized for specific types of Java components such as view, logic and validation components.

Unit Testing

JUnit is perhaps the best-known Java testing framework, and many other frameworks derive from it. It is especially useful in test-driven development methodologies. JUnit links into an application as a JAR and provides several valuable features such as annotations and assertions. Due to its simplicity, unit tests run quickly, with instant results via a red/green progress bar.

Jasmine is a unit testing framework designed specifically for JavaScript. Testers familiar with unit test frameworks such as ScrewUnit, JSSpec and JSpec spin up quickly on Jasmine, as it is closely related. Jasmine is especially useful for behavior-driven development methodologies and is most popular for testing AngularJS applications.

JWalk is another unit testing tool for Java that supports a specific test paradigm known as Lazy Systematic Unit Testing. Tests are run using the JWalkTester tool on any compiled Java class. It supports “lazy specifications,” dynamic and static code analysis, plus insertion of programmer hints.

Functional Testing

The test framework HTTPUnit is based on JUnit and used for both unit and functional testing. It works as a browser emulator for such functions as JavaScript validation, cookie management, page redirection and form submission. It also supports the simulation of GET and POST requests.

TestNG is another JUnit-derived framework used for unit, functional and integration testing. It is growing in popularity compared to JUnit due to a number of extra features such as annotations, parameters, embedding BeanShell, flexible test configurations and the ability to test if code is multithread safe.

JWebUnit is a meta-framework that wraps other frameworks such as HTMLUnit or Selenium for use in functional, integration and regression testing of Java applications. It provides a single, simple interface for test case writing. It is also useful for screen navigation testing.

Java Test Frameworks

In addition to the test frameworks above, there are other component-specific frameworks that are popular for test automation.

jQuery Testing

For jQuery testing, QUnit is perhaps the most popular tool in use due to its simplicity and easily understood screen display. It has no dependencies on jQuery but supports all the browsers that jQuery 1.x does, including IE, Chrome, Firefox, Opera and Safari.

Java Server Page Testing

TagUnit is similar in use to JUnit. It also has test cases, test suites and tests that are written as assertions. However, tests are written as JSP pages instead of Java classes. Each assertion is a custom tag. A single test is a collection of tags within a page. For a given tag, all the tests associated with that tag make up a test case. Tag classes are called when the JSP is converted to a Servlet.

Java Virtual Machine Testing

The most innovative and extensible framework for JVM testing is Arquillian. Developers use it for rapid creation of automated functional, integration and acceptance Java tests. Tests run directly in the runtime; Arquillian manages the entire container life cycle and bundles all relevant test cases plus class and resource dependencies. It integrates with other frameworks such as TestNG, JUnit and HTMLUnit.

Java Web Application Testing

Selenium WebDriver implements the official W3C WebDriver specification, which provides a method for interacting with a web browser directly, the same way a human does, via hooks supplied by the popular browser developers. It can simulate all user web actions.

HTMLUnit is another open source, headless browser testing tool. It is mainly used for integration testing via JSPs that run inside its web container and are converted to Servlets. Even without the container, however, it can be used to test the View portion of the application.

Other Useful Java Testing Tools

XMLUnit is a JUnit extension for validating XML structure and performing content comparisons between actual and expected XML. It can also show the result of transforming XML with XSLT and evaluate XPath expressions against specific portions of the XML.

Apache JMeter is a valuable tool for testing website performance by means of sending multiple requests and displaying behavior graphically based on statistical analysis. It works over a large number of protocols including HTTP, HTTPS, SOAP/REST, FTP, LDAP, SMTP, MongoDB and others.


Java developers have a large number of testing tools and frameworks available to them of which we have briefly discussed the most popular and useful instances. These tools are used to automate testing throughout the entire software lifecycle from development to deployment.

What Is Positive and Negative Testing?

Finding bugs is the main goal for testers, which primarily involves verifying that the software correctly provides a solution to an end-user’s problem. However, no interface has been designed that can anticipate all the ways users can run amok operating the program. Thus, testers must also verify that the program fails gracefully, rather than crashing, in the face of bad input or incorrect usage.

Two types of testing cover both the expected and unexpected ways a program performs: positive and negative testing.

Positive Testing

Another common name for positive testing is “testing the happy path.” These tests exercise program actions and inputs that are not expected to produce errors. Testers typically begin with positive tests in order to verify that the program works in the most fundamental way.

Most software has alternative paths for accomplishing a single task. For instance, in a hotel booking application, the user starts at point A with a clean slate and attempts to reach point B, which is a room reservation for a specific date.

A flexible, user-friendly booking app may permit alternate paths that traverse from point A to point B. One user might specify a price range first, then select a region and then hotel ratings. Another user may employ a different selection order or use different criteria, but both should reach point B without errors.
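The equivalence of alternate paths can be checked mechanically: applying the same criteria in different orders should select the same rooms. The data model, values and method names below are our own illustration of the idea:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class BookingPaths {
    static class Hotel {
        final String name; final int price; final String region; final int stars;
        Hotel(String name, int price, String region, int stars) {
            this.name = name; this.price = price; this.region = region; this.stars = stars;
        }
    }

    static final List<Hotel> HOTELS = Arrays.asList(
            new Hotel("Seaview", 120, "coast", 4),
            new Hotel("Budget Inn", 60, "coast", 2),
            new Hotel("Alpine Lodge", 150, "mountains", 4));

    // Path 1: filter by price range first, then region, then rating
    static List<String> priceFirst(int maxPrice, String region, int minStars) {
        return HOTELS.stream()
                .filter(h -> h.price <= maxPrice)
                .filter(h -> h.region.equals(region))
                .filter(h -> h.stars >= minStars)
                .map(h -> h.name).sorted().collect(Collectors.toList());
    }

    // Path 2: the same criteria applied in a different order
    static List<String> regionFirst(int maxPrice, String region, int minStars) {
        return HOTELS.stream()
                .filter(h -> h.region.equals(region))
                .filter(h -> h.stars >= minStars)
                .filter(h -> h.price <= maxPrice)
                .map(h -> h.name).sorted().collect(Collectors.toList());
    }
}
```

A positive test for alternate paths asserts that both orderings reach the same point B, i.e., the same set of bookable rooms.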

Negative Testing

Negative test cases seek out the results of misbehavior on the part of the software’s user or programmer. They watch to see whether the software handles a bad situation in a reasonable way.

No programmer, however, can anticipate every way their program might fail. Often they miss error paths because they make assumptions about user behavior. Their inside-out, intimate view of the application means they forget about the hapless first-time user who did not read the user manual and is probably distracted. Rooting out such defects with negative tests that trigger unanticipated errors demands high levels of creativity and intuition from testers.

Compiling Positive and Negative Test Cases

Positive test cases provide a behavioral baseline, so list those first. As these test cases are compiled, obvious negative test cases will naturally arise. For example, if the application has a sign-on page or widget, the positive tests ensure the application handles expected authentication information, such as a user name, a numerical ID or a password. Each of these fields already has a specification for the input expected, such as letters, numbers and a select set of special characters, plus acceptable string lengths.

The corresponding negative test is to input non-printing characters or strings longer than specified. These may have various outcomes such as an error being displayed, a program crash or the program mistakenly logging in the user anyway. Only one of those outcomes is probably acceptable and the others are flagged as defects.
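A sketch of such a field validator and its positive and negative checks (the field specification here, 3 to 12 characters drawn from letters, digits and a few special characters, is a hypothetical example, not from any real application):

```java
import java.util.regex.Pattern;

public class SignOnValidator {
    // Hypothetical spec: 3-12 characters; letters, digits and '.', '_', '-' only
    private static final Pattern USER_NAME =
            Pattern.compile("[A-Za-z0-9._-]{3,12}");

    static boolean isValidUserName(String input) {
        return input != null && USER_NAME.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidUserName("alice_01"));      // positive: true
        System.out.println(isValidUserName("ab"));            // too short: false
        System.out.println(isValidUserName("bad\u0007name")); // non-printing char: false
    }
}
```

The negative cases deliberately feed the validator the inputs the specification rules out, and a defect is flagged if any of them is accepted.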

Boundary and Equivalence Analysis

The set of things a program shouldn’t do is always larger than the set of things it should. Thus, you need to pare the negative test cases down to a feasible quantity. The two main methods for this, boundary analysis and equivalence partitioning, are used in concert with one another.

The software requirements usually determine the range of valid values for input fields. For instance, a numerical ID field may be specified to accept only five digits, each ranging from 0 to 9, with no extraneous characters. Thus, the boundary values are 0 and 99999. Any number less than 0 falls into one equivalence partition and any number greater than 99999 falls into another.

Thus, you only need test a single number in each partition to validate this negative test. Clearly, many input fields are more complex than this example, which will require more careful analysis of boundary conditions and equivalence partitions and probably some additional exploratory testing as well.
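The ID example above reduces to a handful of cases: the two boundary values plus one representative from each invalid partition. A minimal sketch (the class and method names are our own):

```java
public class IdFieldBoundaries {
    // Spec from the example above: a numeric ID valid only in the range 0..99999
    static boolean isValidId(long id) {
        return id >= 0 && id <= 99999;
    }

    public static void main(String[] args) {
        // Boundary values plus one representative per equivalence partition
        System.out.println(isValidId(0));      // lower boundary: valid
        System.out.println(isValidId(99999));  // upper boundary: valid
        System.out.println(isValidId(-1));     // below-range partition: invalid
        System.out.println(isValidId(100000)); // above-range partition: invalid
    }
}
```

Four cases cover what exhaustive testing of every invalid number never could.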


Both positive and negative testing can be applied throughout the product lifecycle starting with unit tests. Positive tests include the straightforward, so-called happy path plus alternate paths that get the user from A to B.

Negative testing includes testing the known error-handling capabilities of the software plus inputs and actions not accounted for by the coder. The scope of negative test cases can be significantly reduced by using boundary and equivalence partition analysis.

Both positive and negative testing are critical to defect detection, which assists in producing the highest quality software product.