Optimus Information Helps a Leader in Catering and Takeout Remain Number One

MonkeyMedia Software is a Canadian company with tremendous expertise in restaurant systems for the take-out, delivery and catering segment of the food industry. Their business focus is to help multi-unit restaurant owners execute their off-premises operations to serve their on-demand consumers.

Marketing a Better Sandwich

MonkeyMedia Software approaches its business from a strong marketing and foodservice operations background. The company’s CEO, Erle Dardick, was a restaurant owner who dramatically increased sales through takeout, delivery and catering orders to his local community.

As Dardick’s off-premises sales grew, he needed a better internal system at his deli to maintain, control and scale during the rapid growth. As Chris Roy, MonkeyMedia Software’s VP of Technology, says, “A restaurant can do a great job making a single sandwich for you, but if they have to make a hundred sandwiches fast, then the business model of single-order transactions collapses.” Catering is a different business.

That led Erle Dardick to work with a software developer to solve this dilemma in 1996, a time when the internet was focused on “static web pages”. They used the deli as a platform to design a SaaS-based solution for takeout, delivery and catering.

From Food Sales to Software Sales

As a result, Dardick began to license his software-as-a-service to other restaurant owners and foodservice operators, hosting servers at a nearby location on a private cloud. MonkeyMedia Software was a pioneer in the adoption of cloud technology and by 2002 the company became a SaaS product focused on off-premises operations for the on-demand consumer.

Over the years, MonkeyMedia Software has evolved an ecosystem of technology partners including POS, payment, loyalty, delivery, marketplace, analytics and fraud prevention.

This is where MonkeyMedia Software is leveraging the power of Optimus.

Evolving to the Azure Cloud

Optimus’s relationship with MonkeyMedia Software began with something we know and do very well: testing. In this case, it was regression testing, covering hundreds of thousands of different configurations of their platforms, as the company prepared to move them to Microsoft’s Azure cloud.

Chris Roy describes the task as one of scale. “We were doing lots of manual testing at the time, so automation was a natural evolution. But we had to back-fill and make sure that our older code, some of which was ten years old, was still working as well. It was an enterprise-level system that we had built and now had to test.”

The relationship grew from there. Optimus provides MonkeyMedia Software with deep architecture experience in development and testing services. They have come to rely on us to manage our testing teams in India on their behalf and to supply top Azure architects with specialized knowledge that MonkeyMedia Software doesn’t have in-house – and doesn’t need to acquire.

“Moving our key services, like our credit card payment services, to the Azure cloud gives us more security than we can provide. By using Optimus as our Azure partner, we can not only achieve the levels of compliance that we need, we can also make use of their expertise in this area. Along the way, we have developed a deep level of trust with Optimus,” says Chris Roy.

Relying on Optimus

Optimus Information offers all our clients exactly what MonkeyMedia Software is using: the diversity of our skillset. Currently, we are employing automated testing with MonkeyMedia Software to establish secure credit card gateways in Azure that they can ramp up quickly and efficiently.

We continue to do both automated and manual testing with MonkeyMedia Software, becoming an extension of their QA team on a day-to-day basis. One of our Azure architects has helped to design their APIs and provides expertise on how to integrate even more Azure architecture into their platforms.

“We don’t have the in-house talent or knowledge to do it well on the Azure side,” says Chris Roy, “so Optimus is filling a big hole.”

We’re experts in testing and development in the cloud. We can offer solutions that push businesses to do more and earn more. We invite you to call us today to learn more about our specialized services and talented workforce.

API Testing: Do It Right and Automate

Software development in 2018 looks nothing like it did a decade ago. Developers have been driven to find faster and more efficient ways to produce a finished application. Customers demand better products, and market pressures mean getting your apps out now or risking annihilation by competitors.

This means abandoning the waterfall method of software development where each part of a program was completed before the next was started, often leaving error identification (and subsequent delays) until the end of the cycle. Instead, developers are now embracing Agile development with its highly integrated production methodology, often releasing product in two-week sprints.

APIs Can Get Your App Market-Ready Faster

We live in a fast world. Enterprises that are big users of technology need to keep up with the pace of change demanded by their customers. This can put a high degree of stress on internal IT teams and lead to coding errors and the resulting delays in new product releases.

One advantage comes from the ubiquitous use of APIs, or Application Programming Interfaces. APIs allow nearly limitless possibilities for how applications can interact with each other.

However powerful the use of APIs can be, they also need to be put through a rigorous testing process. Unfortunately, this often places added strain on a company’s IT team that may not be familiar with the most effective testing methodologies.

“Often a client doesn’t know what KPI they should focus on or they don’t define it at all. Then they’d have no way of knowing what the metrics are for the data they’re gathering. But if you do it right, you get the right information back from the metrics to make improvements.”

Ashish Pandey, Optimus Information’s Technical Lead

API Testing Begins with KPIs

API testing can be the most challenging part of software and QA testing because APIs are complicated creatures, using protocols and standards not often seen in other forms of testing.
It’s critical to test all the components of the API – not just its UI or its functionality. Testing performance and security is just as critical.

To test properly, says Ashish Pandey, Optimus Information’s Technical Lead at the company’s location in India, it’s important to begin with Key Performance Indicators (KPIs).
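
As a sketch of what a KPI-driven API test might look like, a latency KPI can be defined up front and asserted directly in an automated test. Everything here is illustrative: the endpoint, the stub server and the 200 ms threshold are hypothetical stand-ins, not a real client configuration.

```python
import http.server
import json
import statistics
import threading
import time
import urllib.request


class StubHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for the API under test (hypothetical /health endpoint)."""

    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


def measure_latency_kpi(url, samples=20):
    """Call the endpoint repeatedly and return an approximate p95 latency in ms."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            assert resp.status == 200  # a functional check rides along
            resp.read()
        timings.append((time.perf_counter() - start) * 1000)
    # statistics.quantiles(n=20) returns 19 cut points; index 18 is ~p95
    return statistics.quantiles(timings, n=20)[18]


# Start the stub server on an ephemeral port and measure against it.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

p95_ms = measure_latency_kpi(f"http://127.0.0.1:{server.server_address[1]}/health")
server.shutdown()

# KPI defined before testing begins: 95% of calls must finish within 200 ms.
kpi_met = p95_ms < 200
```

With the KPI expressed as an assertion, every test run yields a pass/fail verdict plus the underlying metric, which is exactly the feedback loop described above.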

Don’t Forget Security and Performance

In addition to defining your KPIs, you also need to focus on more than UI. Some customers, Pandey continues, concentrate on just the UI portion of the application and ignore testing the other components. Security and performance are two areas often overlooked and, if something breaks, it’s probably because key areas within the API were ignored in testing.

“When we test APIs for our clients”, says Pandey, “we generate a lot of data which gives us and our clients a clear summary of what might need to be improved in the future.” This can only happen when thorough testing is done and nothing is left to chance.

While Ashish admits that thorough testing of the UX is extremely important for the customer, it’s the KPIs measured against the data that determine the performance of the application. When problem areas turn up, they can be remedied quickly.

Fail Forward Faster – Automation is Key to Proper API Testing

Proper testing of an API is accomplished by running test cases which are designed to uncover failures. It can be extensive and time-consuming – the opposite of what agile development tries to accomplish – so Optimus specializes in automating as much of a customer’s testing as possible.

It begins with the test cases themselves, Pandey explains. “We take the customer’s test cases and analyze them to determine if they are ‘automatable’ or not. This makes it possible for us to suggest the correct technology stack with which the automated test could be performed.” In the future, the customer simply runs automated scripts to test different iterations of their applications, saving a great deal of the time and money otherwise spent writing and running test scripts manually.


The Optimus Test Harness – Why No Company Should Test Without It

A further advantage for Optimus customers is the use of an open source test harness. Ashish Pandey is one of the creators of the Optimus test harness which uses open source components and is configured to test cloud-based applications.

Optimus estimates that 85% of its customers are technology firms with cloud-based apps that often undergo testing as new iterations are created. Optimus has designed the harness to perform automated testing at different levels. “If we’re doing test automation at the UI level”, explains Pandey, “we have the ability to create automated test scripts for UI. We also have the capability to test at the full API level as well. In fact, our test harness is efficient enough that customers can perform a wide variety of testing on things such as execution of SQL queries to their database. We have built into our harness support for tools and protocols like SoapUI, WebSockets and others.”
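
A multi-level harness of this kind might be organized roughly as follows. This is a simplified, stdlib-only sketch: the harness class, the check names and the in-memory SQLite database are all invented for illustration and are not the actual Optimus harness.

```python
import sqlite3


class TestHarness:
    """Toy harness that registers checks at different levels (UI, API, SQL)."""

    def __init__(self, db_connection):
        self.db = db_connection
        self.checks = []  # list of (level, name, callable) tuples

    def register(self, level, name, fn):
        self.checks.append((level, name, fn))

    def add_sql_check(self, name, query, expected_rows):
        """SQL-level check: execute a query and compare to an expected dataset."""
        def check():
            actual = self.db.execute(query).fetchall()
            return actual == expected_rows
        self.register("sql", name, check)

    def run_all(self):
        """Run every registered check and return {name: passed}."""
        return {name: fn() for _, name, fn in self.checks}


# Seed an in-memory database standing in for the application's data store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.99), (2, 24.50)])

harness = TestHarness(db)
harness.add_sql_check(
    "order_totals",
    "SELECT id, total FROM orders ORDER BY id",
    [(1, 9.99), (2, 24.50)],
)
results = harness.run_all()
```

A real harness would add UI- and API-level registrations alongside the SQL ones, but the shape is the same: each level plugs a check into a common runner.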

Test Feedback in Hours (Sometimes While You Sleep)

What Optimus strives to do is provide customers with the sort of speed and agility that can be achieved through automation. “Many of our customers are into Agile development, so what they want is quick delivery of their app with feedback in a few hours,” says Pandey. “Some of them are also evolving with DevOps practices and they want results fast.”

Automated testing, he points out, means that if a customer has three or four hundred test cases to run and each test takes four hours to perform, automation allows Optimus to test while the customers are sleeping. “The next morning, they have the results in their hands, rather than having to wait several days,” Pandey concludes.

Understand and Implement the Right Methodology for Automated Testing

Optimus has one aim with its clients: to ensure that they implement the right thinking and methodologies around testing. Doing so will improve the customers’ UX, decrease errors and get the app to market faster and on time.

We understand Agile development, DevOps and automated testing and how the combination leads to rapid deployment of new, error-free applications at greatly reduced costs. We also know that this translates into powerful ROI for our customers.

To learn more about using API test automation to make your software better, faster and more secure, download our new eBook now.

Automated Testing for SSRS Reports

Motivations for SSRS Report Testing

Both data warehouse developers and end users of data analytics reports have a keen interest in the accuracy and appearance of data content. As SSRS reports are developed, they are typically tested piecemeal during construction and as a whole upon completion, with both of these aspects in mind.

However, it is always possible to overlook certain report items or their representation after report deployment. Furthermore, issues with changes in data sources or the ETL flow may introduce anomalies or errors that affect data values or their presentation at a later date.

SSRS Report Testing Automation Issues

To increase report testing efficiency and accuracy, automated testing is key during both development and maintenance of SSRS reports, especially for organizations utilizing dozens or hundreds of reports on a regular basis. Automation can decrease development time and play an important role in correcting discrepancies in report data post-deployment, which could otherwise negatively impact confidence in the data warehouse.

The complexity of interactions within SSRS itself and with the many other components of SQL Server, however, makes the creation of fully automated testing a tricky business. Furthermore, the high degree of customization possible for SSRS reports implies that customized testing approaches are probably required, in addition to standardizing report content and layout wherever feasible.

SSRS Testing Areas for Automation

Unit Testing

A large number of report characteristics can be tested during development via unit tests, and those tests can later migrate into test suites for quickly troubleshooting post-deployment bugs. These are a few example coverage areas for such automated tests:

  • If data is not reaching a report, validate the data source, check that the correct mappings to the dataset are being used and that each column’s Visibility property is set correctly.
  • If data is not present and the report uses stored procedures, validate the SP parameters, their data types, conversions and formatting requirements.
  • The failure of data to group as expected can be tested by examining the grouping expressions and verifying that an aggregate function is applied to numeric columns.
  • Create a query test framework that can take as input individual queries or query scripts. Such a framework would contain validated comparison datasets including metadata to run against actual queries/datasets with each report’s specific data source, parameters or configuration.
  • Test report rendering into formats used by the organization. Initial tests require manual inspection of results, but subsequent tests could be automated using input reports under version control. The most useful of these tests is to output the report in XML format, which is likely to be complete, free of data conversions and most amenable to automated comparison tests.
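
The query test framework described above can be sketched in miniature. This illustration uses SQLite as a stand-in data source rather than SQL Server, and the table, report query and expected dataset are hypothetical; the point is that the comparison covers metadata (column names) as well as the row data itself.

```python
import sqlite3


def compare_dataset(conn, query, expected_columns, expected_rows):
    """Run a report query and diff it against a validated comparison dataset.

    Returns a list of human-readable discrepancies; an empty list means pass.
    Metadata (column names) is checked alongside the data values.
    """
    cursor = conn.execute(query)
    actual_columns = [d[0] for d in cursor.description]
    actual_rows = cursor.fetchall()

    problems = []
    if actual_columns != expected_columns:
        problems.append(f"columns differ: {actual_columns} != {expected_columns}")
    if len(actual_rows) != len(expected_rows):
        problems.append(f"row count differs: {len(actual_rows)} != {len(expected_rows)}")
    for i, (act, exp) in enumerate(zip(actual_rows, expected_rows)):
        if act != exp:
            problems.append(f"row {i}: {act} != {exp}")
    return problems


# Stand-in data source for the report's dataset (hypothetical sales table).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 1000.0), ("West", 1500.0)])

discrepancies = compare_dataset(
    conn,
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region ORDER BY region",
    ["region", "total"],
    [("East", 1000.0), ("West", 1500.0)],
)
```

In a real framework the validated datasets would be stored under version control per report, keyed by data source, parameters and configuration.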

Layout Testing

Automating report layout tests typically presents the greatest difficulties. If your organization uses the SSRS web portal, however, you can take advantage of a number of web UI automation tools that facilitate such testing.

Selenium WebDriver is a free tool for testing web page layout and functionality, which works with Firefox, IE, Chrome and Safari. Automation test scripts are written in Java, C#, Ruby, Python or JavaScript. Scripts utilize the WebDriver API, which invokes a live browser or runs headless.

Other UI-based testing tools are also available such as the open-source Sikuli, freeware AutoIt or TestComplete, which is a proprietary tool by SmartBear.


Test automation has its limits, of course, and this is especially so with regard to SSRS report testing. For instance, usability testing is clearly out of scope for automation. Automation is further complicated in SSRS because data is often manipulated beneath the covers, beyond the control of the report itself. For example, data may be manipulated outside of queries in embedded VB.Net code or in external class libraries.

Even so, automating report testing wherever feasible always pays off. Choose the low-hanging fruit first before expanding into other areas, such as troubleshooting suites and GUI-based layout testing. As you progress, you are likely to find that developing a mindset aimed at automation frequently instills a virtuous cycle of discovering new opportunities for further test automation.

In SSRS testing areas that present the most difficult obstacles, employ third-party tools or engage an experienced automation consultant who can demonstrate the automation methods most appropriate for your SSRS development and usage scenarios.

Test Automation in Agile

Although agile development and automated testing have each come into wider use more or less independently with each passing year, it seems they were made for each other from the start. The advantages of both derive from the same desire to increase overall software production efficiency.

Both work best when there is a high degree of collaboration between development and testing although they offer significant benefits to all project stakeholders. You often see a high usage of each approach wherever organizations are growing their continuous integration and continuous delivery capabilities.

Test Automation in Agile versus Traditional Waterfall

There certainly is no reason that teams employing waterfall methods cannot take advantage of test automation too. Typically, however, automation tends to be of lower priority in those scenarios due to the ingrained “throw it over the wall” mentality. Such a back-end loaded scenario is unfortunate because automation, especially test-driven coding, can be used quite effectively during development phases.

In agile, however, the questionable “luxury” of developers shedding testing responsibility is simply anathema to a process of two-week sprints in which planning, designing, coding, testing and deployment all happen nearly at once. Without thorough development and testing automation, the productivity and quality of agile cycles would consistently fail to meet project goals.

When Automating Tests, Choose Wisely

Despite the brevity of agile development cycles, there can be a tendency to over-automate testing. This is where careful planning and in-cycle flexibility are both necessary. Tests will naturally be categorized by type, whether functional, white box, black box, performance and so on. Within categories, however, individual test tasks need to be sorted by how likely they are to be reused, in order to prioritize automation.

Any test that will be repeated more than twice, especially if it appears it will carry over to the next scrum, is a prime candidate for automation. That pretty much eliminates automation of exploratory testing and perhaps some boundary testing if the boundaries are still in flux and are not easily parameterized. On the other hand, even one-off tests might be amenable to automation in the sense that they are ongoing placeholders within an automation framework test suite for future software revisions.

Avoid Automating for Automation’s Sake

With more sophisticated test automation tools and frameworks, it is uncannily easy to lose sight of the forest for the trees. This is especially true of homegrown test frameworks, where the framework development effort might rival that of the applications it is meant to test. It pays at all times to bear in mind that meaningful, reusable tests must be the top priority.

An additional potential trap is not paying enough attention to the ebb and flow of value within your test suites. They naturally become cluttered with marginally valuable test scripts that have lost relevance and eat up undue execution time. That is why it is important to set clear criteria for what does and does not get automated, and to review the test code base regularly.

Staying One Step Ahead

Due to the rapid pace of agile development, anything to gain a leg up will pay dividends. Crisp, detailed planning is one key to getting ahead of the game, but in addition you should implement testing and its automation as early in the development cycle as possible.

The use of pair programming with one developer and one tester simultaneously working on the same code is an innovative practice in this regard. It is especially effective when combined with test-driven development in which the tester first writes the unit tests based on the specification and the developer follows with application code that turns each test “green.”
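
The test-first flow described above might look like this in miniature: the tester writes unit tests straight from the specification, and the developer then writes just enough code to turn each one green. The specification here (an order total with 5% tax) and the function name are hypothetical examples.

```python
import unittest


# Step 1 (tester): unit tests written first, straight from the specification:
# "order total = sum of line items plus 5% tax, rounded to the cent."
class TestOrderTotal(unittest.TestCase):
    def test_single_item(self):
        self.assertEqual(order_total([10.00]), 10.50)

    def test_multiple_items(self):
        self.assertEqual(order_total([10.00, 5.00]), 15.75)

    def test_empty_order(self):
        self.assertEqual(order_total([]), 0.00)


# Step 2 (developer): the minimal implementation that turns each test green.
def order_total(line_items, tax_rate=0.05):
    return round(sum(line_items) * (1 + tax_rate), 2)


# Run the suite; in a live pairing session this loop repeats per test.
result = unittest.main(exit=False, argv=["tdd-demo"]).result
all_green = result.wasSuccessful()
```

The red-to-green rhythm is the collaboration point: the failing test is the shared, executable statement of the requirement.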

Actually, the underlying concept of first creating acceptance criteria as test code and then programming the software to make that test succeed can be applied at most points throughout the development cycle. These “live” requirements should be captured in an automated test environment that can be accessed and run by all stakeholders in the project to further increase collaboration.


Test automation is a key tool in agile development project success. As development organizations naturally evolve their entire production team from agile development to continuous integration and continuous deployment, the use of build and test automation becomes ever more critical.

Test automation in an agile development environment, however, has to be finely tuned to a greater degree than when employed in waterfall environments. Precise planning, a shift-left of testing responsibilities, keen maintenance policies and a sharp weather eye for when and when not to use automation are the most important elements for extracting the highest value from test automation.

How to Select a Test Automation Language

There are times when the language of choice for your test automation is essentially already made for you, such as when you must rely on a single test developer whose programming language proficiency is limited to one or two languages. If you can make the space for an evaluation, however, it often pays to consider several options.

Factors Affecting Your Choices

Besides current expertise in a particular language among your team members, there are several other aspects to consider when choosing the language or languages on which to base your test automation going forward:

  • Assess how much you already have invested in whatever language you now use. This investment might involve training, IDE licensing, existing code and possibly support systems linked to running of test scripts or programs. However, holding onto an investment that is not performing well is an unwise long-term approach.
  • Moving to a new language and hoping to leverage existing test code might be unrealistic. It will, at the least, require a porting and re-testing effort, which will be hampered by developers who are unfamiliar with the previous language and will push to re-write tests from scratch.
  • Adopting a new language means new training costs. If the organization is planning to expand, then choosing a language that is not widely used may limit your recruiting choices.
  • Although it is not a requirement, choosing the same language as the software being tested might pay dividends, especially if there is a well-developed collaboration between developer and test teams already.

The Case for a Technological Decision

As you can see, some of the factors above are clearly non-technical and deal mainly with managerial issues, such as existing language expertise within the test team. Often, managerial issues trump the technical issues with respect to language choice. However, technical advantages should not be overly discounted as these often have a significant impact on the long-term efficacy of the test organization.

In general, there is clearly a difference in programming capabilities between scripting languages, such as Perl, Python and Ruby, and compiled OO languages, such as Java, C++ or C#. The latter generally offer stronger typing, better performance and richer facilities for synchronization and multithreading. It is also typically easier to interface to other libraries, data sources or cooperating applications using a compiled language.

If your enterprise is or will be engaged in large, complex software projects, then these more capable languages are likely worth the additional effort and cost to utilize.

Whichever language you choose, be sure to also evaluate the editing and debugging tools that come along with it. Typically, you have a choice of IDEs and choosing the right one can make a meaningful difference in the time it takes to create, test and rectify errant test scripts or programs.

Requirements of Your Test Automation Framework

Of course, the final decision regarding testing language may be highly restricted by your choice of test automation tools, since some of them only work with specific languages. Hopefully, your automation requirements analysis led you to choose a framework that supports most, if not all, of the development tools currently under your roof.

You may find that choosing the right test automation framework more or less obviates the need to choose any particular test code language. For instance, some tools use visual test editors that require almost no programming experience.

Other frameworks use scripts but automate their creation by using keyword tests that simulate user actions in a portable way that does not require re-creating a GUI-driven test when the underlying code changes. Still other automated test tools employ record-playback to create and run tests. Overall, however, these types of frameworks are severely limited relative to what can be accomplished with scripts developed in a programming language.
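
A keyword-driven approach can be sketched briefly. The keywords, the fake application and the test rows below are all invented for illustration; the point is that the rows describe user actions portably, so when the underlying UI code changes, only the keyword implementations need updating while the test rows stay the same.

```python
class FakeApp:
    """Stand-in for the application under test."""

    def __init__(self):
        self.fields = {}
        self.logged_in = False

    def set_field(self, name, value):
        self.fields[name] = value

    def submit_login(self):
        # Hypothetical rule: only the user "alice" can log in.
        self.logged_in = self.fields.get("user") == "alice"


# The keyword vocabulary: each maps a user-level action to implementation.
KEYWORDS = {
    "enter_text": lambda app, field, value: app.set_field(field, value),
    "click_login": lambda app: app.submit_login(),
    "verify_logged_in": lambda app: app.logged_in,
}


def run_keyword_test(app, steps):
    """Execute keyword rows in order; the last row's return value is the verdict."""
    outcome = None
    for keyword, *args in steps:
        outcome = KEYWORDS[keyword](app, *args)
    return outcome


# The test itself is just data, readable by non-programmers.
test_rows = [
    ("enter_text", "user", "alice"),
    ("enter_text", "password", "s3cret"),
    ("click_login",),
    ("verify_logged_in",),
]
passed = run_keyword_test(FakeApp(), test_rows)
```

Commercial keyword tools wrap this same idea in visual editors and object repositories, but the separation of test data from action implementation is the core mechanism.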


We have covered many of the important issues relative to choosing an optimal testing language for your enterprise. There will always be a mix of technical and managerial aspects feeding the final analysis. You should try to balance the pros and cons, but realize that there may not be a perfectly optimal solution. The most important thing is often to drive the process to a clear and timely decision rather than attempt to satisfy all parties and constraints.

6 Test Automation Trends for 2016

Although the number of test organizations utilizing test automation has increased only incrementally over the last year, the demands for automation overall are increasing significantly. As a result of new technologies, new approaches to software development, the increasing sophistication of cloud testing services and an enormous increase in the complexity of deployment environments, those using automation today are looking to it more than ever to deliver productivity and quality.

1. Mobile Testing Remains a Top Priority

Gartner predicts close to 300 billion mobile app downloads next year. Thus, testing gravity continues to center on mobile app testing for both consumer and business applications. Every testing area from functional to performance to compatibility and usability is going to demand increased automation to deal with this deluge. The skills and innovation of test teams must also increase as mobile apps intersect with other technologies and new ways to make software.

2. New Framework Capabilities Supporting Automation

Supporting the onward rise of mobile app testing, the industry will see increased use of both paid and open-source testing frameworks, such as Selenium and Appium, plus specialized tools in the quest to lower costs while increasing efficiency and quality. Throughout 2016, many tools and frameworks will increase their capability to cover more of the software development process from inception to production.

3. Increased Reliance on Cloud Testing

Hand-in-hand with the increased demand for mobile app testing, more QA departments will further their use of cloud-based, automated testing services. These offer test organizations relief from upfront capital costs and ongoing maintenance and decommissioning outlays.

Other features, such as on-demand scalability and complete cross-platform test suites that can be accessed by geographically dispersed development and test teams, are now necessities. Furthermore, as mundane tasks, such as configuration and provisioning, shift to cloud services, test teams will enjoy the opportunity to work on higher-value tasks.

4. Testing’s Left-Shift Continues

Businesses in 2016 are continuing to increase the frequency of product releases. This situation has brought about an increased appreciation of early defect discovery using such techniques as TDD and BDD. It also leads to closer collaboration between developers and testers in an overall leftward shift in testing scope.

This provides testers opportunities to deliver more value to new projects, but it challenges them to automate early development tests, which, up to now, have been perceived as incompatible with the requirements for efficient automation in other development phases.

5. New Software Architectures Using Micro-Services

Software design is trending toward micro-architectures that divide functionality into smaller components. These message-based “micro-services” are developed, tested and delivered as part of larger applications or systems without disturbing enclosing or associated apps or services.

Micro-services increase a business’ ability to deliver products to market in even shorter cycles and accurately meet rapidly changing customer requirements. Testing organizations benefit from a smaller testing scope and reduced inter-dependencies. Automated API testing will become a primary technique to ensure inter-service compatibility even before code completion.
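
One minimal form of such automated API testing is a contract check that can run before the service is code-complete: the consumer validates a stubbed response against the agreed interface. The field names and the contract below are hypothetical, for illustration only.

```python
import json

# Agreed inter-service contract: the field names and JSON types the
# consuming service relies on (hypothetical order-service fields).
ORDER_CONTRACT = {"order_id": int, "status": str, "items": list}


def check_contract(payload, contract):
    """Return a list of contract violations; an empty list means compatible."""
    violations = []
    for field, expected_type in contract.items():
        if field not in payload:
            violations.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            violations.append(f"{field}: expected {expected_type.__name__}")
    return violations


# Stubbed response from the order micro-service, as its mock would return
# it before the real implementation exists.
stub_response = json.loads('{"order_id": 42, "status": "confirmed", "items": []}')
violations = check_contract(stub_response, ORDER_CONTRACT)
```

Once the real service ships, the same check runs against live responses, so producer and consumer stay compatible across independent release cycles.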

6. Complexity in Deployment Scenarios

With the rapid rise of smartphones, the increasing popularity of wearable devices and the growth of IoT, cross-platform testing is already difficult enough. Add to that the need to test apps in relation to their Social, Mobile, Analytics and Cloud (SMAC) environment.

SMAC testing is adding yet another dimension to mobile app testing and its automation. This new reality is already claiming additional resources and raising the ante on automation tools to deal with it effectively. SMAC, especially, calls for innovations in how to acquire and process sensitive user data that can help drive automated testing strategies.

The world of testing automation is facing an array of new opportunities in 2016 arising from fresh approaches to harnessing additional efficiency and agility from software development and deployment processes, especially in the mobile app domain. One effect of this has been testing’s left-shift to the earliest stages of development and the consequent difficulty in applying automation at this point.

Existing automation tools, frameworks and cloud-based testing services are also faced with novel challenges in how they will support an increasingly complex world of mobile/IoT interaction, micro-service/API testing and effective SMAC testing.

Taken together, these converging trends are spawning new ideas and approaches to exactly how, what, when and where software is tested, let alone how such testing is to be automated in ways that increase efficiency and product quality.

Clearly, 2016 and beyond promise to provide a vibrant, vigorous and exciting atmosphere for the world of testing and test automation in particular.

HP UFT/QTP vs. Selenium – Automated Test Tool Comparison

HP QuickTest Professional (QTP), now known as HP Unified Functional Testing, is currently a well-known force in the web-based testing market, but Selenium is quickly gaining mindshare and advocates as a more capable, open-source competitor. There are several clear-cut distinctions between the two tools that should make choosing one or the other a straightforward decision in most cases.

Comparison Highlights Between QTP and Selenium

Cost – QTP requires a license fee for acquisition and more fees for upgrades and add-ons, but Selenium is a totally free, open-source download and will always be so.

Testing Applicability – Selenium is only for testing web-based apps, whereas QTP can test client-server and desktop applications in addition to web-based apps.

Cloud Ready – QTP’s one-script/one-machine execution model cannot make efficient use of distributed test execution via cloud-based infrastructure. Selenium-Grid is specifically designed to run simultaneous tests on different machines using different browsers and different operating systems in parallel. Thus, it is a perfect match for cloud-based testing architectures and services.

Execution Efficiency – QTP tests one application per machine, whereas Selenium can execute multiple, simultaneous tests on a single machine. Furthermore, QTP script execution takes more RAM and CPU power than does Selenium. QTP can run in multiple Windows VMs, but these are more resource-hungry than Linux VMs, which Selenium can utilize.

Browser Compatibility – QTP works with four browsers, albeit some of the most popular ones, whereas Selenium works with those plus five more browsers of which two are headless. Headless browsers provide additional test execution efficiency.

Language Compatibility – QTP tests are written in Microsoft’s VB Script. Selenium tests can be written in one of nine different popular languages including Java, JavaScript, Perl, Python and PHP. Thus, you can adapt it to the programming resources you have on hand.

OS Compatibility – Selenium tests applications on all major OSs including Windows, Linux, OS X, Solaris, iOS and Android. QTP runs only on Windows.

Support – Selenium is supported by a vibrant, active user community, but it does not have dedicated support staff, whereas you can buy technical support for QTP. For some issues, paid support provides faster resolution cycles.

Technical Challenges When Using Selenium

Although Selenium has a number of meaningful advantages over QTP, including its much broader compatibility with different app and test configurations, it is not without technical limitations:

  • Selenium is not uniformly compatible across browsers. It is most compatible with Firefox, so scripts developed for Firefox may need tweaking to run in IE or Chrome.
  • It has no object types other than WebElement or Select, and no repository for storing object mappings.
  • There is no inherent support for data-driven testing.
  • HTML tables and other elements require coding to work correctly.
  • Image-based tests are harder to accomplish.
  • Dialog box support is limited.
  • In general, Selenium requires a higher level of coding skill.

Some of these issues can be resolved via testing frameworks compatible with Selenium, though developing or integrating such a framework adds upfront effort.
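As an illustration of how a framework layer can supply the data-driven support Selenium lacks, here is a minimal Python sketch. `attempt_login` is a hypothetical stand-in for the Selenium interactions (finding fields, sending keys, clicking submit) a real script would perform:

```python
import csv
import io

# Hypothetical test data: login inputs and the expected outcome.
# In practice this would live in an external CSV file or database.
TEST_DATA = """username,password,expected
alice,secret,success
bob,wrong,failure
"""

def attempt_login(username, password):
    # Stand-in for the real Selenium steps; here we fake the
    # application's behaviour for the sketch.
    return "success" if password == "secret" else "failure"

def run_data_driven_tests(data):
    failures = []
    for row in csv.DictReader(io.StringIO(data)):
        actual = attempt_login(row["username"], row["password"])
        if actual != row["expected"]:
            failures.append(row["username"])
    return failures

print(run_data_driven_tests(TEST_DATA))  # an empty list means every row passed
```

One script now covers as many cases as the data file holds, which is the essence of the data-driven pattern discussed later.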

Tips on Migrating from QTP to Selenium

When migrating from QTP to Selenium, take a piecemeal, trial approach to get started. Choose a script language with which the team has the most experience already, or one that best supports the type of automation framework you plan to build.

However, at the beginning, defer building a framework and simply write a handful of simple Selenium scripts. Collect notes about issues as you go along, and incorporate those lessons when building larger test suites. Because of the many differences between QTP and Selenium, it is easier to write Selenium scripts from scratch than to port QTP scripts directly.

Once you become familiar with Selenium and commence building an automation framework, look at tools that play well with Selenium, such as Maven for builds, TestNG to drive unit, functional and integration tests, and Jenkins or Hudson if you plan to adopt a Continuous Integration methodology.

If your web-based automation test projects require the highest flexibility across browsers, OSs and script languages at the lowest cost, then Selenium is the clear winner. However, especially in the earliest stages of adoption, more skilled programming resources are required for script generation and integration into an automation framework of your choice.

The many pros and cons between QTP and Selenium may present difficult choices if you currently use QTP, but otherwise Selenium provides the better payback and versatility over the medium-term.

6 Popular Test Automation Frameworks

What Is a Test Automation Framework?

Automated testing frameworks provide test environment structure that is typically missing from underlying test tools. Each framework style offers unique rules, guidelines, protocols and procedures for test creation, organization and execution.

Six types of test automation frameworks are regularly encountered. Those covered here increase in complexity and levels of indirection to achieve their goals. The aspects to evaluate include scalability, re-usability, maintenance effort and the cost of technical skills.

In most cases, the choice of an initial style of framework is less important than the establishment of any framework, which brings standardization to development and testing processes and improved efficiency.

Module-Based Testing Framework

This framework assigns independent test scripts to specific software modules, components or functions. Avoiding script inter-dependencies is crucial to framework stability and maintainability.

Each module script embeds both actions and data. Any change in the test data requires script changes or a new, separate script. Where data changes are frequent, a data-driven testing framework would be preferable.
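A minimal Python sketch of a module-based script makes the drawback concrete; the checkout module and its figures are invented for illustration. Because the data is embedded in the script, changing the cart contents or the expected total means editing the script itself:

```python
# Module-based style: one self-contained script per module,
# with both the actions and the test data embedded.

def test_checkout_module():
    # Embedded data for a hypothetical checkout module.
    cart = [("widget", 2, 3.50), ("gadget", 1, 9.99)]
    expected_total = 16.99

    # Embedded action: compute the cart total and verify it.
    total = sum(qty * price for _, qty, price in cart)
    assert abs(total - expected_total) < 0.01
    return "checkout: passed"

print(test_checkout_module())
```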

Common Library Testing Framework

This framework is a module-based framework that segregates common script functions into a shared library that each test script calls. This approach avoids having to re-implement common functions within each test script. This reduces the lengths of scripts and repetitive effort. A drawback is that if a common function is incorrectly implemented, then the error propagates to many scripts.
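The sharing of common functions can be sketched as follows; the helpers are hypothetical stand-ins for the Selenium wrappers a real library would contain. A fix to `open_page` immediately benefits every script that calls it (and, per the drawback above, so would a bug):

```python
# common_lib: functions shared by every module script, so fixes
# and improvements happen in exactly one place.
def open_page(name):
    return f"opened {name}"        # would wrap driver.get(...) in a real suite

def fill_field(field, value):
    return f"{field}={value}"      # would wrap element.send_keys(...)

# Two module scripts reusing the same library calls.
def test_login_module():
    return [open_page("login"), fill_field("user", "alice")]

def test_search_module():
    return [open_page("search"), fill_field("query", "widgets")]

print(test_login_module())
print(test_search_module())
```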

Data-Driven Testing Framework

Where tests utilize varied sets of input and output test data, a data-driven approach is more efficient and more easily managed. Test scripts obtain all input and output data from an external database, which improves their re-usability. By convention, test data are stored as key-value tuples in a standardized fashion. Data extraction within scripts is facilitated by a common library.

This type of framework significantly reduces the number of test scripts compared to a module-based framework. The test data matrix can be changed independently of the scripts as long as required data are not deleted entirely. This framework introduces more complexity overall and requires intermediate programming skills to set up and maintain.

Keyword-Driven Testing Framework

This framework is also called table-driven. It separates test automation from test case design by adding an additional layer of abstraction via action keyword-value tuples in the data matrix. The keywords are used to drive tests independently using underlying test execution tools. This framework style enables less technically skilled staff to create test scripts. It also improves test script readability.
The implementation and maintenance of such a framework does require programmers who can create the keyword execution mechanism or who can leverage open source tools such as Robot Framework.
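The keyword execution mechanism amounts to a dispatch table; this sketch uses invented keywords and logs each action rather than driving a browser. The `test_case` table is the part a less technical author could maintain, for instance in a spreadsheet:

```python
# A minimal keyword-execution mechanism: each test step is an
# (action keyword, value) tuple, and a dispatch table maps the
# keyword to the code that performs it.
log = []

KEYWORDS = {
    "open":  lambda value: log.append(f"open {value}"),
    "type":  lambda value: log.append(f"type {value}"),
    "click": lambda value: log.append(f"click {value}"),
}

# The human-readable table a non-programmer could author.
test_case = [
    ("open",  "login page"),
    ("type",  "alice"),
    ("click", "submit"),
]

def run(steps):
    for action, value in steps:
        KEYWORDS[action](value)

run(test_case)
print(log)
```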

Hybrid Testing Framework

A hybrid test framework is a combination of two or more of the previous framework types. It attempts to leverage the most desirable benefits of other frameworks for the particular test environment it manages.

For instance, one hybrid approach would be the use of a common library coupled with a data repository that combines both test I/O data plus action keywords. Each tuple in the repository may contain a name, description, action keyword, UI locator and text string, if necessary, for an input field.

Naturally, a hybrid approach may be more complex to set up initially. However, it could provide the greatest flexibility if the tradeoffs between the frameworks it incorporates are carefully evaluated and implemented.
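The repository row described above (name, description, action keyword, locator, input text) can be sketched like this; the locators and values are invented, and `execute` stands in for a common library that would perform the real Selenium actions:

```python
# Hybrid style: each repository row carries both the action keyword
# and the test data.
repository = [
    ("user",   "enter username", "type",  "#user", "alice"),
    ("pass",   "enter password", "type",  "#pass", "secret"),
    ("submit", "press login",    "click", "#go",   None),
]

def execute(rows):
    trace = []
    for name, _desc, action, locator, text in rows:
        if action == "type":
            trace.append(f"type '{text}' into {locator}")
        elif action == "click":
            trace.append(f"click {locator}")
    return trace

print(execute(repository))
```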

Behavior Driven Development Framework

This framework is unlike previous frameworks in that it tends to be more a development process than a test management structure per se. Its purpose is to facilitate test specification by business stakeholders as early in the development process as possible. It requires increased collaboration between development and test teams as well.

It is centred on the use of a non-technical, semi-formal, natural language for test case specification. This process is facilitated by higher-level tools like Cucumber or RBehave in conjunction with testing tools such as Selenium.

A key drawback to this approach is that BDD scenarios are so abstracted that they are often at odds with technical reality. Furthermore, in practice, it is difficult to obtain participation from business stakeholders in creating or evaluating BDD scenarios.

This brief survey is not an exhaustive list of test automation frameworks, but the most popular styles are covered. Other than BDD, these frameworks have no inherent incompatibilities. An organization could easily begin with a module-based approach and incrementally grow their framework to more complex styles over time depending on development and testing needs and the organization’s technical expertise.

How to Make Your Test Automation a Success

In a previous post, we wrote about the need for test automation and its pros and cons compared to manual testing. To give that analysis a little more context, here we discuss a few strategic tips for creating successful automated tests.

The idea of automated testing is simple: let the computer handle the annoying, time-consuming and mundane testing tasks while you focus on truly unique and interesting challenges, without worrying about the thousands of test cases that require a mouse click or keyboard input. It is good for programmers because it saves time and rids them of mundane work that demands inhuman patience, and it is good for the project because the margin of error is greatly reduced.

The real question is how you approach automated testing. Many testing projects stall because developers cannot decide whether to automate at all, and those who do choose this route become inundated with further questions about where and how to apply it. Here are a few tips to help you through that difficult initial decision-making stage.

Tips for Successful Testing Automation

Automation is not Always Advantageous

Welcome to the other side of the myth. While many testers are scared that automated testing might take away their jobs, a few rely on it completely. Both extremes are wrong. Like most things in life, automation works best when applied in moderation. Deciding how much of your project should be automated is the first step to a successful testing process. A few cases when automated testing might be a bad idea:

  1. When drastic changes to the application could quickly make the automated tool obsolete.
  2. When the test cases are manageable through manual testing.

The decision to go for automation is more strategic than technical. The expected ROI, time frame and costs involved in each approach must be thoroughly evaluated.

Choosing the Right Tools

There are many test automation tools available on the market, a number of them open source: Selenium, QTP by HP, SilkTest by Borland, Calabash and Ranorex are some of the better-known and trusted names. Some tools are designed to work with specific browsers and test conditions, while others are built to suit a more general environment. Many open source tools have readily available patches that make them suitable for a wide number of test cases. Choose your tools carefully based on the scope, necessity and requirements of the job.

Building Stable Test Code

Do not make the mistake of being lax when designing your test code; this is a pitfall many testers fall victim to, and it affects the entire testing process. Treat test-code development like any other software development life cycle. A thorough, quality design for your test code will make your testing process a success and can also become a valuable asset for the future. Your decision to automate is going to save you a lot of time and resources; invest some of those savings in building good test code.

Cross Team Code Development and Usage

You can make your test code more robust and versatile by sharing it. Let different development teams create automated test code for different scenarios, building a code bank for your testing projects.

The developer's dream is a program that does their whole job. That remains out of reach, but we can at least create an army of programs that handle all our mundane tasks and leave us ample time to focus on bigger problems. The best part of this practice is that, after some time, you only have to maintain and upgrade your test code rather than start from scratch with every new project, saving still more time and money.

Get in touch with OptimusQA if you want to know more about the automated testing process. Automated testing ends up making life easier for you, the developer, the tester and your customer, but to seize that opportunity you must understand the process carefully. Our professionals are always happy to help you solve your testing needs.

Manual Testing vs Automated Testing: Pros and Cons

There is a large overlap between the capabilities of manual and automated testing, but neither can completely replace the other. Cases where one approach is clearly superior to the other are easy enough to discern, but the tradeoffs between them in the overlap areas require more careful consideration.

Characteristics of Manual Testing

Functional testing at the unit, integration or system level is often done manually. During early development phases, manual tests make a good deal of sense when developers are exploring implementation choices and testers are performing risk analysis and designing subsequent test tactics. Tests at this stage have a short half-life, and the advantages of human cognitive skills and intuition are more valuable than high-maintenance automated test suites.

The period from when the software's user interface begins to emerge until final release is also an excellent time to apply manual testing. User interfaces are frequently in a constant state of flux right up to the final stages of development, and the usability and aesthetics of a UI require human cognitive skills to evaluate. For both of these reasons, it is difficult to utilize automated testing methods here.

Manual testing of software even by non-technical staff is possible at any stage of development, of course. This sort of ad hoc testing often leads to the discovery of unexpected defects or insights into how the software could be improved.

Characteristics of Automated Testing

Automated testing is almost universally applied to build regression testing for the purpose of validating that new code changes have not resulted in new defects once the integration phase has been reached. This is typically the starting point for a formal test automation process, although earlier automation at the unit test level is also possible.

Beyond regression tests, automation is very useful for any kind of highly repetitive tests. Whenever test suites can be divided and run in parallel to save time, automation is also the preferred method over hiring teams of manual testers. In order to run performance or load tests that simulate hundreds or thousands of concurrent users or transactions, automation is essential.

Although there are record/playback tools that generate test scripts via UI interactions, these are not typically the best use of automation because of the fragility of the test scripts if the UI is undergoing even slight changes. It is especially inappropriate if the purpose of the UI test scripts is actually to test layers below the UI. Those should be tested directly via APIs.
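The point about testing below the UI can be sketched in Python; `price_with_discount` is a hypothetical stand-in for an application's business-logic layer. Calling it directly is fast and immune to UI layout changes, whereas a record/playback script exercising the same logic through the screen would break on any cosmetic change:

```python
# Testing below the UI: call the application layer directly
# instead of scripting clicks through the interface.

def price_with_discount(subtotal, coupon):
    # Stand-in for the real business-logic layer: apply a 10%
    # discount when the (hypothetical) coupon code matches.
    return round(subtotal * (0.9 if coupon == "SAVE10" else 1.0), 2)

# Direct API-level checks, no browser involved.
assert price_with_discount(100.0, "SAVE10") == 90.0
assert price_with_discount(100.0, None) == 100.0
print("API-level checks passed")
```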

Summary of Manual vs Automated Testing Advantages and Disadvantages

Manual Testing Pros and Cons

Pros:

  • Exploratory testing is best done manually
  • Manual testing is cost-efficient for tests run only a few times
  • Manual testing of UIs is most effectively done by humans
  • A software’s UX can only be evaluated by human testers

Cons:

  • Relative to the amount of code coverage, manual tests are inefficient
  • Manual testing is prone to human error, so may lead to inconsistent or misleading results
  • Human test resources are more expensive than test machines
  • The simulation of large numbers of users or configurations is nearly impossible to accomplish manually

Automated Testing Pros and Cons

Pros:

  • Far more code is covered in a shorter time using automation
  • It relieves human testers from highly repetitive, low-value test work
  • Automation can allow non-technical staff or customers to run tests
  • Automation enables time savings due to parallelization of test execution
  • Automated tests can simulate thousands of users and configurations easily

Cons:

  • Automation and maintenance of test suites requires significant investment in tools and skills
  • Automation cannot replace human cognitive skills required for evaluating software’s UI or UX
  • It is not appropriate for exploratory testing used to improve the software’s design

No software organization would dare attempt to test their products solely via manual testing these days. Neither can they hope to achieve 100 percent test automation, nor would this be a good idea given the limitations of automation in some areas.

It is the in-between spaces where the effectiveness and costs of both approaches must be carefully evaluated. Such evaluation leads to the most sensible choices relative to the capabilities of the organization. In general, however, informed decisions to automate testing where possible typically lead to better test coverage and improved product quality.

