
Test Automation in Agile

Although agile development and automated testing have each come into wider use more or less independently with each passing year, they seem to have been made for each other from the start. Both derive their advantages from the same drive to increase overall software production efficiency.

Both work best when there is a high degree of collaboration between development and testing, although they offer significant benefits to all project stakeholders. Heavy use of both is common wherever organizations are growing their continuous integration and continuous delivery capabilities.

Test Automation in Agile versus Traditional Waterfall

There is certainly no reason that teams employing waterfall methods cannot take advantage of test automation too. Typically, however, automation tends to be a lower priority in those scenarios because of the ingrained “throw it over the wall” mentality. Such a back-loaded scenario is unfortunate because automation, especially test-driven coding, can be used quite effectively during development phases.

In agile, however, the questionable “luxury” of developers shedding testing responsibility is simply anathema to a process of two-week sprints in which planning, designing, coding, testing and deployment all happen nearly at once. Without thorough development and testing automation, the productivity and quality of agile cycles would consistently fail to meet project goals.

When Automating Tests, Choose Wisely

Despite the brevity of agile development cycles, there can be a tendency to over-automate testing. This is where careful planning and in-cycle flexibility are both necessary. Tests will naturally be categorized by type, whether functional, white box, black box, performance and so on. Within categories, however, individual test tasks need to be sorted by how likely they are to be reused repeatedly in order to optimize automation.

Any test that will be repeated more than twice, especially if it appears it will carry over to the next scrum, is a prime candidate for automation. That pretty much eliminates automation of exploratory testing and perhaps some boundary testing if the boundaries are still in flux and are not easily parameterized. On the other hand, even one-off tests might be amenable to automation in the sense that they are ongoing placeholders within an automation framework test suite for future software revisions.
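As an illustration of how easily parameterized boundaries lend themselves to automation, here is a minimal sketch using pytest. The validate_age function and its limits are hypothetical, invented purely for this example; the point is that once boundaries are stable, a table of boundary values becomes a cheap, reusable automated test.

```python
# Hypothetical example: a stable, parameterized boundary check is cheap to
# automate and is reused in every subsequent sprint's regression run.
import pytest


def validate_age(age: int) -> bool:
    """Illustrative production function: accepts ages 18 through 120."""
    return 18 <= age <= 120


@pytest.mark.parametrize(
    "age, expected",
    [
        (17, False),   # just below the lower boundary
        (18, True),    # lower boundary
        (120, True),   # upper boundary
        (121, False),  # just above the upper boundary
    ],
)
def test_age_boundaries(age, expected):
    assert validate_age(age) is expected
```

A test like this costs a few minutes to write and keeps paying for itself every time the boundary values carry over into the next sprint.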

Avoid Automating for Automation’s Sake

With more sophisticated test automation tools and frameworks, it is uncannily easy to lose sight of the forest for the trees. This is especially true of homegrown test frameworks, where the framework development effort might rival that of the applications it is meant to test. It pays at all times to bear in mind that meaningful, reusable tests must be the top priority.

An additional trap is not paying enough attention to the ebb and flow of value within your test suites. They naturally become cluttered with marginally valuable test scripts that have lost relevance and eat up undue execution time. That is why it is important to set clear criteria for what does and does not get automated, and to review the test code base regularly.

Staying One Step Ahead

Due to the rapid pace of agile development, anything that gains you a leg up will pay dividends. Crisp, detailed planning is one key to getting ahead of the game, but in addition you should introduce testing and its automation as early in the development cycle as possible.

The use of pair programming with one developer and one tester simultaneously working on the same code is an innovative practice in this regard. It is especially effective when combined with test-driven development in which the tester first writes the unit tests based on the specification and the developer follows with application code that turns each test “green.”
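A minimal sketch of that red/green workflow, assuming a hypothetical apply_discount requirement (the function name and its rules are invented for illustration): the tester writes the failing tests first, and the developer then writes just enough code to make them pass. Both halves are shown in one file only for brevity.

```python
import pytest

# Step 1 (tester): written first from the specification; these fail ("red")
# until the developer supplies apply_discount below ("green").
def test_ten_percent_off():
    assert apply_discount(200.0, 10) == 180.0


def test_rejects_percent_over_100():
    with pytest.raises(ValueError):
        apply_discount(200.0, 150)


# Step 2 (developer): just enough application code to turn the tests green.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)
```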

In fact, the underlying concept of first creating acceptance criteria as test code and then programming the software to make those tests pass can be applied at most points throughout the development cycle. These “live” requirements should be captured in an automated test environment that all project stakeholders can access and run, further increasing collaboration.
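As a hedged illustration, here is what such a “live” requirement might look like for a hypothetical shopping-cart story (all names invented for this sketch): the acceptance criterion is an executable test that any stakeholder can run, and the feature is done when it passes.

```python
# "Live" requirement sketch (hypothetical feature): the user story
# "a customer can remove an item from the cart and the total updates"
# is captured as an executable acceptance test rather than a static document.
class Cart:
    """Minimal stand-in for the application code under development."""

    def __init__(self):
        self._items = {}  # item name -> price

    def add(self, name: str, price: float) -> None:
        self._items[name] = price

    def remove(self, name: str) -> None:
        self._items.pop(name, None)

    @property
    def total(self) -> float:
        return sum(self._items.values())


def test_removing_an_item_updates_the_total():
    # Given a cart containing two items
    cart = Cart()
    cart.add("book", 20.0)
    cart.add("pen", 5.0)
    # When the customer removes one item
    cart.remove("pen")
    # Then the total reflects only the remaining item
    assert cart.total == 20.0
```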

Conclusion

Test automation is a key tool for agile development project success. As development organizations evolve from agile development toward continuous integration and continuous deployment, build and test automation becomes ever more critical.

Test automation in an agile development environment, however, has to be finely tuned to a greater degree than when employed in waterfall environments. Precise planning, a shift-left of testing responsibilities, keen maintenance policies and a sharp weather eye for when and when not to use automation are the most important elements for extracting the highest value from test automation.

Types of Software Testing


Types of software testing and their position in the testing cycle.

With countless types of software testing, it can be daunting to figure out what you should focus on and when. Our experience has taught us that focusing on the right testing at the right time saves both time and money.

This diagram illustrates the software testing cycle. It starts with very specific tests on core components, then tests those components as a whole and works its way towards user acceptance and beta testing.

Following this testing cycle increases efficiency as each new test builds upon previous tests. Automated software testing improves efficiency further by greatly reducing the effort of regression testing.

In an agile software testing environment, this testing cycle would be broken down into smaller cycles and have a higher dependence on regression testing.

Here is a brief description of the most common types of software testing:

1. Acceptance testing – Test cases are created from user stories. This type of testing verifies that the system meets the customer’s specified requirements and determines whether to accept the application.

2. Alpha testing – Performed in-house, often in a simulated or virtual user environment, at the end of software development. Some minor design changes may still result from this testing.

3. Beta testing – The final testing before an application is released commercially; in this second phase of pre-release testing, a sample of the intended audience tries the product out.

4. Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality of the software.

5. Comparison testing – Comparison of product strengths and weaknesses with previous versions or with other similar products.

6. Compatibility testing – Testing how well software performs in particular hardware, software, operating system, and network environments, and in different combinations of these.

7. End-to-end testing – Similar to system testing, this involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

8. Functional testing – This type of testing ignores the internal parts and focuses on whether the output meets the requirements. It is black-box testing geared to the functional requirements of an application.

9. Incremental integration testing – A bottom-up approach in which an application is tested continuously as new functionality is added. Application functionality and modules should be independent enough to test separately. This testing is done by programmers or by testers.

10. Install/uninstall testing – Testing of full, partial, and upgrade install/uninstall processes on different operating systems and under different hardware and software environments.

11. Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

12. Load testing – Performance testing that checks software behavior under load, such as testing a web site under a range of loads to determine the point at which the system’s response time degrades or fails (a minimal sketch follows this list).

13. Performance testing – Term often used interchangeably with “stress” and “load” testing. This utilizes different performance and load tools and checks whether the system meets performance requirements.

14. Recovery testing – Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

15. Regression testing – Testing the application as a whole after modifications to any module or functionality. It is difficult to cover the entire system in regression testing, so automation tools are typically used.

16. Sanity testing – Testing to determine whether a new software version is performing well enough to be accepted into a major testing effort. If the application crashes during initial use, the system is not stable enough for further testing.

17. Security testing – Testing to determine whether the system is vulnerable to hackers and to evaluate how well the system protects against unauthorized internal or external access.

18. Stress testing – The system is stressed beyond its specifications to check how and when it fails. Testing is performed under heavy load, e.g. inputs that exceed storage capacity, complex database queries, or continuous input to the system or database.

19. System testing – The entire system is tested against the requirements. This is black-box testing based on the overall requirements specification and covers all combined parts of the system.

20. Unit testing – Testing of individual software components or modules. This is typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Testing may require developing test driver modules or test harnesses.

21. Usability testing – A user-friendliness check. Application flow is tested to determine whether a new user can understand the application easily, whether proper help is available at points where the user could become stuck, and whether system navigation is clear.

22. White box testing – Also known as glass box testing, this is based on knowledge of the internal logic of an application’s code. Tests cover code statements, branches, paths, and conditions.
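To make item 12 concrete (as referenced above), here is a minimal load-testing sketch using only the Python standard library. The target URL, concurrency levels, and single latency metric are illustrative assumptions; real load testing would typically use a dedicated tool such as JMeter or Locust and track error rates and latency percentiles as well.

```python
# Hypothetical load-test sketch: measure how average response time changes
# as the number of concurrent requests grows. All values are illustrative.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://localhost:8000/"  # assumed test deployment


def timed_request(_: int) -> float:
    """Issue one GET request and return its latency in seconds."""
    start = time.perf_counter()
    with urlopen(TARGET_URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start


def run_load_step(concurrency: int) -> float:
    """Run `concurrency` simultaneous requests and return the average latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed_request, range(concurrency)))
    return sum(latencies) / len(latencies)


if __name__ == "__main__":
    for concurrency in (1, 10, 50, 100):  # increasing load steps
        avg = run_load_step(concurrency)
        print(f"{concurrency:>4} concurrent requests: avg {avg:.3f}s")
```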

At OptimusQA, we provide clients with software testing services. For more information on our services, please contact us at rupmeet.singh@optimusinfo.com

 

Integrating Software Testing and Development


Don’t flush money down the drain. Coordinate your development and testing cycles.

In any software product, quality is commonly equated with a lack of bugs, expressed as the software’s reliability or defect rate.

To survive in a hyper-competitive environment, many software organizations are now focused on integrating software testing and development through a Quality of Service based approach to the development and testing process.

Benefits of the Quality of Service approach:

  • Save time and money by identifying defects early
  • Identify and catalog reusable modules and components
  • Avoid or reduce development downtime
  • Provide better customer service by building a better application
  • Build a list of desired modifications and enhancements for later versions
  • Identify areas where programmers and developers need training
  • Enhance user satisfaction based on their requirements

How does one ensure that the development and testing process is both cost and time effective?

Optimization is the Key!

The product development life cycle consists of multiple complex phases. The output of each phase is the input to the next, and every deliverable has certain quality attributes; therefore, the testing process holds the key to the product’s success in the market.

Since time is always in short supply, optimized testing is paramount. As shown in Figure 1 below, every phase in the software development process is accompanied by elements of software testing.

For example, in the Detailed Requirements phase of the development process, one should also design a testing strategy and a test analysis and design plan. These should be derived from user interviews and followed by a requirements testability review. There are many types of software testing that help companies get through the testing process.


Figure 1 – Software development and testing processes done in parallel.

Ultimately, the purpose of implementing the testing process alongside the development process is to save the team effort, time and money. During this process, it is important to identify someone to drive the implementation, identify the scope, define an implementation plan and monitor the rollout.

All of this is usually best covered with a small software testing pilot project before any organization-wide rollout is considered. This enables a company to test the approach on a small scale, so adjustments can be made where necessary before rolling out the process across the company.

At OptimusQA, we help companies with their software testing needs. For more information on our services, please contact us at rupmeet.singh@optimusinfo.com

7 Must Read Software Testing Books

Whether you’re new to software testing or have been doing it for a lifetime, here is a list of 7 books so you can learn from the successes and failures of others.

  1. How We Test Software at Microsoft: Written by three lead test architects at Microsoft, this book explains practical testing solutions. Using insight gained from their combined experience, they explain how the software testing process works at Microsoft. If you’re using Microsoft technologies for development and testing (i.e., .NET), this is a must-read.
  2. Agile Testing: A Practical Guide for Testers and Agile Teams: If you’re interested in learning how to apply agile testing practices to your testing process, this book is the definitive guide. From effective hiring to best practices, this book is your how-to guide for adapting to an agile world.
  3. Rapid Development: Taming Wild Software Schedules: Software development cycles are shortening while demands steadily increase. In this wild world of software development, a Microsoft consultant offers his advice for keeping development projects on the right track. Although this book is focused on development, it highlights the classic mistakes teams face on every project.
  4. Perfect Software: And Other Illusions about Testing: This book makes sense of common misconceptions in software testing and explains methods of improving communication between the entire product development team. With a mix of wit and experience, this book offers real life scenarios and how to deal with them. This non-technical book is a good read for anyone on a software development project.
  5. User Stories Applied: For Agile Software Development: This user-centric guide to development explains how to properly write user stories to aid development. With a focus on agile software development and extreme programming, this book offers useful tips that are immediately applicable.
  6. The Art of Unit Testing: With Examples in .NET: With a clear focus on unit testing .NET applications, this passionate book explains successes and failures experienced while testing both greenfield and brownfield code. Read this book if you wish to learn the best practices of unit testing.
  7. Agile Java(TM): Crafting Code with Test-Driven Development: This book is designed to take a developer from hearing oral requirements, to developing tests, to writing code that satisfies those requirements. It introduces test-driven Java development while teaching solid fundamentals.

Have any other books to recommend? Please share them in the comments below.

(image credit: brewbooks)

Software Testing at Facebook

Facebook is famous for its rapid development and frequent releases. That agile software development cycle requires diligent software testing and bug tracking. Steven Grimm, a Test Engineering Tech Lead at Facebook, has shared his experience using a combination of manual and automated tools while developing at Facebook.

In order to manage the steady stream of updates, Facebook has implemented a hybrid system of automated and manual tests. This system is built from several tools (mentioned below) that interface with their bug tracking software. By integrating testing with their bug tracking software, they can efficiently identify which tests are failing and which engineers can resolve the issues. The suite will also run tests on edited code and include the test results when a patch is submitted for review (a generic sketch of this idea follows the tool list below).

Software testing tools used at Facebook:

  • For PHP they use PHPUnit and have developed over 2200 test cases. They are run both manually by developers and automatically by the back-end system.
  • For browser-based testing they use the Watir Framework. They use Watir to both double check that backend rules have been followed and to semi-automate tests that include manual form entry.
  • For JavaScript testing they have started using JSSpec, but it’s still in the early stages.
  • For backend testing they use a combination of open source and internally built C++ frameworks. Backend tests are built right into the build process and report errors automatically.
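Facebook’s internal tooling is not described in detail here, so the following is only a generic sketch of the patch-gating idea mentioned above: a hypothetical pre-submit script that finds the tests related to changed files, runs them, and fails (blocking submission) if any test fails. The file-naming convention, branch name, and use of git and pytest are all assumptions for illustration, not Facebook’s actual setup.

```python
#!/usr/bin/env python3
"""Hypothetical pre-submit hook: run the tests related to changed files
and emit a summary suitable for attaching to a code review."""
import subprocess
import sys
from pathlib import Path


def changed_files(base: str = "origin/main") -> list[str]:
    """List files modified relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line.strip()]


def related_tests(files: list[str]) -> list[str]:
    """Naive mapping (assumed convention): src/foo.py -> tests/test_foo.py."""
    tests = []
    for f in files:
        candidate = Path("tests") / f"test_{Path(f).stem}.py"
        if candidate.exists():
            tests.append(str(candidate))
    return sorted(set(tests))


def main() -> int:
    tests = related_tests(changed_files())
    if not tests:
        print("No related tests found; nothing to run.")
        return 0
    result = subprocess.run([sys.executable, "-m", "pytest", "-q", *tests])
    print(f"Ran {len(tests)} test module(s); exit code {result.returncode}")
    return result.returncode  # non-zero would block the patch submission


if __name__ == "__main__":
    sys.exit(main())
```

Wiring a script like this into the review tool, so the output is attached to the patch, is what turns it from a local convenience into the kind of shared signal described above.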

Facebook is very proud of its frequent release cycle and takes a delay in pushing a release to production very seriously; however, the engineer in charge of the release may hold it back if there are issues.

Facebook’s other major concern is protecting users’ privacy, so they have redundant tests at both the back end and the front end. Even with this redundant testing, Facebook has still run into problems protecting users’ privacy. We can learn a lot from Facebook’s rapid development and testing cycle, so thank you to Steven Grimm for sharing his experiences.

Agile Software Testing in Vancouver


We have been steadily ramping up our agile software testing services in Vancouver and are here to answer a few frequently asked questions.

1. What is agile software testing?

To understand what agile software testing is, it’s best to compare waterfall software development to agile software development. The waterfall methodology is a sequential software development cycle that designs and develops an entire application in one project.

Agile software development is more iterative. Software developers deliver early versions of the software to end users so that users can start working with the software, generating feedback, and making feature requests. This way, developers receive user feedback before deciding which features are most important.

When developing applications using the traditional waterfall methodology, the software is tested in its entirety after the development phase is complete. In the agile software development process, the software must be tested after each update. Due to the high frequency of updates, agile software testing can be very time consuming.

2. How can automation help with agile software testing?

Since agile software development makes iterative changes to software, much of the functionality will remain the same. Automation can primarily be used to test the functionality that is not affected by the latest update. That way software testers can manually test the new features while automation can continue to confirm that the rest of the software’s functionality still works correctly.
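One common way to implement that split (a sketch, not something prescribed by this article) is to tag automated checks of existing behavior with a custom pytest marker so they run on every build, while testers spend manual effort on the new features. The marker name and the login example below are invented for illustration; the marker would be registered in pytest.ini to avoid an unknown-marker warning.

```python
import pytest


def login(user: str, password: str) -> bool:
    """Hypothetical stand-in for existing, unchanged application code."""
    return user == "alice" and password == "correct-password"


@pytest.mark.regression
def test_existing_login_still_works():
    # Automated check of functionality untouched by the current iteration.
    assert login("alice", "correct-password") is True


@pytest.mark.regression
def test_login_rejects_bad_password():
    assert login("alice", "wrong-password") is False
```

CI can then run pytest -m regression after every update to confirm the unchanged functionality still works, while the newest features get manual, exploratory attention until they are stable enough to join the regression suite.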

3. What are the best ways to automate agile software testing?

The best methodology for automation depends on the situation. In certain situations, white box or grey box test automation may be the right approach. However, in situations such as regression testing, black box test automation fits well. The best approach is to understand the specific challenge, determine whether automation is worthwhile, and, if so, select the appropriate method.

4. Who is best to conduct the testing?

Business analysts and end users are very good at testing new functionality to ensure it meets the business requirements it was designed for. QA testers can test the existing functionality to make sure it has not been affected by the recent updates.

On occasion, a software update meant to add a new feature can accidentally break pre-existing modules in the application. End users are often too busy with day-to-day activities to test an application in its entirety, so an efficient agile software development process will have end users focused on new features while QA testers can confirm no other functionality is lost.

5. How are testing artifacts affected by agile software testing?

Traditional software development requires all documentation to be completed by the end of the project, after which it remains largely static. Agile software development requires documentation to be frequently updated to reflect changes in the application.

The documentation affected most by agile software development consists of user manuals, test cases, and test scripts. In order to keep documentation current, it must be reviewed and updated after each update, which can become very cumbersome when done inefficiently.

6. What are the common pitfalls of agile software testing?

The most common pitfall is trying to pair the waterfall software development methodology with agile software testing. Once methodologies are properly aligned, the second most common pitfall is a failure to engage testers and developers together in the early stages of the project.

7. How can you measure success or effectiveness?

Success criteria are much the same as for testing in general: identify defects early in the development cycle to minimize the cost and time of making corrections. Another criterion specific to agile software testing is the reusability of testing artifacts. Since documentation must be frequently updated and maintained, a well-designed framework and process will improve efficiency.

Visit OptimusQA to learn more about our agile software testing methods and results.

(Photo credit: Guilherme Tavares)

Agile Software Testing and Automation

Automation can be important to agile software testing because the development process is much more iterative and requires frequent testing. A waterfall development cycle only requires major testing at the end of the development process, whereas agile development requires testing throughout.

What’s the difference between waterfall and agile software development?

Below is a typical example of the waterfall methodology. As illustrated, the process leads developers through a series of sequential steps with major testing done towards the end.

Waterfall Software Development Cycle


The agile methodology is much different from the waterfall methodology. Agile development requires very active project management. The process begins with planning, but then instead of building an entire application at once, agile development builds each functional component separately in what are called Sprints. The illustration below shows how the process constantly involves business analysts, programmers, and end users.

Agile Software Development


Agile software development is a flexible process that rapidly builds each piece of a software project, adding functionality every few weeks until the entire project is complete. This kind of rapid development has the flexibility to meet the demands of evolving business requirements.

Agile development teams are often broken into small groups that can work on individual pieces of the overall project. Agile development requires organized communication between business analysts, programmers, and end users in order to ensure all business requirements are being met.

For example

For example, imagine two companies building an email program. Company A uses the waterfall methodology, so they create a requirements document of what their email client will need to do. They list everything from sending and receiving mail, to managing an address book, to scheduling appointments. Then they begin development and build the entire email program over a few months. Once the entire program is built, they begin testing. Often using a combination of manual and automated testing, they go through the program with a fine-toothed comb and identify issues. Then they go back to the program and remedy all the bugs before releasing it to end users.

Company B uses an agile development process, so they get a team started right away on basic functionality. In the first week they build the framework and test the user interface. The following week they create the ability to receive emails. The week after that they build the ability to send emails. This continues week after week. They build a small piece of the program, test it, and release it to end users. Their team has already released the program to the client and is adding functionality every week. This gives Company B the ability to remain flexible while responding quickly to user feedback.

Where does testing come in?

In the waterfall development methodology, testing is at the end. Once the project is complete, the entire program is tested using a combination of automated and manual testing. A large list of test cases is worked through to ensure proper functionality of the entire program.

In the agile development methodology, testing is conducted throughout the process, often on a daily basis. That is why agile software development requires more automated testing: the testing is much more frequent. It would be nearly impossible, or at least inefficient, to manually test an entire program on a daily or weekly basis throughout the development process. Automated testing makes it feasible because it can run through a large series of test cases rapidly, efficiently, and, most importantly, effectively.