Types of Performance Testing and the Best Tools for the Job

In the abstract, it’s easy to think of testing a piece of software as a single set of actions. Within the industry, however, it has become common practice to look upon performance testing as a multifaceted task. The process includes:

  • Load testing
  • Stress testing
  • Endurance testing
  • Scalability testing
  • Volume testing
  • Spike testing

Each phase has its distinct requirements and goals, and it’s important to be aware of them before moving ahead with a project. Likewise, it’s prudent to know which tools and processes are most suited to the job.

Load Testing

Load testing is intended to look at performance under two sets of conditions, normal and peak loads. An organization needs to model what it feels is likely to be normal usage of software. For example, a cloud-based photo storage system might expect to handle a certain load during particular parts of the year. Conversely, specific annual increases, such as during the holidays, would also need to be anticipated.

The aim of load testing is not to overload the system. Instead, testing software creates virtual users and has them interact with the application, the goal being to see what performance looks like when an expected load is regularly hitting the system. Bottlenecks have to be identified, and notes passed along to developers to see what can be done.
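The virtual-user idea can be sketched in a few lines of Python using a thread pool. Here `make_request` is a hypothetical stand-in for a real HTTP call, and the user counts and latencies are illustrative, not taken from any particular tool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def make_request(user_id):
    """Stand-in for one virtual user's HTTP request."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate network and server latency
    return time.perf_counter() - start

# Simulate an expected load: 50 concurrent virtual users, 500 requests total.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(make_request, range(500)))

avg = sum(latencies) / len(latencies)
print(f"{len(latencies)} requests, average latency {avg * 1000:.1f} ms")
```

A real load tool replaces the sleep with actual requests and records the latency distribution, not just the average, so spikes and bottlenecks show up.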

Stress Testing

Taking things to the next logical step, we arrive at stress testing. This is a deliberately intense process that’s intended to find out where the breaking points of operational capacity are. It should only be conducted once reasonable load testing efforts have been made and remedies have been implemented during that stage.

The objective is to identify safe usage limits, and it’s particularly important to spot vulnerabilities that may be triggered when the system is operating under stress. If a database implementation suffers a buffer overrun during excessive loads, it’s good to know that in advance.

Endurance Testing

It may seem a fine distinction to make, but the question of how a piece of software will hold up over a long period of load is important. Anyone who has ever watched a desktop program’s memory usage balloon over the course of several hours of normal use can appreciate the difference. Just as issues often occur when a system is overwhelmed during a peak test, similar problems may begin to appear only after a prolonged run of normal usage.
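A crude way to catch this kind of creep early is to measure memory growth across many repeated operations. A minimal Python sketch, with a deliberately leaky handler standing in for the system under test:

```python
import tracemalloc

cache = []  # stands in for state that is appended to but never evicted

def handle_request(payload):
    cache.append(payload * 100)  # deliberate leak: each call retains memory
    return len(payload)

tracemalloc.start()
baseline = tracemalloc.get_traced_memory()[0]
for _ in range(10_000):
    handle_request("x")
growth = tracemalloc.get_traced_memory()[0] - baseline
print(f"Memory grew by roughly {growth:,} bytes over 10,000 requests")
```

An endurance test does the same thing at system scale: sample memory (and handles, connections, disk) at intervals over hours or days and flag any metric that trends upward under a steady load.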

Scalability Testing

Maintaining any project over the course of years will present issues as the user base grows. This calls for a degree of guesswork, as you’ll often find yourself trying to determine how 1,000 users today might grow over five years. This can lead to unanticipated failures if not addressed early on in a non-functional environment. No one wants to see a production database run out of space for entries because the index was built on an INT(11) column and the system ran out of assignable unique IDs.
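The arithmetic behind that failure mode is easy to check. Assuming a hypothetical write rate that doubles each year (both numbers below are made up for illustration), a signed 32-bit key space runs out surprisingly fast:

```python
MAX_SIGNED_32BIT = 2**31 - 1  # 2,147,483,647 assignable IDs

rate = 500_000       # hypothetical inserts per day today
annual_growth = 2.0  # hypothetical: the rate doubles each year

ids_used, days = 0, 0
while ids_used < MAX_SIGNED_32BIT:
    ids_used += rate
    days += 1
    if days % 365 == 0:
        rate *= annual_growth

print(f"Key space exhausted after about {days / 365:.1f} years")
```

Running this style of back-of-the-envelope projection during scalability testing is exactly how you decide whether a key column, partition scheme or quota needs to change before production forces the issue.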

Volume Testing

The throughput of any user base is likely to grow as the popularity of a product increases. To get ahead of these problems, it’s also wise to perform volume testing. The goal in this case is to identify where problems might exist based on the volume of usage. For example, read-write issues with critical interface files, such as settings stored in XML, may create volume limits that can be adjusted by minor tweaks.

Spike Testing

Sudden increases and drops in usage can lead to issues that are difficult to predict. If an entire block of internet addresses loses connectivity, a high-volume site might experience a dropoff that’s both massive and instantaneous. These interruptions may even occur mid-operation. Spike testing allows you to identify specific potential issues and see that the system fails gracefully.

Moving to Performance Testing

Devising a way to engage in testing while developers are still working on a specific generation of software takes a lot of planning. A lot of companies are turning to Agile methodologies in order to handle their testing needs. The goal with Agile processes is to see that orderly efforts are made to advance products into testing, make notes of issues, implement changes and confirm completion of work.

Software performance testing work tends to call for a large degree of automation, and it’s wise to keep this in mind when choosing what to use. Many software development environments, such as the Enterprise editions of Microsoft Visual Studio, come with their own performance testing components. Those looking for an open source solution designed for web applications might wish to check out Apache JMeter. IBM Rational Performance Tester and HP LoadRunner are also popular licensed solutions.

There are several questions to consider. For example, JMeter, by virtue of being open source, doesn’t offer the same sort of scalability that the Visual Studio tools do, especially in terms of buying more virtual user instances to keep increasing the load. If you’re looking for a system that offers cloud-based solutions and simple Agile integration, IBM Rational Performance Tester is a solid option.


If you have questions about getting started with Performance Testing or want to push the toolset further, give us a call. We’re always happy to answer any questions.

Include more than 1000 Values in Oracle IN Clause

In Oracle, we can’t include more than 1,000 values in an “IN” clause.

To get around this limitation, divide the comma-delimited string of values into two or more parts using SUBSTR (or a similar function), place each new string in the IN clause of a separate SQL statement, and then combine the statements with the UNION operator.
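If you are building the query in application code rather than in a report, the same idea generalizes: chunk the value list into groups of at most 1,000 and emit one SELECT per chunk. A Python sketch (the table and column names are placeholders, and real code should use bind variables or a temporary table rather than string interpolation):

```python
def build_chunked_in_query(table, column, values, chunk_size=1000):
    """Build a UNION ALL of SELECTs, each with at most chunk_size IN items."""
    selects = []
    for i in range(0, len(values), chunk_size):
        chunk = values[i:i + chunk_size]
        in_list = ", ".join(str(v) for v in chunk)
        selects.append(f"SELECT * FROM {table} WHERE {column} IN ({in_list})")
    return "\nUNION ALL\n".join(selects)

# 2,500 values -> three SELECTs, each within Oracle's 1,000-item limit.
sql = build_chunked_in_query("TABLE_1", "VALUE_1", list(range(2500)))
print(sql.count("SELECT"), "SELECT statements generated")
```

UNION ALL is used here instead of UNION because the chunks are disjoint, so the duplicate-elimination pass that UNION performs is wasted work.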

Sample Code

First, create two hidden parameters that hold the first and second halves of the selected parameter values. Custom code can be used for this.

Here is sample code to break the parameter into two hidden parameters, each holding half of the original values.

Public Function Group1(ByVal parameter As Parameter) As String
    Dim s As String = ""
    If parameter.IsMultiValue Then
        ' Concatenate the first half of the values, comma separated
        For i As Integer = 0 To parameter.Count \ 2 - 1
            s = s & CStr(parameter.Value(i)) & ","
        Next
    Else
        s = CStr(parameter.Value(0))
    End If
    Return s
End Function

Public Function Group2(ByVal parameter As Parameter) As String
    Dim s As String = ""
    If parameter.IsMultiValue Then
        ' Concatenate the second half of the values, comma separated
        For i As Integer = parameter.Count \ 2 To parameter.Count - 1
            s = s & CStr(parameter.Value(i)) & ","
        Next
    Else
        s = ""
    End If
    Return s
End Function

Sample Query

Now we use two SQL queries that reference the two hidden parameters, joined by the UNION operator:

SELECT * FROM TABLE_1 WHERE VALUE_1 IN (:prmHiddenInspectorFirstGroup1)
UNION
SELECT * FROM TABLE_1 WHERE VALUE_1 IN (:prmHiddenInspectorFirstGroup2)

Obviously, this example breaks when you get to 2000 values, but it is presented here as a sample approach that can be easily modified to suit more advanced situations.

The post Include more than 1000 Values in Oracle IN Clause appeared first on OptimusBI.

Test Scenarios for Credit Card Payment Through a POS Application

Point of sale (POS) applications need to handle a wide variety of payment types: cash, debit cards, credit cards, gift cards and loyalty cards. Credit cards play an especially important role, so below are some of the most important scenarios that should be tested in any credit card payment solution integrated with a POS application.


Credit Card Configuration: Includes configuration of card length, card range and card type (VISA, AMEX, Mastercard, etc.)

Merchant Configuration: A merchant who is authorized to accept card payments needs to be configured

Credit Card Processor Configuration: Credit card processor needs to be configured to process credit card payment (e.g. Mercury Payment System or Lynk, etc.)

Test Scenarios

Capturing Card Details: The following areas should be tested while capturing card details.

  1. Credit card number: Test card numbers with the correct length and range, and card numbers that fall outside the correct length and range.
  2. Expiry date: Test valid expiry dates, invalid expiry dates and invalid date formats.
  3. CVV number: Test valid CVV numbers, mismatched CVV numbers and blank CVV numbers.
  4. AVS code: Test entering AVS details in the configured numeric or alphanumeric formats.
  5. Card reader to capture card details: Test swiping cards on both sides as well as chip reads.
  6. Encryption: Verify that captured card numbers are properly encrypted and decrypted.
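Length and range checks are commonly paired with a Luhn checksum, which all the major card brands use as a first-pass validity test; a minimal sketch:

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in card_number]
    total = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # well-known Visa test number -> True
print(luhn_valid("4111111111111112"))  # one digit off -> False
```

Test data built this way lets you exercise the "correct length and range" and "incorrect" paths without using real cardholder numbers.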

Authorization: Once card details are captured, they are sent to the processor to be authorized. The following areas need to be tested during authorization.

  1. Authorized amount: Test that the correct amount is being authorized.
  2. Receipt printing: Test that merchant and customer copies of the receipts and any vouchers print properly.
  3. Receipt details: Check that the receipts are printing the proper date, time, card details, authorized amount etc…
  4. Response code: Test that the correct response codes are being returned for approved, declined, on hold and all other transactions.

Settlement: Once the payment is done, the following things should be tested:

  1. Reprinting receipt: Test that you can reprint the receipt for a closed transaction.
  2. Void credit card’s payment: Check that you can void a payment before posting it and that after posting a payment voiding is not allowed.
  3. Verifying report: All information regarding each credit card transaction should be reflected in reports. Any adjustments made in closed checks should be reflected in the report.

The post Test Scenarios for Credit Card Payment Through a POS Application appeared first on OptimusQA.

Report Performance Optimization: Data Quality

Query optimization is the first place everyone looks when doing performance optimization on reports, but there is more to performance optimization than just optimizing queries.

Proper use of tables, joins conditions and wide data ranges are all solid techniques for optimizing queries. However, data quality is another factor in report performance.

Let me share an example from an organization we were working with. The company was using the project management module of an ERP system to track hours, billing and expenses. They had more than 500 projects in the system and ran a weekly report, configured against the database of active projects, to provide detailed information on hours consumed, capacity, project status and resource breakdown.

The problem was that the report took an hour to run and required a lot of computing power, even after query optimization.

The report was designed to show information for only the projects active that week, but there was no process for closing and archiving completed projects on a regular basis. As a result, the query had to run through every project still marked active, even though many were actually complete.

The solution to this performance challenge was not query optimization. Improving the process of changing project status and archiving completed projects delivered the performance improvements they needed: better data in the database, and less time and computing power to produce the query output.

After implementing the process, the organization saw a significant decrease in query run time.

The lesson learned here is that report performance optimization isn’t always about query optimization. The people who touch data at any point can also have an effect on performance.

The post Report Performance Optimization: Data Quality appeared first on OptimusBI.

Writing and Coding Calabash-Android Test Cases

Building on my calabash-android setup post, I want to explain how to start writing calabash-android test cases in ‘feature’ files and their corresponding code in ‘step definition’ files.

After you install the Calabash-Android gem, a directory structure is created at the following path:

[code language="text"]C:\Ruby193\lib\ruby\gems\1.9.1\gems\calabash-android-0.4.3[/code]

In the folder “C:\Ruby193\lib\ruby\gems\1.9.1\gems\calabash-android-0.4.3\lib\calabash-android\steps” you should find various .rb files containing pre-written code for common mobile app events such as touching, swiping, scrolling down, entering text into an input field and waiting for events.

Such pre-written steps help a lot in writing test cases in the plain English format of Cucumber.

Feature File and Step Definition

Here is an example of a feature file and step definition content for testing login.

Feature file content

[code language="text"]
Scenario: Login Functionality
Given I am on the Login page
When I enter invalid username and password
And I press Login button
Then Error message should appear stating "Invalid username or password"
[/code]

Step definition file content

[code language="ruby"]
Given(/^I am on the Login page$/) do
  # Wait up to 180 seconds for the login page to load after the app is
  # installed on your mobile device or emulator.
  wait_for(180) { element_exists("button id:") }
end

When(/^I enter invalid username and password$/) do
  # Field number 1 is the username field and field number 2 the password
  # field, assuming they are the first and second input fields on the page.
  steps %{
    Then I enter "" into input field number 1
    Then I enter "" into input field number 2
    Then I wait for 5 seconds
  }
end

When(/^I press Login button$/) do
  # Presses the button labelled "Log in", then waits 5 seconds.
  steps %{
    Then I press the "Log in" button
    Then I wait for 5 seconds
  }
end

Then(/^Error message should appear stating "Invalid username or password"$/) do
  # Checks for the text "Invalid username and password" on the page.
  steps %{
    Then I see "Invalid username and password"
  }
end
[/code]
Now calabash-android will read these step definitions and drive the app accordingly.

In this way calabash-android and its pre-defined libraries make it easy to automate mobile app testing on an emulator or real device.

Common Calabash for Android Issues

Challenge: Check for text that appears twice on the same page but with the second instance not in the viewable area.

Solution: Check that the text appears correctly in the first instance then scroll down the page and check for the second instance.

Feature file content

[code language="text"]
Then I see "abcdef"
Then I scroll down
Then I see "abcdef"
[/code]

Pros of using calabash-android:

  1. It is absolutely free.
  2. As it is based on the Cucumber framework the test cases are very easily created in simple language.
  3. Support for all the basic events and movements on the mobile are present in the libraries.
  4. It has a thriving forum and Google Group: “Calabash Android”.

Cons of using calabash-android:

  1. It takes time to run on an emulator or device as it always installs the app first before starting the first scenario.
  2. If a step fails then the subsequent tests in the scenario are skipped.
  3. It is still in its nascent stage, and many complex scenarios or events are not yet supported. For those, you either have to code your own solution in Ruby or wait for support to arrive on the scene.
  4. You need access to the app’s source code to identify the IDs of the various elements.

Testing a Point of Sales Application

Testing point of sales (POS) systems is something that we have done a lot of here and I wanted to share an overview of what POS testing entails.

A POS is a computer used in place of an old-fashioned cash register: essentially a personal computer connected to a receipt printer, cash drawer, credit/debit card reader and bar code scanner.

Goals of a POS Test Plan

A good POS test plan should ensure the following:

  1. The system makes the process of paying for an item simple and fast. This can be accomplished by scanning the item with a barcode scanner, totaling the purchase and accepting payment easily through credit, debit, gift cards or by cash.
  2. It gives the customer the option to pay part of the bill by cash and part of it by a Credit/Debit/Gift card. The process of paying with multiple options should be simple and fast so that customer does not have to wait to process their orders.
  3. A good POS system allows you to suspend a transaction for a particular customer with one touch in case the customer forgets something while letting the cashier help other customers who are waiting for the checkout. When the customer with the suspended transaction returns, the system should retrieve the transaction and complete it without re-entering all the items.
  4. The system keeps track of what your customers are buying and who they are. It keeps track of what’s selling, at what times of day or week, to which types of customers and by which sales people. The data collected from POS terminals is useful in planning of long term strategies. A good POS System will also have reminder dates for each customer so you can call or e-mail them prior to an anniversary or birthday.

Testing of POS applications

In today’s competitive business, a POS can be a key differentiator for retailers. It needs to meet changing business needs under tight budgets and aggressive timelines, so it is very important for POS applications to be reliable, scalable, easily maintainable, highly secure, and easily customizable by the customer.


  1. Multiple Configurations: Testing a POS application with different settings and configurations is a cumbersome task. Test cases should be designed covering each and every scenario (valid or invalid) in detail, and a significant budget should be put into testing such applications to prevent major issues at the customer end.
  2. Peripheral issues: Peripheral issues may relate to devices connected to a POS, such as barcode scanners, scales, printers, towers and cash drawers.
  3. Complex interfaces: Integration of a POS system involves numerous interconnected systems and third-party elements. Systematic test design techniques are followed to reduce the complexity of the interfaces.
  4. Test Lab Maintenance: Since a significant amount of hardware is normally connected to a POS, a large amount of space is needed to house it. You also have to put some effort and expense into keeping the hardware in good repair.
  5. Upgrades: Rapid technological advancements necessitate frequent hardware and software upgrades, which require more infrastructure.
  6. PCI Compliance: Care must be taken to adopt a PCI-compliant, tamper-proof infrastructure at all POS terminals to protect cardholder data and identity.

POS Test Stack

  1. System Testing: System testing involves testing the complete, integrated system against the requirements specified in the requirement documentation. It aims at finding defects within the system as a whole. It includes testing of complete workflows from ordering an item to settling of checks through different modes of payment and generating reports.
  2. System Integration testing: It aims at evaluating the software in terms of its co-existence with other third party software. For instance, in POS applications third party software like CRM systems are integrated and tested to find major flaws in their interactions.
  3. Business layer testing includes testing of:
    • Hardware Settings and Hardware Units: Testing of the hardware devices connected to a POS application with different settings and configurations.
      Barcode Scanner: Testing of barcode scanners to check whether they read an item’s code correctly and display the price corresponding to that item.
      Printers: Testing of printers to verify that the correct information related to an item is printed on the various receipts generated at the printer. Correct information about the customer, the ordered items and the total should be printed on the receipts.
      Scale: Testing of scales used to weigh items sold by weight. The scale should display the exact quantity of the item placed on it, and the price should reflect the item’s weight.
      Cash Drawer: Testing of the cash drawer to check whether it opens when an order is placed or when a check is settled.
      Tower: Testing of the tower that displays the ordered item and its price, which helps the customer see what he has ordered. It also displays the amount due, the amount tendered and the change given to the customer. A tower is tested against this information to check that it is displayed correctly.
    • Normal and Abnormal Scenarios: This involves testing of POS application with valid and invalid data. This testing is performed keeping in mind that the application does not crash on entering the invalid data.
    • Business Transaction flow tests: These tests involve testing the complete workflow from ordering an item to the settlement of the check.
  4. Reporting Tests: This involves testing of reports. Reports are an important part of the POS application, keeping track of all transactions that take place on a particular day or in a particular week, month or year. These reports need to be verified to check that they show the correct information related to sales, transactions, etc. The reports must also be tested to see that they properly apply any filters.
  5. Regression Testing: Testing performed after new functionality has been introduced into the application, to verify that other parts of the application that were working correctly earlier have not been affected. Regression testing may also be performed after bug fixes.
  6. Performance Testing: Performance of the system should not degrade with the increase in size of data in the database.

Four Common Pitfalls of Test Automation Projects

After helping out with test automation projects for a number of different organizations, I have identified several common pitfalls of test automation projects that I often see.

Avoiding making these common mistakes will certainly accelerate the project momentum and make the automation experience more rewarding. The first two suggestions address the managerial pitfalls while the other two address the technical pitfalls.

Always treat test automation projects as development projects

Often, test automation is seen as “nice-to-have”, so it does not receive the attention required to take the fullest advantage of it. Therefore, test automation projects often slip and the company never enjoys the benefits of test automation.

In some cases, the team even becomes frustrated with test automation and hesitates to invest in test automation again.

To make your test automation project effective, always approach test automation projects as development projects.

  • Assign a product owner or project manager
  • Depending on the development methodology your company deploys, come up with a project plan or project milestones just like you would for any other development project.
  • Dedicate at least one to two people to the test automation team. Since test automation is also a kind of development project, ideally hire or arrange someone who has development experiences to be the lead programmer for the project.
  • Create a design of the test framework or test architecture before coding your automated tests. Without a proper design, the development process and maintenance will be a pain.

Have the product development team collaborate on test automation

In every organization that I have helped, the test automation team is separated from the product development team, and collaboration between the teams is minimal or non-existent. Test automation should be a joint effort of the product development team and the test automation team.

This collaboration helps improve the software’s testability and reduces test automation overhead, easing the architecture design and future maintenance.

Invest time in result reporting

When we talk about test automation, most people focus on test execution and miss the result analysis and reporting part. Test automation is not just about performing the test steps; implementing a good result reporting component helps you find bugs more effectively. Just having “success” and “fail” as the report messages does not do much to help diagnose the actual problems, and testers need to repeat the tests manually to find the bugs.

In addition, create a scale of error severity so that the team can easily identify critical and major bugs. This can also help the automation tool to determine if the error is severe enough to terminate the test. For example, in cases where data correctness is not critical, even if data validation errors are detected by the tool, the test should continue until it hits a critical bug.
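One way to encode such a scale is an ordered severity enum plus a single policy check the harness consults after each error. A hypothetical Python sketch (the level names and threshold are assumptions, not from any specific tool):

```python
from enum import IntEnum

class Severity(IntEnum):
    INFO = 0
    MINOR = 1     # e.g. non-critical data validation errors
    MAJOR = 2
    CRITICAL = 3  # severe enough to terminate the run

def should_abort(errors, threshold=Severity.CRITICAL):
    """Stop the test run only when an error reaches the threshold."""
    return any(e >= threshold for e in errors)

run_log = [Severity.MINOR, Severity.MINOR, Severity.MAJOR]
print(should_abort(run_log))                        # run continues
print(should_abort(run_log + [Severity.CRITICAL]))  # run stops
```

Because the levels are ordered, teams that do consider data correctness critical only need to lower the threshold to `Severity.MINOR` rather than rewrite the policy.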

Implement test set up and tear down procedures

Set up and tear down procedures are usually overlooked in test automation. People tend to forget about them because, when we execute tests manually, set up and tear down are so seamless and subtle that we don’t see a need to implement them for automation.

But in fact, not implementing automation code to handle set up and tear down has a greater impact than most people expect. For example, the browser cache may preserve the state of the website under test, and without clearing the cache you may get incorrect results back. As another example, if there are multiple instances of the same application under test running on the same machine, the tool may have trouble identifying which one to run tests against.
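With Python’s unittest, for instance, set up and tear down are explicit hooks that run around every test. The `FakeBrowser` below is a placeholder for whatever driver your tool actually provides (e.g. a Selenium WebDriver):

```python
import unittest

class FakeBrowser:
    """Placeholder for a real driver such as a Selenium WebDriver."""
    def __init__(self):
        self.cache = {"stale": "state"}
        self.open = True
    def clear_cache(self):
        self.cache = {}
    def quit(self):
        self.open = False

class LoginPageTest(unittest.TestCase):
    def setUp(self):
        # Fresh browser with an empty cache before every test.
        self.browser = FakeBrowser()
        self.browser.clear_cache()

    def tearDown(self):
        # Always release the instance so later tests can't grab the wrong one.
        self.browser.quit()

    def test_starts_with_clean_state(self):
        self.assertEqual(self.browser.cache, {})

suite = unittest.TestLoader().loadTestsFromTestCase(LoginPageTest)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point is that the clean-state guarantee lives in the framework hooks, not in each test body, so every test starts from a known environment even when an earlier test fails.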

What to Include in a Software Testing Traceability Matrix

A traceability matrix is an essential tool for any thorough software tester. It should be referenced throughout the entire software development life cycle to bring transparency and completeness to software testing efforts.

Testing Requirements Traceability Matrix

In simple words, a testing requirements traceability matrix is a document that traces and maps user requirements, usually requirement IDs from a requirement specification document, with the test case IDs. The purpose of this document is to make sure that all the requirements are covered in test cases so that nothing is missed.

The traceability matrix document is prepared to show clients that the coverage is complete. It usually includes the following columns: requirement, baseline document reference number, test case/condition and defect/bug ID. Using this document, a person can trace a requirement through to its defect IDs.

Adding a few more columns to the traceability matrix gives you a good test case coverage checklist.

Types of Traceability Matrices

  1. Forward Traceability: Mapping of requirements to test cases.
  2. Backward Traceability: Mapping of test cases to requirements.
  3. Bi-Directional Traceability:  A good example of a bi-directional traceability matrix used in software testing is the references from test cases to basis documentation and vice versa.

Forward traceability helps us see which requirements are covered by which test cases, or whether a requirement is covered at all.

A forward traceability matrix ensures that we are building the right product.

A backward traceability matrix helps us see which test cases are mapped against which requirements.

This helps us identify test cases that do not trace to any coverage item. If a test case does not trace to a coverage item, it is either not required and should be removed, or a specification such as a requirement should be added. Backward traceability is also very helpful if you want to identify how many requirements a particular test case covers.

A backward traceability matrix ensures that we are building the product right.

A bi-directional traceability matrix contains both forward and backward traceability.
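All three views can be derived from a single mapping. A small Python sketch with made-up requirement and test case IDs, showing the forward map, the backward map obtained by inverting it, and an uncovered-requirement check:

```python
# Forward traceability: requirement -> test cases that cover it.
forward = {
    "SR-1.1": ["TC 001"],
    "SR-1.2": ["TC 001"],
    "SR-1.3": [],  # an uncovered requirement
}

# Backward traceability: invert the forward map.
backward = {}
for req, tcs in forward.items():
    for tc in tcs:
        backward.setdefault(tc, []).append(req)

uncovered = [req for req, tcs in forward.items() if not tcs]
print("Uncovered requirements:", uncovered)
print("TC 001 covers:", backward["TC 001"])
```

Keeping the forward map as the single source of truth and deriving the backward view from it avoids the two matrices drifting out of sync.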

Why use traceability matrices?

The traceability matrices are the answer to the following questions when testing any software project:

  • How is it feasible to ensure, for each phase of the SDLC, that I have correctly accounted for all the customer’s needs?
  • How can I certify that the final software product meets the customer’s needs? It lets us make sure requirements are captured in test cases.

Disadvantages of not using traceability matrices include the following:

  • More defects in production due to poor or unknown test coverage.
  • Discovering bugs later in the development cycle resulting in more expensive fixes.
  • Difficulties planning and tracking projects.
  • Misunderstandings between different teams over project dependencies, delays, etc…

Benefits of using traceability matrices include the following:

  • Making it obvious to the client that the software is being developed as required.
  • Ensuring that all requirements are included in the test cases.
  • Ensuring that developers are not creating features that no one has requested.
  • Making it easy to identify missing functionalities.
  • Making it easy to find out which test cases need updating if there are change requests.

How to create a traceability matrix

  1. Open Excel to create the traceability matrix.
  2. Define the following columns:
    1. Use case ID / requirement ID.
    2. Use case / requirement description.
    3. One column for each test case.
  3. Identify all the testable requirements at a granular level from the requirement document. Typical requirements you need to capture are as follows:
    1. Use cases (all the flows are captured)
    2. Error Messages
    3. Business rules
    4. Functional rules
    5. Software requirement specifications
    6. Functional requirement specifications
  4. Identify all the test scenarios and test flows.
  5. Map requirement IDs to the test cases. Assume (as per the table below) that test case “TC 001” is one flow or scenario, and that it covers SR-1.1 and SR-1.2.
  6. Now, from the table below, you can easily identify which test cases cover which requirements and which test cases need to be updated if there are any change requests.
Requirement ID | Requirement Description                                | TC 001 | TC 002 | TC 003
SR-1.1         | User should be able to do this.                        |   x    |        |
SR-1.2         | User should be able to do that.                        |   x    |        |
SR-1.3         | On clicking this, the following message should appear. |        |   x    |
SR-1.4         |                                                        |        |   x    |
SR-1.5         |                                                        |        |   x    |   x
SR-1.6         |                                                        |        |        |   x
SR-1.7         |                                                        |        |        |   x

This is a very basic traceability matrix format. You can add more columns and make it more effective. Here are some columns you should consider adding:

  • ID
  • Assoc ID
  • Technical Assumptions
  • Customer Needs
  • Functional Requirement
  • Status
  • Architectural/Design Document
  • Technical Specification
  • System Component
  • Software Module
  • Test Case Number
  • Tested In
  • Implemented In
  • Verification
  • Additional Comments

Here is another simple forward traceability matrix that we used on a recent project in Excel format.

Testing Infrastructure in the Cloud: Is it Right for Your Business?


In the last six months, many of our clients have started asking about leveraging cloud-based testing infrastructure. Intrigued by this development, we have now used cloud infrastructure on a few testing projects.

Some key benefits of this approach include:

  • Pay as you go: Clients need testing infrastructure only for the limited duration of a project, and cloud pricing suits that.
  • Scalability: Clients need infrastructure at a certain scale to test non-functional aspects like scalability and availability. Cloud infrastructure’s scalability fits this need perfectly.
  • Capital vs. operating expense: With cloud infrastructure, it is easier to expense costs to a specific initiative, which brings more accountability.
  • Cost: Overall, our experience shows that cloud testing tends to be cost-efficient. Part of that efficiency comes from lower overhead and reduced IT staff costs.

So with all of these advantages, why isn’t everyone adopting cloud testing?

First, I would say that the trend towards cloud testing, and cloud infrastructure in general, is definitely strong and growing. Clients looking to replace infrastructure are considering the cloud as one of their top two options.

However, aspects like lack of knowledge, fear of uncertainty and organizational dynamics are holding many companies back from embracing the cloud.

Despite these holdups, the case for cloud infrastructure is quite good, and we expect to see much higher traction in the next few quarters.

Is the cloud ready for testing infrastructure?

Our view is that the cloud is not yet ready for 100 percent of testing, particularly in areas where you need to capture network issues in a real environment. However, with new tools launching almost weekly, and given the cloud’s suitability for simulating scenarios, cloud testing is rapidly maturing.

So, to summarize: if you are considering either setting up testing infrastructure or replacing your current infrastructure, we encourage you to evaluate some of the cloud-based options.

Scrum-y Tools

As a QA Professional, I’m always looking to be in the know about tools and techniques companies use to deliver software.  I’ve worked in companies that have used Waterfall, Agile, and a mix and match of everything in between. With that, I’ve been exposed to a variety of tools to track testing and Scrum project progress.

It’s almost a given that everyone has a grip on the tools they’re using. However, no matter the company or the tool, sometimes the tools just don’t quite fit what we’re looking for. Below are three tools I’ve worked with, along with a quick highlight of the pros and cons of each.

JIRA:
So far, the tool I’ve used the most is JIRA.  JIRA tracks bugs, tasks, and other issues and comes with a set of reports for upper management.  JIRA is especially useful for tracking medium to complex projects, but may be overkill for organizations that handle smaller projects.  The Greenhopper add-on is especially useful for Agile Project Management.

Pros: The drag and drop functionality for tickets is handy, easy to pull stats from for reporting, very visual interface.

Cons: It can be quite costly to use JIRA depending on the number of users. All web-based; therefore, access to information depends on internet (or internal) connectivity.

OnTime:


Axosoft’s OnTime is great for organizations where client feedback drives the product backlog.  Because OnTime is so customizable, it’s best for the organization to have their internal SDLC process in place before trying to implement this tool; otherwise, it can get hairy trying to put the pieces together while configuring OnTime.

Pros: Client issues can easily be ported into defect tickets, can define work flows to move tickets from different departments, intuitive to use.

Cons: Unable to add inline attachments to tickets, web interface has less functionality than the downloadable client.

Redmine:


Organizations that have plenty of different projects on the go can use Redmine to track issues, tickets, and time spent on each task.  Redmine can also act as your company’s central repository for documents and other project artifacts.

Pros: Being able to move tickets from project to project, facilitates team collaboration, easily searchable.

Cons: UI is not as intuitive as other tools, needs more built in reports and dashboards.

What tools have you used to track projects, and what do you like or not like about them?