Traditional vs Agile Software Development

Modern software development life cycle methodologies can be divided into two broad types: the traditional process and the agile process. In this post, we will look at what each of these processes is and then do a comparative analysis between them.

A software system is built so that it can perform complex tasks and computations at the behest of the user. Building software requires rigorous attention to detail and a general guiding plan. The traditional and agile processes are, more than anything else, manifestos for software development ideologies. They can be further subdivided into more specific processes that form the basic plan of each unique software development life cycle. In the evolution of software development processes, the agile methodology post-dates the traditional one.

Traditional Software Development

Traditional methodologies are characterized by a sequential series of steps: requirement definition, planning, building, testing and deployment. First, the client requirements are carefully documented to the fullest extent. Then the general architecture of the software is visualized and the actual coding commences. Then come the various types of testing and the final deployment. The basic idea is to visualize the finished project in detail before building starts, and then work one's way through to that visualized structure.

Agile Software Development

As its name suggests, the agile method of developing software is a lot less rigid than the former. The crux here is incremental and iterative development, where phases of the process are revisited time and again. Agile developers recognize that software is not a large monolithic structure but an organic entity with complex moving parts interacting with each other. Thus, they give more importance to adaptability and constant compatibility testing.

Traditional vs. Agile Methodologies

Customer Involvement

While traditional methodologies require the user to provide a detailed idea of the exact requirements for the intended software up front, agile developers are more flexible thanks to their iterative style of work. With agile development, the user is constantly in the loop, suggesting improvements and reviewing every phase. This increased customer involvement works on two levels: it makes incremental changes easy, as opposed to traditional development, where chunks of the system might have to be dismantled to improve a small part; and it drastically increases customer satisfaction.


Development Costs

This is a key issue. Reworking costs in particular, which tend to shoot up after a bout of testing, are much lower with agile development than with traditional methods. Since testing is not compartmentalized in agile development, a potential problem can be identified and swiftly dealt with. This, in turn, lowers the corrective maintenance costs for the software.

Development Flexibility

Again, agile wins hands down in this category. In traditional systems, the roles of coder and business analyst are bifurcated: the architecture stems from the initial customer requirements collected and analyzed by the business analysts, and is then implemented by the coders. With agile, since customer interaction is so extensive at every stage of development, and since the developers are the ones handling that interaction, the amount of architectural, functional and fiscal flexibility afforded to the project is quite high.


Documentation

One of the few areas where the agile methodology falls short compared to the traditional one is the documentation process. Traditional development prides itself on stringent documentation and review of every step on the way to deployment. The documentation process is also made easier by the unidirectionality of the workflow. With agile, the primary medium of documentation and review is the code itself, with annotations and comments added by developers. This can become a bit of an issue in terms of communicability.

System Size

Small to medium sized systems are perfect for agile development, since they are more compatible with the elastic method of attack that agile brings to the table. Large software systems resist iterative change in some respects, especially where system design is concerned. In this case, one can either go the traditional way, which handles bulkier, monolithic systems with greater ease, and forgo the other advantages offered by agile; or one can compartmentalize the large system into smaller, agile-developed components and accept a compatibility gamble.

Choosing the development ideology compatible with your software is a difficult and complex decision.

Get in touch with OptimusInfo for more information on Agile and Traditional development processes. Make an informed decision with the help of our experts!

What is Software Development Life Cycle and How Does it Affect Outsourcing?


Last year we spoke about the Software Development Outsourcing Life Cycle, which deals with the process of outsourcing development from a client’s point of view. Here we talk about the SDLC, that is, the actual development process for software, and how it might affect outsourcing.

SDLCs & How They Affect Outsourcing

The Software Development Life Cycle is something every computer engineer learns in school. It is the formal process for approaching the development of software: a systematic guideline towards the right way of building it. Of course, there is no one right way of doing this. Throughout the history of computing, we have seen new development life cycles being invented, each with its own purpose, pros and cons. Each development methodology comes with a philosophy of its own, and in turn tends to be suited to software that embodies that philosophy. At the same time, certain best practices have been singled out over the years while others have simply been discarded and forgotten. The choice of SDLC methodology drastically affects the outsourcing cycle for a piece of software: it affects the timelines, the development cost and even the ROI of the finished product. Let’s get to know more about this.

A General Guideline to SDLC

Every SDLC, however different it might be from the others, tends to follow a few basic steps in building software. Depending on the SDLC, the order of these steps might differ, or steps might even coincide. The basic ones are the following –

  • Planning and Visualization

  • Requirement Analysis

  • Software Modelling and Design

  • Coding

  • Documentation

  • Testing

  • Deployment and Maintenance

Each of these steps is crucial for software development. But depending upon the order of, and the importance given to, a few of the above, we subdivide our SDLCs.

Types of SDLCs

Of the several models and their subsets that have been implemented over the years, a developer has to choose the most compatible one according to the customer’s needs. A developer may believe in a certain set of cycles and may train their team to create systems in accordance with those processes. Here are a few basic types of life cycle models.

  • Waterfall/Cascade Model

The waterfall model is the most basic development model. It is pretty straightforward and reliable. Development starts with planning and design, then the actual coding, then the software is tested and once approved, the maintenance process begins.

  • Spiral Model

In 1988, Barry Boehm introduced a new methodology in which development takes place in iterations. That is, instead of the classic incremental format of the waterfall model, one goes back and forth in an iterative manner, with each pass increasing the complexity and size of the software – like a spiral.

  • Agile Processes

In the past few years, the ideology of agile development has been on the rise. The agile methodology is supremely adaptive, with the software being tested and corrected throughout the development process. Agile developers believe in hands-on corrective coding and implementation rather than documenting changes for later.

How Does it Affect the Outsourcing?


Timelines

Every development methodology yields a different timeline as a result of its different procedures, and that timeline affects the final ROI for the customer.

Reliability and Risk

Some development life cycles are specifically built for large, complex software where a sensitive and careful approach is required, so more stress is laid on the planning and documentation process. On the other hand, small, low-risk software can be effectively tackled even by what is known as ad hoc programming, in which hands-on coding produces the final result.

Resources and ROI

Methodologies like agile development are designed to minimize the risk of finding a fault in the software during the later stages of development, when corrective costs tend to increase manyfold. This is just one example of how the SDLC can affect the ROI.
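
The cost escalation described above can be sketched with a small, purely illustrative calculation. The phase multipliers below are assumptions chosen for demonstration, not figures from this post or from any particular study:

```python
# Illustrative (hypothetical) defect-cost multipliers by the phase
# in which a defect is discovered. The later the discovery, the more
# surrounding work must be reworked, so the multiplier grows.
PHASE_COST_MULTIPLIER = {
    "requirements": 1,
    "design": 3,
    "coding": 10,
    "testing": 15,
    "production": 30,
}

def repair_cost(base_cost, phase_found):
    """Estimated cost to fix a defect, given the phase it is found in."""
    return base_cost * PHASE_COST_MULTIPLIER[phase_found]

# A defect costing 100 units to fix at requirements time...
early = repair_cost(100, "requirements")  # 100
late = repair_cost(100, "production")     # 3000
print(f"Found early: {early}, found in production: {late}")
```

Under these assumed multipliers, the same defect costs thirty times as much to correct in production as at requirements time, which is exactly the risk that agile's continuous testing is meant to reduce.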


Software Security

The development process involves many trade-offs to meet budget, time frame and resource requirements. Many of these can directly or indirectly affect the software’s security, which might lead to unexpected losses in the future. Certain life cycles place utmost importance on software security during the planning and implementation phases, but this means they tend to increase the immediate costs of building the software.

Get to know which SDLC will be right for the development of your software through OptimusInfo. With our experienced staff of industry experts, we will guide you through the tedious but worthwhile process of software development.

Continuous Delivery: Benefits and Challenges of Automation

Archaic, waterfall methods of development and release are yielding to a new model in which software is envisioned, specified, coded, integrated, tested and released within a continuous, seamless process. Continuous integration, delivery and deployment forgo the fits and starts of linear, throw-it-over-the-wall methodologies in favour of a transparent, stable and highly responsive development pipeline based on build/release cycles of a day or less. A critical component in implementing continuous delivery is a high degree of automation all along that pipeline.

Signs That Your Enterprise Can Benefit from Continuous Delivery

If any of these conditions apply to your organization’s software development and delivery processes, then you should be taking a hard look at switching to a continuous delivery model:

  • Release times are long and getting longer. This is probably due to a combinatorial increase in complexity, which CD is particularly equipped to deal with.
  • Your projects require a large number of contributors, who are dispersed across time zones. CD increases cross-team efficiency by eliminating wait times for other teams to fix defects.
  • Development and test teams are bloated with experts and owners whose functions become bottlenecks to rapid, constant delivery of tested code.
  • Your organization’s software requires frequent updates, especially to fix security issues.
  • If your software is web-based to any degree, you need CD in order to rapidly introduce incremental features and stay ahead of the competition.
  • The cycle time between feature conception, implementation, analytics and revenue generation is too long to respond effectively to market trends.

Benefits of Continuous Delivery

The largest benefits of continuous delivery to any software development process are stability and increased responsiveness. Because continuous development depends on a build/test cycle of no more than a day, everyone works from the same shared source trunk, which eliminates disruptive branch merges. Responses to requirement modifications, new defects or feature additions are done incrementally and rapidly with more predictability. Bugs are not allowed to linger and improvements lead the competition rather than react to them.

Furthermore, continuous delivery enables your organization to discard batch processes that do not scale with the growth of development teams and software complexity. The overhead of planning for an increasing number of contributions for each iteration and increased testing complexity vanishes when pre-defined workflows for contributions and their testing are used instead. This has the side effect of reducing the number of face-to-face coordination meetings too.

Additionally, when development and delivery become continuous, there is no need for team members to shift between “modes” of development, test and release. Everyone along the pipeline benefits from increased focus on their prime contributions without disruptive, stressful context switches.

The Necessity of Automation

The ultimate ideal of a continuous delivery paradigm is a one-click process that can be initiated by anyone. The degree to which your enterprise can achieve that state of nirvana depends critically on embracing automation every step of the way.

Attainment of that state requires automating code analysis, unit testing, integration/build/regression testing, environment provisioning, defect reporting/repair, staging, release production and deployment. Complete automation often demands cultural changes, infrastructure build-out and re-architecting the software itself. In the latter case, a shift to SOA for all of the enterprise’s software is most complementary to a continuous delivery model. In every point where automation is applied, a high degree of scalable parallelism is absolutely essential to meet daily or higher frequency development/release cycles.
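
As a rough sketch of what "automation every step of the way" means in practice, the toy pipeline runner below executes a fixed sequence of stages and halts at the first failure. The stage names and `echo` commands are placeholders invented for illustration; a real pipeline would invoke actual analysis, test, provisioning and deployment tools:

```python
import subprocess

# Hypothetical pipeline stages; each maps to a shell command.
# The echo commands are stand-ins for real tools (linters, test
# runners, build systems, deployment scripts).
STAGES = [
    ("code analysis", ["echo", "running static analysis"]),
    ("unit tests", ["echo", "running unit tests"]),
    ("integration build", ["echo", "building and integrating"]),
    ("staging deploy", ["echo", "deploying to staging"]),
]

def run_pipeline(stages):
    """Run each stage in order; stop the pipeline at the first failure."""
    for name, cmd in stages:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"stage '{name}' failed; halting pipeline")
            return False
        print(f"stage '{name}' passed")
    return True

if __name__ == "__main__":
    run_pipeline(STAGES)
```

Halting at the first failed stage is what gives developers the fast, unambiguous feedback that continuous delivery depends on; a one-click pipeline is essentially this loop with real tools behind each stage.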

Take, for example, Mozilla’s development process for their Firefox browser. Every code commit automatically triggers a build/test process requiring hundreds of hours of CPU time. An enormous investment was required to build the infrastructure to accomplish this, but the benefits are a dramatically shorter developer feedback cycle and the ability to test quickly across multiple platforms and OSs.

Other companies utilize continuous development/delivery processes to produce release-quality software several times a day. They can introduce micro-features, complete with analytics that measure user response in near real time, in a day or two, converging on the most useful and acceptable product features almost instantly.

For most organizations, the shift to continuous delivery is a seismic event. A recipe for failure is to regard it as something that can be accomplished in an organization’s “spare time.” Even given top priority, a shift to CD is likely to take several months depending on current practices, infrastructure and the ability of the organization’s culture to adapt.

The benefits include more efficient utilization of code and test staff, closely-knit teams, rapid feature introduction and increased confidence in quality due to the natural stabilizing effect that continuous delivery brings to software development. It will enormously hasten the enterprise’s response time to customer and market demands as well.

Improve Software Quality through Continuous Integration

The final judge of any software’s quality is in the eyes of the customer based on three fundamental measures:

  • The software’s functionality, performance, usability and so on meet or exceed requirements
  • The software is as nearly defect-free as humanly possible
  • The software is delivered on time and within budget

Given that the development organization’s planning processes, tools and talent are in good order, one of the key ingredients in producing quality software is the use of agile development methodologies that include continuous development practices, especially continuous integration.

Two Software Development Realities

If original software requirements were correct and unchanging, the jobs of developers and testers would be relatively straightforward. Additionally, if developers were able to write defect-free code from those requirements, the risk of missing delivery deadlines would be greatly reduced. However, the probability of either of those conditions being true is extremely low.

In part, agile methodologies address these realities by embracing the tendency of customers to modify requirements and accommodating inevitable coding defects. These notions are behind the short development cycles – scrums or sprints – that characterize agile development processes. Integral to the process is the idea of continuous integration.

Continuous Integration’s Benefits to Software Quality

With regard to coding defects, it is generally agreed that the earlier defects are detected the easier they are to correct. This is due to two factors. First, defects are eliminated more efficiently by developers if the code in question is still fresh in the mind of the developer. Secondly, the sooner a defect is corrected the less chance other code has to become dependent on that defect, which would increase the effort to debug and correct it.

There can be defects in the requirements themselves, which propagate throughout the software the further development proceeds. More commonly, however, “defects” related to requirements occur because the requirements change and invalidate existing work.

Both of these situations are addressed by unit testing and integrating new code, building the entire software package as it stands and applying regression tests within short time periods. Instead of a frequency of weeks or months, as is common in waterfall methodologies, continuous integration dictates that the entire process be accomplished in less than a day, every day. In the case of a globally dispersed team, the frequency should be higher to avoid regional sub-teams waiting for fixes to defects introduced by teams in other time zones.

Removing Barriers to Continuous Integration in the Enterprise

Adopting continuous integration practices across the enterprise can go a long way toward improving the quality of its software along all three dimensions listed at the beginning of this article. To take full advantage of its benefits, however, the enterprise must focus on removing barriers to its adoption and implementation:

  • The integration process must be flexible enough to incorporate more than code changes. It must accommodate modifications to the development and test environments, production test data, third-party libraries, OS patches and hardware under test.
  • CI procedures, tools and servers do not necessarily require centralization, but they should be homogenized across the organization for the sake of efficiency in tracking and reporting.
  • Because of the high frequency of integrations, effort must be applied to increasing test automation and parallelizing build/test execution, either via additional servers or the use of VMs. A complete build/test cycle should finish in under 30 minutes to be most effective.
  • The tendency of code, test scripts, test libraries and other tools to diverge over time or regions must be countered to reduce complexity and increase sharing. This is accomplished by everyone working from a single mainline within version control with no branching. This is the only way to ensure one of CI’s biggest advantages: stability.
  • As per agile methodology, removing human barriers to a smooth execution of CI requires discarding strict matrices of responsibility within and between development and test teams. In this way, bottlenecks created by experts and “owners” of process steps are disposed of.
  • Special attention must be paid to enabling CI infrastructure, processes and personnel to scale as projects grow in complexity or become increasingly dispersed in time and space.
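
The parallelization point above can be sketched as follows. This toy example fans simulated test suites out across worker threads so that total wall time approaches that of the slowest suite rather than the sum of all of them; the suite names and sleep durations are stand-ins for real test runs:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Hypothetical test suites mapped to simulated runtimes (in seconds).
SUITES = {"unit": 0.01, "integration": 0.02, "regression": 0.02, "ui": 0.01}

def run_suite(name, duration):
    """Simulate running one test suite; return (name, passed)."""
    time.sleep(duration)  # placeholder for actual test execution
    return name, True

def run_parallel(suites):
    """Run all suites concurrently; wall time ~ the slowest single suite."""
    with ThreadPoolExecutor(max_workers=len(suites)) as pool:
        futures = [pool.submit(run_suite, n, d) for n, d in suites.items()]
        return dict(f.result() for f in futures)

results = run_parallel(SUITES)
print(results)
```

In a real CI setup the same fan-out pattern is applied across build servers or VMs rather than threads, but the principle is identical: the 30-minute budget is met by running suites side by side, not one after another.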

Continuous integration, especially as part of an encompassing agile methodology, provides distinct improvements to the quality of software produced by any enterprise. It improves development and test team efficiency in detecting and repairing software defects, while boosting responsiveness to inevitable changes to customer requirements. By definition, it also increases the stability of any software project, which leads to improved efficiency and further enablement of continuous delivery and continuous deployment capabilities.

Continuous Integration, Delivery and Deployment Explained

The concepts of continuous integration, delivery and deployment arose from the shift to “small-chunk” software methodologies versus “large-chunk” processes. Waterfall development measures progress by weeks or months, whereas agile development gauges tasks in hours, days or a week at the most. Conceptually, both approaches employ a development pipeline of discrete steps from specification to coding to testing and deployment. However, the brisk, incremental pace of agile methodologies is what imparts that impression of seamless continuity.

Although the agile pipeline, observed from on high, seems to blur differences between continuous integration, delivery and deployment, they each have distinct roles. However, they share an underlying philosophy that favors incremental, stable momentum over big-bang, throw-it-over-the-wall practices. One of the fundamental characteristics of continuous software development is that task duration is short. One effect of that is a significant improvement in organizational responsiveness.

Continuous Integration

Developers tend to prefer working on code via a branch off version control’s mainline in order to reduce distractions caused by other developers’ contributions. Typically, this approach creates greater difficulty merging code back to the mainline than it saves. Continuous integration requires developers to work from a single shared code source without branching. Tasks must be well-scoped and completed in no more than a day. Integration efficiency increases, since it avoids combinatorial integration thrash due to multiple, broad changes with unforeseen side effects.

Continuous integration is greatly facilitated if an easy-to-use, automated unit test framework is included in the development environment. With such a framework, changes are thoroughly validated by the development team prior to builds and regression testing. Such a component usually resides on a dedicated server. If defects are found, they are more quickly remedied while the coding details are still fresh in the minds of developers.
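
A minimal sketch of the kind of automated unit validation described above, using Python's standard unittest framework. The `apply_discount` function is a hypothetical piece of business logic invented for illustration; in a CI setup, a suite like this runs automatically on every commit, before the build and regression stages:

```python
import unittest

def apply_discount(price, percent):
    """Example function under test (hypothetical business logic)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["tests"], exit=False, verbosity=2)
```

Because the tests live beside the code and run without manual setup, a developer gets a pass/fail verdict while the change is still fresh in mind, which is precisely the fast-feedback benefit continuous integration aims for.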

Continuous Delivery

After the build/integration process, but prior to customer deployment, lies the process of continuous delivery. It includes a staging environment that emulates the final production context as nearly as possible. Achieving production conditions requires such things as identical hardware, software configurations and production data to drive tests. Within this environment, business logic tests and user acceptance tests are performed.

In a typical waterfall process, a nearly complete version of the software under development would be delivered to the staging environment every month or so. In an agile methodology, however, incremental versions are constantly delivered via the development pipeline. As in continuous integration, the impetus behind this is enabling faster turnaround time on defects whether they appear in programming or business logic. Furthermore, it facilitates early customer feedback, which in turn reduces development efforts to accommodate customer requirement modifications.

Continuous Deployment

It is improbable that your software’s customers require actual, live deployment of the software on a daily or even weekly basis. However, a fundamental principle of continuous deployment is that you could deliver software that frequently if the need arose. That characteristic is a natural disciplinary consequence of the continuous development and continuous delivery stages that precede it.

Continuous deployment lies on the same pipeline, works from the same shared code base and embodies precisely the same rapid, continuous change paradigm. It enables incremental feature introduction, measurement and early revenue generation without the burdensome overhead that would result from attempting integration, testing and releasing new features in a “whole-hog” fashion. New ideas enter the development pipeline and emerge into deployments in one or a few weeks instead of months later. Improvements can be added weekly based on real-world feedback.

The Role of Automation in Continuous Development

The success of agile methodology depends on instilling stability into the process from end to end at the start. If done correctly, the evidence is reduced development thrash, instant progress feedback and the ability to create a product that better meets customer requirements.

Automation is key to obtaining such stability, which in turn enhances the automation process itself. It is not unusual for early automation efforts to be fragmented over development, integration, delivery and deployment stages, but the ideal is a one-touch, end-to-end process that validates the stability of the project as a whole.

Overall, the operational goal of a “continuous” approach to software development is the reduction of overhead activities that slow down the implementation of new features and diminish the organization’s responsiveness to defects, changing functional requirements and market shifts. Metaphorically, it is akin to climbing a mountain slope step by step rather than spending months building a rocket-propelled sled to attain the summit in one go. At any point in a continuous process, you are able to measure your progress, adjust your route and keep both feet on the ground.

Opportunities for the Internet of Things in Enterprise

An October 2013 forecast by IDC predicts that by 2020, the Internet of Things will consist of 200 billion devices in a market worth $9 trillion. A May 2013 McKinsey Global Institute whitepaper forecasts several tens of trillions of dollars of economic activity around IoT by 2025. Even if these forecasts are mostly wrong, it seems appropriate for enterprises to start making sense of the business opportunities just around the corner.

The IoT Is Underway, Full Steam Ahead

Today, the nascent IoT includes industrial sensors, mobile phones, wearable devices, PCs, servers and all the networking equipment tying them together. The universe of new sensors, smart devices and the multitude of ways in which they will communicate with us and other machines, however, is far larger than contemporary experience. We are on the brink of phenomenal growth in both the number and variety of industrial and consumer-level devices that will pop into existence over the next five years.

The obstacles in the way of IoT’s growth are substantial but so are the opportunities. Besides working hard to stumble across the “next big thing” in IoT, here are four ways in which enterprises can take advantage of the expected exponential growth instigated by IoT.

Improvements to Business Efficiency

IoT adds another dimension to companies’ quest for complete business digitization. Embedded smart devices will sharpen the assessment of assets’ value to business operations. Distributed throughout a business’ supply chain, for instance, IoT will measure supplier and production efficiency and responsiveness in the face of market fluctuations. Retailers will stretch thin margins further with IoT data, including multimedia that predict buyer behavior and detect new trends.

Big Data Bigger Than Ever

The IoT is about to unleash a tsunami of data. In the storage, management, preprocessing and analysis of these data lie revenue growth opportunities for Internet infrastructure companies and software services that assist customers in extracting its latent value.

Surveys indicate that a small fraction of companies consider themselves competent at identifying and acting upon key data in current data streams, and over 90 percent of smart device data is currently discarded. The monetary potential for companies that can help other companies deconstruct, digest and deploy decision-making solutions based on this biggest of Big Data seems unbounded.


Security and Privacy

The consumer-based IoT brings an even broader diversity of devices and communication protocols than the mobile device explosion underway now, which will bring exponential growth in connectivity as well. These aspects, plus a larger attack surface, mean the potential for digital mayhem in our personal lives is a real concern. Threats to our personal security and privacy are opportunities begging for companies that can supply effective means to detect and neutralize the perils.


Standards Participation

Some see standards as quelling innovation, but that potential is muted when there is widespread participation by stakeholders. In the case of IoT, such participation will necessarily be cross-industry if the market disruption of IoT is to come to fruition as many hope and some fear.

The standards opportunity for enterprises that participate early and often is to ensure that competitors do not gain an unfair advantage and that one’s own technology is accommodated. Standards are vital to end users, who want assurances of device interoperability and products that adhere to policies and regulations pertaining to privacy and security.


Paths Forward

Anyone who claims to know what IoT is all about and where it is going should be regarded with intense suspicion. Few question that it will lead to greater productivity and economic activity, but where the highest-value products lie is open to interpretation. In the face of uncertainty, however, there are paths for enterprises to follow in seeking value from IoT:

  • By inward contemplation of how IoT can create operating efficiencies in their own business including pilot programs to verify value
  • Devising or revising their Big Data strategy as the IoT wave builds, regardless of whether the organization is a producer or consumer of Business Intelligence analysis
  • Examining security and privacy issues closely that may affect enterprise operations, IoT product designs and customer relations
  • Committed participation in IoT standards efforts in order to influence their direction and details and to engender closer collaboration with other key players

The companies that make an earnest effort to understand IoT and what it means for their operations and revenues will likely find opportunities knocking on their door. In any case, they will be in the best position to respond effectively when IoT takes sudden swerves as is inevitable with any technological disruption.

Top Internet of Things (IoT) Trends in 2015

Just as cloud computing and Big Data were the up-and-coming trends at the start of this decade, the Internet of Things is now in the limelight. While the Cloud and Big Data are well underway in terms of enterprise adoption, the IoT is just starting to gain serious traction. The hype is still in full swing, but innovation and implementation are also being realized that will lead to a ubiquitous and varied world of connectivity and data sources. Here, then, are 10 key trends for which you should be on the lookout in 2015 regarding IoT.

New Devices

Up to now, the penetration of IoT has been shallow. As these devices get smaller, smarter and less expensive, watch for them to broaden and deepen their application reach from health monitoring to smart homes, smart cars, medical devices, energy systems and everyday machines such as parking meters and hair curlers.

New Applications

Without applications to control these devices and process their data, they will never reach full potential. Already IoT-targeted development platforms are being created to interface the things of IoT with end users and analytical backend services in support of enterprise marketing and decision-making.

Standards Development

Supporting the expected wave of applications and services springing out of IoT, 2015 will see proprietary and open standards breakthroughs aimed at a reduction in re-inventing IoT infrastructure so that companies and individuals concentrate more effort on true innovations.

Raised Expectations

Now that the IoT hype has penetrated down to the consumer level, and even though consumers are more than a bit fuzzy as to what it all means, their expectations for seeing more products with embedded sensors, logic and connectivity have never been higher. Everyone will realize, some too late, that 2015 is the year that businesses can most effectively grab the early adopters of IoT-enabled machines.

Multi-Sensor Support

This year, IoT appliances will incorporate multiple sensors that increase the accuracy of calculations, provide redundancy and simply do more via data sharing. Multiple sensors will require off-loading hubs that allow sensors to connect to one another directly and bypass the main processor.
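The redundancy benefit is easy to sketch: combining readings from several sensors lets a device tolerate one that is offline or faulty. The snippet below is purely illustrative; the function name, the use of a median, and the sample values are all invented for this example.

```python
from statistics import median

def fuse_readings(readings):
    """Combine raw readings from several redundant sensors.

    Sensors that failed to report (None) are ignored; taking the median
    of the remaining values resists a single faulty outlier. Returns
    None when no sensor reported at all.
    """
    valid = [r for r in readings if r is not None]
    if not valid:
        return None  # total sensor failure
    return median(valid)

# Three temperature sensors: one offline, one drifting high.
print(fuse_readings([21.4, None, 21.6, 35.0]))  # 21.6
```

A median is only one possible fusion strategy; a real device might weight sensors by calibration quality instead.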

M2M Automation

Improved communication capabilities and multi-sensor hubs will also contribute this year towards IoT machines talking to one another. Taken a step further, the possibilities of machines pooling data and automatically making decisions without human involvement will soon be realized.

Vertical IoT Services

Cloud computing is a mature trend and one of its most fruitful applications is Big Data processing. Thus, since IoT is a natural contributor of Big Data, watch for tailor-made IoT cloud services for acquiring, digesting and analyzing this growing treasure trove of information.

Privacy and Security Concerns Take Center Stage

With the onslaught of IoT devices soon to come, 2015 has been, and will continue to be, the year in which the obvious concerns about what it all means for individual privacy and data safety come to the fore. It is none too soon, since, as with all previous technology waves, there will be many unforeseen side effects.

Strategic Partnerships

In preparation for a full-on IoT environment, watch for IT vendors, telecoms, semiconductor manufacturers, Big Data software vendors and IoT platform providers to pursue acquisitions and strategic partnerships that will position them to best advantage against competitors.

Tales of Success

As 2015 wears on, IoT customer case studies are providing a sure sign that vendors are successfully applying the technology and acquiring the experience necessary to sell the benefits to enterprises in multiple industries. This trend will be evident also by growing numbers of training programs, seminars and industry showcase events.


The trends above are clear indicators that the hype around the Internet of Things is in a validation phase leading to increased product adoption and growth of the supporting ecosystem necessary to carry it forward. Organizations waiting to see what will happen risk underestimating the amount of disruption the IoT will bring to their markets and business processes. They are best advised to gauge the impact of IoT trends on their own goals.

‘Internet of Things’ Security and Privacy Concerns

A 2014 consumer survey by the Acquity Group contains a telling statistic: nearly 90 percent of consumers have no clear concept of what the so-called “Internet of Things” is. That should not be a total surprise. Even though the IoT has been discussed for years and companies are rapidly implementing IoT components, its definition is nothing more than a loose architectural vision that varies vastly depending on who is describing it.

The reality of IoT today is a collection of discrete devices such as smartwatches, phones, appliances, health monitors and home environmental systems that interact and from which data is collected. Visionaries imagine a world in which thousands of such invisible, embedded devices permeate where we work, sleep and play. These will include medical devices, home energy systems, transportation systems, geolocation sensors, parking meters, vending machines and even toothbrushes.

Implications to Personal Privacy and Security

Just as no one was able to predict how the advent of the PC and the Internet changed how we worked and communicated with one another, so the IoT is suffused with unforeseen significance regarding our personal privacy and safety. In the context of advanced abilities to collect, correlate and analyze huge data streams, the side effects of IoT data acquisition are impossible to predict. The availability of massive amounts of personal information could easily mean that everyone loses some control over their life.

The potential for increased, detailed surveillance of individuals cannot be ignored. Furthermore, the ubiquity of IoT devices broadens our personal security attack surface to those who may have nefarious purposes. Such devices could provide a gateway to other connected devices containing sensitive information. Even if such data collection were intrinsically benign, would you want to live in a world where your personal habits and activities are continually quantified and sold to third parties?

The Incentives to Diminished Security

Despite such real concerns, the history of technology adoption by consumers demonstrates that most people are willing to sell out their privacy to one degree or another. Just like the data collected on us via our web browsers, the data coming from the IoT has value beyond a device’s primary application to those able to analyze it. That monetary value could allow manufacturers to practically give devices away with the expectation of reaping the value of the data they produce. Additionally, since most IoT capabilities will be built in as secondary components of larger appliances, cars or environmental systems, most consumers will have scant choice but to accept their presence.

Methods to Protect IoT Privacy and Security

Privacy Preferences

End users should have control over which data are collected and how they are shared directly or indirectly. For instance, they should be permitted to define groups such as family, friends and professionals with specific sharing policies. To be truly effective, this step requires preference standards to be applied across all devices. Such standards could be outlined by government regulatory bodies and implemented in detail by industry groups.
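As a rough illustration, such per-group preferences might boil down to a simple policy lookup on the device. The group names and data categories below are invented for this sketch and are not drawn from any existing standard.

```python
# Hypothetical per-group sharing preferences for one device's owner.
# Group names and data categories are illustrative only.
PREFERENCES = {
    "family":        {"location", "activity", "health"},
    "friends":       {"activity"},
    "professionals": {"health"},
}

def may_share(group, category):
    """Return True if the owner's policy lets `group` see `category` data.

    Unknown groups get an empty policy, so sharing defaults to denied.
    """
    return category in PREFERENCES.get(group, set())

print(may_share("friends", "activity"))   # True
print(may_share("friends", "location"))   # False
```

Defaulting unknown groups to "deny" reflects the privacy-by-default stance the article argues for.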

Data Minimization

IoT device makers should adhere to a policy of data minimization aimed at collecting the smallest amount of information required for device operation. Such a policy must also include spatial and temporal minimization, dictating where and for how long such information is stored.
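Temporal minimization, for example, amounts to discarding data once it falls outside a retention window. The sketch below assumes a hypothetical 24-hour window and invented readings.

```python
from datetime import datetime, timedelta

# Illustrative retention policy: keep readings only for as long as the
# device plausibly needs them to operate.
RETENTION = timedelta(hours=24)

def prune(readings, now):
    """Drop (timestamp, value) readings older than the retention window."""
    return [(ts, value) for ts, value in readings if now - ts <= RETENTION]

now = datetime(2015, 6, 1, 12, 0)
readings = [
    (datetime(2015, 5, 30, 9, 0), 20.1),   # two days old: discarded
    (datetime(2015, 6, 1, 8, 0), 21.3),    # four hours old: kept
]
print(prune(readings, now))
```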


Transparency and Disclosure

Consumers must be made aware of which data are collected, transmitted and stored by embedded IoT devices. This information should include specific data formats, communication protocols and which other devices are capable of communicating with the device. The usage and sharing policies of anyone acquiring these data must be disclosed.

Privacy and Security Ratings

The degree to which technology containing IoT devices meets the above protections could be represented by standard, condensed privacy or security ratings. Not only does this provide consumers insight into how a device potentially impacts their privacy but manufacturers could use such “seals of approval” to competitive advantage.


The advent of the Internet of Things poses potential hazards to every individual’s privacy if guidelines, policies and designs do not mitigate these threats. Past experience is rife with unforeseen privacy threats resulting from technology advances. The obvious complexity and data collection capabilities arising from the IoT should give consumers and device makers equal pause to consider how to build in safeguards starting now. Failing to do so will assuredly have a negative impact on the IoT’s usefulness and potential for growth.

Introduction to Agile Project Management

Agile software development was created to address inefficiencies inherent in sequential development methods, such as waterfall methodologies. An opaque, assembly-line process in which specification, coding and testing are separate and performed in strict order struggles to keep up with the fast-moving market requirements within which modern software products must compete. To retain market relevance and extract the highest productivity from development and business resources, companies are ditching plan-driven software methodologies for the responsive, continuous delivery model that the Agile Method offers.

The Agile Method

The Agile Method takes into account the unpredictability of non-trivial software projects that are the bane of plan-driven methodologies. Agile software development is centered on brief, concurrent development/test cycles and close collaboration between all stakeholders.

The Agile Method acknowledges that it is impossible to know all project requirements before coding begins. Participants in a waterfall development process know this too, but compensate for missing requirements by padding the schedule to account for their disruptive effect at later development stages.

Unlike plan-driven methodologies, the Agile Method is adaptive and based on iterative requirements discovery. It compresses the typical steps of the waterfall method into short sprints, each typically lasting one to four weeks. Each sprint targets the requirements for specific functional components, which are coded, tested and then approved or rejected by the agile team’s product owner. Feedback gathered after each sprint is used to fine-tune the next one.
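The sprint loop described above can be sketched in a few lines. Everything here is illustrative: the backlog items, the `accept` function standing in for product-owner review, and the simplifying rule that each sprint handles one feature and that rejected features return to the backlog.

```python
def run_sprints(backlog, accept):
    """Work through a feature backlog one sprint at a time.

    `accept` plays the role of the product owner's review; rejected
    features go back on the backlog so feedback shapes a later sprint.
    (An accept that never passes would loop forever; fine for a sketch.)
    """
    done = []
    while backlog:
        feature = backlog.pop(0)                  # plan: pick next requirement
        built = f"{feature} (built+tested)"       # code and test in the sprint
        if accept(feature):                       # product-owner review
            done.append(built)
        else:
            backlog.append(feature)               # feedback: rework later
    return done

# "login" is rejected on its first attempt and reworked in a later sprint.
attempts = {}
def accept(feature):
    attempts[feature] = attempts.get(feature, 0) + 1
    return attempts[feature] > 1 or feature != "login"

result = run_sprints(["login", "search"], accept)
print(result)  # ['search (built+tested)', 'login (built+tested)']
```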

Principles of Agile Project Management

The original Agile Methodology outlines 12 principles that are essential to meeting the objectives of continuous delivery, intimate stakeholder involvement and self-motivated contributors:

  1. Satisfy customers by continuous delivery of working software
  2. Changes in requirements are expected, planned for and embraced
  3. Useful software is delivered frequently, at the shortest practical intervals
  4. Cross-functional collaboration between developers, testers, and business development owners is essential
  5. Project contributors are empowered to make decisions within an environment of trust
  6. Face-to-face communication is the preferred method for information transfer among teams
  7. Useful, working software is the fundamental measure of project progress
  8. Development must occur at a consistent, sustainable rate
  9. Technical excellence and sound design principles must be adhered to at all times
  10. Simplicity is essential in order to maximize the amount of work that is not done
  11. Teams should be self-organizing and self-managed
  12. At each iteration, teams incorporate feedback to adapt to changing circumstances

Benefits of Agile

The Agile Method is widely recognized as the best solution to the problems inherent in building software products of high quality and market relevance. Agile Methodology has a number of valuable benefits:

  • Delivery of working software is frequent and predictable.
  • The rapid, iterative nature of agile development teams means shorter time-to-market and time-to-revenue.
  • Major and minor product revisions are seamless within a continuous delivery environment.
  • Defects are found quickly and when they have the least impact on the overall schedule.
  • Stakeholders are actively engaged and progress is transparent, which leads to the highest business value.
  • The right product is delivered in the end without surprises.
  • Market relevance is achieved by a process that embraces change.
  • Motivated, decision-empowered teams are not waiting to catch what other teams throw over the wall.
  • Contributors expand skills that provide increasing value to the organization.

Differences between Agile and Waterfall

  • The waterfall process works well only with fully specifiable projects. Agile has built-in requirements discovery/coding/testing iterations that deliver usable software early and often.
  • Traditional software development is control-centric, whereas agile methodology distributes responsibility.
  • Waterfall development prizes skills specialization, whereas agile contributors fulfill multiple roles.
  • Waterfall development utilizes functional silos, whereas cross-functional collaboration is the key feature in agile development.
  • Contributors are guided by planned tasks in waterfall methodology. In agile methodologies, product features guide activity at any moment in time.
  • Waterfall development engages customers at the beginning and end of the process, whereas customers are active participants throughout agile software development.

The Agile Method has become the de facto standard for nimble development of competitive software products driven by fluid market requirements. Compared to plan-driven methodologies such as the waterfall process, the seeming informality of agile methodologies may present a challenge to the status quo, but their clear benefits outweigh these concerns in the long run.

10 User Interface Design Fundamentals

You just finished the world’s newest killer web/mobile/desktop app and are ready to release it to the world. The program logic is airtight. The functionality saves users buckets of time. You prepare for the accolades you so richly deserve. What could go wrong?

Unfortunately, despite all the magic you created within your app, it is all for naught if users are presented with a confusing, difficult-to-use and opaque interface. Most app programmers are whiz-kids at putting together logic, data and connectivity, but fall short when it comes to UI design.

There are two reasons for that:

  1. Programmers look at their apps from the inside out. They instinctively know how to use the app because they wrote it. Users do not have that same advantage.
  2. A superb UI is a mix of diverse disciplines quite unlike programming, such as human psychology, communications, physics and art.

Do not despair. Following the 10 UI design fundamentals below will significantly improve your UI’s looks and functionality, and keep your app away from the user’s trash icon.

1. Separate Your Wants from User Wants

Try not to subordinate user goals to your business goals. For example, a user comes to your web app to search for information. If a newsletter subscription popup is the first thing they see, you have thrown up a roadblock before they even get started.

2. Create an Interface Map

Good UIs impart a feeling of control to users. That feeling of control is diminished if they navigate into UI dead-ends. Mapping out all routes a user can take reveals these UI cul-de-sacs, which you then eliminate. In any case, use interface breadcrumbs so users can always retrace their steps without having to go back to square one.
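A breadcrumb trail is little more than a stack of visited screens. The minimal sketch below (the class and screen names are invented) shows how “back” lets users retrace their steps without ever stranding them past the home screen.

```python
class Breadcrumbs:
    """Toy breadcrumb trail: a stack of screens the user has visited."""

    def __init__(self, home="Home"):
        self.trail = [home]

    def visit(self, screen):
        self.trail.append(screen)

    def back(self):
        """Step back one screen; never fall off the front of the trail."""
        if len(self.trail) > 1:
            self.trail.pop()
        return self.trail[-1]

    def render(self):
        return " > ".join(self.trail)

nav = Breadcrumbs()
nav.visit("Settings")
nav.visit("Privacy")
print(nav.render())   # Home > Settings > Privacy
print(nav.back())     # Settings
```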

3. Consistency is King

The need for consistent looks and behavior throughout your app is paramount. Keep the layout, size, style and color of buttons, widgets, fields and text the same from screen to screen. Highlight important actions or information the same way throughout the app. If image hovers are used, use them on every image. If one action produces a feedback message, you should use feedback for every action.

4. Do Not Distract Your Users

It is tempting to add some “cool” to any UI with special widgets, new icons, animations, hovers, and popups. Unless your app’s purpose is to entertain or extra bling makes your app more intuitive to use, leave it out. Unfortunately for your latent artistic ego, users find that such things distract them from solving their problem.

5. Be a Conformist

Believe it or not, most of your users spend way more time on other apps than they do on yours. Thus, as much as possible, make your UI look and behave in familiar patterns like Facebook, Twitter, WordPress, Amazon, news outlets, etc.

6. Simplify

Once you take the time to simplify your app’s UI, simplify it again. Question the purpose and layout of every element and action to evaluate if they are truly necessary to the user accomplishing their mission.

7. Never Expect Users to Figure It Out

Even though certain choices and actions may be obvious to you, the app programmer, they may be opaque to users. Guide them through actions with a few simple choices. Highlight the preferred choice. Always present an undo option so they retain control over their actions.

8. Break down Complex Actions

If you are unable to simplify long or complex action paths in the app, at least break them down into a series of simpler steps to reduce cognitive overload for users. Where possible, display which step they are executing, which have been completed and which are to be completed.
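Displaying completed, current and remaining steps can be as simple as rendering a progress line. The step names and checkbox-style markers below are illustrative.

```python
def progress_line(steps, current):
    """Render which steps are done, which is active, and which remain."""
    marks = []
    for i, step in enumerate(steps):
        if i < current:
            marks.append(f"[x] {step}")   # completed
        elif i == current:
            marks.append(f"[>] {step}")   # in progress
        else:
            marks.append(f"[ ] {step}")   # still to come
    return "  ".join(marks)

print(progress_line(["Cart", "Shipping", "Payment", "Confirm"], 2))
# [x] Cart  [x] Shipping  [>] Payment  [ ] Confirm
```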

9. Plan for Errors

If users are punished for erroneous actions, they are going to blame the app, not themselves. Always plan your UI such that errors are handled gracefully and can be undone easily, and, where possible, without data loss.
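One common way to make actions undoable is to snapshot state before each change. The toy document class below is a hedged sketch of that idea, not a prescription; real apps often use the command pattern instead.

```python
class UndoableDoc:
    """Toy document where every edit can be rolled back without data loss."""

    def __init__(self):
        self.text = ""
        self.history = []

    def edit(self, new_text):
        self.history.append(self.text)   # snapshot before the change
        self.text = new_text

    def undo(self):
        if self.history:                 # nothing to undo is not an error
            self.text = self.history.pop()

doc = UndoableDoc()
doc.edit("hello")
doc.edit("hello wrld")   # oops, a typo
doc.undo()               # gracefully back to "hello"
print(doc.text)          # hello
```

Note that calling `undo` on an empty history is a harmless no-op, mirroring the rule that users should never be punished for an extra click.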

10. Test Your UI on Non-Programmers

Almost as bad as testing your UI on yourself is having other programmers test it for you. Always test using non-technical users who have an interest in the problem the app solves. Do not coach them beforehand or provide instructions if you want to test how truly intuitive and easy-to-use your app’s UI is.

These are basic guidelines to achieving a useful app UI. If you follow these steps and perform further research into UI design, you may be astonished by how much science is behind these and other principles. Ironically, if you have done your job correctly, users will not even notice your excellent design. The ultimate goal of a quality UI should be to make it invisible so that users only focus on solving the problems for which your app was created.