The Greenest Cloud: Looking at Microsoft’s New Green Plan


Microsoft’s Project Natick at the launch site in the town of Stromness, Scotland. (Photography by Scott Eklund/Red Box Pictures)

With the physical world increasingly focused on understanding climate change and carbon footprints, we often forget that the digital world contributes to fossil fuel emissions as well. Looking at the trends of today, we can predict that digital consumption will only increase over time, adding to the carbon footprint of cloud companies. How are different cloud providers dealing with this issue? Let’s look at Microsoft.

Microsoft, one of the Big Three cloud providers, has brought two plans to the climate change table: one to reduce the company’s carbon footprint to allow for a “greener” cloud, and the other to implement underwater data centers powered by renewable energy. Seems like a lot to process? Let’s break it down.

Microsoft’s new “Green Plan” outlines how its data centers will run on 60% renewable energy by 2020. At the end of 2018, 50% of the company’s energy was renewable and carbon neutral. Aiming to cut its carbon emissions by 75% by the end of 2030, the company has implemented an internal “carbon tax” for business divisions. Taking tangible steps to reduce the impact of the buildings and resources used in the past, it is also redesigning its campuses into carbon-neutral havens. Although Apple, Google, Amazon and the like are all rapidly moving in the same direction – a great victory for companies of such proportions – Microsoft aims to be the first to run a campus that meets both zero-carbon and zero-waste goals.

What Does This Mean for the Future?

It doesn’t mean being satisfied with what has been done so far. At the forefront of technological innovation, Microsoft’s “underwater data centers” prove to be one of the most remarkable stories of the digital age.


Image by Frank Betermin/Naval Group


Now in the testing stages, these centers are quick to deploy and could provide internet connectivity for many years to come. Imagine a 40-foot-long data pod, sleekly dropped into the depths of the ocean off the coast of Scotland. Seems like something out of your sci-fi novel dreams? We agree. Not only is the concept of this data center a force to be reckoned with, but implementing it could also open new ways of delivering AI applications and gaming content to end users. The pod can operate without maintenance for five years; its cooling system uses ocean water, sustainably managing itself.

Microsoft CEO Satya Nadella says, “Since 50 percent of the world’s population lives close to water bodies, we think this is the way we want to think about future data center regions and expansion”. Data centers are now the backbone of the world. Rather than keeping them in tightly packed boxes lining facilities, Microsoft wants to integrate them into the natural world and use sustainable energy to help get there. By applying submarine technology and research, Microsoft is tangibly changing the way we look at storing data. Creating one of these underwater data centers took a total of 90 days, compared to the usual two years of production time, which allows the company to react to demand without facing shortages. The Azure cloud platform is becoming the computer of the world, and the world is Microsoft’s literal oyster.


Starting a Data Project


It’s exciting to hear ‘Data is the new Oil’ or the ‘new Gold’ or the new ‘something valuable’. What I dread, though, is the day we hear ‘Data is the new fad and a complete waste of money’. I hope that day never comes!

A lot will depend on how businesses approach data projects. Right now, it could go either way. There are many organizations throwing money at data projects to ensure they are not left behind. There are many more who are not even getting started, fearing the outcome or the futility of it. If you belong to either camp, I will share a simple process to maximize the return on your data projects.

Where Data Projects fail

Data projects are complex and resource-intensive, and hence have many failure points. Most are the same failure points as any complex project: data availability, data quality, team quality, teamwork, communication, and so on. There is one, though, that is unique to data projects and at the root of most failed ones. It’s what I call the ‘rabbit hole question’. If a data project begins with this question, it is likely to fail.

The Rabbit Hole Question

This is the question I most often hear from companies wanting to start data projects. It is some variation of ‘What can I do with my data?’. I agree that it is the most natural question to ask; however, it is not the question that will set you up for success. It is the dream question for the salesperson, who can now engage the solution architects, who will then build an exciting solution – one likely to cost a lot of money and take a lot of time. Worse, it may not yield any results. Why? Because it’s the ‘rabbit hole question’.

This question propels everyone to start thinking about everything that could be done with the data, or everywhere the algorithm or tool could be applied. There are many possibilities and hence many potential projects, but there is no way to figure out what we will get at the end of them. We will only discover it as we go along. And chances are we may not like what we see in the end, if we see anything at all.

Avoiding the Trap

So, how do we avoid the ‘rabbit hole question’? Where do we start, and how do we proceed to maximize our chance of success? The answer is to flip the question – ask “What can my data do for me?”. Better still, use a top-down approach that starts with your business objectives.


The key is to break the process into two phases – Planning and Execution. Planning requires little time but a lot of thinking, and it is crucial for success.

It is important during planning to stop thinking about the data you have and what to do with it. Instead, start with the key objectives for your business. Next, think about the actions required to achieve those objectives. That leads us to the kind of decisions we need to make. Then we can ask – “What insights do I need to make these decisions?”. These required insights then lead us to the relevant data and findings.

In this process, we may find that we do not have some of the required data. We can then start collecting it. In the meantime, we can switch to execution with the data we already have. We can use the data and findings to generate relevant insights. These insights then drive the appropriate decisions, and those decisions guide us to the actions required to achieve our objectives.
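To make the chain concrete, here is a minimal sketch in Python; the objective, actions and data sources are invented purely for illustration:

```python
# Hypothetical top-down plan: one business objective traced down through
# actions, decisions and insights to the data required. Every name here is
# invented for illustration.
plan = {
    "objective": "Reduce customer churn by 10%",
    "actions": ["Target at-risk customers with retention offers"],
    "decisions": ["Which customers get an offer this month?"],
    "insights": ["Churn probability per customer"],
    "data": ["Transaction history", "Support tickets", "Login frequency"],
}

def missing_data(plan, available):
    """Return required data sources that are not yet being collected."""
    return [d for d in plan["data"] if d not in available]

# Planning exposes the gap; execution starts with what exists today.
gap = missing_data(plan, {"Transaction history", "Login frequency"})
```

Walking the plan top-down immediately shows which data matters and which is missing – the opposite of starting from whatever data happens to be lying around.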

Data Strategy Workshop

In our experience, the knowledge required for Planning is available in the organization. It usually sits in different silos though. Also, we find that the key stakeholders are usually not aligned.

Hence, we recommend conducting a Data Strategy workshop. Such a workshop aligns all stakeholders around the business objectives. It then allows the group to connect the objectives all the way to the Data they have.


The outcome of the workshop is an aligned Data & AI Roadmap. We can then jump into execution with the least effort and cost. The initial success builds confidence in the organization for further projects, and it frees up critical resources to contribute to them.


Optimus has already conducted Data & AI workshops for various organizations with fantastic results. If you would like your organization to have a clearly defined, cost-effective Data & AI Roadmap, please contact us.



Digital Collaboration: Superhighway to Innovation and Economic Transformation

Federal and provincial governments in Canada, along with the private sector, are undertaking a major initiative to establish the nation as a global centre for digital transformation and innovation, generating 50,000 jobs and boosting GDP by some $15 billion over the next 10 years. A total of $1.4 billion is being committed in a collaboration of large and small companies working together to generate solutions to some of the world’s most pressing health, productivity and sustainability issues.

Given the media attention being paid to technology and the Internet of Things today, it would be easy to assume that digital transformation throughout the Canadian economy – and elsewhere in the world – was happening quickly. But that’s not always the case.

Much of this digital transformation, of course, is taking place in the private sector, so one reason the pace of change may be slower than we would assume is that the money to make it happen is coming from the private sector itself. In other words, if and when corporate funds are available – and they’re not siphoned off to other projects – then investment in digital technologies can occur.

Another fundamental roadblock to speedier adoption of digital technology is a lack of leadership within a company[i]; there is often no single individual clearly in charge of leading such a project. Frequently, this is because corporate management lacks a firm knowledge and grasp of what is at stake.

Maintaining Software Integrity in Mission-Critical Applications

Optimus has developed rigorous standards for software testing along with a proprietary tool – the Azure Test Harness – which ensures integrity and faster time-to-market for companies rolling out applications.

For one client – a Finnish firm that provides detailed, real-time data on severe weather, such as lightning strikes, to the airline industry – the ability to roll out error-free software on a regular basis is critical.

The company frequently updated its software, which meant manual testing to ensure the application remained error-free – a very time-consuming operation, especially with airlines needing the latest data instantly.

Optimus deployed an automated test framework on the Azure Test Harness which now allows the company to reduce its test cycles significantly while expanding its test coverage and maintaining the high integrity of its web-based app.
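The Azure Test Harness itself is proprietary, but the flavour of an automated regression check can be sketched in a few lines of Python. The field names and rules below are hypothetical, not the client’s actual schema:

```python
# Hypothetical contract check in the spirit of an automated test framework:
# verify that a weather-data payload keeps its shape across releases. The
# field names and rules are invented for illustration.
EXPECTED_FIELDS = {"station_id", "timestamp", "strike_count"}

def validate_response(payload):
    """Return a list of contract violations; an empty list means it passes."""
    errors = []
    missing = EXPECTED_FIELDS - payload.keys()
    if missing:
        errors.append("missing fields: " + ", ".join(sorted(missing)))
    if payload.get("strike_count", 0) < 0:
        errors.append("strike_count must be non-negative")
    return errors
```

Running hundreds of checks like this on every release is what turns days of manual verification into minutes of automated testing.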

Azure Test Harness is one example of the innovative thinking Optimus is bringing to market.

It boils down to one simple fact: a failure to realize the powerful business advantages that digital technology brings to the table.

Digital Transformation Won’t Wait

International Data Corporation (IDC), a global research firm that provides market intelligence for technology markets, says that in 2018, only 30% of manufacturers investing in digital transformation will be able to maximize their advantages; the rest are held back by outdated business models and technology[ii].

There is one point on which even the late adopters can agree: digital transformation won’t wait.

In its 2017 budget, the federal government of Canada put forward a bold plan to supercharge innovation in the Canadian economy. The government earmarked nearly $1 billion, to be supported by money from the private sector, for a combined total of $1.4 billion[iii]. The money is being used across five “superclusters”, one of which is the digital technology supercluster.

At the Forefront of Innovation and Made in Canada

This cluster of companies, large and small – even start-ups – hails from healthcare, mining, financial services, telecom and other sectors of the economy. More than 270 companies are part of the supercluster, and they’ll collaborate on more than 100 projects, sharing talent and ideas. The goal is to generate commercial opportunities and economic prosperity that will keep Canada at the forefront of global innovation.

Optimus Information will be integrally involved, through our longstanding partnership with Microsoft, in assisting independent software vendors that are part of the digital technology supercluster as well as the large enterprise organizations that are participating. Many of the leading companies in these areas are already our clients. Optimus will offer expertise in technologies like blockchain, one of our growing areas of strength, through applications, another Optimus area of expertise.

What is exciting about this digital supercluster project is that the participants are operating at the edge of innovation, exploring and developing what’s never been done before. Optimus intends to leverage powerful AI and data analytics capabilities with which we work on a daily basis.

Optimus has breadth and depth of talent and experience in full stack and mobile development, software testing and business intelligence, data analytics and machine learning for industries from aerospace to transportation.

Contact us today to learn how we can bring value to your company and help you stay on the leading edge of innovation, as well.

How Cloud is Cooking Up Change in the Restaurant Industry

As more and more restaurant owners look for ways to cut costs, improve customer service and drive loyalty initiatives, application modernization – including cloud applications, mobile apps and predictive data modelling – is featuring at the top of their personal “must do” menus. Software companies and ISVs that serve the restaurant and hospitality industry are in a race to offer more by migrating older legacy apps to the cloud and taking advantage of real-time data analysis tools to offer more features and functions to their customers – all while ensuring security and data privacy remain the “special of the day”.

Patrick Albrecht owns and operates four restaurants in Atlanta, Georgia. At one point, he says, he had tens of thousands of receipts, invoices, health department documents and payroll data stuffed into the various offices in his restaurants.

Born in the Cloud is Disrupting the Restaurant Industry

New “born in the cloud” restaurant applications are coming into the market. That is great news for restaurateurs. They will have a choice of vendors to select from – vendors who can offer the new capabilities and functionality that restaurant customers demand.

If you are a team supporting restaurant clients on an existing legacy application, you might need external help if you want to accelerate your own application modernization efforts in order to take on these new competitors.

Legacy restaurant applications, without an achievable plan in place to modernize quickly, are at risk of customer erosion.

By subscribing to a cloud-based mobile app[i] that organizes his overwhelming mountain of paperwork, he figures he’s now saving 2% each month in overhead costs from his four food operations. He can find any document in a matter of seconds, and he can compare vendor delivery prices to determine who is giving him the best deal.

Albrecht is one of a growing number of smart restaurant owners who have embraced and employed some of the new cloud-based technologies to make operations more efficient and cost-effective.

Threats Abound

Restaurant chains and food companies that own various restaurant brands know how tough the business is. Competition is fierce, customer loyalty is fickle and profit margins are skinny. The difference between red and black on the bottom line is often as thin as the edge of a knife. Having the right information at the right time from POS, loyalty, vendor management and surveying systems is critical for owners who need to make decisions that can mean the difference between remaining in business and shutting the doors.

Social, Mobile and Data Drive Transformation in the Restaurant Industry

Cloud technology and third-platform innovation accelerators[ii] like social, mobile and big data are changing the landscape for the restaurant industry. Large chain firms and independent operators have experienced how mobile apps can boost business simply by making it convenient for customers to order, pay and review their experience, all with a smartphone.

Many restaurants have either directed their internal software developers to modify existing applications or are looking to their existing POS vendor to do it for them.

There’s just one problem: many of these existing applications were designed long before cloud, social and mobile were part of the landscape. Moving a legacy application into the cloud while taking advantage of new functionality AND ensuring that security and data privacy are paramount is tough to do.

It requires specific skills and knowledge. Few companies can say they are masters of all the aspects needed to modernize.

What to Expect with Application Modernization

Data Available on Any Device

Restaurant owners Matt and Dianne Kraft can now check restaurant sales on their mobile phone while sitting beside a campfire on a family vacation. “We rarely used to take vacation,” Dianne admitted. “We needed to be at the restaurant to stay on top of everything. But now we can see exactly what is happening in real time; we can even monitor problems as they happen. For example, issues that used to be buried – like a high number of orders being returned – are flagged. We can simply make a quick call to determine if it is a kitchen problem or a server problem, and by working with our manager, get it resolved quickly before customer satisfaction is impacted.”

The ability to use cloud to amass multiple points and sources of data is making an enormous difference in the restaurant industry today. Cloud-enabled data pools drive analysis that was simply unavailable before. Restaurants can profile their customers in fine detail and then design promotions targeted specifically at them. Many legacy apps can’t achieve this sophisticated level of analytics because they weren’t written with current analytic models in mind.

Roughly 60% of consumers are making a restaurant selection – and a reservation – based on what they find on apps like Yelp and OpenTable. Because these apps are using cloud technology, the data these restaurants collect can determine booking patterns, eating tastes, the age of their customers and even their income levels. That’s valuable information for restaurant owners, who can harness this data and use it to drive promotions, communications, messages and offers.

Some cloud-based apps alert a restaurant when a customer is in the area by using what’s called geo-fencing. This creates an opportunity to immediately transmit a mobile ad to that customer.
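At its core, a geo-fence check reduces to a distance calculation. The sketch below, with made-up coordinates, uses the haversine formula to decide whether a customer is close enough to trigger an ad:

```python
import math

# Geo-fencing sketch: decide whether a customer's reported location falls
# within a radius of the restaurant. All coordinates are illustrative.
def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def inside_geofence(customer, restaurant, radius_km=0.5):
    """True when the customer is within radius_km of the restaurant."""
    return haversine_km(*customer, *restaurant) <= radius_km

restaurant = (49.2827, -123.1207)   # downtown Vancouver, for illustration
```

Real deployments lean on the mobile platform’s geofencing services, but the decision logic is this simple at heart: a customer two blocks away triggers the mobile ad; one across town does not.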

A POS Cloud Facelift Cuts Costs, Boosts Sales

POS systems are another instance of cloud technology reducing costs.

Application Modernization – Legacy Moves to the Cloud

A large software vendor that had designed legacy POS applications for more than 30 years turned to Optimus when they needed to modernize their legacy POS. With more than 40,000 global customers to support, their internal IT team not only lacked the time to do the migration themselves but also the specialized expertise that this migration and design effort demanded. The results? By partnering with Optimus, this client has:

  • Reduced time-to-market and hosting costs
  • The capability to scale on demand in response to peaks and seasonal fluctuations
  • A new cloud app rolled out without disrupting existing customers
  • A future-proof roadmap with proximity to PaaS offerings for machine learning, IoT frameworks, blockchain technology and more

According to Restaurant Insider[iii], cloud-based POS works for any size of restaurant. Updated sales, inventory, customer feedback and other real-time data can be accessed from the cloud via a mobile device, anywhere. In other words, there is no need for costly on-premises servers, holding valuable and often confidential data.

But moving your legacy application to the cloud is not as simple as a “lift and shift”. You need to assess the code base, making changes to it to take advantage of cloud services while also optimizing the application so that it runs more efficiently in the cloud.

The right upfront efforts mean greater agility and cost savings while taking advantage of trends such as machine learning and blockchain.

Mobile Ready Everywhere

mPOS is a payment system that allows customers to pay by using a mobile wallet – simply a smartphone that contains encrypted credit or debit card information. Such a system appeals strongly to tech-savvy millennials, who control more than 30% of gross income today[iv] and who eat out frequently.

When a food company fails to keep up with the capabilities that modern technology can offer, they are at risk of losing clients and profits very quickly. These restaurants will look for software vendors who can help them.

Software application companies that have legacy applications and cannot quickly make the shift to the cloud will fall behind the technology curve. Their clients will either fall behind as well or abandon them for another vendor.

Plan for Success

Because modernization technologies for the restaurant industry offer so many opportunities that you can take advantage of, it is vital to talk with a company that works in this space every day, delivering custom-tailored solutions and driving cost savings for clients.

Optimus Information delivers cloud-supported application development for the restaurant and hospitality industry. Optimus works on the leading edge and provides your company with full software development and testing, mobile development on any platform, and business intelligence designed to suit your needs and wishes.

Contact us today for more information and let us show you how we can employ our skill with cloud technology to drive your business forward.






Game Changers: The Role of Big Data in the Future of Credit Unions

In 2002, Billy Beane was the general manager of the Oakland Athletics in Major League Baseball. Oakland was a small-market club with a similarly sized budget, and it struggled to be competitive.

Because Oakland didn’t have the money of big market teams like the New York Yankees or Los Angeles Dodgers, Beane knew he couldn’t hope to attract the high-priced talent – the superstars – to play in Oakland.

Enter Paul DePodesta, aged 27, an economics graduate from Harvard with an analytical mind and a love of baseball. His arrival on the doorstep of the Oakland A’s helped usher data analysis into professional sports.

He analyzed player stats, using computer algorithms, and his results allowed Oakland to sign inexpensive players that other teams dismissed. The A’s were propelled into the stratosphere of success, thanks to big data.

The A’s finished the 2002 season with 103 wins, the same number as the New York Yankees – but with a budget about a third the size.

This is the “secret sauce” in data analytics: the ability to take substantial amounts of information – in the case of Oakland, endless baseball player statistics – look for patterns and capitalize on what is found.
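The approach can be illustrated with a toy version of that analysis: rank entirely fictional players by on-base percentage per million dollars of salary, rather than by raw star power.

```python
# Fictional roster: rank players by on-base percentage per million dollars of
# salary - the kind of value metric the A's front office made famous.
players = [
    {"name": "Costly Star", "obp": 0.390, "salary_m": 6.5},
    {"name": "Solid Regular", "obp": 0.360, "salary_m": 0.8},
    {"name": "Overlooked Bargain", "obp": 0.340, "salary_m": 0.5},
]

def value_per_dollar(player):
    return player["obp"] / player["salary_m"]

# The cheapest adequate hitter, not the biggest name, tops the list.
bargains = sorted(players, key=value_per_dollar, reverse=True)
```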

Credit Unions, Machine Learning and Data Analytics

Credit unions in Canada are rapidly embarking on the same exploration. Using machine learning and data analytics, these financial firms are finding ways to improve service to their clients while discovering, in the vast amounts of data they collect, nuggets of information that can be turned into business opportunities.

Virtually every customer transaction within a credit union is electronic, and the amounts of data being collected are staggering. The need to analyze this information is what drives credit unions today to embrace machine learning and data analytics.

Matthew Maguire is the Chief Data Officer at Co-Op Financial Services, a California-based company that operates an interlinked system of ATMs throughout the U.S. and Canada. He argues that machine learning and data analysis are critical for mid-sized credit unions as they work to reinforce current customer relationships and build new ones.

“Data is coming in from different places and the challenge is… how do you make it all connect?[i]” he said.

Credit unions are moving quickly into data analysis. Through machine learning, which unearths customer transaction patterns by using algorithms, credit unions are learning a great deal about their customers and are designing strategies to capitalize on that in order to drive sales.

But, for credit unions, data enables other capabilities. Patterns of fraud can be easier to spot and shut down through data analysis.
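As a simplified illustration, a first-pass fraud screen can be as basic as flagging charges that sit far outside a member’s usual spending pattern. The amounts and threshold below are invented; production systems use far richer models:

```python
import statistics

# Illustrative fraud screen: flag charges that deviate sharply from a
# member's usual spending, using a simple z-score. The amounts and the
# threshold are invented for this sketch.
def flag_anomalies(amounts, threshold=2.0):
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    # Short-circuit on stdev avoids dividing by zero for flat histories.
    return [a for a in amounts if stdev and abs(a - mean) / stdev > threshold]

history = [42, 38, 55, 47, 51, 44, 39, 2500]   # one wildly atypical charge
```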

When a client invests with a credit union, regulations require the client to complete what’s called a Know Your Client form, which essentially draws a profile of risk tolerance and investment objectives. If the client’s portfolio strays from that profile and becomes riskier, big data can alert the financial institution and the problem can be corrected before any monetary loss accrues to the client – or to hundreds of thousands of clients.
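A sketch of such a drift check, with an assumed 0–10 risk scale and illustrative holdings, might look like this:

```python
# Know Your Client drift sketch: compare a portfolio's weighted risk score
# against the tolerance recorded on the client's KYC profile. The 0-10 risk
# scale, weights and tolerance are assumptions made for this illustration.
def portfolio_risk(holdings):
    """holdings: list of (weight, risk_score) pairs; weights sum to 1."""
    return sum(weight * risk for weight, risk in holdings)

def kyc_alert(holdings, tolerance):
    """True when the portfolio has drifted beyond the client's tolerance."""
    return portfolio_risk(holdings) > tolerance

# A 'balanced' client (tolerance 5) who has drifted into 70% high-risk equity:
drifted = [(0.7, 8), (0.3, 2)]
```

Run nightly across every account, a check like this surfaces drift before losses accrue – to one client or to hundreds of thousands.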

Chris Catliff is the president and CEO of Blueshore Financial, a B.C.-based credit union with more than $3 billion in assets. His vision of the future of credit unions is predicated on the power of data analytics in combination with machine learning.

He envisions the day very soon when a client approaching a branch receives a text message saying the client is already checked in at the branch. As they walk through the door, their customer profile and picture pop up on a screen [ii] at a concierge desk and they’re greeted by name.

Blueshore’s ATMs will respond to a customer’s biometrics and offer a transaction based on a pattern of previous transactions. Up-sell opportunities will present themselves, so staff can suggest options – situations that might never occur without data analysis.

Service, he said, “has to be electronic transactions with the introduction of superior, human touch at various critical points. It’s high tech and high touch.”

Explore Your Data Potential

Like the members they serve, every credit union is unique. It is imperative for a credit union to work with data specialists who can marry the individual needs of each credit union with high levels of expertise across big data, data analysis and machine learning.

One of our strengths here at Optimus is our track record in the areas of data gathering, analysis, machine learning, dashboarding and data visualization, through which we help our clients tailor data mining and analysis to their business goals.

At the end of the day, it’s all about staying competitive and, like the Oakland Athletics, reaching the pinnacle of success by embracing and employing new strategies to achieve that success.




10 Tips for Success with Big Data – Part 2

The following is part two of a two-part blog post. Click here to read part one.

6. Apps and Dashboards Beat Reports

BI owes much of its existence to the desire decision makers have to see reports. Reports, however, suffer from being boring and static. Whenever possible, try to move your BI output into apps and dashboards. This approach allows decision makers to revisit the data, fiddle with the knobs and dig deeper. While it can seem easier to supply a report and insist upon the rightness of a single viewpoint, most people who rely upon BI to make choices want to feel like they’ve seen all the angles.

7. Share

Whenever possible, you should try to involve other interested parties in the process. This means going outside your comfortable circles and asking for data sources. It means looking around at visualizations produced in-house and even by competitors. A good BI professional should never feel embarrassed to ask around and incorporate new ideas.

8. Identify Shocks

Good models are typically conservative and resistant to shocks. If the introduction of fresh data, especially in small amounts, upsets your analytics, you should know where the shock came from. This requires being aware of the quality controls you’re using and how specific data sources are weighted. Doing so can leave you better prepared to explain the differences between statistical noise and legitimate sources of concern.
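One simple way to quantify a shock is to measure how far a small batch of new records moves a headline metric. The figures below are illustrative:

```python
# Quantify a shock: measure how much a small batch of new records moves a
# headline metric. A large shift from few rows is worth tracing to its source.
def shift_from_new_data(existing, new_batch):
    """Relative change in the mean after appending new_batch."""
    old_mean = sum(existing) / len(existing)
    combined = existing + new_batch
    new_mean = sum(combined) / len(combined)
    return abs(new_mean - old_mean) / abs(old_mean)

baseline = [100, 102, 98, 101, 99]             # illustrative daily figures
shock = shift_from_new_data(baseline, [300])   # one suspect record
```

If a single record moves the mean by a third, that is either a data-quality problem or a genuinely volatile process – and this measurement tells you which question to ask.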

9. Move Less

Folks who work with data have a tendency to move data around. They often drop it into cubes that make their immediate jobs simpler. The problem, however, is that good BI needs to be as close to its original data sources as possible. Copying a data set and trimming it down makes that information instantly less useful and harder to mine. Unless there’s a compelling technical reason, refrain from making duplicates of data that’s best left in one spot.

10. Smart Tools

Building a brilliant visualization is a great feeling. It’s important, however, to have intelligent tools in place to make the process easier. Always try to use tools that allow you to quickly search through data. Also make a point of using software that lets you rapidly recompose your visualizations.


Working with big data can be an arduous and meticulous task, and working with partners and vendors that have the expertise and technical domain knowledge is critical to success.

Optimus’ BI specialists have worked with companies, large and small, to successfully implement their BI initiatives. We know what works, what doesn’t work and how your company can best leverage its data sources.

Call us today to find out how you can get more from your BI.


10 Tips for Success with Big Data – Part 1

The age of big data has been a boon for anyone in the business intelligence world. Creating reports, apps and visuals that motivate decision-making processes is simply easier when you have a large data set to draw upon. There are, however, a number of issues to keep in mind. Here are 10 tips for anyone who’s looking to more successfully utilize big data for BI purposes.

1. Maintain Data Formats

There’s a temptation when putting data to use to reformat the data set for greater ease of use. It’s not unusual, for example, to remotely retrieve a data set using JSON and then dump the key and value pairings because the information contained in the keys is seen as useless. Those pairings often offer insights that only become useful further down the road. By maintaining the original format of a data set, you preserve information like time sequences and references – beneficial if you’re later asked to track down a specific data point.
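A small Python example of the difference, with an invented record layout:

```python
import json

# Keep the retrieved JSON intact instead of flattening it to bare values.
# The record layout (timestamp keys -> readings) is purely illustrative.
raw = '{"2024-01-05T10:00:00Z": 17, "2024-01-05T11:00:00Z": 23}'

records = json.loads(raw)               # keys (the time sequence) preserved
values_only = list(records.values())    # tempting, but the sequence is lost

def value_at(records, timestamp):
    """Possible only because the original key/value format was kept."""
    return records.get(timestamp)
```

The flattened `values_only` list can still feed a chart, but only the intact `records` can answer “what was the reading at 11:00?” later on.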

2. Security Matters 

It can be a lot of fun splashing around a big pool of data, but it’s also important to be prepared to be told no and to tell others no. Security protocols exist for a reason. Your BI systems should already have industry-standard security in place, and you should not undermine it by failing to set limits on authorization. As exciting as it can be to share data, it’s always critical to be sure that you and those you share it with have a right to access it.

3. Price Traps

At the petabyte scale, the cost of storing and transmitting data can be staggering. It’s easy to buy into the argument from vendors that big data costs mere pennies per gigabyte. Likewise, vendors love to price SaaS systems on a per-user basis. You always want to make sure that your operation is paying the lowest feasible prices for its BI systems, and that often means negotiating with vendors. Whenever possible, try to arrive at flat prices or low rates with strict limits in place.

It’s also important to bear in mind that many vendors are hoping you’ll go over your limits. Make sure your BI implementations shut down access to resources before they cause your fees to go through the roof. Remotely hosted storage and processing providers have built their business models on the belief that people rarely show restraint when playing with a fun toy. Contain yourself.
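A spending guard of this kind can be sketched in a few lines. The per-gigabyte price and monthly cap below are assumptions, not any vendor’s actual rates:

```python
# Spending guard sketch: refuse storage allocations once projected cost would
# pass a hard monthly cap. The price and cap are assumptions, not real rates.
PRICE_PER_GB = 0.023    # $/GB-month, illustrative
MONTHLY_CAP = 500.00    # negotiated hard limit, $

class StorageBudget:
    def __init__(self, cap=MONTHLY_CAP, price=PRICE_PER_GB):
        self.cap, self.price, self.used_gb = cap, price, 0.0

    def request(self, gb):
        """Allow an allocation only while projected cost stays under the cap."""
        if (self.used_gb + gb) * self.price > self.cap:
            return False    # shut down access before fees spike
        self.used_gb += gb
        return True

budget = StorageBudget()
```

The point is where the check lives: inside your own pipeline, before the request reaches the vendor, rather than on next month’s invoice.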

4. Don’t Let Data Delay Decisions

There’s always a pressure in the world of BI to have the freshest data. The problem with this attitude is that it can inhibit decision-making processes and ultimately undermine the value of analysis. Your operation cannot afford to be impaired by a ceaseless wait for new data.

De-emphasizing the absolute freshest data can also help you realize speed and efficiency gains. For example, it’s easy to see how caching your data can improve performance; it does, however, come at the cost of forgoing access to the very latest figures. If the arrival of a small sliver of data can disrupt the decisions being made upon it, that says more about the volatility of what’s being studied than about the freshness of the data.
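The caching trade-off can be sketched with a simple time-to-live cache; names like `TTLCache` and `expensive_query` are illustrative, not from any particular BI product:

```python
import time

class TTLCache:
    """Minimal time-to-live cache: trade a bounded staleness window
    for fewer round trips to the underlying data source."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, fetched_at)

    def get(self, key, fetch):
        """Return a cached value unless it is older than the TTL."""
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]      # possibly slightly stale, but fast
        value = fetch()          # slow path: refresh from the source
        self._store[key] = (value, now)
        return value

calls = 0
def expensive_query():
    global calls
    calls += 1
    return 42

cache = TTLCache(ttl_seconds=60)
assert cache.get("daily_revenue", expensive_query) == 42
assert cache.get("daily_revenue", expensive_query) == 42
assert calls == 1  # second read served from cache, no new query
```

The TTL makes the staleness explicit and bounded, which is the honest version of the bargain: you decide up front how old is too old, rather than waiting indefinitely for fresher numbers.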

5. Don’t Discard the Outliers

The presentation of most large data sets almost always leads to questions about outliers. There’s a strong temptation to discard them and present the cleanest visuals possible. The willingness to retain outliers, however, is a signal of quality and honesty, and they can and should be discussed when they’re discovered. If you’re asked about the lack of smoothness that outliers create, the wisest choice may be to explain that throwing them out inhibits discussion and is frequently a sign of manipulation. It’s always better to follow the data wherever it goes.
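One way to keep outliers in the conversation is to flag them rather than delete them; the readings and z-score threshold below are made-up examples:

```python
import statistics

readings = [101, 99, 102, 98, 100, 450]  # 450 is a suspicious spike

def flag_outliers(data, z=3.0):
    """Mark points far from the mean instead of discarding them."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return [(x, abs(x - mu) > z * sigma) for x in data]

flagged = flag_outliers(readings, z=1.5)

# The spike stays in the data set, labelled for discussion:
assert flagged[-1] == (450, True)
assert all(not is_outlier for _, is_outlier in flagged[:-1])
```

A chart can then de-emphasize the flagged points visually while the underlying data set stays complete and auditable.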

Stay tuned for part two of this two-part blog post.

How IoT Will Drive Big Data Adoption

According to Internet of Things true believers, the time is just around the corner when our cars, homes, appliances, TVs, PCs, phones and every other electronic or mechanical device in our lives will be spewing out data in all directions. That makes some sense: the IoT devices now envisaged have minimal compute capacity of their own and are designed primarily to generate and transmit data.

Cisco estimates that nearly 15 billion connected devices already comprise the nascent IoT, a figure expected to grow to 50 billion by 2020. That sounds impressive until you realize it is less than 3 percent of the “things” on our planet potentially able to participate in the IoT. Unfamiliar numerical terms such as zettabytes must enter our lexicon to describe the volume of data to be generated, consumed and analyzed.

What the IoT Data Wave Means for Big Data

The processing of the rivers of big data coming from today’s embedded sensors, telemetry, RFID chips, PCs, mobile devices, wearables and the like already leaves 90 percent of these data in the dustbin. That is primarily because current big data hardware and software stacks are inadequate to manipulate it all, let alone comprehend it.

Big data compute, storage and networking capabilities improve daily. However, even those enterprises on big data’s bleeding edge are today ill-equipped to handle the expected data flood gushing from the IoT let alone the larger Internet of Everything that Cisco tracks.

Even if the IoT takes twice or thrice as long to arrive as most projections suggest, big data enterprises are going to be behind the curve for the foreseeable future. The constant running to catch up will be the prime driver of the big data ecosystem beyond the next decade. If that does not kill big data, it will only make it stronger. Enterprises large and small will join the data mining gold rush if real-time analytics improve and a big data meta-architecture, as hinted at by Hadoop, emerges.

The Obstacles to a Happy Marriage between IoT and Big Data

Lack of Standards

Having to figuratively reinvent the wheel over and over again is the bane of any competitive industry. Without standards, IoT will struggle to reach escape velocity due to technology fragmentation. Standards must be in place for efficient access to “things”, consistent API interfaces, machine-to-machine communication, addressing privacy and security issues, and lowering entry barriers for smaller, innovative players.

Closed or Inefficient Architectures

IoT is a game changer for big data architecture. All stakeholders are just now starting to recognize that dealing with IoT will require as much collaboration as competition.

The sheer magnitude of IoT data volumes dictates a layered hardware/software stack too gigantic, geographically dispersed and complex for any single enterprise or cloud provider. It begs for an ultra-distributed meta-architecture that, step by step, digests, absorbs and disperses unstructured data as it is collected, cleaned, normalized, correlated with other data, stored when necessary, deeply analyzed and presented. Along the way, vendors who today specialize in each of these processing layers will contribute via enormous arrays of small-scale data centers.
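The layered processing described above can be sketched, very loosely, as a chain of stages where each layer digests the stream and hands it on; the stage names and toy data are illustrative only:

```python
# Each function stands in for one layer of the hypothetical
# meta-architecture: collect, clean, normalize, correlate.
def collect(raw):
    return [r.strip() for r in raw]         # ingest and trim raw readings

def clean(rows):
    return [r for r in rows if r]           # drop empty readings

def normalize(rows):
    return [r.lower() for r in rows]        # unify naming conventions

def correlate(rows):
    return {r: rows.count(r) for r in rows} # simple frequency correlation

def pipeline(raw):
    data = raw
    for stage in (collect, clean, normalize, correlate):
        data = stage(data)                  # each layer hands off downstream
    return data

result = pipeline(["  Sensor-A ", "", "sensor-a", "Sensor-B"])
assert result == {"sensor-a": 2, "sensor-b": 1}
```

In the envisioned architecture, each of these stages would run in a different vendor’s layer, possibly in a different data center, rather than in one process as here.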

Analytics Capability Growth Rate

Above all else, business intelligence processing is the critical bottleneck to realizing the full potential of big data. The rate at which supporting analytics can improve is questionable without significant breakthroughs, but the search for data gold represents an immeasurable incentive. The deluge of IoT real-time data headed down the analytic pipeline will create even more pressure but is likely to engender even more opportunities for value extraction.


The Internet of Things is not an invention but a logical consequence of highly available, low-power, low-cost sensor technology and improvements in wireless connectivity penetration. Related technology improvements and cost reductions in compute, storage and network hardware will complement the growth of IoT and make it something useful and valuable. And, finally, IPv6 is going to receive the appreciation it justly deserves.

All this power to generate, gather and process new, real-time micro-data is for naught, however, if the data must be set aside to await analysis capabilities that have yet to catch up. Fortunately, although big data infrastructure and software are likely to be overwhelmed initially, both infrastructure and analytic capabilities have a bit of a head start. Increased collaboration among stakeholders, an effective shared processing architecture and the inevitable analytical breakthroughs may just carry the day in the end.

Business Benefits from Big Data Analytics

Companies have a multitude of big data sources including internal databases, activity on their web and social media pages, mobile apps, and third-party sources. The growing collection of sensors coming online from the Internet of Things may soon overwhelm other sources.

Most enterprises are only now realizing how deep analysis of these data can benefit their business strategies. They may have developed a few business cases, but do not yet understand the full potential of a big data strategy for their enterprise.

The big data leaders have run pilot programs that taught them how to effectively acquire and analyze big data. These organizations are now reaping the benefits of improved decision-making, greater productivity, cost reductions and deeper comprehension of their markets and customers.

Big Data Strategic Value

Customer Segmentation

Identification of high-value customers is essential for fine-tuning a company’s sales and marketing efforts. Data from any site where customers research, buy and return products or request support are invaluable in discerning patterns of behavior, preferences and satisfaction.

Contextual information such as location, network connectivity, weather, time of day and other transactions before or after a sale provides valuable correlations. Real-time analytics allow specific ad campaigns and monetary or non-monetary incentives to be tested with same-day feedback.
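A hedged sketch of high-value customer identification using a simple recency/frequency/monetary (RFM) score; the field names and weights below are assumptions for illustration, not any platform’s schema:

```python
# Made-up customer records with the three classic RFM signals.
customers = [
    {"id": "c1", "days_since_purchase": 5,  "orders": 12, "spend": 900.0},
    {"id": "c2", "days_since_purchase": 90, "orders": 2,  "spend": 60.0},
    {"id": "c3", "days_since_purchase": 10, "orders": 8,  "spend": 450.0},
]

def rfm_score(c):
    """Combine recency, frequency and monetary value into one score.
    The weights are arbitrary illustrations, tuned per business."""
    recency = 1.0 / (1 + c["days_since_purchase"])  # fresher is better
    return recency * 100 + c["orders"] * 10 + c["spend"] * 0.1

# Rank customers so sales and marketing can focus on the top tier.
ranked = sorted(customers, key=rfm_score, reverse=True)
assert ranked[0]["id"] == "c1"  # most recent, most frequent, biggest spender
```

Real segmentation pipelines typically replace the hand-tuned weights with clustering or model-driven scoring, but the underlying signals are the same.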

Decision-Making Support

Big data analysis is employed to test longer-term tactics and strategies too. Hypotheses can be proposed, experiments put in place and the results analyzed from multiple data pools to verify results. Depending on the degree of granularity, correlations can feed causal analysis to support current decisions or company forecasts. These experiments are also beneficial in judging the usefulness of data from various sources.
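One common shape such an experiment takes is comparing conversion rates from two data pools; the sketch below uses a standard two-proportion z-test with made-up numbers:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical experiment: new campaign converts 13% vs. 10% control.
z = two_proportion_z(success_a=130, n_a=1000, success_b=100, n_b=1000)

# |z| > 1.96 corresponds to significance at the 5 percent level.
assert z > 1.96
```

Running the same test against data from several independent pools, as the paragraph suggests, guards against a single source producing a spurious result.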

Product and Service Development

Big data is being utilized by product development teams for targeting new products that promote deeper customer engagement or meet consumer needs that were not readily apparent. Big data helps better understand the how, why, what and where of customer interactions with the products and the brand.

Such insights can transform a company’s product strategy from a reactive one in which improvements are made to existing products based on customer feedback or competitor’s enhancements to a proactive mindset from which entirely new features or product lines are created.

Additional big data insights can suggest and test the most successful distribution timing, locations and optimizations to the distribution chain in order to reduce the risks of product launch.

Big Data Use Cases


Healthcare

Large healthcare providers are augmenting knowledge from limited pharmaceutical clinical trials with actual usage data from the field that provide further indications of a drug’s benefits and risks. The same principle is being applied to the outcomes of interactions with caregivers such as doctors, physician’s assistants and nurses.

The big data streams from which these insights are gleaned include clinical activity, insurance claims, medical product research and records of patient behavior. These types of analyses generate tremendous benefits for drug makers, medical personnel as well as patients.


Retail

Data-centric retailers are tapping into oceans of data looking for clues to customer preferences and behavior. They analyze how customers research products, which products they buy, how transactions are completed, product returns, responses to marketing campaigns and which sales channels draw the most attention. Nearly 60 percent of retailers reported to the Aberdeen Group that their number one priority is improving customer insights via improved data analytics.


Manufacturing

Process-based manufacturers employ advanced analytics to improve productivity while cutting costs. They do so by extracting, correlating and visualizing data from their operational systems, production floors and warehouses. This assists in identifying the most significant determinants of process efficiency, which provides a basis for adjustments. Additionally, they are using external data streams to improve product demand forecasting and to evaluate supplier quality and performance.


Big data analytical power is growing exponentially in terms of raw capacity and sophistication. Even the largest information-driven enterprises are presently mining only the tip of the information available to them. They are concentrating first on data that pertain to business processes, then on customer and market analysis. However, new uses for the growing volumes of data are being invented continuously.

It is no longer too early to tell what impact big data will have on companies’ efforts to streamline operations, improve customer assessment and optimize decision-making. The results coming in from enterprises big and small are positive and sure to improve as big data capabilities expand.

10 Hadoop Big Data Predictions for 2015

The forecast for the IoT analytics market is one of stupendous growth at a 30 percent compounded rate over the next five years. While the growth to this point has been driven by an increasing number of smart devices coming online, the next leg up will result from an explosion of analytical tools, frameworks and hardware needed to collect, process and analyze the expected IoT data tsunami.

Apache Hadoop, the dominant open-source big data solution, is expected to reap huge benefits from this phenomenon, according to multiple research organizations including Forrester, IIA and IDC. Here are the top ten 2015 predictions regarding Hadoop from these and other sources.

1 – Hadoop Is Going Mainstream

Given that the likes of Verizon, Netflix and Wal-Mart are already fully on board, going mainstream means Hadoop is about to vanquish the final hold-out CIOs and CTOs who have yet to make Hadoop a priority.

2 – Distributions and Competition Are on the Rise

Oracle, SAP, Software AG, HP and Tibco will soon add their Hadoop distributions to those of IBM and Pivotal. Vertical Hadoop cloud solutions are just around the corner, all of which provide enterprises plenty of choice and leverage.

3 – Big Names Will Fill in the Gaps

Where Hadoop has solution gaps, there are plenty of big players now ready to swoop in, such as Informatica, Oracle, IBM, Microsoft, Teradata and the list goes on. These all have products out or on the drawing board for analytics, data management and governance that will work out of the box with enterprise apps.

4 – Hadoop Cloud Clusters Increase Cloud Demand

Many enterprises find that the variance in Hadoop cluster requirements for compute, storage and network resources is large enough that it pays to move it to the cloud. There, they can ramp up as many Hadoop nodes as needed on a short-term basis without the maintenance headaches of on-site machines.

5 – Linux and Windows Build-in Hadoop

Following the example of free, built-in HTTP servers, the big OS vendors will add Hadoop to their offerings and will do so en masse. This makes it a snap for IT or non-IT professionals to add a Hadoop node and join a cluster.

6 – The SQL-Hadoop Saga Comes to a Happy End

Until recently, the relationship between Hadoop and SQL amounted to a multitude of competing SQL-on-Hadoop solutions. Now, Apache Drill enables SQL over schema-less files, Hive, HBase, MongoDB and JSON, while Hive offers enterprise-grade SQL features and multi-engine execution. In short, both offer a tremendous boost to Hadoop adoption.

7 – Hadoop as an Application Platform

The newest YARN management features, HBase database options and in-memory Apache Spark all contribute to the concept of Hadoop as a mature platform supporting direct execution of database and middleware software.

8 – Hadoop Skills Will Come from Inside

Boiled down, Hadoop is a distributed file system built on a distributed compute platform. It is written in Java and commonly queried through SQL. Fundamentally, there is not much new there. Experts in both technologies already fill enterprise development teams, so their conversion to Hadoop experts is logical. Furthermore, Hadoop tools are improving rapidly, making the platform easier to use.
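To illustrate how little is fundamentally new, the map and reduce steps that Hadoop distributes across a cluster can be sketched in a few lines; here both run in one process rather than on Hadoop itself:

```python
from collections import defaultdict

def mapper(line):
    """Emit (word, 1) pairs -- the role of a Hadoop map task."""
    return [(word, 1) for word in line.split()]

def reducer(pairs):
    """Sum counts per key -- the role of a Hadoop reduce task."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

# Toy input standing in for files spread across a cluster.
lines = ["big data big", "data platform"]
pairs = [p for line in lines for p in mapper(line)]
assert reducer(pairs) == {"big": 2, "data": 2, "platform": 1}
```

Any developer comfortable with this pattern is most of the way to understanding what a Hadoop job does; the hard part the framework adds is distributing, shuffling and fault-tolerating these steps across many machines.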

9 – Multimedia Analytics Will Triple

That is according to IDC. Already half of big businesses include video, images and audio tracks in their big data analytics streams, but there is plenty of growth and opportunity for enterprises that can extract the higher value from these sources.

10 – Big Data Security Concerns

Predictive analysis over big data is already employed by defense systems looking for malicious network activity. Solutions in this area are predicted to rise dramatically in the fight against cyber-criminals. Additionally, new tools and services will grow rapidly to protect businesses and consumers from the privacy threats inherent in big data and the impending wave of IoT devices.


The onslaught of data from the IoT and the thirst for improved big data analytics to support informed decision-making will drive Hadoop adoption like never before in 2015 and beyond. As it does, enterprise focus will perceptibly shift, starting this year, from how to manage so much data to how deeper, more valuable patterns can be extracted from current and future data streams. This is what Hadoop is made for.

Those companies that have not yet embraced the advantages of Hadoop’s open, community-supported big data platform, and the immense ecosystem it is spawning, will be coming on board soon.