
How Cloud is Cooking Up Change in the Restaurant Industry

As more and more restaurant owners look for ways to cut costs, improve customer service and drive loyalty initiatives, application modernization, including cloud applications, mobile apps and predictive data modelling, is featuring at the top of their “must do” menus. Software companies and ISVs that serve the restaurant and hospitality industry are in a race to offer more, migrating older legacy apps to the cloud and taking advantage of real-time data analysis tools to deliver new features and functions to their customers, all while ensuring security and data privacy are still the “special of the day”.

Patrick Albrecht owns and operates four restaurants in Atlanta, Georgia. At one point, he says, he had tens of thousands of receipts, invoices, health department documents and payroll data stuffed into the various offices in his restaurants.

By subscribing to a cloud-based mobile app[i] that could organize his overwhelming mountain of paperwork, he figures he’s now saving 2% each month in overhead costs across his four food operations. He can find any document in a matter of seconds, and he can compare vendor delivery prices to determine who is giving him the best deal.

Albrecht is one of a growing number of smart restaurant owners who have embraced and employed some of the new cloud-based technologies to make operations more efficient and cost-effective.

Born in the Cloud is Disrupting the Restaurant Industry

New “born in the cloud” restaurant applications are coming into the market. That is great news for restaurateurs, who will have a choice of vendors offering the new capabilities and new functionality that restaurant customers demand.

If you are a team supporting restaurant clients on an existing legacy application, you might need external help to accelerate your own application modernization efforts and take on these new competitors.

Legacy restaurant applications without an achievable plan to modernize quickly are at risk of customer erosion.

Threats Abound

Restaurant chains and food companies that own various restaurant brands know how tough the business is. Competition is fierce, customer loyalty is fickle and profit margins are skinny. The difference between red and black on the bottom line is often as thin as the edge of a knife. Having the right information at the right time from POS, loyalty, vendor management and surveying systems is critical for owners who need to make decisions that can mean the difference between remaining in business and shutting the doors.

Social, Mobile and Data Drive Transformation in the Restaurant Industry

Cloud technology and third-platform innovation accelerators[ii] like social, mobile and big data are changing the landscape for the restaurant industry. Large chain firms and independent operators have experienced how mobile apps can boost business simply by making it convenient for customers to order, pay and review their experience, all with a smartphone.

Many restaurants have either directed their internal software developers to modify existing applications, or they are looking to their existing POS vendor to do it for them.

There’s just one problem: many of these existing applications were designed long before cloud, social and mobile were part of the landscape. Moving a legacy application into the cloud while taking advantage of new functionality AND ensuring that security and data privacy are paramount is tough to do.

It requires specific skills and knowledge. Few companies can say they are masters of all the aspects needed to modernize.

What to Expect with Application Modernization

Data Available on Any Device

Restaurant owners Matt and Dianne Kraft can now check restaurant sales on their mobile phones while sitting beside a campfire on a family vacation. “We rarely used to take vacation,” Dianne admitted. “We needed to be at the restaurant to stay on top of everything. But now we can see exactly what is happening in real time; we can even monitor problems as they happen. For example, issues that used to be buried – like a high number of orders being returned – are flagged. We can simply make a quick call to determine if it is a kitchen problem or a server problem, and by working with our manager, get it resolved quickly before customer satisfaction is impacted.”

The ability to use cloud to amass data from multiple points and sources is making an enormous difference in the restaurant industry today. Cloud enables pooled data to drive analysis that was unavailable before now. Restaurants can profile their customers in fine detail and then design promotions targeted specifically at them. Many legacy apps can’t achieve this sophisticated level of analytics because they weren’t written with current analytic models in mind.

Roughly 60% of consumers are making a restaurant selection – and a reservation – based on what they find on apps like Yelp and OpenTable. Because these apps are using cloud technology, the data these restaurants collect can determine booking patterns, eating tastes, the age of their customers and even their income levels. That’s valuable information for restaurant owners, who can harness this data and use it to drive promotions, communications, messages and offers.

Some cloud-based apps alert a restaurant when a customer is in the area by using what’s called geo-fencing. This creates an opportunity to immediately transmit a mobile ad to that customer.

A POS Cloud Facelift Cuts Costs, Boosts Sales

POS systems are another instance of cloud technology reducing costs.

Application Modernization – Legacy Moves to the Cloud

A large software vendor that had been designing POS applications for more than 30 years turned to Optimus when it needed to modernize its legacy POS. With more than 40,000 global customers to support, its internal IT team had neither the time to do the migration itself nor the specialized expertise that this migration and design effort demanded. The results? By partnering with Optimus, this client has:

  • Lower time-to-market and hosting costs
  • The capability to scale on demand in response to peaks and seasonal fluctuations
  • A new cloud app rolled out without disrupting existing customers
  • A future-proof roadmap with proximity to PaaS offerings for machine learning, IoT frameworks, blockchain and more

According to Restaurant Insider[iii], cloud-based POS works for any size of restaurant. Updated sales, inventory, customer feedback and other real-time data can be accessed from the cloud via a mobile device, anywhere. In other words, there is no need for costly on-premises servers holding valuable and often confidential data.

But moving your legacy application to the cloud is not as simple as a “lift and shift”. You need to assess the code base and change it to take advantage of cloud services, while also optimizing the application so that it runs more efficiently in the cloud.

The right upfront efforts mean greater agility and cost savings while taking advantage of trends such as machine learning and blockchain.

Mobile Ready Everywhere

mPOS is a payment system that allows customers to pay by using a mobile wallet: simply a smartphone that contains encrypted credit or debit card information. Such a system appeals strongly to tech-smart millennials, who control more than 30% of gross income today[iv] and who eat out frequently.

When a food company fails to keep up with the capabilities that modern technology can offer, it risks losing clients and profits very quickly. These restaurants will look for software vendors who can help them.

Software application companies that have legacy applications and cannot quickly make the shift to the cloud will fall behind the technology curve. Their clients will either fall behind as well or abandon them for another vendor.

Plan for Success

Because modernization technologies offer the restaurant industry so many opportunities, it is vital to talk with a company that works in this space every day, delivering custom-tailored solutions and driving cost savings for clients.

Optimus Information delivers cloud-supported application development for the restaurant and hospitality industry. Optimus works on the leading edge and provides your company with full software development and testing, mobile development on any platform, and business intelligence designed to suit your needs and wishes.

Contact us today for more information and let us show you how we can employ our skill with cloud technology to drive your business forward.

 

[i] https://www.entrepreneur.com/article/223562

[ii] https://www.idc.com/promo/thirdplatform/innovationaccelerators

[iii] https://upserve.com/restaurant-insider/benefits-cloud-based-restaurant-pos/

[iv] https://www.michaelhartzell.com/restaurant-blog/how-restaurants-can-use-mobile-payment-technology-to-increase-profits

Game Changers: The Role of Big Data in the Future of Credit Unions

In 2002, Billy Beane was the general manager of the Oakland Athletics in Major League Baseball. Oakland was a small-market club with a budget to match, and it struggled to be competitive.

Because Oakland didn’t have the money of big market teams like the New York Yankees or Los Angeles Dodgers, Beane knew he couldn’t hope to attract the high-priced talent – the superstars – to play in Oakland.

Enter Paul DePodesta, aged 27, an economics graduate from Harvard with an analytical mind and a love of baseball. His arrival on the doorstep of the Oakland A’s helped give birth to data analysis in professional sports.

He analyzed player stats, using computer algorithms, and his results allowed Oakland to sign inexpensive players that other teams dismissed. The A’s were propelled into the stratosphere of success, thanks to big data.

The A’s finished the 2002 season with 103 wins, the same number as the New York Yankees – but with a budget roughly a third the size.

This is the “secret sauce” in data analytics: the ability to take substantial amounts of information – in the case of Oakland, endless baseball player statistics – look for patterns and capitalize on what is found.

Credit Unions, Machine Learning and Data Analytics

Credit unions in Canada are rapidly embarking on the same exploration. Using machine learning and data analytics, these financial firms are finding ways to improve service to their clients while discovering, in the vast amounts of data they collect, nuggets of information that can be turned into business opportunities.

Virtually every customer transaction within a credit union is electronic, and the amounts of data being collected are staggering. The need to analyze this information is what drives credit unions today to embrace machine learning and data analytics.

Matthew Maguire is the Chief Data Officer at Co-Op Financial Services, a California-based company that operates an interlinked network of ATMs throughout the U.S. and Canada. He argues that machine learning and data analysis are critical for mid-sized credit unions as they work to reinforce current customer relationships and build new ones.

“Data is coming in from different places and the challenge is… how do you make it all connect?[i]” he said.

Credit unions are moving quickly into data analysis. Through machine learning, which unearths customer transaction patterns by using algorithms, credit unions are learning a great deal about their customers and are designing strategies to capitalize on that in order to drive sales.

But, for credit unions, data enables other capabilities. Patterns of fraud can be easier to spot and shut down through data analysis.

When a client invests with a credit union, regulations require the client to complete what’s called a Know Your Client form, which essentially draws a profile of risk tolerance and investment objectives. If the client’s portfolio strays from that profile and becomes riskier, big data can alert the financial institution and the problem can be corrected before any monetary loss accrues to the client – or to hundreds of thousands of clients.

Chris Catliff is the president and CEO of BlueShore Financial, a B.C.-based credit union with more than $3 billion in assets. His vision of the future of credit unions is predicated on the power of data analytics in combination with machine learning.

He envisions the day very soon when a client approaching a branch receives a text message saying the client is already checked in at the branch. As they walk through the door, their customer profile and picture pop up on a screen at a concierge desk[ii] and they’re greeted by name.

BlueShore’s ATMs will respond to a customer’s biometrics and offer a transaction based on a pattern of previous transactions. Up-sell opportunities will present themselves, so staff can suggest options – situations that might never occur without data analysis.

Service, he said, “has to be electronic transactions with the introduction of superior, human touch at various critical points. It’s high tech and high touch.”

Explore Your Data Potential

Like the members they serve, every credit union is unique. It is imperative for a credit union to work with data specialists who can marry the individual needs of each credit union with high levels of expertise across big data, data analysis and machine learning.

One of our strengths here at Optimus is our track record in data gathering, analysis, machine learning, dashboarding and data visualization, through which we help our clients tailor data mining and analysis to their business goals.

At the end of the day, it’s all about staying competitive and, like the Oakland Athletics, reaching the pinnacle of success by embracing and employing new strategies to achieve that success.

 

[i] https://www.pymnts.com/big-data/2018/credit-unions-big-data-authentication-aml-kyc/
[ii] http://enterprise-magazine.com/features/betting-big-on-big-data/

 

GROUP BY vs DISTINCT in SQL Server

Yesterday, I was travelling in the metro where I overheard two college students puzzling over the difference between GROUP BY and DISTINCT.

GROUP BY and DISTINCT cause a lot of confusion, and I couldn’t help myself, so I jumped into their conversation and explained the differences.

Their problem was understanding where to use GROUP BY and where to use DISTINCT, given that both return the same results in most scenarios when no aggregates are used, and which one performs better.

GROUP BY and DISTINCT both generate the same query execution plan when no aggregates are used, so there is no appreciable difference in performance in this situation.
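
You can check this yourself by comparing the estimated plans; here is a minimal sketch using the Student table from the examples below (each SET statement must run in its own batch):

SET SHOWPLAN_TEXT ON
GO

SELECT DISTINCT Name, RollNo FROM Student
GO

SELECT Name, RollNo FROM Student GROUP BY Name, RollNo
GO

SET SHOWPLAN_TEXT OFF
GO

Both statements should show an identical plan, typically a table scan followed by a sort or hash aggregate.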

GROUP BY becomes important when aggregates are used. DISTINCT is just for filtering out duplicate records from query result sets.

When aggregates are not used in a query, one should use DISTINCT to enhance readability, since the performance is the same. When aggregates are needed, DISTINCT cannot express the query and GROUP BY is the tool to reach for.

Here are some examples:

DISTINCT

SELECT DISTINCT Name, RollNo
 FROM Student

GROUP BY

SELECT Name, RollNo
 FROM Student
 GROUP BY Name, RollNo

GROUP BY with Aggregate Function

SELECT Name, RollNo, Sum(Marks) TotalMarks
 FROM Student
 GROUP BY Name, RollNo


Include more than 1000 Values in Oracle IN Clause

In Oracle, we can’t include more than 1000 literal values in an “IN” clause; exceeding the limit raises “ORA-01795: maximum number of expressions in a list is 1000”.

To get around this limitation, divide the comma-delimited string of values into two or more parts, using SUBSTR or a similar function, and then place the resulting strings in the IN clauses of separate SQL statements that are combined with the UNION operator.

Sample Code

First, create two hidden parameters that contain the first and second halves of the selected parameter values. Custom code can be used for this.

Here is sample code that breaks the parameter into two hidden parameters, each covering half of the original value list.

Public Function Group1(ByVal parameter As Parameter) As String
    Dim s As String
    s = String.Empty
    If parameter.IsMultiValue Then
        ' First half of the values: indexes 0 to (Count \ 2) - 1
        For i As Integer = 0 To (parameter.Count \ 2) - 1
            s = s & CStr(parameter.Value(i)) & ","
        Next
        ' Remove the trailing comma
        If s.Length > 0 Then s = s.Substring(0, s.Length - 1)
    Else
        s = CStr(parameter.Value(0))
    End If
    Return s
End Function

Public Function Group2(ByVal parameter As Parameter) As String
    Dim s As String
    s = String.Empty
    If parameter.IsMultiValue Then
        ' Second half of the values: indexes Count \ 2 to Count - 1
        For i As Integer = (parameter.Count \ 2) To parameter.Count - 1
            s = s & CStr(parameter.Value(i)) & ","
        Next
        ' Remove the trailing comma
        If s.Length > 0 Then s = s.Substring(0, s.Length - 1)
    Else
        s = ""
    End If
    Return s
End Function

Sample Query

Now we use two SQL queries, each including one of the hidden parameters, joined by the UNION operator:

SELECT * FROM TABLE_1 WHERE VALUE_1 IN (:prmHiddenInspectorFirstGroup1)
UNION
SELECT * FROM TABLE_1 WHERE VALUE_1 IN (:prmHiddenInspectorFirstGroup2)

Obviously, this example breaks down when you get past 2000 values, but it is presented here as a sample approach that can easily be modified to suit more advanced situations.
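
Since each half stays under the 1000-value limit, the two IN lists can also be combined with OR in a single statement instead of UNION; a sketch using the same hidden parameters:

SELECT * FROM TABLE_1
WHERE VALUE_1 IN (:prmHiddenInspectorFirstGroup1)
OR VALUE_1 IN (:prmHiddenInspectorFirstGroup2)

Note that UNION also removes duplicate rows, so the two forms are only equivalent when the table holds no duplicates or duplicates are acceptable.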


Accessing MySQL from Java

This post shows how to access MySQL from Java: creating a database, setting the CLASSPATH, writing a Java database class and then checking the results.

Create MySQL Database

First we need to create a database.

C:\WDev\Research\Programming\Java\Database>mysql -u root -p
 Enter password: *****
 ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

C:\WDev\Research\Programming\Java\Database>mysql -u root -p
 Enter password: *****
 Welcome to the MySQL monitor. Commands end with ; or \g.
 Your MySQL connection id is 26
 Server version: 5.1.59-community MySQL Community Server (GPL)

Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
 affiliates. Other names may be trademarks of their respective
 owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE DATABASE School;
 Query OK, 1 row affected (0.00 sec)

mysql> quit

Setting CLASSPATH in Environment Variables on Your Windows System

Now, download the MySQL JDBC Connector and add its JAR file to the CLASSPATH.

Java Database Class

Create your Java database class.

import java.sql.*;

public class Database
{
    public static void main (String[] args)
    {
        Connection conn = null;
        String userName = "root";
        String password = "mysql";

        // Load the driver and open a connection to the School database.
        try
        {
            String url = "jdbc:mysql://localhost/School";
            Class.forName ("com.mysql.jdbc.Driver").newInstance ();
            conn = DriverManager.getConnection (url, userName, password);
            System.out.println ("Database connection established");
        }
        catch (Exception e)
        {
            System.err.println ("Error message: " + e.getMessage ());
        }

        // Create the student table and insert a few rows.
        try
        {
            Statement s = conn.createStatement ();
            int count;
            s.executeUpdate ("DROP TABLE IF EXISTS student");
            s.executeUpdate (
                "CREATE TABLE student ("
                + "id INT UNSIGNED NOT NULL AUTO_INCREMENT,"
                + "PRIMARY KEY (id),"
                + "name CHAR(40), phone CHAR(40))");

            count = s.executeUpdate (
                "INSERT INTO student (name, phone)"
                + " VALUES"
                + "('John Rhodes', '123'),"
                + "('Bob Smith', '456'),"
                + "('Larry Jones', '789'),"
                + "('Sandra Bullocks', '4444')");
            s.close ();
            System.out.println (count + " rows were inserted");
        }
        catch (Exception e)
        {
            System.err.println ("Error message: " + e.getMessage ());
        }

        // Read the rows back and print them.
        try
        {
            Statement s = conn.createStatement ();
            s.executeQuery ("SELECT id, name, phone FROM student");
            ResultSet rs = s.getResultSet ();
            int count = 0;
            while (rs.next ())
            {
                int idVal = rs.getInt ("id");
                String nameVal = rs.getString ("name");
                if (rs.wasNull ())
                    nameVal = "(no name available)";

                String phoneVal = rs.getString ("phone");
                System.out.println (
                    "id = " + idVal
                    + ", name = " + nameVal
                    + ", phone = " + phoneVal);
                ++count;
            }
            rs.close ();
            s.close ();
            System.out.println ("count = " + count + " rows were retrieved");
        }
        catch (Exception e)
        {
            System.err.println ("Error message: " + e.getMessage ());
        }

        // Insert one more row with a prepared statement; the parameters are
        // bound rather than concatenated into the SQL string.
        try
        {
            PreparedStatement s;
            String nameVal = "Peter";
            String phoneVal = "555";
            s = conn.prepareStatement (
                "INSERT INTO student (name, phone) VALUES(?,?)");
            s.setString (1, nameVal);
            s.setString (2, phoneVal);
            int count = s.executeUpdate ();
            s.close ();
            System.out.println (count + " rows were inserted");
        }
        catch (Exception e)
        {
            System.err.println ("Error message: " + e.getMessage ());
        }
        finally
        {
            // Always release the connection when we are done.
            if (conn != null)
            {
                try
                {
                    conn.close ();
                    System.out.println ("Database connection terminated");
                }
                catch (Exception e) { /* ignore close errors */ }
            }
        }
    }
}

Executing the Code

And then watch the code in action.

 C:\WDev\Research\Programming\Java\Database>javac Database.java

C:\WDev\Research\Programming\Java\Database>java Database
 Database connection established
 4 rows were inserted
 id = 1, name = John Rhodes, phone = 123
 id = 2, name = Bob Smith, phone = 456
 id = 3, name = Larry Jones, phone = 789
 id = 4, name = Sandra Bullocks, phone = 4444
 count = 4 rows were retrieved
 1 rows were inserted
 Database connection terminated

C:\WDev\Research\Programming\Java\Database>


Avoiding the SQL “Division by Zero Encountered” Error

When writing queries that perform lots of calculations, it is very easy to forget to ensure that our queries are set up to avoid the “Division by zero encountered” error.

Luckily, there are three easy workarounds to avoid getting this ugly error in our SELECT statements:

  • CASE statement
  • NULLIF/ISNULL functions
  • SET ARITHABORT OFF and SET ANSI_WARNINGS OFF

Using the CASE statement, your query will look like the following:

SELECT CASE WHEN [Denominator] = 0 THEN 0 ELSE [Numerator] / [Denominator] END AS [Percentage]
 FROM [Table1]

If the denominator is 0 in the above query, the result becomes 0. Otherwise, the division operation is performed.

Using the NULLIF and ISNULL functions, your query will look like the following:

SELECT ISNULL ([Numerator] / NULLIF ([Denominator], 0), 0) AS [Percentage]
 FROM [Table1]

What this does is change the denominator into NULL if it is zero. In the division, any number divided by NULL results in NULL, so if the denominator is 0, the result of the division will be NULL. The ISNULL function then returns 0 in place of that NULL value.
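
To see the pattern in isolation, you can run it with literal values; no table is needed:

SELECT ISNULL (10 / NULLIF (0, 0), 0) AS [Percentage]   -- denominator is 0: returns 0
SELECT ISNULL (10 / NULLIF (5, 0), 0) AS [Percentage]   -- normal case: returns 2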

Lastly, using the SET ARITHABORT and SET ANSI_WARNINGS, your query will look like the following:

SET ARITHABORT OFF
 SET ANSI_WARNINGS OFF

SELECT [Numerator] / [Denominator]
 FROM [Table1]

With both ARITHABORT and ANSI_WARNINGS set to OFF, SQL Server will return a NULL value in a calculation involving a divide-by-zero error. To return a 0 value instead of a NULL value, you can put the division operation inside an ISNULL function:

SET ARITHABORT OFF
 SET ANSI_WARNINGS OFF

SELECT ISNULL ([Numerator] / [Denominator], 0)
 FROM [Table1]


Working with Clustered Index in MS SQL Server

Clustered indexes sort and store the data rows in the table based on their key values. There can only be one clustered index per table, because the data rows themselves can only be sorted in one order.

Creating a Table with a Clustered Index

To better explain how SQL Server creates clustered indexes, let’s create a new table by executing the following command.

Create Table Employee
 (
 EmpId Int,
 EmpName varchar(20)
 )

When you first create a new table, there is no index created by default, and a heap is created.

Now, add a few records to this table:

Insert Into Employee Values (4, Replicate ('a',20))
 Insert Into Employee Values (6, Replicate ('b',20))

After inserting the data in the Employee table, we can view the contents of the table by executing the following command.

Select EmpID From Employee

The output should be as follows:

Empid
4
6

Clustering alters the data block into a certain distinct order to match the index, resulting in the row data being stored in order. Therefore, only one clustered index can be created on a given database table. A clustered index can greatly increase overall speed of retrieval, but usually only where the data is accessed sequentially in the same or reverse order of the clustered index, or when a range of items is selected.

Some points to remember when using clustered indexes:

  • The reordering of the index occurs every time the index changes (i.e. on UPDATE, INSERT or DELETE).
  • They affect the physical order of data, so there can be only one clustered index.
  • They reorder the way records in the table are physically stored.

When you create a clustered index, try to create it as a UNIQUE clustered index, not a non-unique clustered index. The reason is that while SQL Server will allow you to create a non-unique clustered index, it will make the index unique for you by adding a 4-byte “uniqueifier” to the index key to guarantee uniqueness. This increases the size of the key and reduces performance.
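
The examples above stop at the heap; here is a minimal sketch of actually creating the index on our Employee table, following the UNIQUE advice (the index name is assumed):

Create Unique Clustered Index IX_Employee_EmpId
 On Employee (EmpId)

After this, the data rows themselves are stored in EmpId order, and range queries on EmpId can seek directly on the index.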


SQL Query Optimization at the Stored Procedure Level

You can realize some big performance gains by optimizing your SQL queries at the stored procedure level. Proper handling of local variables and floating point numbers, as well as query and database settings, all present opportunities for query optimization.

Here are some examples illustrating common SQL query optimization opportunities.

Local Variables at Stored Procedure Level

Using local variables at the stored procedure level, we can enable or disable parameter sniffing as needed. SQL Server’s execution environment sniffs the parameter values in an SQL statement call that triggered a compilation or recompilation and passes them to the query optimizer, which checks for opportunities to speed up the query execution plan.

If the number of rows retrieved for the filtered column varies widely from one parameter value to another (a skewed distribution), we can use local variables to disable parameter sniffing.

Here is a stored procedure with parameter sniffing enabled:

create procedure [dbo].[GetAnswers]   -- procedure name assumed
@questionid as int
as
set nocount on
begin
select * from Answers where Question_Id=@questionid
end

The same stored procedure can be written this way to avoid parameter sniffing:

ALTER procedure [dbo].[GetAnswers_local]
@questionid as int
as
begin
declare @questionid_local as int
set @questionid_local =@questionid
select * from Answers where Question_Id=@questionid_local
end

For more information on parameter sniffing, check out this post.
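
Another common alternative, not used in the example above, is the OPTION (RECOMPILE) query hint: the statement gets a fresh plan on every execution, compiled for the actual parameter value, at the cost of extra compilation CPU. A minimal sketch (procedure name hypothetical):

ALTER procedure [dbo].[GetAnswers_recompile]
@questionid as int
as
begin
select * from Answers where Question_Id=@questionid option (recompile)
end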

Using Table Variables

Using table variables, we can store intermediate result sets at the stored procedure level and perform various SQL operations, such as different types of joins, on them.
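
A minimal sketch of the idea, following the procedures above (the table variable and its columns are illustrative):

declare @FilteredAnswers table (Id int, Question_Id int)

insert into @FilteredAnswers (Id, Question_Id)
select Id, Question_Id from Answers where Question_Id = @questionid

select a.*
from Answers a
join @FilteredAnswers f on a.Id = f.Id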

Floating Point at Application Level

If we are seeing performance issues, we can move floating point operations out of the stored procedure and handle them at the application level.

SELECT a.ShampooBrands_Id AS SHAMPOOBRANDID,
       SUM(Value * p.Weight) AS ProportionAware,
       SUM(SQUARE(Value) * (p.Weight)) AS CorrectedSumOfSquares
FROM Answers (nolock) a, ShampooBrands (nolock) b, Panelists (nolock) p
WHERE a.ShampooBrands_Id = b.Id AND a.Panelist_Id = p.Id
  AND b.ShampooSegmentSet_Id IN (SELECT id FROM ShampooSegmentSets (nolock) WHERE ShampooSegment_Id = 1)
  AND Question_Id = 2
  AND Panelist_Id IN (SELECT id FROM Panelists)
GROUP BY a.ShampooBrands_Id

This sample query, which has been sanitized, came from an actual client project.

At the time of import into a SQL table, each floating point value was multiplied by 10^9 and converted into a numeric value to avoid floating point handling.

After the stored procedure finished operating on the numeric data, the result was sent to the application, which divided it by 10^9 to restore the floating point value. If Weight were kept as a floating point number, computing SUM(Value * Weight) for ProportionAware would be an expensive operation, since the multiplication takes place across all the rows in the result set.
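
A minimal sketch of the scaling idea (the StagingAnswers table and SourceValue column are hypothetical):

-- at import time: scale each float up by 10^9 and store it as an exact numeric
insert into Answers (Panelist_Id, Question_Id, Value)
select Panelist_Id, Question_Id, cast(SourceValue * 1000000000 as numeric(18, 0))
from StagingAnswers

-- after the stored procedure returns its sums, the application divides by 10^9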

Nolock

We can also use NOLOCK with table names to ignore locks held on a table by other transactions.

SELECT ShampooBrands.Id, ShampooBrands.Name FROM ShampooBrands (NOLOCK)

Using NOLOCK politely asks SQL Server to ignore locks and read directly from the tables. Because this completely circumvents the lock system, it can be a major performance and scalability improvement.

It also means, however, that your code is living dangerously: you might read the not-necessarily-valid uncommitted modifications of a running transaction. This is a calculated risk.

Nocount

Finally, we can SET NOCOUNT ON at the stored procedure level. This stops the message showing the count of rows affected by a Transact-SQL statement or stored procedure from being returned as part of the result set.

When SET NOCOUNT is ON, the count is not returned. When SET NOCOUNT is OFF, the count is returned.
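
A quick way to see the difference (any small table works; Student is used here as an example):

SET NOCOUNT OFF
SELECT Name FROM Student   -- the "(n row(s) affected)" message is also returned

SET NOCOUNT ON
SELECT Name FROM Student   -- rows only, no rowcount message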


MySQL to Microsoft SQL Server Migration

Need to migrate data from MySQL into Microsoft SQL Server 2005 or later? Here’s how to do it with ODBC.

Setup ODBC Connection to MySQL Database

I’m going to assume that you already have MySQL and Microsoft SQL Server set up, which makes the first step getting the MySQL ODBC Connector.

For this article, I downloaded the MySQL ODBC Connector 5.1.

The setup of this connector is pretty simple:

  1. Open the ODBC Data Source Administrator from Control Panel -> Administrative Tools. Under the System DSN tab, press the Add button.
  2. On the Create New Data Source dialog that appears, choose MySQL ODBC 5.1 Driver and then press the Finish button.
  3. After that, a MySQL connection configuration dialog will appear. Add your MySQL database account information in it, preferably the “root” account, which has full access to your databases in MySQL. In this case, my database is called “tigerdb”. Do not change the port to anything other than 3306, unless you defined something else during your MySQL server installation.
  4. Press the Test button to ensure your connection settings are set properly and then the OK button when you’re done.

Create a Microsoft SQL Link to your MySQL Database

At this stage, you are ready to establish a link to the MySQL database from your Microsoft SQL Server Management Studio. Open a query window and run the following SQL statement:

EXEC master.dbo.sp_addlinkedserver
@server = N'MYSQL',
@srvproduct=N'MySQL',
@provider=N'MSDASQL',
@provstr=N'DRIVER={MySQL ODBC 5.1 Driver}; SERVER=localhost; DATABASE=tigerdb; USER=user; PASSWORD=password; OPTION=3'

This script will produce a link to your MySQL database through the ODBC connection you just created in the previous stage. The linked server will appear in the Object Explorer of Microsoft SQL Server Management Studio.

If it doesn’t show up in the treeview, press the refresh button.

Import Data between the Databases

Create a new database in Microsoft SQL Server. I called mine “testMySQL”. In the query window, run the following SQL statement to import the table “users” from the MySQL database tigerdb into the newly created Microsoft SQL database testMySQL.

SELECT * INTO testMySQL.dbo.users
FROM openquery(MYSQL, 'SELECT * FROM tigerdb.users')
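
If the destination table already exists, the same OPENQUERY source works with an INSERT ... SELECT instead (reusing the names from the example above):

INSERT INTO testMySQL.dbo.users
SELECT * FROM openquery(MYSQL, 'SELECT * FROM tigerdb.users')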

That’s it!


Outsourcing Business Intelligence: Hybrid Onshore-Offshore Delivery

I have been involved in many outsourced business intelligence projects for our customers, covering the full range of delivery models: onsite, offsite, offshore and hybrid. While there are still good reasons to choose each of these models, I find the hybrid model tends to work best.

At Optimus, the different delivery models typically work as follows:

  • Onsite: 100% work done at the client’s location
  • Offsite: almost 100% work done at our local office
  • Offshore: 95% work done at our offshore delivery center
  • Hybrid: 30% work done locally and 70% work is done in our offshore delivery center

The final quality of the work is the same for each delivery model.

However, I have noticed that the hybrid approach brings a significant benefit to our customers.

The main benefits are as follows:

  1. Lower Cost
  2. Quick Delivery Cycles
  3. A partner that takes greater ownership of projects and client risk
  4. Economies of scale, expertise, and best practices
  5. Increased flexibility

How Hybrid Onshore-Offshore Works


We apply agile development methods to business intelligence to deliver on hybrid onshore-offshore projects.

Our local team starts with evaluation and planning of the technology, tools and architecture with your key team members and other stakeholders. A list of project items is created based on the high-level requirements. These requirements are then fleshed out in the project backlog, and the items in the backlog are arranged into sprints based on priority and timelines.

The Optimus local team then performs interviews with the client teams and end users to document detailed requirements.

With detailed requirements in place, the offshore team starts work on the design and development. The testing team is involved from early in the cycle to provide quick feedback and plan the next delivery cycles. We hold daily scrum meetings to go over feedback and review sessions with the onsite team, the offshore team and, if needed, the client team.

Once the sprint delivery has been tested by the offshore team, it is handed over to the onsite team, which performs a walkthrough with the client team. The client team performs the final testing, and the delivery item is deployed to the production environment.

This process allows client-facing elements of the projects to be performed during the business hours (local time zone) with the client. Elements that don’t require client interaction, like development and testing, are performed overnight at our facility in Noida, India.

In some cases, you can even give feedback at the end of the day and see it implemented the next morning.

Learn more about our report development services.
