Cloud Data Warehousing with Azure SQL

If you are running an on-premise data analytics stack on Microsoft’s SQL Server but running into maintenance, cost and scaling issues, consider moving your data system to a cloud-based database service such as Azure SQL Database. Especially for a first data analytics stack, Azure SQL Database offers low startup costs with the ability to expand easily as your business grows.

Advantages of Azure SQL Database

There are several benefits to moving on-premise SQL Server infrastructure to Azure:

  • Physical acquisition, provisioning and maintenance of SQL Server deployments become a thing of the past. Furthermore, scaling data infrastructure up or down is nearly instantaneous with SQL Database elastic pools.
  • Azure assists existing database migration to Azure SQL Database with wizard-based tools.
  • Stored and transmitted data is encrypted via client-side keys.
  • Microsoft accommodates third-party and open-source technologies, such as Python, Java, Node.js and PHP.
  • SQL developers feel right at home using SQLCMD or SQL Server Management Studio for development.

SQL Database Limitations

Although all the SQL Server components (SSIS, SSAS and SSRS) are available on Azure, there are still areas where the Azure version is not completely fleshed out. For instance, only a growing subset of T-SQL features is available so far, including cursors, transactions, triggers, all data types, all operators, and logical, arithmetic and string functions.

Additionally, many T-SQL statements in SQL Database do not support every option available in SQL Server 2016, such as CREATE/ALTER for databases, logins, tables, users and views. Also missing are collation of system objects, cross-database queries with three- or four-part names, the data collector, database diagrams and database mail, some events, and certain database characteristics that were managed manually in SQL Server but are automatic in SQL Database.

For a full list of differences, see Azure SQL Database Transact-SQL differences.

Additional Azure Capabilities

SSRS is replaced with a separate service, SQL Reporting, which incurs a separate charge for reports. It is not a general reporting service, since it only works with SQL databases, but it offers a nearly identical development interface to traditional SSRS.

Azure Tables is a storage service targeted at non-relational database storage, a type preferred for many data analysis processes. It stores up to 100 TB of data via an Azure Storage account and supplies data in row form. Additional advantages include lower cost than straight Azure storage and easy scaling.

Built on top of Hadoop, HDInsight offers unstructured data storage plus a number of tools, such as Sqoop, Pig and Hive for query processing. Your in-house SQL Server, Excel or SQL Database are all able to connect to this service.

Data Factory is Microsoft’s SaaS analogue to SSIS. It visually coordinates other services to transform raw, unstructured data via data flow pipelines into clean, transformed data ready for analysis engines such as HDInsight or Azure Machine Learning for predictive analytics.

In lieu of SQL Reporting, you can utilize Microsoft’s SaaS Power BI for report, dashboard and visualization creation. You can use this tool in conjunction with your on-premise SQL Server installation or stored spreadsheets too.

Steps to Migrating from SQL Server to SQL Database

SQL Database is, in theory, backward-compatible all the way to SQL Server 2005. Even so, the first step in migration is to test for and fix any compatibility issues that may exist with SQL Database V12.

There are several methods to determine compatibility, including the use of SQL Server Data Tools (SSDT), the SqlPackage utility, SQL Server Management Studio’s (SSMS) Export Data-tier Application wizard and the Azure SQL Migration Wizard (SAMW). SSDT, SSMS and SAMW can also be used to fix any migration issues with your database.

The next step is to create an Azure SQL Database logical server and migrate your existing data to it. Although other methods exist, SQL Server transaction replication is the recommended solution since it minimizes live database downtime. Other options are to export/import BACPAC files when connection bandwidth is low or unreliable, or to use the SSMS database deploy wizard for smaller databases.

Running SQL Server in the Cloud Directly

There is nothing stopping you from beginning or continuing your SQL Server-based data analytics development and deployment without Azure SQL Database. Amazon AWS provides hosted SQL Server instances at any scale, with the advantages of computational, networking and storage elasticity on a pay-as-you-go basis. With a bit more lifting, you could do the same thing on Azure or AWS by utilizing their Virtual Machine services directly for your own SQL Server deployment.


Running data analytics in the public cloud brings all the usual benefits of cloud-based operation, the most important of which are elastic storage for big data crunching systems and high availability across the enterprise, whether in-house or mobile.

Whether or not your business should consider a cloud-based data analytics deployment depends on several factors including TCO, data volume, bandwidth requirements, security and the need to scale operations up or down quickly.

A wise approach is to work with an experienced Optimus data analytics consultant to collate all factors and develop a full data architectural solution. Our experts can guide you towards the best solution for your needs.




Power BI and SSRS: How Do They Pair?

SQL Server Reporting Services is a powerful reporting tool in its own right with strong programmability, extensibility and scalability features that have led to its widespread adoption. As a report authoring tool, however, SSRS has lacked the powerful dashboard creation features of its competitors. With the release of SQL Server 2016, it now has the capability to produce stunning, information-packed visualizations when paired with Microsoft’s Power BI suite of business analytics tools.

Pairing Up SSRS and Power BI

The only prerequisites for getting SSRS and Power BI to play together are the installation of SQL Server 2016 CTP 3, a Power BI account of any level, registration of SSRS with Power BI and a running SQL Server Agent. Register SSRS with Power BI via Reporting Services Configuration Manager by entering the SQL Server name and RS instance and connecting. Then, under Power BI Integration, register a work or school Power BI account. The account must have permissions for registering apps with Azure Active Directory.

Basic SSRS/Power BI Interactions

Once registration is complete, you can upload any Power BI file to the Reporting Services web portal. After uploading, you can open a Power BI file, interact with the report, update its model, refresh the data and save it back to the RS web portal when operations are complete. All other consumers of the report will see the changes instantly.

Power BI Mobile App

There is also a Power BI mobile application that connects to Reporting Services as long as network access to the RS server is available from your location. Once connected, you can interact with reports or already defined KPIs in the web portal just as you would from your desktop.

BI Visualization Pinning

Where the partnership between SSRS and Power BI really shines is the ability for users to pin any report visualization (charts, matrices, maps and so on) to a Power BI dashboard. These pins can carry additional annotations. To take advantage of this new feature, you need to configure RS with a BI account that has application add rights in Azure Active Directory.

Once you are signed into RS with Power BI credentials, you can begin pinning. The most common errors during this interaction occur because SQL Agent is not running, since the RS subscription subsystem is utilized for this feature. When the pairing is set up correctly, a Power BI icon appears in the RS toolbar.

You access “pinnable” items via an additional tab, which darkens everything in the display that you cannot pin. Selecting an item brings up a dialog from which you select the group that contains the dashboard to which you will add the pin. You also specify an update frequency so the pin is kept up to date with regular data refreshes.

Note that pins may “expire,” meaning they fail to show up at a later date. This happens if you have not refreshed the authentication token during that period. The refresh is accomplished simply by signing into Power BI every 90 days or less, at which time all your pins are updated.

The Future of SSRS and Power BI

One of the clear drawbacks to Power BI is that, for now, it is strictly a cloud-based solution. Users are clamoring for the capabilities of Power BI for their on-premise and hybrid systems. Microsoft is responding by rewriting SSRS to extend Power BI-like features to on-premise installations. This also requires that legacy SSRS users be accommodated to provide them with advanced features and an upgrade path.

The new SSRS, which has been making an impact since SQL Server 2016 RC1, supports all modern web browsers using HTML5. A single SSRS mobile app runs on all three major platforms. The RS interface and reports can be custom-branded for a more professional look.


As SSRS has become a de facto standard for BI analytics systems, its weakness in data visualization has come to the fore, which Microsoft is softening by creating ways for SSRS to pair up with Power BI. For now, this pairing is loosely coupled: the ability to interact with Power BI reports, a mobile Power BI app that connects to an RS server, and the ability to attach SSRS visuals as tiles inside a Power BI dashboard.

Microsoft is now adding more capabilities within SSRS itself for big customers with on-premise BI operations, including another mobile app and better visuals. It seems logical that at some point Microsoft will unify the UIs, report formats and internal subsystems of SSRS and Power BI in order to reach its eventual goal of one billion Power BI users.


Check out our latest interactive Power BI demos. Play with real data to see how it translates into interactive visualizations.


Migrating Crystal Reports to SSRS

Most legacy ERP systems still utilize Crystal Reports for creating detailed, flexible and dynamic business reports. However, multiple acquisitions of the software over the years have resulted in it falling behind trends in data analytics systems. Even its latest owner, SAP, is moving to Microsoft’s SQL Server Reporting Services as its go-to solution.

The SSRS add-on to SQL Server has evolved to a web-based report server featuring essential Data Analytics functions such as data-driven subscriptions and a multi-format rendering engine. Additionally, its scalability potential far exceeds that of Crystal Reports. Since many companies already own an SQL Server license and SSRS comes bundled for free, many are anxious to convert legacy Crystal Reports resources to SSRS.

Migration Process Overview

If your organization is fortunate enough to have only a handful of relatively simple reports to convert, you might employ one of several third-party automatic CR/RPT-to-SSRS/RDL conversion tools. If you are using SQL Server 2016, you will need to further upgrade the RDL output via SQL Server Data Tools, as Microsoft has recently made changes to the internal report object model.

Most companies, however, will have dozens or hundreds of reports of varying complexity for conversion and many of these will not lend themselves well to automatic conversion.

Step One: Report Analysis

In order to reduce the number of reports to convert and prioritize the remainder, follow these guidelines:

  • Determine if the report is still used directly or indirectly via another report
  • Record how the report is accessed and how often
  • Discover which individuals or group users are using the report
  • Prioritize report items by their business decision value

The results of this analysis will reveal the shortest path to converting the highest value reports and report data. This step often suggests ways in which reports can be consolidated or standardized by sharing common information across reports. This presents opportunities to share SQL stored procedures across reports as well.

In preparation for migration, also document technical aspects such as the specific data sources being used, which may be online or offline databases or software products such as OLAP Intelligence or MicroStrategy Intelligence Server.

Step Two: Migration Implementation

You should now be prepared to recreate report characteristics as new SSRS reports. Proceed in this order:

  1. Create new SQL queries and then fetch and verify the data sets for accuracy
  2. Set up the SSRS report layouts, noting that Crystal Reports uses a banded report design whereas SSRS uses a Table object
  3. Incorporate row-level data while building grouping relationships and subtotals
  4. Devise formulas and procedures to derive condensed results from primary data
  5. Create visual reporting artifacts such as charts, graphs and matrices
  6. Finally, set up any required sub-reports and linked reports

Unfortunately, completing these steps accurately may require serious sleuthing skills if the original reports are not well documented, which is often the case.

Difficulties of Automation

Although some conversion steps above may appear amenable to automation, such tools often run into serious obstacles even with reports of relatively low complexity. These problems often derive from the distinct software architectures used by each product. Furthermore, no Crystal Reports conversion tool performs the initial analysis step that’s so vital to ensuring your upgraded reporting system is cost-effective and efficient.

There are numerous examples of automated conversion issues that are difficult to debug and correct:

  • The use of shared data sources often generates exceptions
  • Differences in parameter names may interfere with data source connections
  • Parameter renaming often breaks dependencies in the same or other reports
  • Required parameters in SSRS are sometimes missing in Crystal Reports
  • Layout fields and blocks often intersect, and overlapped objects render poorly
  • The Keep Together feature in Crystal Reports frequently translates incorrectly
  • Expressions and VB code in Crystal Reports often break or produce incorrect results in SSRS
  • Multicolumn reports and text boxes frequently require manual adjustment in SSRS

Even in cases where auto-conversion may appear to work at a 95% level, it is highly recommended that the conversion be overseen by a trained specialist who can uncover hidden flaws and correct them before they reach end users who depend on a clean, fully functional reporting system.

Employ a Phased Approach

If your Crystal Reports to SSRS conversion appears even moderately complex, take a phased approach to the migration, including upfront detailed analysis to uncover unforeseen, time-consuming obstacles.

After initial testing, deliver reports to a select group of knowledgeable users for beta testing before widespread deployment. Reduce inevitable end-user issues by thoroughly documenting changes in appearance or usage.


Migrating from Crystal Reports to SSRS provides an ideal opportunity to evaluate and streamline your entire reporting solution to ensure BI decision makers are getting the information they require in the most effective manner. SSRS will continue to offer more advanced features and grow with your business needs due to its superior scalability.

You will likely find that automated migration presents as many problems as it solves and that using an experienced conversion partner to implement a detailed manual conversion process pays the highest dividends.

SSRS Best Practices

Installing, configuring and running SQL Server Reporting Services is a non-trivial set of tasks. To successfully perform these, a company needs people with skills related to databases, system administration, report creation and data analysis. Most initial deployments and their usage can take advantage of several best practices in order to improve performance, maintenance and the production of quality reports.

Deployment Tips

Safeguard the Encryption Key

A number of items such as passwords and connection strings are encrypted within SSRS and stored within the Report Server database or configuration files. Many an installer has forgotten to back up the encryption key and later regretted it. Keep a copy of the key in a password-protected file, and place copies both on the SSRS server and in another safe spot outside the server. When migrating SSRS to another server, restoring this key avoids having to re-create encrypted items such as your data sources.

Simplify Security Control

Sysadmins should be in the habit of granting access privileges by Active Directory groups rather than to individuals. Furthermore, reports themselves should be categorized into folders, and access privileges should be assigned per folder. Hierarchical access can be created via subfolder privilege cascading.

Use SQL Authentication in Lieu of Windows Authentication

Windows Authentication for connecting to report data sources is only reliable when the SQL Server database and the SSRS server are on the same machine, which often presents an inconvenience to users trying to access remote machines. To enable access from another machine, use SQL Authentication instead, or supply account credentials that work on the remote machine.

Performance Improvement Tips

Optimize SELECT Statements

A bit of fine-tuning of SELECT statements goes a long way to improving SSRS performance by reducing the amount of data fetched to only that which is necessary. Get in the habit of specifying only the column names relevant to your query instead of using SELECT * statements.
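The tip above can be sketched with SQLite standing in for SQL Server; the table, columns and data are invented for illustration:

```python
import sqlite3

# Hypothetical report table: only two of its four columns matter to the report.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, region TEXT, amount REAL, notes TEXT)")
conn.execute("INSERT INTO sales VALUES (1, 'West', 100.0, 'a long free-text note')")

# SELECT * drags every column across the wire, including ones the report never uses.
wide = conn.execute("SELECT * FROM sales").fetchall()

# Naming only the needed columns reduces the data fetched per row.
narrow = conn.execute("SELECT region, amount FROM sales").fetchall()

print(len(wide[0]), len(narrow[0]))
```

On a wide production table, the difference in bytes fetched per row is far larger than in this toy example.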

Profile Slow Queries

Take advantage of the GUI-based SQL Server Profiler to analyze long-running queries and stored procedures to see if you can wring out even better performance. Note that the profiler will be deprecated in later releases of SQL Server except for Analysis Services workloads.

Use WHERE to Filter at the Database

Learn to use the WHERE clause in your queries for data filtering whenever possible. It is far more efficient for data filtering to take place at the database level and not at the report level. Filtering at the report level should only be used when multiple report items, such as matrices, lists and charts, are sharing a single dataset and each requires different methods of filtering.
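A minimal sketch of the same point, again with SQLite as a stand-in and invented data, contrasting report-level filtering (fetch everything, then filter) with database-level filtering via WHERE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, "West" if i % 2 else "East") for i in range(10_000)])

# Report-level filtering: all 10,000 rows travel to the client first.
all_rows = conn.execute("SELECT id, region FROM orders").fetchall()
west_client = [r for r in all_rows if r[1] == "West"]

# Database-level filtering: only the matching rows are fetched.
west_db = conn.execute(
    "SELECT id, region FROM orders WHERE region = ?", ("West",)).fetchall()

print(len(all_rows), len(west_db))
```

Both paths produce the same result set, but the WHERE version moves half as many rows out of the database.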

Maintenance Tips

Shared Data Sources

Expert programmers know the old maxim that any problem can be solved by introducing another level of indirection. This applies to SSRS reports as well. For example, if several of your reports use a common data source connection, do not refer to that connection directly within each report. Instead, set up a project-level Shared Data Source and refer to it within the reports. If the source of report data changes later, you only have to modify it in one place for all reports to pick up the new data source.
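The indirection idea can be sketched in a few lines of Python; the report names, source name and connection strings here are hypothetical, with a dictionary playing the role of the project-level Shared Data Source:

```python
# Reports hold a *name* of a shared source, never a literal connection string.
SHARED_SOURCES = {
    "SalesWarehouse": "Server=db01;Database=Sales;Trusted_Connection=yes;"
}

class Report:
    def __init__(self, name, source_name):
        self.name = name
        self.source_name = source_name  # indirect reference

    def connection_string(self):
        return SHARED_SOURCES[self.source_name]

daily = Report("DailySales", "SalesWarehouse")
monthly = Report("MonthlySales", "SalesWarehouse")

# One change to the shared entry repoints every report at once.
SHARED_SOURCES["SalesWarehouse"] = "Server=db02;Database=Sales;Trusted_Connection=yes;"
print(daily.connection_string())
```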

Maintaining SSRS Report Server Catalog

Within the SSRS software architecture is the SSRS report server catalog, which caches data used to serve up previous reports without recreating them afresh from the SQL Server database. The catalog tables should be regularly re-indexed and their statistics updated to improve query performance.
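As a sketch of the same maintenance idea, SQLite's REINDEX and ANALYZE stand in below for T-SQL's ALTER INDEX ... REBUILD and UPDATE STATISTICS; the catalog table and index names are invented:

```python
import sqlite3

# A toy stand-in for a report server catalog table with one index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE catalog (item_id INTEGER, path TEXT)")
conn.execute("CREATE INDEX idx_path ON catalog (path)")
conn.executemany("INSERT INTO catalog VALUES (?, ?)",
                 [(i, f"/reports/r{i}") for i in range(1000)])

conn.execute("REINDEX idx_path")  # rebuild the index
conn.execute("ANALYZE")           # refresh the statistics the query planner uses
print(conn.execute("SELECT COUNT(*) FROM catalog").fetchone()[0])
```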

Report Design Tips

Improving Readability

There are several ways in which you can make SSRS reports more accessible to end users, such as maintaining uniformity of the font type and its size throughout the report, avoiding too many colors, saving space by proper textbox orientation and using alternating row colors. If the report is exported into multiple formats, always perform each export and check the output for readability.

Improving Content Quality

For reports that require the most up-to-date information, prominently display the time and date of the last refresh. This practice helps end users evaluate the report’s relevance. Also, display the data filter parameters, which assists readers’ report comprehension. When presenting visuals, such as pie charts and bar graphs, avoid using too many data categories as the smaller ones will clump together. If necessary, break out the smaller categories into a secondary visual.


SQL Server Reporting Services is a key component for modern enterprise Business Intelligence solutions. It provides a huge array of features essential to extracting and presenting the most relevant data to business decision makers. As such, its complexity requires careful deployment and optimization to achieve the best performance possible and high quality, informative reports for its end users. The tips presented here will help your organization utilize SSRS to best advantage.

SSIS Best Practices

SSIS represents a complete rewrite of its early predecessor Data Transformation Services. Usability, parallelism and performance have all been vastly improved over the years resulting in an SQL Server component aimed at high-volume, high-performance ETL applications. Here are several additional tuning steps that can further increase SSIS performance for your particular jobs.

Managing Parallel Execution

MaxConcurrentExecutables and EngineThreads are two SSIS properties used to optimize SSIS’ parallel execution capabilities. The first specifies the number of data flow tasks that can run in parallel in the SSIS runtime engine. Its default value is -1, which allocates a number of threads equal to the number of processors/cores plus 2. If the value of MaxConcurrentExecutables is less than the number of parallel tasks available, only that many tasks run simultaneously; when one completes, another is started.
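The default just described can be sketched as follows, with a Python thread pool standing in for the SSIS runtime engine; the resolver function and the placeholder task are invented for illustration:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def resolve_max_concurrent(setting: int) -> int:
    # -1 means "number of processors/cores plus 2"; any other value is taken as-is.
    return (os.cpu_count() or 1) + 2 if setting == -1 else setting

def data_flow_task(n):
    # Placeholder for a real data flow task.
    return n * n

# Cap concurrent tasks at the resolved limit; excess tasks queue until a slot frees.
with ThreadPoolExecutor(max_workers=resolve_max_concurrent(-1)) as pool:
    results = list(pool.map(data_flow_task, range(8)))

print(results)
```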

EngineThreads limits the number of source and worker threads used to execute the data flow tasks in the data flow pipeline engine. Thus, its effect is “nested” within the overall data flow parallelism influenced by MaxConcurrentExecutables. Its default value is 10, which indicates up to 10 data source and 10 worker threads will be allocated.

In order to tune these values for an ETL job, turn on PipelineExecutionTrees event logging, which tells you how many execution trees are used per data flow task. Use this number to fine-tune the setting of EngineThreads.

One caveat here is that these values cannot override precedence constraints between tasks, which may decrease the amount of execution parallelism available.

Use Logging Judiciously

SSIS logging is very helpful for tracking down bugs in package code and for monitoring packages already debugged and deployed. It allows selective logging of just the events of interest and permits logging them to any of several destinations.

The LoggingMode property within a package enables and disables logging easily without package modification. Use it wisely, as logging adds SSIS runtime overhead. Be aware also that a performance decrease caused by excessive logging may mask race conditions in code execution, which is typically the opposite of what you want when debugging intermittent package problems.

Taking Pressure Off TempDB and the Transaction Log

During high-volume data transfers, changing the defaults for two SSIS settings will avoid uncontrolled growth of the transaction log and tempdb memory usage. By default, Rows per batch and Maximum insert commit size are set to maximum values of -1 and 2 GB, respectively. The first value permits all data rows to be processed as a single batch, whereas the second causes all rows in the data transfer to be committed in one go. For high-volume transfers, set these to more reasonable values matched to your execution environment.
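The batching idea is sketched below with SQLite standing in for the destination; the table name and batch size are illustrative, not SSIS defaults:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER)")

# Commit in fixed-size batches rather than one huge transaction, bounding how
# much uncommitted work (log and memory) accumulates at any moment.
BATCH_SIZE = 1000
rows = [(i,) for i in range(5500)]
for start in range(0, len(rows), BATCH_SIZE):
    conn.executemany("INSERT INTO staging VALUES (?)",
                     rows[start:start + BATCH_SIZE])
    conn.commit()  # commit each batch instead of one giant commit at the end

print(conn.execute("SELECT COUNT(*) FROM staging").fetchone()[0])
```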

Avoiding Defragmentation When Loading Large Data Volumes with Keys

When transferring several million to hundreds of millions of rows from one table to another, you may experience a sharp slowdown in execution if the target table has a large number of clustered or non-clustered indexes. Index fragmentation may reach as high as 90 percent.

It is possible to periodically halt the transfer and rebuild the indexes, but a more efficient solution is to drop the indexes completely, turn the target table into a heap and transfer all remaining data into this heap. Once the raw transfer is complete, rebuild the indexes. This can decrease the overall execution time of the transfer severalfold.
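The drop-load-rebuild sequence can be sketched like this, with SQLite as a stand-in and invented table/index names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, payload TEXT)")
conn.execute("CREATE INDEX idx_target_id ON target (id)")

# 1. Drop the index so the bulk load writes to a plain heap of rows.
conn.execute("DROP INDEX idx_target_id")

# 2. Raw transfer: no per-row index maintenance, so no fragmentation builds up.
conn.executemany("INSERT INTO target VALUES (?, ?)",
                 [(i, "row") for i in range(100_000)])

# 3. Rebuild the index once, after the transfer completes.
conn.execute("CREATE INDEX idx_target_id ON target (id)")
print(conn.execute("SELECT COUNT(*) FROM target").fetchone()[0])
```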

Save Time with Checkpoints

The Checkpoint feature lets a package restart from the last point at which it failed instead of from the very beginning. When Checkpoint is enabled on a package, execution status is logged to the file named in the CheckpointFileName property. Deleting this file causes the package to start from the beginning.

Additionally, set CheckpointUsage, SaveCheckpoints on the package and FailPackageOnFailure for data flow tasks you want to have restart capability. Note that Checkpoint only works at the package level and not inside data flow tasks, which will start at the beginning of their internal sequence.
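The restart behavior can be sketched in plain Python; the step names and file layout are invented and only mimic what Checkpoint does with its log file:

```python
import json
import os
import tempfile

# A stand-in for the CheckpointFileName file: it records completed steps.
checkpoint = os.path.join(tempfile.mkdtemp(), "checkpoint.json")
steps = ["extract", "transform", "load"]

def run(fail_at=None):
    if os.path.exists(checkpoint):
        with open(checkpoint) as f:
            done = json.load(f)["done"]
    else:
        done = []
    executed = []
    for step in steps:
        if step in done:
            continue                      # already checkpointed: skip on restart
        if step == fail_at:
            raise RuntimeError(f"failed at {step}")
        done.append(step)
        executed.append(step)
        with open(checkpoint, "w") as f:  # persist progress after each step
            json.dump({"done": done}, f)
    return executed

try:
    run(fail_at="transform")              # first run fails partway through
except RuntimeError:
    pass
resumed = run()                           # restart skips the completed "extract"
print(resumed)
```

Deleting the checkpoint file, as with the real feature, would make the next run start from the beginning.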

When performing long, high-volume ETL operations, Checkpoint can save many hours. It is also a great aid in debugging new packages. In the latter case, use it in conjunction with the Boolean DelayValidation property, which controls SSIS early and late validation of packages and components prior to execution.


SQL Server Integration Services exists for the purpose of high-volume data ETL operations, such as populating data warehouses. The focal point of SSIS development is the SSIS package, created within Business Intelligence Development Studio. Packages are used for various workflows including data cleansing, standardization, integration and administrative tasks.

At the package, task and system levels, there are a number of tweaks possible to optimize SSIS performance for the job at hand or to assist the debugging of new or modified packages. Always bear in mind that these tips are not hard and fast rules, due to internal SSIS dynamic optimizations that happen at runtime. Thorough testing and monitoring are always a wise approach when applying these practices to your particular tasks.

This is just the tip of the iceberg for optimizing your SSIS, and there are many more tips and tricks to make your data systems clean and efficient. If you need help or have questions about your current system’s performance, our data analytics specialists will be able to guide you. Give us a call, we’re always happy to help.

Power BI vs QlikView

In the world of business intelligence, putting the right tools in the hands of the right people is critical to success. Two of the most popular tools in the industry today are Microsoft Power BI and QlikView. Both provide modeling capabilities and allow you to generate visualizations and dashboards, but there are key differences between the two products. Our BI experts at Optimus have conducted a side-by-side comparison.

Power BI Benefits

As tends to be the case with the majority of Microsoft products, the greatest strength of Power BI is its level of integration with other Microsoft products. If you’re already using Office 365 and SQL Server, you have a good chunk of the most common data sources for using Power BI already in place. It’s a fairly quick jump from entering data into an Excel spreadsheet to producing a visualization in Power BI.

Microsoft’s current commitment to cloud-based computing extends a lot of the advantages of Power BI further. For example, the Power BI mobile app can be fully connected to the rest of the software stack. If you want a sales person in the field to have access to real-time information that’s being updated on your SQL Server installation, the dashboards you create in the mobile app can stream that information live to a phone or tablet.

QlikView Benefits

QlikView is widely regarded as the more mature and robust of the two BI solutions. It’s built especially well for analytic purposes, and anyone who comes from a programming or data science background is likely to find it less limiting. QlikView allows you to get into the guts of its scripting system and tinker around with any models you create, and that can be very beneficial if you need low-level access. You can also produce a wider range of visualizations and presentations using QlikView.

One area where QlikView really excels is in raw speed. By keeping its analytics capabilities in-memory, it allows anyone with sufficient processing resources to crunch a ton of data and develop insights fast. The ability to use QlikView on locally based computers is also important to managed organizations that have strict requirements regarding the availability and storage of sensitive information.


Check out our case study on SSRS reports for operations management software

Power BI Disadvantages

Data science people will find Power BI’s interface limiting. The cube system makes it very hard to get into low-level elements of models to make minor tweaks. This can be a net benefit for users who don’t want to wade into the weeds, but it renders Power BI the more disadvantaged of the two when it comes to raw processing power and the ability to generate unique insights.

Power BI, like almost the entire modern Microsoft software stack, is very dependent upon cloud-computing resources. While this increases availability, it also imposes regulatory and legal limitations on its use. If you work in an industry that has strict requirements for data storage, using Power BI may simply not be an acceptable choice. Many companies also just prefer to house their storage and processing capabilities on-site.

QlikView Disadvantages

The features that make QlikView powerful also make it difficult. QlikView comes with a very steep learning curve, and the situation can be worse for people who don’t have a data science background. For some users, QlikView is simply too complex and difficult to learn.

QlikView also struggles with a lack of specific purpose. This is another case where its somewhat freeform approach to being an analytics platform is a double-edged sword. Its functionality can be a bit of a sprawl, and some companies simply don’t have the time or people required to make the most of all that power.

The Right Tools for You

The choice between Power BI and QlikView doesn’t lead to a decision where one is clearly better than the other. You need to be serious about evaluating your people and your organization before you adopt either one. If your company is already married to the Microsoft stack and doesn’t require advanced analytics and modeling capabilities, then Power BI is the choice for you. If you find the available capabilities in programs like Excel too limiting, you should take a long look at QlikView. Power BI excels in ease of use and deployment, while QlikView benefits immensely from allowing data-centric people to get into the weeds. The critical thing is to balance the pros and cons of each package against the demands of running your operation on a daily basis.

If you’re looking to switch BI stacks or add to your current roster, speak with our BI experts. We can help you determine the best tools for your needs and your organization.


Power BI for Office 365

If you’re like many folks in today’s business world, there’s a good chance you’re already using Microsoft’s Office 365 software suite. You may even already feel like you have a fairly data-centric approach to using Office, especially if your company is a heavy Excel user. Putting all that information to use, however, requires powerful data visualization and publishing tools, and that’s where Power BI comes in.

What is Power BI?

Power BI is a business intelligence package designed by Microsoft. BI systems are a way to allow companies to quickly convert their data into usable analysis, presentations and dashboards. For example, if you want to present real-time inventory tracking information to internal stakeholders, you can use Power BI to create a dashboard that allows those individuals to have access to visuals that explain everything in detail. Power BI also empowers you to keep that information up-to-date and even makes it easier to connect with your on-the-go stakeholders by presenting your dashboards within mobile apps.

Power BI actually refers to four components of the same service. As an end user, the most obvious component to you is likely to be the Power BI desktop. This is where you create visualizations and move data around. On the backend, however, are the Power BI gateways, where all the processing and magic happen. On the access side of things are presentations and apps. Tying all three of those elements together is the main Power BI service.


Microsoft actually makes a basic version of Power BI freely available. For many smaller businesses, this may be all they need. As is the case with the Office 365 ecosystem, Microsoft encourages companies to opt in to a subscription-based system that grants them access to more usage and raw processing capabilities with Power BI.

The pro version costs $9.99 per month per user. It allows each user 10 GB of data capacity versus 1 GB for free users. Free users can stream 10,000 rows of data an hour to their dashboards, while pro users can stream 1 million rows per hour. Pro users also have much better access to collaborative tools.

Integration with Office 365

Power BI is available as part of the enterprise version of Office 365, making installation and integration easy. Any information you have access to in programs like Excel can easily be brought into Power BI. It’s also thoroughly integrated with products like SQL Server, so you can quickly pull information from your databases. Likewise, anything you produce in Power BI can be published as a presentation or an accessible dashboard within an app or programs like Excel, PowerPoint or Word.


It’s easy to get lost in the publishing side of Power BI and forget that it comes with a slew of powerful functions. Power BI allows you to run powerful analytics on your data. If you’re looking to produce a sales forecast, you can quickly pull live information from your SQL Server installation into Power BI.

Power BI allows you to develop a workflow that suits your existing approach. If you’d prefer most of your output to fit within existing Excel templates, Power BI is designed to allow you to readily drop calculated items into your spreadsheets. It also allows you to develop data models. If you want to check performance indicators, you can quickly establish criteria and ask Power BI to tell you how close to your targets you currently are. It can even help you project well into the future.

Visualizations are nice, but where Power BI exceeds expectations is by providing a high level of interactivity. Once you’ve created a dashboard, users have access to tools that let them sort through the information. You can even configure real-time updates, so users can check in regularly and see the current state of your business.

Power BI comes preconfigured with tools designed for a wide array of industries. If you need to collect human resources data and make it available to others, you’ll find an out-of-the-box solution for that with Power BI. Same goes for other common business functions, including marketing, finances and operations.

You can also use Power BI to quickly search all of your company’s available data. By using the unifying power of cloud-based solutions, Power BI can interpret your queries and sift through any data source your business has access to. This includes many non-Microsoft products, including applications like Google Analytics, QuickBooks and Salesforce.


Making your information more accessible, both physically and mentally, to stakeholders is one of the biggest challenges any company faces. Power BI allows you to quickly develop insights from your available data and produce dashboards and presentations that are compelling. Where a normal Office 365 user might produce a couple charts, a Power BI user can provide detailed, interactive graphics that drive decisions. Power BI makes your information more accessible in the most basic and human ways imaginable.


Steps to Getting Good BI Requirements

Half the battle for many projects is getting started. BI projects are no different in that respect. A critical phase in realizing your BI project vision is developing thorough project requirements in clear, crisp, implementable and measurable terms.

The temptation is to rush this phase, which often produces excessive churn or even failure as unguided interpretations and showstopper issues appear during implementation or testing. It is vital that you get your BI project off to the best possible start by establishing sound requirements via rigorous steps.

KPIs to Validate Requirements

Requirements are, frankly, meaningless if their intentions and results are not measurable. Key Performance Indicators, or KPIs, are a good approach to ensuring your requirements focus on the right values. The right KPIs are often derived from current organizational metrics that your project would presumably improve upon.

For instance, if one of the strategic goals of your BI project is to improve customer retention, then a set of relevant KPIs might include data concerning new customer acquisition, customer relationship strength, ROI on loyalty programs, repeat sales and customer attrition among other metrics.

Improvement to these metrics and additional metrics based on new insights will be directly stated within the requirements, e.g. “A 25% decrease in customer attrition rates over the next two quarters.” It is important to also state the methods by which metrics will be evaluated to ensure they are on par with existing ones.
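As a sketch, a requirement like the attrition target above can be checked programmatically once the evaluation method is agreed on. The figures below are purely hypothetical, chosen only to illustrate the calculation:

```python
def attrition_decrease(baseline_rate, current_rate):
    """Fractional decrease in the attrition rate relative to the baseline."""
    return (baseline_rate - current_rate) / baseline_rate

# Assumed figures: 8.0% attrition before the project, 5.8% two quarters later.
baseline = 0.080
current = 0.058

decrease = attrition_decrease(baseline, current)
print(f"Attrition decreased by {decrease:.1%}")
print("Requirement met" if decrease >= 0.25 else "Requirement not met")
```

Stating the computation this explicitly in the requirement removes any ambiguity about how "a 25% decrease" will be judged.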

Acquiring Requirements

One often overlooked technique in developing BI requirements is acquiring them across the teams and departments who have stakes in the project. Naturally, this requires that you have identified all affected stakeholders in advance. This “reqs” collection phase can be employed at any time, but conversations are often facilitated by an existing pro forma set of requirements. First meetings should be formatted as brainstorming sessions, whereas later iterations should be tightly structured.

During requirement review meetings, new requirements and metrics will be discovered. Additionally, you may find that data relevant to your project is restricted by departmental policy or that there are regulatory compliance issues for particular data that must be considered. Far better to discover such issues at the requirements phase than in the development or deployment stages.

Take it for granted that you will be unable to accommodate everyone’s requirements, especially those irrelevant to the strategic vision. During the first phase of acquisition, it is possible the vision can be tweaked, but you should resist major or ongoing changes to it as much as possible.

BI Data Acquisition

During high-level requirements discovery and definition, pay attention to the nuts and bolts of how the project will acquire the project data from a technical point of view. This is especially true if databases, operating systems, file formats and applications relevant to your project are not uniform across the enterprise, which is often the case with companies still in their early stages of digital transformation.

Assign the most technically savvy members of your team to digging out the details of how project data will be acquired, stored and managed. Expect to hit snags, but do not expect those outside the project to change their habits or constraints mid-stream. This data acquisition analysis naturally leads to another set of project requirements that must be stated up front in order to minimize surprises later in the project.


To summarize, once you have developed and vetted strategic goals for a particular BI project, it’s time to roll up your sleeves to tackle a well-defined, thorough set of measurable requirements for said project. These must encompass the viewpoints of all stakeholders up to the point that they support the project’s strategic vision.

State a set of pro forma requirements related to the business decisions supported by the proposed BI system. Use these requirements to generate detailed discussions regarding metrics, data acquisition, data sharing and regulatory compliance with all stakeholders. Iterate with stakeholders on the requirements with increasing structure to avoid heading down rabbit holes. Also, evaluate technical issues relative to obtaining, storing and formatting data, and make sure these are reflected in the requirements as well.

Always bear in mind that despite the seeming tediousness of requirements development, a thorough job pays enormous dividends in subsequent project stages through avoidance of foreseeable issues. These are far more difficult to correct in later project stages and could even spell disaster for the entire project.

If you have questions about your next BI project’s requirements, contact us today.

How SSIS Fits into an ETL Process

Microsoft SQL Server Integration Services – SSIS – is a common solution for many organizations’ data management requirements with regard to their Business Intelligence systems. Specifically, SSIS and SQL Server together streamline Extract, Transform and Load – ETL – operations on data warehouses. The process is essential to providing efficient, accessible data flows to BI system applications.

Using SSIS “As-Is” to Design Your ETL Process

Guidance about utilizing SSIS/ETL capabilities typically centers on the use of SSIS control and data flows to extract and apply multiple data transformations before storing processed results within the original or an alternative database. Its greatest strength is that it runs as a one-step process without the need for intermediate storage. However, that aspect leads to certain drawbacks, which are discussed later alongside an alternative development approach based on staging tables.

Standard SSIS Extract/Transform

Data transformation employs SSIS’s large selection of built-in operations, such as sorting, merging, unions, joins and data type conversions. Extraction is accomplished using either a flat file or OLE DB connection. Transformations are defined with graphical control and data flows that may include data multicasting to parallel processes. These process flows may have separate endpoints or may re-combine via a union operation.
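As a rough sketch of that "multicast, transform in parallel, then union" shape, the equivalent logic can be expressed in SQL. SQLite stands in for SQL Server here, and the `orders` table, its columns and the branch logic are invented purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A small invented source table.
cur.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "east", 10.0), (2, "west", 20.0), (3, "east", 30.0)])

# Two parallel branches apply different transformations to subsets of the data,
# then recombine via a union, mirroring the data-flow shape described above.
rows = cur.execute("""
    SELECT id, amount * 1.5 AS adjusted FROM orders WHERE region = 'east'
    UNION ALL
    SELECT id, amount       AS adjusted FROM orders WHERE region = 'west'
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 15.0), (2, 20.0), (3, 45.0)]
```

In SSIS the same shape is drawn graphically with a Multicast (or Conditional Split) component feeding parallel transformations into a Union All component.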

The Use of Staging Tables

Staging table ETL architecture provides several benefits over standard SSIS ETL development, which has deficiencies especially with regard to slowing down development and debugging.

Standard SSIS Drawbacks

Because standard development utilizes the built-in one-step process, each data flow test run extracts a batch of data from the original source, which consumes additional time. This problem is compounded if your tests are running on a production database because extra care must be taken not to corrupt its state.

An additional problem is that SSIS data flows are specified at a very low level of abstraction, which makes their development and debugging tedious in the same way that assembly code slows code production compared to using C# or Python for example.

Separating Extract and Transform

Staging separates the steps of extraction and transformation in the top-level control flow. Rather than working with the entire data source, a copy of the data is pre-fetched into a staging table with minimal transformation, if any. The only filtering necessary would be to avoid performance issues such as an overly large table.

In SSIS, an easy way to create the staging table is to edit the data source destination table properties with the option to create a new table, which results in fetching all the correct columns. Data transformations are implemented as database views in lieu of specifying SSIS transformations. These can be written as SQL queries using CREATE VIEW with SELECT statements for the fields of interest.
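A minimal sketch of this staging-plus-view pattern follows, with SQLite standing in for SQL Server and an invented `orders` schema; the table, view and column names are illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A stand-in for the original data source.
cur.execute("CREATE TABLE src_orders (order_id INTEGER, cust_id INTEGER, amt REAL)")
cur.executemany("INSERT INTO src_orders VALUES (?, ?, ?)",
                [(1, 10, 99.5), (2, 11, 45.0), (3, 10, 12.25)])

# "Extract": copy the source into a staging table with minimal transformation.
cur.execute("CREATE TABLE stg_orders AS SELECT * FROM src_orders")

# "Transform": implement the transformation as a view over the staging table,
# renaming IDs to Keys and giving columns human-readable names.
cur.execute("""
    CREATE VIEW v_orders AS
    SELECT order_id AS OrderKey,
           cust_id  AS CustomerKey,
           amt      AS OrderAmount
    FROM stg_orders
""")

for row in cur.execute("SELECT OrderKey, CustomerKey, OrderAmount FROM v_orders"):
    print(row)
```

Because the transformation lives in an ordinary SQL view rather than an SSIS data flow, it can be tested and debugged against the staging table without re-extracting from the source each time.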

Tips When Using Staging

While building a staging query, it is a good practice to change field names into more human-readable form as necessary and to rename IDs as Keys to accommodate BI application usage. The multiple staging tables created in this fashion are later joined into dimension and fact tables.

Note that the use of database views for larger data sets may create performance constrictions in production, although it is much more efficient at the development stage. You can get around this by later converting the views to stored procedures that break down the views’ operations into smaller pieces that are run during the ETL transformation step.

Read our case study on Correcting Microsoft SSIS and SQL Server Scripts here

Memory Performance Tips for Standard and Staging Approaches

Regardless of which architectural approach is used to develop your ETL process, a key aspect to bear in mind is that transformation processes should be small enough to run completely within memory. This practice avoids performance-crushing disk swaps.

  • If SQL Server and SSIS are running on the same system, SQL Server typically gains priority, which means that SSIS becomes a bottleneck
  • Apply SELECT only to the columns actually needed
  • Avoid using a Commit size of zero when inserting into a B-tree, to avoid memory spill
  • Use narrow data types to reduce memory allocation but avoid excessive data casts
  • Move sorts to the end of the transformation chain
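The "select only the columns actually needed" tip can be illustrated with a small sketch; SQLite again stands in for SQL Server, and the `sales` table with its wide payload column is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE sales (
    sale_id INTEGER, cust_id INTEGER, amount REAL,
    notes TEXT, raw_payload TEXT)""")
# One row with a deliberately bulky text column.
cur.execute("INSERT INTO sales VALUES (?, ?, ?, ?, ?)",
            (1, 7, 19.99, "n/a", "x" * 10_000))

# SELECT * drags the wide text columns through the pipeline...
wide = cur.execute("SELECT * FROM sales").fetchone()
# ...while naming only the needed columns keeps each buffer row small.
narrow = cur.execute("SELECT sale_id, amount FROM sales").fetchone()
print(len(wide), len(narrow))
```

In SSIS the same principle applies to the source component's query: every column you carry widens each buffer row and reduces how many rows fit in memory at once.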

To determine if your implementation avoids touching the disk during transformations, keep an eye on the SSIS performance counter Buffers spooled. If transformation remains in memory but performance lags, check the obvious, which is whether or not the speed of the data source system can keep up.

Especially if you employ the staging approach, since it requires more disk space, be mindful of overall resource utilization such as CPU usage, I/O bandwidth and network usage.


SSIS ETL is an essential pillar in supporting BI information systems. It provides powerful data transformation capabilities to match raw, large data stores to the narrower, optimized data needs of BI applications that are so essential to modern enterprise IT. The architectural approaches to SSIS ETL process development presented here offer trade-offs between performance and development/debugging effort that may improve the efficiency of your organization’s SSIS utilization.

Contact us today if you have any questions about SSIS and your BI system. We’re always happy to help.


10 Tips for Success with Big Data – Part 2

The following is part two of a two part blog post. Click here to read part one.

6. Apps and Dashboards Beat Reports

BI owes much of its existence to the desire decision makers have to see reports. Reports, however, suffer from being boring and static. Whenever possible, try to move your BI output into apps and dashboards. This approach allows decision makers to revisit the data, fiddle with the knobs and dig deeper. While it can seem easier to supply a report and insist upon the rightness of a single viewpoint, most people who rely upon BI to make choices want to feel like they’ve seen all the angles.

7. Share

Whenever possible, you should try to involve other interested parties in the process. This means going outside your comfortable circles and asking for data sources. It means looking around at visualizations produced in-house and even by competitors. A good BI professional should never feel embarrassed to ask around and incorporate new ideas.

8. Identify Shocks

Good models are typically conservative and resistant to shocks. If the introduction of fresh data, especially in small amounts, upsets your analytics, you should know where the shock came from. This requires being aware of the quality controls you’re using and how specific data sources are weighted. Doing so can leave you better prepared to explain the differences between statistical noise and legitimate sources of concern.

9. Move Less

Folks who work with data have a tendency to move data around. They often drop it into cubes that make their immediate jobs simpler. The problem, however, is that good BI needs to be as close to its original data sources as possible. Copying a data set and trimming it down makes that information instantly less useful and harder to mine. Unless there’s a compelling technical reason, refrain from making duplicates of data that’s best left in one spot.

10. Smart Tools

Building a brilliant visualization is a great feeling. It’s important, however, to have intelligent tools in place to make the process easier. Always try to use tools that allow you to quickly search through data. Also make a point of using software that lets you rapidly recompose your visualizations.


Working with big data can be an arduous and meticulous task, and working with partners and vendors that have the expertise and technical domain knowledge is critical to success.

Optimus’ BI specialists have worked with companies, large and small, to successfully implement their BI initiatives. We know what works, what doesn’t work and how your company can best leverage its data sources.

Call us today to find out how you can get more from your BI.