Posts

Top 5 Tips for a Successful Cloud Migration

Are you on the verge of starting your cloud migration journey? Here are a few tips to help guide you through the process.

1. Consider refactoring monolithic applications

Decomposing monolithic applications into services or microservices before moving to the cloud could bring a better return on investment than just using the cloud as another application co-location solution.

A monolithic application is a software system composed of a single, cohesive unit of code, normally self-contained and independent from other systems or applications. Moving this kind of application to the cloud is possible, but planning is critical: the migration can be an involved process.

Before deciding to move to the cloud, evaluate the specific needs of the application and infrastructure, and work with an experienced team of architects and developers who can help plan and execute the migration.
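To make the idea of decomposition concrete, here is a minimal, hypothetical sketch in Python of one responsibility being pulled out of a monolith and exposed as a small standalone HTTP service. The function names, port, and endpoint are illustrative only and are not taken from any specific application.

```python
# Hypothetical sketch: extracting one function from a monolith into a
# standalone HTTP service. Names, port, and endpoint are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen


def monthly_report(orders):
    """Monolithic version: reporting is just an in-process function call."""
    total = sum(order["amount"] for order in orders)
    return {"orders": len(orders), "total": total}


class ReportHandler(BaseHTTPRequestHandler):
    """After decomposition: the same logic behind its own HTTP endpoint."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        report = monthly_report(json.loads(body))
        payload = json.dumps(report).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


def serve(port=8081):
    """Run the extracted reporting service in its own process."""
    HTTPServer(("127.0.0.1", port), ReportHandler).serve_forever()


def monthly_report_via_service(orders, url="http://127.0.0.1:8081/report"):
    """The remaining monolith now calls the service over HTTP."""
    request = Request(url, data=json.dumps(orders).encode(),
                      headers={"Content-Type": "application/json"})
    with urlopen(request) as response:
        return json.loads(response.read())
```

The logic itself does not change; only the boundary does, which is why decomposing before migration can pay off once each service can be deployed and scaled on its own in the cloud.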

2. Consider a hybrid model

Hybrid models mix on-premises and cloud technologies and can be best suited for legacy systems too complex for a complete cloud migration.

We believe that most larger organizations that choose not to migrate completely will at least move toward a hybrid model after evaluation, and that smaller organizations will follow suit.

Still, a hybrid migration can be complex given the relationships between systems. Moving an on-premises system to the cloud completely is often easier than moving it partially, because integration planning must account for the interdependencies between the on-premises and cloud parts of the system to preserve its security, integrity, and availability. Even so, hybrid models do and will work.

3. Evaluation and planning

Cloud migration requires planning and adequate resourcing. Every step of the process needs to be determined carefully. A well-mapped starting point will save headaches and smooth the entire process.

Companies must recognize that not every organization has all the teams ready to move from a traditional model to a cloud one. The cloud has many roles and disciplines to consider: Architects, DevOps, Security, Networking, and Finance. These may not be typical roles in traditional companies, especially at the start-up stage.

You can choose between developing the right individuals within the organization or collaborating with a partner to obtain these skills. Cloud migration is difficult to accomplish alone, especially a project that follows the industry's proven best practices and blueprints. As an organization, you can take the first steps in the cloud on your own, but to ensure the environment grows organically and remains secure and highly available, you will likely require external guidance.

You will also need to understand data protection regulations and best practices, which are among the most common concerns preventing companies from moving to the cloud. Storing data in the cloud can be more secure than storing it on-premises, but you need to learn how to secure your data in the cloud and how contracts with cloud providers work. For example, regulators may require that your company's data not be stored outside the country.

4. Utilize already-validated designs

Minimize your effort by making the most of the vast body of cloud migration and implementation resources.

Using models and best practices validated by the industry will speed up the cloud migration process. Don’t lose time creating what is already available and what’s been tried and tested in the cloud; there are many resources you can utilize. Large cloud providers like Microsoft and Amazon already have blueprints for most cloud workloads.

5. Doing it at the right time

The right time for cloud migration is when it is the right time for your organization: when you have the time and the resources. This will depend on your individual business circumstances.

When considering cloud migration, organizations should not spend too long deliberating over whether the cloud can help their business. The truth is that your competitors are planning to move to the cloud or have already done so.

Signs that it is the right time to move to the cloud:

  • When building a new software product
  • When you need to keep an application or applications up to date
  • When making a large data center investment
  • Before renewing with a data center third-party service

Above all, scalability is the most powerful capability the cloud makes available. Cloud-enabled companies thrive on scalability: processing large quantities of data through machine learning and analytics can be highly challenging with on-premises resources alone. The cloud democratizes access to otherwise complex tools such as AI and machine learning for analyzing large volumes of data.

Are you ready to start your cloud journey? For more information as you plan your migration to the cloud, please contact us.

Migration to Azure: SQL Server on VM, Managed Instance, and Database

Are you looking to migrate your SQL Server to a cloud platform like AWS or Azure? Organizations move older versions of SQL Server to the cloud to modernize their infrastructure, extend support, and incorporate new tools. And while SQL Server can run proficiently on each of the major cloud providers, Azure is the best choice:

  1. It’s cheaper

    Azure is up to 5 times cheaper than AWS. It also offers options for cutting expenses further, such as reusing existing licenses, paying for reservations upfront, and getting free security updates.

  2. High availability

    Azure handles system maintenance (including product upgrades, patches, and backups) without affecting the database uptime.

  3. SQL Server optimized

    SQL Server and Azure are both Microsoft-developed products. Consequently, Azure has the most complete support for SQL Server and its tools, as well as extended support for older versions.

  4. Custom-fit options

    Azure is not a one-size-fits-all product. In the case of SQL Server, it provides different cloud solutions and service tiers to satisfy applications and organizations of all sorts.

In this article, we will explore the products available for running SQL Server on Azure. While we focus on migrating from on-premises, these options are also open if you want to start a new database directly in the cloud. Also, bear in mind that all Azure products for SQL Server use the same engine as SQL Server, the same language (Transact-SQL), and are mostly compatible with SQL Server tools and APIs.
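Because every option speaks Transact-SQL through the same drivers, application code typically needs little change; in many cases only the connection string differs. Here is a minimal sketch, assuming the pyodbc package, ODBC Driver 18, and illustrative server, database, and table names, none of which come from this article:

```python
# Sketch: the same Transact-SQL runs against on-premises SQL Server or
# Azure SQL; only the connection string changes. All names are illustrative.
import pyodbc

ON_PREM = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sql01.corp.local;DATABASE=Sales;UID=app;PWD=secret"
)
AZURE_SQL = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=Sales;"
    "UID=app;PWD=secret;Encrypt=yes"
)


def top_customers(conn_str, n=5):
    """Identical query regardless of where the engine runs."""
    with pyodbc.connect(conn_str) as conn:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT TOP (?) CustomerId, SUM(Amount) AS Total "
            "FROM dbo.Orders GROUP BY CustomerId ORDER BY Total DESC",
            n,
        )
        return cursor.fetchall()

# top_customers(ON_PREM) and top_customers(AZURE_SQL) return the same shape.
```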

 

Migrate to an Azure VM

The first option is migrating the workload as is to an Azure Virtual Machine (VM) running SQL Server. Once in Azure, you can upgrade to a new version of SQL Server. If you prefer to keep your old version instead, Azure offers three additional years of security updates for SQL Server 2012 and one additional year for SQL Server 2008.

When deploying SQL Server on an Azure VM, you get an interface for deploying a VM with the OS and SQL Server version of your preference. In turn, you are responsible for purchasing and managing the OS and SQL Server environment. Since Azure hosts the VM, it is responsible for the host servers and hardware. At the same time, the Azure platform provides value-add services such as backup and patching automation and Azure Key Vault integration.

Running SQL Server on an Azure VM is the first option for systems that depend to some degree on on-premises applications or OS configuration. It is therefore also the go-to option for on-premises migration. Azure VM migration is often described as lift-and-shift ready: the process is fast and demands little to no change to existing applications.

SQL Server on an Azure VM is best for:

  • Swiftly migrating existing on-premises SQL Server installations, no matter the version.
  • Working with SQL Server features that are specific to a legacy version or not supported by Azure SQL.
  • Administering the OS, database engine, and server configuration.
  • Needing more than 100 TB of storage.

 

Migrate to Azure SQL

Like an Azure VM, Azure SQL takes care of the hardware and hosting, while also delivering the software needed to run your database application or instance. Azure SQL also includes automated configuration such as SQL Server upgrades. In this regard, Azure SQL is versionless: the SQL Server engine behind it is always updated to the latest version.

Instead of running on a VM, Microsoft Azure SQL runs on Service Fabric, a distributed systems platform made especially for cloud-native applications and microservices. 

1. Azure SQL Managed Instance

In Managed Instance, Azure takes care of the host software, hardware, and VM. Yet, you are in charge of deploying and managing the SQL Server instance and databases.

Managed Instance is a more complete option for cloud migration since it adds instance-scoped (server-level) features. Because it is not a VM, the user does not need to perform system configuration or maintenance actions such as patching. When deploying, you can choose between two service tiers (General Purpose and Business Critical) depending on the performance, resources, and features you are looking for.

Azure SQL Managed Instance is best for:

  • Focusing completely on SQL Server rather than the underlying infrastructure.
  • Modernizing your existing SQL Server without giving up tools such as Agent Jobs and Service Broker.

2. Azure SQL Database

In Microsoft Azure SQL Database, Azure takes care of the host, hardware, VM, and SQL Server. As such, users only need to worry about working with databases. However, compared with Managed Instance, some instance-level engine features, such as SQL Server Agent jobs, are missing in Database.

If we compare Database with the other options in the Azure lineup, it is the hardest to migrate to from on-premises. It also offers the fewest options for controlling the underlying details of the system.

Database has the most options for deployment, offering two purchase models, each with its own sub-tiers to choose from depending on the business workload (see the sketch after this list):

  • vCore. Similar to the Managed Instance model, with a third tier option known as Hyperscale.
  • Database Transaction Unit (DTU). Predetermined bundles of compute resources (CPU, I/O, and memory) that simplify scaling.
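Service tiers are not fixed after deployment. As a hedged illustration (the server, database name, and target objectives below are examples, not recommendations), a database can be moved between DTU and vCore objectives with plain Transact-SQL:

```python
# Sketch: switching an Azure SQL Database service objective with Transact-SQL.
# Server, database, and credential values are illustrative only.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=master;"
    "UID=admin_user;PWD=secret;Encrypt=yes",
    autocommit=True,  # ALTER DATABASE cannot run inside a user transaction
)
cursor = conn.cursor()

# Move to a DTU-based tier (Standard S3)...
cursor.execute(
    "ALTER DATABASE [Sales] MODIFY "
    "(EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');"
)

# ...or to a vCore-based tier (General Purpose, Gen5, 2 vCores).
cursor.execute(
    "ALTER DATABASE [Sales] MODIFY "
    "(EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_2');"
)
```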

Database is best for:

  • Supporting modern cloud applications on an intelligent, managed database service that includes serverless compute.
  • Cloud-native applications.
  • Very specific SQL Server applications.
  • Databases larger than Managed Instance supports.

Note: To see a detailed comparison of Database and Managed Instance features, check Microsoft docs.

The Bottom Line

We have reviewed the Azure lineup for SQL Server cloud migration. When it comes to choosing between Azure options, we recommend keeping an open mind about how much control of your system you really want. Cloud computing is meant to make things simpler for users: when in doubt, ask yourself whether it is really necessary to have complete control of the server, OS, or SQL Server version.

In any case, the good news is that there is an SQL Server option for each particular case. Choose Azure if:

  • You're looking to integrate SQL Server with Microsoft's tools
  • You’re new to the cloud and want the migration process to be as harmonious as possible
  • You want a reliable and cost-effective solution
  • You already have a SQL Server license

If you are still wondering which product to choose, get in touch with us to learn more about how to migrate your workload to Azure and which option is most suitable for your organization.

 

Microservices Scalability as a Business Issue

Microservices are a popular option for application modernization. For one thing, they are cost-effective, making them ideal for smaller companies. Also, microservices can be cloud-native, saving the space required for on-premises systems. More importantly, microservices address the issues of monolithic applications. To name a few:

  • They are hard and slow to maintain and test.
  • Fixing a buggy feature or performing maintenance means downtime for the whole application.
  • It is difficult to manage different languages within one codebase.

Still, the main advantage of microservices over monolithic applications is scalability: microservice capacity can be configured to scale dynamically to match demand at any given time.

In this article, we will discuss defining and administrating microservices. We will see that defining microservices and their scalability is as much of a business question as a technical one.

What is microservice scalability?

The microservice architecture is a way of designing applications in which each module, component, or function fits into a separate program. By design, each microservice is meant to have a single functionality. Although microservices work together, they are independent: each has its own database and communicates with other microservices through APIs instead of relying on language-level communication or function calls.

These design choices are what separate microservices from monolithic applications. In comparison with monolithic applications, the microservice architecture offers the following perks: 

  • Integration: Due to their modularity, microservices can communicate with the client side without calling another microservice first. And since microservices are language agnostic, they can communicate with each other without problems regardless of which language they are written in.
  • Fault tolerance: Their independence ensures that a fault in one microservice won't make another fail (see the sketch after this list).
  • Maintenance: They are easier to fix given the small size of their code and independent diagnosis. In addition, systems can be maintained with short downtimes before redeployment.
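To illustrate that fault isolation, here is a minimal hypothetical sketch of one service calling another over its REST API with a timeout and a fallback; the URL, port, and payload shape are made up for the example:

```python
# Sketch: calling another microservice over REST with a timeout and a
# fallback, so a failure there does not take down the caller.
# The URL, port, and payload shape are illustrative only.
import json
from urllib.error import URLError
from urllib.request import urlopen


def get_recommendations(user_id, base_url="http://recommendations:8080"):
    """Ask the recommendations service; degrade gracefully if it is down."""
    try:
        with urlopen(f"{base_url}/users/{user_id}", timeout=2) as response:
            return json.loads(response.read())
    except (URLError, TimeoutError):
        # The recommendations service is unavailable or slow; the calling
        # service keeps working and returns an empty default instead.
        return []
```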

Above all, the most important feature of microservices is scalability. Monolithic programs share the resources of a single machine statically. Microservices, in turn, scale their resources on demand. In this way, microservice architectures can administer resources and allocate them where and when they are required.

How to define microservices and scalability

We interviewed Richard Reukema, software architect at Optimus Information with decades of experience in the IT field, about microservices scalability. For starters, he believes that granularity (how small microservices can be) can be detrimental if not properly defined: “I’ve seen microservices used to the point where they’re so granular, that there are so many microservices, that there are so many REST calls that the whole system just can’t operate at scale.” As the number of microservices increases, so does communication complexity. Plus, since they have separate databases and logs, maintaining a large number of microservices is more demanding. Likewise, converting small applications into microservices is not worth it in most cases; compared with a small application, a microservices architecture is more complex to implement and maintain. Therefore, the main problem with microservices is correctly defining their size.

Yet, taking into account the application functions as a whole and the optimal balance between granularity and independence is just part of solving the issue. According to Richard, what defines applications as monolithic is their companies’ business operations. In his own words, “application architecture never defines implementation; it only defines the responsibilities of the business.” He understands that “if a business had a help desk and the help desk took calls from every aspect of the organization, it would be a monolithic application because every time the phone rang, somebody would have to answer that call.” In this way, an application will be as monolithic as the business processes of its organization let it be.

Hence, a way of keeping microservices from becoming monolithic is decentralizing their communication channels. “If you want the delivery calls to go to the delivery people, and sales calls to go to the salespeople and the inventory to the inventory people, you suddenly have a department that either has an operator that routes the calls to the right department or you give the phone number to the people, and say, this is our area of the business that handles these different aspects of the business, and that’s no different to the API.”

 

How to define scalability

After defining the right size for the microservices, they can be implemented and mapped to containers. According to Richard, “the benchmark for containers is scalability: If I have a very small API, but it is handling three hundred thousand requests per second, it’s got to be able to scale very quickly, and more importantly, it should scale in”. Containers scale when their resources (CPU, memory, network bandwidth, etc.) are adjusted to match the demand on the system. Containers autoregulate their resources in two main ways:

  1. Scale up or vertical scaling: Increasing application capacity by augmenting the capacity of your servers, virtual or physical. Preferred for stateful apps, since these need to keep client information between sessions.
  2. Scale out or horizontal scaling: Increasing the number of server instances to manage demand. Preferred for stateless applications, since these don't store client info.

In the case of scaling out, there’s also the idea of “scaling in,” that is, reducing server instances when the demand goes down. In this way, scaling can also improve microservices’ cost-effectiveness. To Richard, learning when to scale in is as important from a business perspective as scaling out: “If you can scale out as you’re generating revenue, and scale in when you are not incurring revenue, you’re saving expenses.” Hence, the business practice of minimizing expenses should also be taken into account when designing computer systems.
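As a rough sketch of that scale-out/scale-in logic (the capacity figure, limits, and function name are hypothetical, not tied to any particular orchestrator):

```python
# Sketch: choosing how many instances to run from current demand, scaling
# out as load rises and back in as it falls. Numbers are illustrative only.
def desired_instances(requests_per_second, capacity_per_instance=100,
                      min_instances=1, max_instances=20):
    """Return the instance count needed to serve the current load."""
    needed = -(-requests_per_second // capacity_per_instance)  # ceiling division
    return max(min_instances, min(max_instances, needed))


# Scale out as traffic (and revenue) grows...
assert desired_instances(950) == 10
# ...and scale in when demand drops, so idle capacity stops costing money.
assert desired_instances(40) == 1
```

Real orchestrators apply the same idea against a live metric (CPU, requests per second, queue length) rather than a fixed capacity constant, which is what ties the technical decision directly to cost.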

 

The bottom line

Microservices have made it possible to administer a system’s capacity much more easily and efficiently than monolithic applications. However, as we have seen, microservices are not the end of monolithic applications. As such, when designing microservices, we need to keep an eye on our business operations, and not exclusively focus on the technical aspects of the implementation.

 

 

To learn more, check out our blog about Legacy Systems: The Risks of Postponing Application Modernization.

Have you ever struggled with controlling and governing complex systems with ever-changing management tools? Running an organization that relies on cloud environments can be difficult, and it is becoming increasingly hard to manage new, complex environments with all their moving pieces. Azure Arc is a game changer for hybrid cloud: it delivers a multi-cloud and on-premises management platform. In this article, we'll go over the details of Azure Arc, how it works, and how it can help you. Keep reading to learn what Azure Arc is, the benefits of using it, and where to find more resources to get you started.

 

What is Azure Arc

As previously mentioned, Azure Arc is a multi-cloud and on-premises management platform. It helps consolidate all your data and systems into Azure Resource Manager. Key features include inventory, management, governance, and security across all servers, as well as managing Kubernetes clusters at scale. Azure Arc can also manage virtual machines and other resources as if they were running in Azure. In addition, it supports servers running anywhere, on-premises or in any cloud, including Windows and Linux. To learn more about the benefits of these key features and more, keep reading.


Azure Arc control plane diagram provided by Microsoft https://docs.microsoft.com/en-us/azure/azure-arc/overview

 

Benefits

There are many benefits that demonstrate how Azure Arc is a game changer for hybrid cloud; however, in this article we’ll only cover a few of them. 

  1. One of the biggest upsides to using Azure Arc is that all resources registered with Azure Arc send data to the central, cloud-based Azure Resource Manager. This consolidates the information in a succinct and useful manner. Enterprises can guarantee compliance of resources registered with Azure Arc no matter where they are deployed, which leads to quicker problem solving and less time lost.
  2. Azure Arc can also handle maintenance operations, from the smallest to the most complex, across all forms of cloud. For example, it can help manage security and governance; it can also manage updating the operating systems of your servers, a tedious task.
  3. Customers also benefit from all the aforementioned key features of Azure Arc: they can manage resources within or outside of Azure through one consolidated control plane.

To grasp the full extent of the power that Azure Arc provides, take a look at the resources in the next section. Learn the full list of benefits, how to take the next steps with Azure Arc, and some background information on Azure in general.

 

Resources 

  • If you or your organization want to learn more about Azure Arc, check out the Azure Arc blog for recent updates and what they entail. 
  • For more details about Azure Arc and how it might look as a part of your enterprise, this page from Microsoft Azure outlines the key features of Azure Arc and their uses in real world scenarios. 
  • Interested in getting started on your Azure Arc journey and want to see how Azure Arc is a game changer for hybrid cloud? Look at this page that helps identify which Azure Arc plan works best for you.  

Finally, for more background on Azure as a whole, check out some other articles from the Optimus Information blog such as the 5 benefits of cloud migration using Azure SQL or our common cloud adoption missteps series, the first of which is linked here.

 

 

Need help creating a cloud adoption roadmap? Reach out to us at info@optimusinfo.com for a complimentary assessment.

 

Microsoft Makes a Significant Investment in Canadian Cloud

Microsoft Cloud Services has been a core contributor to the growth and development of numerous Canadian organizations and is steadily making more investments in the Canadian cloud. Microsoft recently announced its first Canadian Azure Availability Zone in the Azure Canada Central region and an Azure ExpressRoute location in Vancouver; this expansion will give Canadian businesses greater access to new innovations to accelerate their development.

An Availability Zone consists of one or more data centres equipped with independent power, cooling, and networking. Microsoft says it’s the only cloud provider in Canada to offer Availability Zones and disaster recovery with in-country data residency. In addition, this will be the largest expansion of its Canadian-based cloud computing infrastructure since the launch of the first data centre in Canada in 2016. According to Microsoft, this expansion will increase computing capacity by an incredible 1300%.

While Azure ExpressRoute locations already exist in Toronto, Montreal, and Quebec City, this is an important investment for the West Coast. Azure ExpressRoute provides a private connection between an organization's on-premises infrastructure and Microsoft Azure data centres, giving users greater reliability, higher speed, and lower latency. Organizations in Vancouver will now have a secure network connection into Azure without having to cross the country.

The new Azure Availability Zones and ExpressRoute services are set to go live by the end of March. 

 

For more information email us at info@optimusinfo.com or read more here:

Microsoft announces Canadian Azure Availability Zone, and Azure ExpressRoute in Vancouver (IT World Canada)

Microsoft Makes Significant Investments in Canadian Cloud to Fuel Innovation In Canada (Microsoft News Center Canada)