Migration to Azure: SQL Server on VM, Managed Instance, and Database

Are you looking to migrate your SQL Server to a cloud platform like AWS or Azure? Organizations move older SQL Server versions to the cloud to modernize their infrastructure, extend their support, and incorporate new tools. And while SQL Server can run proficiently on each of the major cloud providers, Azure is the best choice:

  1. It’s cheaper

    Azure is up to 5 times cheaper than AWS. Azure also offers options for cutting expenses even further, such as reusing existing licenses, paying for reservations upfront, and getting free security updates.

  2. High availability

    Azure handles system maintenance (including product upgrades, patches, and backups) without affecting the database uptime.

  3. SQL Server optimized

    SQL Server and Azure are both Microsoft-developed products. Consequently, Azure has the most complete support for SQL Server and its tools, as well as extended support for older versions.

  4. Custom-fit options

    Azure is not a one-size-fits-all product. In the case of SQL Server, it provides different cloud solutions and service tiers to satisfy applications and organizations of all sorts.

In this article, we will explore the products available for running SQL Server on Azure. While we will focus on migrating from on-premise, these options are also open if you want to start a new database right in the cloud. Also, bear in mind that all Azure products for SQL Server use the same engine as SQL Server and the same language (Transact-SQL), and are mostly compatible with SQL Server tools and APIs.

 

Migrate to an Azure VM

The first option is migrating the workload as is to an Azure Virtual Machine (VM) running SQL Server. Once in Azure, you can upgrade to a new version of SQL Server. If you prefer to keep your old version instead, Azure offers three additional years of security updates for SQL Server 2012 and one additional year for SQL Server 2008.

When deploying SQL Server on an Azure VM, you are given an interface for deploying a VM with the OS and SQL Server version of your preference. You remain responsible for managing and licensing the OS and SQL Server environment, but since Azure hosts the VM, it is responsible for the host servers and hardware. At the same time, the Azure platform provides value-add services such as backup and patching automation and Azure Key Vault integration.

Running SQL Server on an Azure VM is the first option for systems that depend to some degree on on-premise applications or OS configuration. Therefore, it is also the go-to option for on-premise migration. Azure VM migration is presented as lift-and-shift ready, since the process is fast and requires little to no change to existing applications.

SQL Server on an Azure VM is best for:

  • Swiftly migrating from existing SQL Server on-premise installations, regardless of version.
  • Working with SQL Server features specific to a legacy version or not supported by Azure SQL.
  • Administrating the OS, database engine, and server configuration.
  • Needing more than 100 TB of storage.

 

Migrate to Azure SQL

Like Azure VM, Azure SQL takes care of the hardware and hosting, while also delivering the software the user needs for running their database application or instance. It likewise includes automated configuration such as SQL Server upgrades. In this regard, Azure SQL is versionless, meaning the SQL Server in it is always updated to the latest version.

Instead of running on a VM, Microsoft Azure SQL runs on Service Fabric, a distributed systems platform made especially for cloud-native applications and microservices. 

1. Azure SQL Managed Instance

In Managed Instance, Azure takes care of the host software, hardware, and VM. Yet, you are in charge of deploying and managing the SQL Server instance and databases.

Managed Instance is a more complete option for cloud migration since it adds instance-scoped (server-level) features. Since it is not a VM, system configuration and maintenance actions such as patching are not required of the user. When deploying, you can choose between two service tiers (General Purpose and Business Critical) depending on the performance, resources, and features you’re looking for.

Azure SQL Managed Instance is best for:

  • Focusing entirely on SQL Server itself.
  • Modernizing your existing SQL Server without giving up tools such as Agent Jobs and Service Broker.

2. Azure SQL Database

In Microsoft Azure SQL Database, Azure takes care of the host, hardware, VM, and SQL Server. As such, users only need to worry about working with databases. However, in comparison with Managed Instance, Database lacks some instance-level engine features, such as SQL Server Agent jobs.

If we compare Database with the other options in the Azure lineup, it is the hardest to migrate to from on-premise. Likewise, it offers the fewest options to control the underlying details of the system.

Database has the most options for deployment, offering two purchase models, each with its own sub-tiers to choose from depending on the business workload:

  • vCore. Similar to the Managed Instance model, with a third tier option known as Hyperscale.
  • Database Transaction Unit (DTU). Predetermined compute resources (CPU, I/O, and memory) combined into a single unit to simplify scaling.
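As a rough mental model of how a DTU-style bundle behaves, the bottleneck resource drives the requirement, since the bundled unit must cover CPU, I/O, and memory together. The sketch below is an illustrative simplification (Microsoft defines the real DTU blend with a benchmark), and the workload percentages are invented:

```python
def bundled_units_needed(cpu_pct: float, io_pct: float, mem_pct: float) -> float:
    """Toy DTU-style sizing: the busiest resource sets the requirement,
    because the bundle must cover all three demands at once."""
    return max(cpu_pct, io_pct, mem_pct)

# An I/O-bound workload at 80% needs a bundle sized for 80%,
# even though CPU and memory sit far lower.
requirement = bundled_units_needed(cpu_pct=35, io_pct=80, mem_pct=20)
```

In the vCore model you would instead size compute and storage independently; that independence is what the extra control buys you.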

Database is best for:

  • Supporting modern cloud applications on an intelligent, managed database service that includes serverless compute.
  • Cloud-native applications.
  • Very specific SQL Server applications.
  • Databases larger than Managed Instance allows.

Note: For a detailed comparison of Database and Managed Instance features, check the Microsoft docs.

The Bottom Line

We have reviewed the Azure lineup for SQL Server cloud migration. When it comes to choosing between Azure options, we recommend having an open mind about how much control of your system you really need. Cloud computing is meant to make things simpler for users: when in doubt, ask yourself if it is really necessary to have complete control of the server, OS, or SQL Server version.

In any case, the good news is that there is an SQL Server option for every particular case. Choose Azure if:

  • You’re looking to integrate SQL Server with Microsoft tools
  • You’re new to the cloud and want the migration process to be as harmonious as possible
  • You want a reliable and cost-effective solution
  • You already have a SQL Server license

If you are still wondering about which product to choose, get in touch with us to learn more about how to migrate your workload to Azure and what option is more suitable for your organization.

 

Necessity Over Capacity: How to Get Cloud-Enabled Applications Right

Cloud-enabled applications are on-premise software adapted to work in the cloud. However, simply moving a legacy application to the cloud isn’t enough to truly take advantage of the benefits of a cloud application. To enjoy these benefits, businesses need to stop treating cloud-enabled applications the way they did when those applications were hosted on-premise.

This article will review how on-premise and cloud systems work and lay out the right approach for migrating applications to the cloud. To give us insight into the subject, we interviewed Richard Reukema, a software solutions architect at Optimus Information with decades of experience in the IT field.

 

The Difference Between Cloud and On-premise Applications

To begin with, on-premise applications work very differently once they are moved to the cloud. When running on-premise, the application’s performance is tied to the physical infrastructure of the organization. More importantly, legacy applications are not in the hands of their developers, but in the hands of IT operators.

Operators make sure that the hardware has enough resources to handle the many applications that run simultaneously on it. As such, during the process of implementing a new application, they measure its consumption requirements. After that, the IT department draws up a list of the hardware needed to run the application properly. Companies can then choose between relying on the equipment they already have or buying additional hardware for the application’s workload. To this extent, applications depend on the hardware configuration, which can bottleneck the application under high demand or delay the deployment of newer versions of the software. This process is known as capacity planning.

It is crucial here to acknowledge that on-premise capacity planning takes into account not hourly or daily usage, but yearly usage. Richard Reukema asserts that companies buying hardware have to take into consideration the dates likely to increase their applications’ workload. For example, Christmas is when some companies make far more sales than during the rest of the year and need the most from their systems. Hence, IT operators have to make sure their applications will be able to withstand the occasional increase in workload. If they fail to do so, the system’s capacity will not be able to meet demand. Increasing capacity in minutes, as cloud applications do, is off the table.


Aside from such infrequent capacity increases, the most IT operations can do is schedule some applications to shut down when others require more capacity. They’re limited by the equipment they have. In comparison, cloud applications don’t share resources, or, as Richard puts it, “we build cloud applications or clouded services for the capacity of the application that is required when it’s required.” For this very reason, he maintains that “legacy applications grow on capacity; cloud applications grow on necessity.”
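The contrast between the two models can be sketched with a toy simulation. The demand numbers below are invented for illustration, not benchmarks:

```python
# Hourly demand for one day, with a short seasonal-style spike at the end.
demand = [40] * 20 + [100, 100, 100, 40]

# Capacity planning: provision for the peak all day long, like buying
# hardware for the Christmas rush and running it all year.
capacity_model = max(demand) * len(demand)

# Necessity: capacity follows demand hour by hour, like cloud scaling.
necessity_model = sum(demand)

# The difference is capacity paid for but left idle.
idle_fraction = 1 - necessity_model / capacity_model
```

With these numbers, over half of the peak-provisioned capacity sits idle, which is exactly the expense Richard’s “necessity” model avoids.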

Moving from Servers to Services

As we have seen, traditional IT is locked into capacity. As such, according to Richard, even after moving to the cloud and away from on-premise hardware, operators might still view cloud services as servers. He emphasizes that “they don’t understand software, they understand hardware; they see the cloud as a virtual datacenter.” Virtual datacenters allow companies to swiftly deploy additional infrastructure resources when needed, without the time constraints of acquiring and installing new hardware. Scaling on-premise systems that way demands a whole process of picking hardware from vendors, waiting for it, and installing it; compared with cloud-enabled scaling, the whole thing takes months.

However, while cloud-enabled scaling eclipses the limitations of on-premise scaling, managing applications like this doesn’t take advantage of the cloud to its fullest. To Richard, virtualizing infrastructure makes no difference, since it keeps IT treating applications as servers instead of services.

Richard understands that to take advantage of cloud capabilities to their fullest, IT operators need to think of necessity instead of capacity. He uses the case of ride-sharing company Uber as an analogy: “Uber goes into a city. How many VMs do you think they have to configure to start up another city or even ten? They don’t. It’s a service. It will dynamically grow not on how much capacity there is but on the system’s load.”

For this reason, Richard points out that “you cannot move a legacy application to the cloud and have the benefits of a cloud-native application or cloud-powered applications.” Moreover, he also thinks that replicating the server mentality in the cloud can be even more expensive than on-premise hardware: “You will get charged more on AWS than you would on your servers in your room. Why? Because the servers in the physical room are all shared; the servers in the cloud are all independent of each other.”


The Solution: DevOps

For Richard, making this paradigm shift from servers to services and necessity over capacity requires a reinterpretation of the roles of the IT and software development teams. That’s why Richard is in favour of DevOps: “IT operations have no place in services, it only has a place in servers because DevOps, which stands for developer operations, is managed by developers and they can manage operations because it’s their application that’s running on services, not servers.” DevOps emphasizes developing and maintaining software through a chain of short feedback loops (starting from the customer and running through production and testing), automation, and collaboration. DevOps achieves this by integrating developers and IT operators to shape the IT infrastructure and the application simultaneously.

When it comes to the cloud, DevOps can administer deployment automation tools to ensure continuous delivery and reliability in the production environment, in a model known as Infrastructure as a Service (IaaS). Likewise, the Platform as a Service (PaaS) model provides serverless services that allow developers to work without the constraints of dealing with a server. IaaS and PaaS minimize the workload of operations teams and fit a business model based on scaling and pricing tied to computing usage.

 

The Bottom Line

We have learned how cloud-enabled applications differ from on-premise applications, and how to move forward in making the most of our cloud-enabled applications. Still not sure about what you can do with the cloud? Get in touch with us to learn more about the perfect cloud solution for your company.


Power Platform Updates 2022

Introduction

Microsoft is well known for its constantly evolving products, and its 2022 release wave 1 plan, which outlines Power Platform updates, is a great way to get a look at what’s coming this year. There have been countless improvements to Power BI, Power Apps, Power Automate, and the newest addition to the Power Platform family, Power Pages, along with a myriad of benefits that come with using these products. We’re going to share how they can help your organization with digitization, so keep reading for the most noteworthy Power Platform updates and what they can do for you.

Power BI

Power BI is a product that allows everyone to access data analysis tools, regardless of experience level, complementing Excel through a Microsoft Office-style workspace. The most recent updates include benefits for individuals, teams, and organizations as a whole. With more accessible work sharing and collaboration through OneDrive, Power BI is even simpler to use when joining forces with other team members. For teams, the new capabilities include enhanced integration with PowerPoint, allowing for a seamless transition between data analytics and presentations. Finally, organizations benefit from this round of updates through increased visibility, greater data protection capabilities, and more.

Power Apps

Power Apps allows developers of all levels to create both web and mobile applications within the Power Platform. Its updates include new built-in collaboration features and large improvements to governance capabilities, helping with safer rollout and scalability. The most notable addition is that organizations can now deliver flagship apps company-wide in a safe and dependable manner.

Power Automate

Power Automate allows businesses to cut down on time-consuming, repetitive tasks through automation. With this wave of updates, Power Automate can be used across many more interfaces, such as Microsoft Teams and Windows 11. This makes it easier for organizations to manage user accounts as well as credentials. By sticking to an API-first approach, all the features are becoming easier to automate, which gives organizations more flexibility in how they choose to use the product.

A New Addition: Power Pages

Microsoft Power Pages is the new, fifth addition to the Microsoft Power arsenal. It allows anyone, regardless of technical background, skill, or industry experience, to create secure, data-powered websites. While its design is low-code and aesthetically appealing, it still permits more experienced developers to extend a website further if desired. Power Pages includes a Design Studio, Learn Hub, and Templates Hub. The Design Studio and Templates Hub contain pre-made templates for building site designs with ease, and the Learn Hub shows users how to apply those templates for maximum benefit. It’s already making waves in the industry since its official launch in May 2022 (some of its features were already in use through the Power Apps portal).

Conclusion

This article covered most of the new Power Platform updates on all of their products from Power BI to the recently launched Power Pages; however, the products themselves are constantly evolving and growing to fit user and company needs, so there’s always more to learn about the newest features coming out. Microsoft’s blog is a great place to go for all the most recent updates and to learn how to make the most of them. What are your thoughts on the Power Platform products, and how does your organization leverage them? We’d love to hear, so let us know in the comments!

Microservices Scalability as a Business Issue

Microservices are a popular option for application modernization. For one thing, they are cost-effective, making them ideal for smaller companies. Also, microservices can be cloud-native, saving the space required for on-premise systems. More importantly, microservices deal with the issues of monolithic applications. To name a few:

  • Hard and slow to maintain and test.
  • Fixing a feature with bugs or under maintenance represents downtime for the whole application.
  • Difficulty managing different languages.

Still, the main perk of microservices in comparison with monolithic applications is scalability: microservice capacity can be configured to scale dynamically to match demand at any given time.

In this article, we will discuss defining and administrating microservices. We will see that defining microservices and their scalability is as much of a business question as a technical one.

What is microservice scalability?

The microservice architecture is a way of designing applications in which each of its modules, components, or functions fit into separate programs. By design, each microservice is meant to have a single functionality. Despite microservices working together, they are independent: they have their own database and communicate with other microservices through their APIs instead of relying on language-level communication or function calls.
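A minimal in-process sketch of that independence follows. Real microservices would run as separate processes and talk over HTTP; the service names, operations, and data here are invented for illustration. The key point is that each service owns a private database and exposes only a request/response API:

```python
import json

class InventoryService:
    """Owns a private data store and exposes only a JSON API."""
    def __init__(self):
        self._db = {"sku-1": 5}  # this service's own database

    def handle(self, request: str) -> str:
        req = json.loads(request)
        if req["op"] == "stock":
            return json.dumps({"sku": req["sku"], "stock": self._db.get(req["sku"], 0)})
        return json.dumps({"error": "unknown op"})

class OrderService:
    """Keeps a separate database; reaches inventory only through its API."""
    def __init__(self, inventory_api):
        self._db = {}                    # orders live in their own database
        self._inventory = inventory_api  # an API call, never a shared database

    def place_order(self, order_id: str, sku: str) -> bool:
        reply = json.loads(self._inventory(json.dumps({"op": "stock", "sku": sku})))
        if reply["stock"] > 0:
            self._db[order_id] = sku
            return True
        return False

inventory = InventoryService()
orders = OrderService(inventory.handle)
ok = orders.place_order("o-1", "sku-1")
```

Note that OrderService never reads InventoryService’s dictionary directly; swapping the in-process call for an HTTP request would not change its logic.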

These design choices are what separate microservices from monolithic applications. In comparison with monolithic applications, the microservice architecture offers the following perks: 

  • Integration: Due to their modularity, microservices can communicate with the client side without calling another microservice first. Also, since microservices are language agnostic, they can communicate with each other without problems regardless of which language each is written in.
  • Fault tolerance: Their independence ensures that a fault in one microservice won’t make another fail.
  • Maintenance: They are easier to fix given the size of their code and independent diagnosis. In addition, systems can be maintained with short downtimes before redeployment.

Still, the most important feature of microservices is scalability. Monolithic programs share the resources of the same machine statically. Microservices, in turn, scale their resources on demand. In this way, microservices architectures can administer their resources and allocate them where and when they are required.

How to define microservices and scalability

We interviewed Richard Reukema, software architect at Optimus Information with decades of experience in the IT field, about microservices scalability. For starters, he believes that granularity (how minimal microservices can be) can be detrimental if not properly defined: “I’ve seen microservices used to the point where they’re so granular, that there are so many microservices, that there are so many REST calls that the whole system just can’t operate at scale.” As the number of microservices increases, so does communication complexity. Plus, since they have separate databases and logs, maintaining a large number of microservices is more demanding. Likewise, converting small applications into microservices is not worth it in most cases; compared with a small application, a microservices architecture is more complex to maintain and implement. Therefore, the main problem with microservices is correctly defining their size.

Yet, taking into account the application functions as a whole and the optimal balance between granularity and independence is just part of solving the issue. According to Richard, what defines applications as monolithic is their companies’ business operations. In his own words, “application architecture never defines implementation; it only defines the responsibilities of the business.” He understands that “if a business had a help desk and the help desk took calls from every aspect of the organization, it would be a monolithic application because every time the phone rang, somebody would have to answer that call.” In this way, an application will be as monolithic as the business processes of its organization let it be.

Hence, a way of keeping microservices from becoming monolithic is decentralizing their communication channels. “If you want the delivery calls to go to the delivery people, and sales calls to go to the salespeople and the inventory to the inventory people, you suddenly have a department that either has an operator that routes the calls to the right department or you give the phone number to the people, and say, this is our area of the business that handles these different aspects of the business, and that’s no different to the API.”

 

How to define scalability

After defining the right size for the microservices, they can be implemented and mapped to containers. According to Richard, “the benchmark for containers is scalability: If I have a very small API, but it is handling three hundred thousand requests per second, it’s got to be able to scale very quickly, and more importantly, it should scale in.” Containers scale when their system capabilities (CPU, memory, network bandwidth, etc.) are adjusted to match the demand on the system. Containers autoregulate their resources in two main ways:

  1. Scale up or vertical scaling: Increasing the application’s capacity by augmenting the capacity of your servers, virtual or physical. Preferred for stateful apps, since these need to keep client information between sessions.
  2. Scale out or horizontal scaling: Increasing the number of server instances to manage demand. Preferred for stateless applications, since these don’t store client info.

In the case of scaling out, there’s also the idea of “scaling in,” that is, reducing server instances when the demand goes down. In this way, scaling can also improve microservices’ cost-effectiveness. To Richard, learning when to scale in is as important from a business perspective as scaling out: “If you can scale out as you’re generating revenue, and scale in when you are not incurring revenue, you’re saving expenses.” Hence, the business practice of minimizing expenses should also be taken into account when designing computer systems.
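A horizontal scale-out/scale-in policy of the kind described above can be sketched in a few lines. The per-instance capacity and bounds below are illustrative assumptions, not values from any specific platform:

```python
import math

def desired_instances(load_rps: float, per_instance_rps: float,
                      min_instances: int = 1, max_instances: int = 10) -> int:
    """Scale out as load rises and scale in as it falls, within bounds."""
    needed = math.ceil(load_rps / per_instance_rps)
    return max(min_instances, min(max_instances, needed))

# Revenue-generating peak: scale out to serve 300k requests per second.
peak = desired_instances(300_000, per_instance_rps=50_000)

# Quiet period: scale in to the minimum and stop paying for idle capacity.
quiet = desired_instances(10_000, per_instance_rps=50_000)
```

The scale-in branch is the business half of the policy: dropping back to the minimum during quiet periods is what turns elastic capacity into saved expenses.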

 

The bottom line

Microservices have made it possible to administer a system’s capacity much more easily and efficiently than monolithic applications. However, as we have seen, microservices are not the end of monolithic applications. As such, when designing microservices, we need to keep an eye on our business operations, and not exclusively focus on the technical aspects of the implementation.

 

 

To learn more, check out our blog about Legacy Systems: The Risks of Postponing Application Modernization.

Understanding the Azure Container Apps Service

Azure Container Apps has skyrocketed in popularity since its launch at the 2021 Microsoft Ignite Conference. In this article we’ll be covering a few topics integral to understanding the Azure Container Apps service, starting with what container apps are and what the service entails. We’ll look at the prominent features of Azure Container Apps as well as both its benefits and limitations. Finally, we’ll examine how Microsoft’s container apps compare to similar container options on the market. As Azure Container Apps is still in public preview, we look forward to new updates and features. For more details and products introduced at the 2021 Microsoft Ignite Conference, check out our full article here.

What Is a Container?

Imagine that you’re moving houses. It’s an exhausting process organizing everything and moving it from one location to the next. You have to pack all your belongings into boxes. Generally, you’d pack similar things from one room into the same box, for example, you may have a “kitchen” box with all your dinnerware, utensils, and pots. Everything that you would need for the kitchen would be contained in that single package. Containers function in the same way. They are packages of software that bundle an application’s code with everything else that it would need, for example files, libraries, and other dependencies.


What Are Azure Container Apps?

The Azure Container Apps service allows you to run containerized applications and microservices on a serverless platform, without managing complex infrastructure. In doing so, it boasts a range of features, expanded on in the following section, such as autoscaling, splitting traffic, and support for a variety of application types.

Relevant Features and Benefits

The following are a few features and benefits of Azure Container Apps:

  1. Autoscaling: As mentioned above, Azure Container Apps has powerful autoscaling capabilities based on event triggers or HTTP traffic. As the container app scales out, new instances of the app are created as required. The service supports many scale triggers.
  2. Splitting Traffic: Azure Container Apps has the capability to activate multiple container revisions with different proportions of requests directed to each one for testing scenarios. This feature is very beneficial when you want to make your container app accessible through HTTP.
  3. Allowing for Application Flexibility: The containers within Azure Container Apps are incredibly versatile and can use any programming language, runtime, or development stack. This gives users flexibility in implementing the product, and users can define multiple containers within a single container app to share disk space, scale rules, and more.
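Traffic splitting, as described in point 2, amounts to weighted routing across revisions. The simulation below sketches a 90/10 canary split; the revision names and weights are invented for illustration:

```python
import random

# Hypothetical revision names and traffic weights: a 90/10 canary split.
revisions = {"myapp--v1": 90, "myapp--v2": 10}

def route_request(rng: random.Random) -> str:
    """Pick a revision with probability proportional to its traffic weight."""
    names = list(revisions)
    weights = [revisions[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded so the simulation is repeatable
sample = [route_request(rng) for _ in range(10_000)]
canary_share = sample.count("myapp--v2") / len(sample)
```

Roughly 10% of simulated requests land on the new revision, mirroring how a weighted split lets you test a revision on a small slice of real traffic before promoting it.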

Potential Drawbacks

There are a few things to note before using Azure Container Apps. One limitation of the product is that it cannot run privileged containers: if you try to run a program that requires root access, a runtime error will occur within the app. The other thing to note is that Linux-based container images are required to run this product.

Azure Container Apps as Compared to Other Contenders

It’s important to analyze whether Azure Container Apps is the best fit for your organization and its needs. The following options are other services that would be better suited for specific use cases.

  1. Azure Kubernetes Service: If it’s necessary for you to have access to Kubernetes APIs and control plane, it may be better to use this product rather than Azure Container Apps, which doesn’t allow for direct access to the aforementioned Kubernetes APIs.
  2. Azure Container Instances: This service is often referred to as the simpler, building block version of Azure Container Apps, and is ideal if your needs do not match up with the Azure Container Apps scenarios.
  3. Azure Functions: When building Functions-as-a-Service (FaaS) style functions, Azure Functions is the best way to go as it’s more optimized for functions distributed as containers or code. As well as this, it possesses a base container image, which allows for the reuse of code even as the environment changes.

Conclusion

After understanding the multitude of features that the Azure Container Apps service contains and the optimal use cases, you probably have a better sense of whether it’s something that your organization would benefit from. Flexible, scalable, and easily integrated, it’s a service that covers a range of needs for deploying microservices using serverless containers. If you’re interested in finding other Azure services in addition to Azure Container Apps, click here.

 


Legacy Systems: The Risks of Postponing Application Modernization

Legacy systems are systems near the end of their life cycle. While still functional, they are potentially detrimental from both a technical and business point of view, causing problems ranging from internal communication issues to setbacks in going to market. If ignored, legacy systems can even drive companies out of business.

However, knowing the issues with aging systems hasn’t stopped organizations from keeping them running. Modernization is indeed an arduous process, often dismissed as long as systems remain operative. Therefore, companies must acknowledge that modernizing their legacy applications not only improves their processes and increases profit, but also helps them preserve the well-being of their business and stay competitive.

In this article, we will tackle what organizations can lose when postponing modernizing a legacy system. We’ll also provide a short review of the available solutions for modernization, including cloud technologies.

 

Why should companies modernize?

Coming back to the system evolution scale, software modernization is a step after software updating and at the doorstep of system replacement. Modernizing a system is a more cost-effective choice than replacing it altogether. Moreover, organizations may be too reliant on these applications, despite their issues, to retire them.

However, legacy systems can harm organizations in many ways. The following is a list of problems legacy systems may bring. Take a moment to acknowledge if these problems exist in your organization’s systems:

  • Losing clients due to a poor, outdated, or non-functioning product UI/UX.
  • Keeping business operations running becomes too costly in comparison with competitors who rely on newer systems.
  • System maintenance monopolizes the IT budget.
  • Security issues that cannot be fixed.
  • Applications are built on outdated languages, restricting integration with new technologies and processes such as inventory for payments.

As we can imagine, not addressing these issues can be extremely detrimental, even in ways that were initially unthinkable.

Richard Reukema is a software solutions architect at Optimus Information with decades of experience in the IT field. According to him, relying on outdated technologies has the potential to hurt business so massively that you simply can’t ignore it. Richard cites the cases of Uber and Airbnb as examples of this behaviour: “How many CEOs of the taxi industry and hotel industry saw a dynamic scale application eating 40% of their market? Zero.” Richard understands that these companies based their success on improving customer experience with new technologies: “The level of information that is available to the customer and the convenience of the transaction and the customer experience is vastly different because the old companies just couldn’t imagine a new process in which we use technology to simplify the experience of the customer.”

 

How to start application modernization

Application modernization starts in the planning stage. To modernize, it is necessary first to learn about the full landscape in which the application runs: in other words, which applications it interacts with and how it does so. This isn’t an easy task; usually, the landscape of these applications is as cryptic as the applications themselves. In his Master’s thesis in Computer Science, Simon Nilsson identifies two approaches for starting the modernization process:

  • White-box modernization or software reengineering: reverse-engineering the application to learn about its components, its ecosystem, and how it works.
  • Black-box modernization: looking at how the application interacts in its operating context (inputs and outputs) to learn about it.
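The black-box approach can be made concrete with characterization tests: run the opaque routine over sample inputs and record its outputs, so any replacement must reproduce them. The sketch below is a hypothetical illustration; `legacy_price` and `record_behaviour` are invented names, not part of any real system.

```python
# Hedged sketch of black-box modernization: learn a legacy routine's
# behaviour from its inputs and outputs alone.

def legacy_price(quantity, unit_cost):
    # Hypothetical stand-in for opaque legacy logic we only observe,
    # never read: a bulk discount kicks in at 100 units.
    total = quantity * unit_cost
    return round(total * 0.9, 2) if quantity >= 100 else round(total, 2)

def record_behaviour(func, cases):
    """Run the black box over sample inputs; map each input to its output."""
    return {args: func(*args) for args in cases}

# The recorded pairs become characterization tests that any
# modernized implementation must reproduce.
observed = record_behaviour(legacy_price, [(10, 2.5), (100, 2.5), (250, 1.0)])
for args, result in observed.items():
    print(args, "->", result)
```

In practice the recorded pairs would be saved and replayed against the new implementation during and after the migration.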

Once the application becomes known, it is possible to choose a fitting solution. Simon divides modernization solutions into four categories:

  • Automated migration: using tools such as parsers to migrate languages, databases, and platforms.
  • Re-hosting: hosting the legacy system on a different platform.
  • Package implementation: using off-the-shelf software.
  • SOA integration: taking the business logic and data embedded in the legacy application and turning them into services.
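As a rough sketch of the SOA integration category above, a rule buried in a monolith can be exposed behind an HTTP endpoint while the legacy code itself stays untouched. Everything here is hypothetical: `apply_discount`, the tier names, and the port are assumptions for illustration only, using Python's standard-library HTTP server.

```python
# Hedged sketch of SOA integration: wrap embedded legacy business
# logic in a thin service layer without rewriting the logic itself.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def apply_discount(amount: float, customer_tier: str) -> float:
    # Hypothetical legacy rule, previously buried inside a monolith.
    rates = {"gold": 0.15, "silver": 0.05}
    return round(amount * (1 - rates.get(customer_tier, 0.0)), 2)

class DiscountService(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parse the JSON request body and delegate to the legacy routine.
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = apply_discount(body["amount"], body["tier"])
        payload = json.dumps({"discounted": result}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To run the service (this call blocks):
#   HTTPServer(("localhost", 8080), DiscountService).serve_forever()
```

The point of the pattern is the seam: other systems now consume the rule over HTTP, so the monolith behind it can be retired piece by piece.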

When figuring out the most appropriate modernization solution, more often than not, the available budget ends up being as important as meeting the system’s very specific requirements. Yet some solutions fit many cases without being overly expensive. For instance, low-code development platforms are a plausible and economical option for modernization. Consulted about this technology, Richard Reukema finds room for low-code development platforms in modernization, yet he describes them as double-edged swords. Richard understands that low-code environments increase the complexity of the organization by being inaccessible to the different departments in which they operate. In his words, “low code environments in corporate settings are extremely difficult to manage because managers have credit cards and (therefore) they have to compute thousands of operations”.

 

Solutions for modernizing legacy systems

The following is a brief overview of different solutions for application modernization.

Azure Platform as a Service (PaaS)

PaaS is a complete cloud solution that includes everything from infrastructure and middleware to development tools and database management systems. The idea behind PaaS is to cover the full application lifecycle, including building, testing, and deploying.

Thanks to PaaS, organizations don’t have to manage a multitude of software licenses, and instead, rely on a unified set of applications and services managed by the PaaS provider.

Containerization

Containers are packages that bundle an application’s code with the bare minimum software it needs to run reliably on any infrastructure. Containers are isolated from the rest of the system, making them a great solution for dealing with incompatibilities, a common problem in legacy systems. At scale, containers are managed and automated by orchestration tools, Kubernetes being one of the most popular.
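To make the idea tangible, a container image is typically described in a short build file. The fragment below is a minimal, hypothetical Dockerfile for a Python application; the base image, file names, and entry point are assumptions for illustration, not taken from any system discussed here.

```dockerfile
# Hypothetical Dockerfile: bundle an app with exactly the runtime
# it needs, isolated from the host's own libraries.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Because every dependency is pinned inside the image, the same container runs identically on a developer laptop, an on-premise server, or a cloud host.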

Microservices

Containers have made it possible to build scalable architectures out of numerous small services. Disassembling software this way allows it to be used and maintained more efficiently. This might suggest that microservices are the solution for modernizing every legacy monolithic application. However, microservice architecture is far from perfect. According to Richard, the problem with microservices is excessive granularity at scale. “I’ve seen microservices used to the point where they’re so granular, that there are so many microservices, that there are so many REST calls, the whole system just can’t operate at scale”, he says.
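Richard’s point about granularity can be illustrated with simple arithmetic: if each service handling a request fans out to several downstream services, the number of remote calls grows geometrically with call depth. The model below is a hypothetical back-of-the-envelope sketch, not a description of any real architecture.

```python
# Hedged sketch: why over-granular microservices multiply REST calls.
# Each service fans out to `fanout` downstream services, and calls
# chain to a depth of `depth` hops.

def total_remote_calls(fanout: int, depth: int) -> int:
    """Count the remote calls triggered by one incoming request."""
    if depth == 0:
        return 0
    # Calls to immediate dependencies, plus the calls each of them makes.
    return fanout + fanout * total_remote_calls(fanout, depth - 1)

# A coarse-grained design: 3 services, 2 hops deep.
print(total_remote_calls(3, 2))   # 12 remote calls per request
# An over-granular design: 10 services, 4 hops deep.
print(total_remote_calls(10, 4))  # 11110 remote calls per request
```

Each of those calls adds network latency and a failure point, which is why decomposing a monolith too finely can leave the system unable to operate at scale.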

 

We have seen how critical modernizing legacy applications is, how it starts, and what solutions are available. The next step is to find the right technology and implementation for your case. Get in touch with Optimus to learn more about solutions for your organization.