3 reasons not to repatriate cloud-based apps and data sets


Repatriation seems to be a hot topic these days as some applications and data sets return to where they came from. I’ve even been tagged in some circles as an advocate for repatriation, mostly because of this recent post.

Once again, my position is this: The overall goal is to find the most optimized architecture to support your business. Sometimes it's on a public cloud, and sometimes it's not. Or not yet.

Keep in mind that technology evolves, and the value of using one technology over another changes a great deal over time. I learned a long time ago not to fall in love with any technology or platform, including cloud computing, even though I’ve chosen a career path as a cloud expert.

How do you find the most optimized architecture? You work from your business requirements to your platform and not the other way around. Indeed, you’ll find that most of the applications and data sets that are going through repatriation never should have existed on a public cloud in the first place. The decision to move to the cloud was more about enthusiasm than reality.

So, today is a good day to explore reasons why you would not want to repatriate applications and data sets back to traditional systems from public cloud platforms. Hopefully this balances the discussion a bit. However, I’m sure somebody is going to label me a “mainframe fanboy,” so don’t believe that either.

Here we go. Three reasons not to move applications and data sets off public clouds and back on premises:

Rearchitecture is expensive

Repatriating applications from the cloud to an on-premises data center can be a complex process. It requires significant time and resources to rearchitect and reconfigure the application, which can erode much of the value of the move. Yet rearchitecting is usually needed for the applications and/or data sets to function in a near-optimized way back on premises. These costs are often too high to justify whatever business value you would see from the repatriation.

Of course, this mostly applies to applications that underwent some refactoring (changes to code and/or data) to move to a public cloud provider, not to move away from one. In many instances, these applications are poorly architected as they exist on public clouds, just as they were poorly architected on the on-premises systems they came from.

However, such applications are easier to optimize and refactor on a public cloud provider than on traditional platforms; the tools to rearchitect these workloads are typically better on public clouds these days. So, if you have a poorly architected application, it's usually better to deal with it on a public cloud rather than repatriate it, because the costs and trouble of moving it back are much higher.

Public clouds offer more agility

Agility is a core business value of remaining on a public cloud platform. Repatriating applications from the cloud often involves making trade-offs between cost and agility. Moving back to an on-premises data center can result in reduced flexibility and a slower time to market, which can be detrimental to organizations in industries that value agility.

Agility is often overlooked. People looking at the repatriation options often focus on the hard cost savings and don’t consider the soft benefits, such as agility, scalability, and flexibility. However, these tend to provide much more value than tactical cost savings. For example, rather than just comparing the cost of hard disk drive storage on premises with storage on a cloud provider, consider the business values that are less obvious but often more impactful.

Tied to physical infrastructure and old-school skills

Obviously, on-premises data centers rely on physical infrastructure that you must maintain yourself, which can be more susceptible to outages, maintenance issues, and other disruptions. This can result in lost productivity and decreased reliability compared to the high availability and scalability offered by public cloud platforms.

We tend to look at the rather few reports of cloud outages as proof that applications and data sets need to be moved back on premises. If you’re honest with yourself, you probably remember far more on-premises outages back in the day than anything caused by public cloud downtime recently.

Also, consider that finding traditional platform talent has been a challenge for the past few years as the better engineers reengineered their careers to cloud computing. You could find that having less-than-qualified people maintaining on-premises systems causes more problems than you remember. The “good old days” suddenly become the time when your stuff was in the cloud.

Like all things, this comes down to trade-offs. Just make sure you're asking the questions, "Should I?" and "Could I?" While you're answering those fundamental questions, look at the business, technology, and cost trade-offs for each workload and data set you're considering.

From there, you make a fair call, taking everything into consideration, with returning business value as the primary objective. I don't fall in love with platforms or technologies for good reason.
