
Navigating multi-cloud realities: beyond the hype and misconceptions

Dear reader, you probably found your way to this article because, like many others, you are considering implementing a multi-cloud strategy for your organisation. Perhaps you were led to believe that this is necessary for adequate risk control and to avoid vendor lock-in. But does a multi-cloud strategy live up to its promises?


Based on our experience in cloud transformation, we will take a look at what multi-cloud actually is, why so many business leaders appear to think they are already in a multi-cloud setup, and how they got there. We’ll also dive into the hype and who is fuelling it. Next, we’ll examine the concept of cloud vendor lock-in and how far it’s really worth going to be vendor agnostic. Finally, we’ll examine whether multi-cloud is the best (or even necessary) solution for business continuity.

Everyone is doing multi-cloud so it must be right, right?

You have probably seen the oft-cited figure that 87% of companies are already multi-cloud. You will also have seen it used as justification for a multi-cloud approach, because ‘if so many others are going that way, it must be the right thing to do’.

But stop, hang on: what do we actually mean by a multi-cloud approach? Many will say they’re using multiple clouds if they are, for instance, using one public hyperscaler alongside one or two SaaS services like Microsoft O365 or Salesforce Cloud.

That is neither ill-advised nor the kind of multi-cloud scenario we wish to address in this blogpost. Rather, we will examine an approach where multiple public clouds are used for IaaS or PaaS. We’ll also try to understand the drivers that lead companies to opt for such an approach and explore whether it is advisable.

How do companies approach multi-cloud?

Let’s briefly come back to that 87% figure and its use in justifying multi-cloud strategies. Often, that justification rests on a hidden assumption that all those companies made an informed, strategic decision to go multi-cloud. Think about that for a minute and consider how likely that is, compared with all the other scenarios that might have played out and left a company with workloads distributed across several public clouds. In our experience, the reality is that, for a multitude of reasons, it just happened and was not the result of some overarching, clever strategic decision.

Sometimes, corporate mergers and acquisitions result in a multi-cloud setup: an acquired company may be on a different public cloud from the buyer, and the effort to transfer workloads onto a single one is too large. After all, everything is already in place in the target company, from contractual arrangements to talent. Hence, the investment to homogenise is unlikely to be justified or desirable.

In other instances, we’ve seen departments simply go rogue, especially in larger organisations. Perhaps a central IT function has indeed decided that a particular public cloud shall be used, but a specific department semi-autonomously manages its own IT and its vendors have convinced the team to go with another cloud provider.

So, the next time someone tries to use the 87% figure to convince you that multi-cloud is the right strategy, exercise critical thinking and consider why they are trying to sway your opinion. 

This brings us neatly to the next part of this article – narrative control. 

The narrative around multi-cloud: separating fact from fiction

The narrative control surrounding multi-cloud strategies can sometimes resemble propaganda, an attempt to hoodwink the audience into a particular viewpoint. So, who’s doing that and what are their motives? 

In a hilarious article on this same topic, ‘Multi-cloud is the Worst Practice’, Corey Quinn notes that multi-cloud advocates are either: 

  1. Declining vendors, realising that if you don’t go multi-cloud, they’ll have nothing left to sell you.

  2. Niche players, i.e. any cloud vendor outside of the three main ones (AWS, Azure and GCP).

(Corey has more interesting musings, so please do read his article after this one.)

Narrative control takes many forms, from argumentation that a bit of critical reasoning often reduces to a straw man, through to outright co-opting of terminology.

To give you a couple of examples of the latter, how do you define a ‘cloud native application’? Likely, you’ll answer that it’s an application that can run on any cloud. But hang on: the same question five years ago would probably have yielded the right answer, which is that it’s an application designed and developed to take full advantage of cloud native managed services. We will leave it as an exercise to the reader to consider who has engendered this changed definition and why.

Another example of this is ‘hybrid multi-cloud’. If you do a web search for ‘hybrid multi-cloud success stories’, the majority of the results will be examples of companies that have successfully employed a vendor’s automation to transfer virtual machines (VMs) from on-premises hypervisors to a public cloud. But if all we have now is a scenario where some VMs run on-premises and others on a (single) public cloud, isn’t that just what we have been calling ‘hybrid cloud’? How and why did that ‘multi-’ creep in? An on-premises hypervisor running some VMs does not meet the definition of a cloud, so ‘multi-cloud’ has no business here.

There are many reasons for business leaders to decide that they need to follow a multi-cloud approach, some more valid than others. We’ll examine the most popular ones below. But for now, the takeaway should be to always be mindful and exercise critical thinking.

Debunking the multi-cloud strategy: beyond the lock-in fallacy

For many organisations, one of the most important reasons for choosing a multi-cloud strategy lies in its promise to mitigate vendor lock-in risks by using the services of multiple major cloud providers. This approach is often driven by concerns over the potential expenses and operational setbacks associated with migrating services in the future. 

However, the adoption of a multi-cloud strategy is not without its own hurdles and expenses. Each leading cloud provider, including AWS, GCP, and Azure, while offering similar services, has distinct operational and management styles, especially in areas like automation, identity and access management, and security controls. 

To mitigate the risk of dependence on a single provider, organisations frequently build cloud abstraction layers on top of each major provider’s services. However, it’s becoming clear that this method, while addressing vendor lock-in, comes with significant costs.

Interestingly, the effort required to develop vendor-neutral workloads using abstraction layers can limit the ability to fully leverage the specialised services of each provider. Organisations may find themselves confined to basic services, tasked with the ongoing effort to bridge the gaps and add missing higher-level functions. 
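To make that trade-off concrete, below is a minimal sketch of such an abstraction layer for object storage, written in Python and assuming the boto3 and google-cloud-storage client libraries are available; the interface and class names are hypothetical and purely illustrative. Note how the shared interface can only expose what both providers have in common.

    from abc import ABC, abstractmethod


    class ObjectStore(ABC):
        """Vendor-neutral interface: only the operations every provider offers."""

        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...

        @abstractmethod
        def get(self, key: str) -> bytes: ...


    class S3ObjectStore(ObjectStore):
        """AWS implementation backed by boto3."""

        def __init__(self, bucket: str):
            import boto3
            self._s3 = boto3.client("s3")
            self._bucket = bucket

        def put(self, key: str, data: bytes) -> None:
            self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

        def get(self, key: str) -> bytes:
            return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


    class GcsObjectStore(ObjectStore):
        """GCP implementation backed by google-cloud-storage."""

        def __init__(self, bucket: str):
            from google.cloud import storage
            self._bucket = storage.Client().bucket(bucket)

        def put(self, key: str, data: bytes) -> None:
            self._bucket.blob(key).upload_from_string(data)

        def get(self, key: str) -> bytes:
            return self._bucket.blob(key).download_as_bytes()

Everything provider-specific – object lock, lifecycle tiering, dual-region buckets, event notifications – falls outside this lowest-common-denominator interface and has to be rebuilt on top of it or foregone, which is exactly the hidden cost described above.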

This raises a critical question – is the multi-cloud approach, with its inherent challenges and required investments, genuinely an effective solution to vendor lock-in or is it an illusion? A detailed examination often reveals that for some, the expected return on investment is unattainable, prompting a reconsideration of the multi-cloud appeal. A more cost-effective approach may involve re-architecting and refactoring workloads during the transition between major cloud providers. 

In summary, the appeal of multi-cloud strategies lies in their promise of flexibility and risk mitigation, but a deeper dive reveals complexities and challenges. A careful evaluation often reveals that the freedom from lock-in might be more perception than reality. If time to market is a priority, swiftly moving workloads to production without the complications of building abstraction layers might prove to be more practical. 

If you want to dive deep into the fallacies of lock-in avoidance, you might want to read Gregor Hohpe’s article next: https://martinfowler.com/articles/oss-lockin.html

The multi-cloud strategy, touted as a solution to vendor lock-in, poses challenges and expenses, potentially questioning its efficacy.

Multi-cloud for business continuity: a critical evaluation

In cloud computing, anticipating and strategising for potential risks is crucial, making the multi-cloud approach a topic of significant debate. A large, though highly improbable, risk is the possibility of a major cloud provider like AWS, GCP, or Azure going out of business. The impact of such an event could be substantial. Hence, for businesses running critical operations in the cloud, having a contingency plan is not only wise but sometimes even legally required. 

To safeguard continuous operations, businesses often consider two strategies in the event their primary cloud provider becomes unavailable – switching to an on-premises backup site or using a backup cloud region operated by a different provider. The latter often leads to the adoption of multi-cloud operations, where workloads are run in either active-active or active-passive modes across different cloud providers. This approach, however, involves replicating the application architecture in both environments and avoiding provider-specific services not available in the alternate cloud. Despite the complexity and increased costs, which can dilute the cloud's value proposition, some argue that the benefits of multi-cloud strategies might outweigh these challenges. 

But let's take a step back and challenge this narrative. The assumption that hyperscalers will abruptly go out of business, leading to a rapid shutdown of their data centres, is flawed. In reality, even in scenarios of bankruptcy, mergers, or regulatory shutdowns, the process would be gradual, potentially overseen by interim management for years. No entity would enforce a shutdown causing extensive collateral damage to customers that rely on these cloud infrastructures. Such events would likely be preceded by warning signs, giving businesses ample time to respond—likely years rather than weeks. 

Therefore, active-active or active-passive multi-cloud setups are not the sole solution to ensuring business continuity if a hyperscaler faces demise. A more viable strategy involves migrating from one provider to another, with your original solution continuing to operate in the original cloud for some time. In our view, the better preparation is to fully embrace automation and infrastructure as code. This approach allows for quicker adaptation and eases migration to a new cloud home. 
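As a small illustration of what ‘automate everything’ can look like in practice, here is a minimal infrastructure-as-code sketch using Pulumi’s Python SDK with the pulumi_aws provider; the resource names are hypothetical. The point is not the specific tool: once the entire environment lives in version-controlled code like this, rebuilding it with a different provider means rewriting resource definitions rather than rediscovering years of manual configuration.

    import pulumi
    import pulumi_aws as aws

    # Object storage for application assets.
    assets = aws.s3.Bucket("app-assets", force_destroy=True)

    # The network the workload runs in.
    vpc = aws.ec2.Vpc("app-vpc", cidr_block="10.0.0.0/16")

    # Expose identifiers so other automation (CI/CD, configuration) can consume them.
    pulumi.export("assets_bucket", assets.id)
    pulumi.export("vpc_id", vpc.id)

The same discipline applies whichever IaC tool you prefer; what matters is that nothing about your environment exists only as undocumented clicks in a console.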

To sum up, multi-cloud isn’t the only or necessarily the best answer to business continuity. Alternatives often come with less complexity and lower upfront costs, and they also reduce the cost of post-disaster migration compared to multi-cloud setups. The key takeaway is – automate everything for greater agility and resilience.

PS: If you want to make more out of your cloud, you should read what our colleague Mark Venn wrote in his blogpost on cloud optimisation.  


Peter Bäck

Principal Cloud Consultant

Peter joined Zühlke in 2018 as the first Head Competence Unit in Singapore. In 2021, he moved to Switzerland to join the newly formed Cloud Practice unit in the capacity of Principal Business Consultant. He has extensive prior cloud experience with GCP during his time at Sonoport and with AWS at Kaplan Singapore. Peter holds a Master’s degree in Computer Science from Åbo Akademi University and looks back on a software engineering career spanning more than two decades.
