Are we rowing backwards from public cloud?

For the last few years the mantra has been the same: “cloud is the destination”, by which we mean public cloud.  The economics seem obvious: why would organisations maintain their own fixed-cost data centres when they could use someone else’s facility and pay only for what they use?

For startups and smaller organisations with no technical legacy to worry about, or which ran mainly on SaaS offerings, this was great – they could remove those servers lurking under desks or in cupboards.  However, for most of the customers I deal with (i.e. large corporate enterprises) it wasn’t this simple.

Colocation

Back in 2014 there were confident predictions that over 80% of enterprise workloads would be running in public cloud within five years.  Three years later, only a handful of those workloads have made the switch, and for the most part that has been ‘lift and shift’ – moving from VMware on-premise to VMware in the cloud, for example.  Enterprise customers aren’t seeing the often-promised flexibility and performance improvements that come from truly refactoring applications for cloud – their code is just too complex.

But it’s not just the code that’s a problem.  Many business leaders are reluctant to commit to ‘letting go’ of their data.  I’ve heard it said that this is about mindset and perception (after all, pretty much every major data breach has been from a customer’s own data centre).  However, there are often challenging legislative and regulatory reasons behind it too.

In the EU, for example, personal data about EU citizens must stay within the European Economic Area unless the destination country ensures an “adequate level of protection”.  What does that mean?  How do you prove it?  What control do you have over the nationality of data centre employees in third countries?

Yet enterprises want the flexibility that cloud brings just as much as anyone else – after all, their survival could depend on it.  New disruptors with no technical debt are using cloud-native technology to take customers from the incumbents.

And so we’re seeing a subtle shift – new offerings bringing the benefits of cloud, but in the customer’s own data centre – not just VMware, but containers, Cloud Foundry, even serverless compute platforms, all designed to run on kit which the customer already has.

In June IBM announced IBM Cloud Private, a software-based IaaS and PaaS platform combining Kubernetes (for containers) and Terraform (for VMware).  It’s aimed squarely at those customers who run their business on Java Enterprise applications, and who want to migrate to more cloud-like technologies in their own space.  Microsoft has Azure Stack, for customers coming from a .NET world.

No surprises there – IBM and Microsoft have always been big in the enterprise space.  What’s interesting is that Google have teamed up with Pivotal and VMware to bring their flavour of containers to an on-premise audience, and even AWS have acknowledged that customers may still want to use their own servers.

The end of the private data centre will come, one day.  But that day may be a little further off now, as enterprises are given more options for flexible deployment behind their own firewall.  When legislation catches up with technology, or encryption matures, the balance may shift again – but that’s still a long way off.
