
Are we rowing backwards from public cloud?

For the last few years the mantra has been the same: “cloud is the destination”, by which we generally mean public cloud.  The economics are obvious; why would organisations choose to maintain their own fixed-cost data centres when they could use someone else’s facility and only pay for what they use?

For startups and smaller organisations who had no technical legacy to worry about, or who ran mainly on SaaS offerings, this was great – they could remove those servers lurking under desks or in cupboards.  However, for most of the customers I deal with (i.e. large corporate enterprises) it wasn’t this simple.


Back in 2014 there were confident predictions that over 80% of enterprise workloads would be running in public cloud within 5 years.  Three years later, only a handful of those workloads have made the switch, and for the most part that’s been ‘lift and shift’ – moving from VMware on-premise to VMware in the cloud, for example.  Enterprise customers aren’t seeing the often-promised flexibility and performance improvements which come from true refactoring of applications for cloud – their code is just too complex.

But it’s not just the code that’s a problem.  There is a reluctance on the part of many business leaders to commit to ‘letting go’ of their data.  I’ve heard it said that this is about mindset and perception (after all, pretty much every major data breach has been from a customer’s own data centre).  However, there are often challenging legislative and regulatory reasons why this may be the case.

In the EU, for example, data about EU citizens has to stay within the European Economic Area, unless the country it is sent to can maintain an “adequate level of protection”.  What does that mean?  How do you prove it?  What controls do you have over the nationality of data centre employees in third party countries?

Yet enterprises want the flexibility that cloud brings just as much as anyone else – after all, their survival could depend on it.  New disruptors with no technical debt are using cloud-native technology to take customers from the incumbents.

And so we’re seeing a subtle shift – new offerings bringing the benefits of cloud, but in the customer’s own data centre – not just VMware, but containers, Cloud Foundry, even serverless compute platforms, all designed to run on kit which the customer already has.

In June IBM announced IBM Cloud Private, a software-based IaaS and PaaS platform combining Kubernetes (for containers) and Terraform (for VMware).  It’s aimed squarely at those customers who run their business with Java Enterprise applications, and who want to migrate to more cloud-like technologies in their own space.  Microsoft has Azure Stack, for customers coming from a .NET world.

No surprises there – IBM and Microsoft have always been big in the enterprise space.  What’s interesting is that Google have teamed up with Pivotal and VMware to bring their flavour of containers to an on-premise audience, and even AWS have acknowledged that customers may still want to use their own servers.

The end of the private data centre will happen, one day.  But that day may just be a little further off now, as enterprises are given more options for flexible deployment behind their own firewall.  When legislation catches up with technology, or encryption improves, that may change, but it’s still a long way off.


What’s Your Cloud Exit Strategy?

One of the promises of cloud is workload portability: the ability to move workloads seamlessly between on- and off-premise, and between different cloud providers. But sometimes making the wrong architectural choices can hinder that flexibility. If your cloud provider suddenly increases their prices or has technical issues, can you quickly and easily move to another?

Have you taken advantage of the underlying platform? Cloud providers compete not just on price and availability, but on additional features above and beyond the standard VM platform (such as Amazon’s IAM). It can be tempting to build your virtual machines and applications to use these features, but what happens if you want to change provider? Will these features still be available, or will you have to undertake an expensive redevelopment exercise? I met recently with Fedr8, who have a useful code analysis tool which can tell you whether your application will run in a given cloud environment. 
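
One common defence is to hide provider-specific services behind your own interface, so that only one class changes if you switch. Here’s a minimal Python sketch of the idea – the class names and bucket key are mine, purely for illustration – with the AWS-specific boto3 calls kept in one swappable place:

```python
from abc import ABC, abstractmethod

import boto3  # AWS SDK -- the only provider-specific dependency here


class ObjectStore(ABC):
    """Provider-neutral interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...


class S3Store(ObjectStore):
    """AWS-specific implementation, kept behind the neutral interface."""

    def __init__(self, bucket: str):
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)


def save_report(store: ObjectStore, report: bytes) -> None:
    # Application code depends only on ObjectStore; switching provider
    # means writing another subclass (e.g. for OpenStack Swift), not
    # rewriting the application.
    store.put("reports/latest.pdf", report)
```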

Have you used standard PaaS services? Most PaaS providers will offer backend services such as databases, reporting, monitoring and so on, but make sure you choose well-supported options. MongoDB and Redis might be widely available, but if you choose a more esoteric database, you’re potentially limiting your choice of providers.  Of course, this can be a trade-off. If you want to take advantage of IBM’s Watson cognitive computing, for example, you’re going to have to use IBM’s Bluemix.
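
One simple habit that preserves this flexibility is taking backing-service connection details from configuration rather than hard-coding them. A short Python sketch using the redis-py client (the REDIS_URL variable name is just an example):

```python
import os

import redis  # the redis-py client

# The connection string comes from the environment (or a service binding),
# so pointing the application at a different provider's Redis is a
# configuration change rather than a code change.
cache = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379/0"))

cache.set("session:42", "some-session-state", ex=3600)   # expire after an hour
print(cache.get("session:42"))
```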

How easy is it to ‘flick the switch’? OK, so you’ve decided to move from Amazon to Rackspace. How do you configure that within your own systems? How do you stop users from doing what they’ve always done? The answer lies in a good cloud orchestration platform – a front-end which hides all the complexity from the users. If a developer needs a new dev/test platform, they can have one – it will exist with a new provider, but the user doesn’t need to know, nor should they care. An orchestrator can also manage long-running environments – shutting them down and moving them without disrupting users.
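
To illustrate what that front-end might look like, here’s a deliberately toy Python sketch. The provider ‘drivers’ just return dummy records – a real orchestrator would call the provider SDKs – but the point is that the developer calls one function and never sees which provider was chosen:

```python
CURRENT_PROVIDER = "rackspace"   # flipped from "aws" by the operations team


def provision_dev_environment(requested_by: str) -> dict:
    # Stand-in drivers; in reality these would invoke provider APIs.
    drivers = {
        "aws": lambda: {"provider": "aws", "id": "i-dev-001"},
        "rackspace": lambda: {"provider": "rackspace", "id": "rs-dev-001"},
    }
    env = drivers[CURRENT_PROVIDER]()   # the requester never sees this choice
    env["owner"] = requested_by
    return env


print(provision_dev_environment("dev-team-a"))
```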

What about commercials? It may sound obvious, but if you’ve signed a contract with a cloud provider that commits you to future payments, then nothing architectural is going to help you. Be aware of the contract terms you’re signing up to, and try to stay on pay-per-use until you’re sure you’re happy with your provider. 

What happens to ‘dead’ environments? One of the reasons organisations choose to leave a provider is regulatory – a government decides that customer data can only be held in-country, for example. So what happens when you click the ‘delete’ button on a virtual machine? Is the provider contractually responsible for securely destroying that data? If not, then it should definitely be part of your lifecycle management processes. 
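
If that responsibility does sit with you, it’s worth making data destruction an explicit step in your teardown scripts rather than an assumption. Here’s a sketch using AWS’s boto3 SDK – the region is illustrative, and it assumes EBS-backed instances:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")


def decommission(instance_id: str) -> None:
    """Terminate a VM and explicitly delete any EBS volumes that would
    otherwise be left behind, so we don't rely on the provider to do it."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]

    ec2.terminate_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_terminated").wait(InstanceIds=[instance_id])

    for vol in volumes:
        # Volumes flagged delete-on-termination are removed automatically;
        # everything else has to be deleted (or securely wiped) by us.
        if not vol["Attachments"][0].get("DeleteOnTermination"):
            ec2.delete_volume(VolumeId=vol["VolumeId"])
```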

In summary, moving from one cloud provider to another should be easy, but it can have challenges. A little up-front thinking, combined with a good orchestration platform for governance, can go a long way.

Public, Private, Hybrid, SaaS? Dispelling Some Myths

There seems to be much confusion over the various cloud business models, so let’s start by dispelling the biggest myth of all: there is no cloud.  All of your ‘cloud’ applications are running on a real CPU, with real RAM and real hard disk, in a real data centre.

Once you accept that, it becomes easier to understand the models.  Firstly, on-premise and off-premise.  If an application is running on-premise then it’s in your data centre – you are paying for power, cooling, tech support, etc.  If it’s off-premise then it’s running in somebody else’s data centre.

Now we need to consider the deployment and business model – private, public, hybrid or SaaS?

Private cloud means that your workloads are running on a dedicated physical machine.  That could be in your data centre (in which case you probably own the machine), or someone else’s data centre (in which case you’re probably renting it).  That machine could be ‘bare metal’ (i.e. you have to install the operating system, hypervisors and everything else), or it could be pre-loaded with a full cloud stack such as OpenStack.  Whichever deployment you choose, with a private cloud there is no one else on the server.  But it typically costs more.

Public cloud means that your workloads are running on a shared physical server.  Your bit will be isolated in one or more VMs (virtual machines), but you may get issues with ‘noisy neighbours’ hogging the resources of the physical server with their heavy workloads.  Public cloud is always rented – you never own these machines – but like private cloud you can get either ‘empty’ VMs or pre-built cloud stacks.  It depends on how much work you want to do in setting up.

Hybrid cloud is a growing phenomenon.  This involves running workloads in a private, on-premise cloud for most of the time, but ‘bursting’ out to an off-premise, public cloud at certain times – to cover peak trading periods, for example, or to cover downtime due to server maintenance in your own data centre.
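
The decision logic behind bursting can be surprisingly simple. Here’s a toy Python sketch – the threshold, monitoring and deployment functions are stand-ins for whatever tooling you actually run:

```python
BURST_THRESHOLD = 0.80   # start bursting at 80% on-premise utilisation


def onprem_utilisation() -> float:
    return 0.86          # stand-in: would come from your monitoring system


def deploy_on_premise(workload_id: str) -> None:
    print(f"{workload_id}: deployed to on-premise cloud")


def deploy_to_public_cloud(workload_id: str) -> None:
    print(f"{workload_id}: burst out to public cloud")


def place_workload(workload_id: str) -> None:
    if onprem_utilisation() < BURST_THRESHOLD:
        deploy_on_premise(workload_id)
    else:
        deploy_to_public_cloud(workload_id)   # peak trading, maintenance, etc.


place_workload("web-frontend-42")
```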

Note that for private, public and hybrid cloud, you will need to manage the licences for whatever you’re running in those clouds.  You will also almost certainly need some cloud orchestration capability, or you’ll spend a lot of effort on administration and monitoring of your cloud workloads.

Software-as-a-Service (SaaS) is different.  With this model you don’t have to set anything up at all – the software vendor does that.  You pay to use their cloud, and everybody else’s data is in the same cloud as yours – just separated by a username and password.  If you’ve used Gmail, or Netflix, or Adobe Creative Cloud – then you’ve used SaaS.  The advantages are that there is no set-up time – just create an account and go – and that you don’t have to think about licensing.

Generally, private cloud gives you more control than public or SaaS, but is more expensive and more complex to set up.  Choose the model that best fits each particular workload.

What is Cloud Anyway?

Most organisations have legacy applications, running in a data centre, usually on mainframes or other servers.  These so-called ‘Systems of Record’, such as payroll, banking transactions, inventory, insurance policy data, and so on, often use specific applications which were not written with flexibility in mind. They are frequently tied to specific pieces of hardware or specific platforms, and moving them to more modern platforms can be difficult.

This creates inefficiencies.  If one of the servers is running at capacity, then you need to buy a bigger server to run that application, even if there is spare capacity elsewhere in the data centre.

This brings us to workload portability.  Industry standard technologies such as OpenStack and Docker can help us to make our application workloads portable – in other words, we can take them off their fixed, legacy system and run them on any suitable machine in our data centre – typically a rack of commodity servers with OpenStack installed on them.

Congratulations! You’ve created a cloud environment, and your portable application workloads can be placed there.  Now, if you run out of capacity, you can simply add more servers.
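
For example, using the Docker SDK for Python, the same container image starts unchanged on any Docker host – a commodity server in your rack or a rented VM at a provider – with only the client’s connection details differing (the nginx image below is just a stand-in):

```python
import docker

# Connects using the local environment (DOCKER_HOST etc.); pointing this at
# a different Docker host changes nothing in the application workload itself.
client = docker.from_env()

container = client.containers.run(
    "nginx:latest",          # any image from your registry
    detach=True,
    ports={"80/tcp": 8080},  # expose the container on port 8080 of the host
)
print(container.id)
```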

Even better, the same applies to other data centres.  You can rent a cloud environment from an external provider, such as Amazon, SoftLayer or Rackspace, and deploy the same portable workloads there as well – so you can cover peak periods, or simply remove some of the fixed costs from your own data centre.

But what if your workloads are very predictable and unchanging? Innovation will change all that.  A single app used by your customers to access order data, bank transactions, etc, can radically alter the workload profile of your critical applications.  Moving to a more flexible and elastic environment can help to address that.

Of course, with great flexibility comes great responsibility.  In the old world, your mission-critical application ran on ‘that box over there’.  In the new world, you may not even know which country it’s running in.  You therefore need to make sure you have good visibility and governance of your cloud environments.  You need to understand what is running where, what the policies are for data and application placement, and who has agreed to any changes.

A cloud orchestration layer will provide this.  It can make sure that, for example, you always have your critical applications spread across multiple, geographically separate data centres – in case there is a power failure or some other disaster at one of them.  Or, if you’re a European company, that an application with Personally Identifiable Information is always placed within the European Union.
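
Under the covers, those placement rules are just policy checks the orchestration layer evaluates before deploying anything. A toy Python sketch, with made-up region names and workload flags:

```python
EU_REGIONS = {"eu-west", "eu-central", "eu-north"}


def allowed_regions(workload: dict, candidate_regions: set) -> set:
    """Regions this workload may be placed in, given its data policies."""
    regions = set(candidate_regions)
    if workload.get("contains_pii"):
        regions &= EU_REGIONS          # PII must stay inside the EU
    return regions


def placement_ok(workload: dict, chosen_regions: set) -> bool:
    if not chosen_regions <= allowed_regions(workload, chosen_regions):
        return False
    if workload.get("critical") and len(chosen_regions) < 2:
        return False                   # must survive the loss of one data centre
    return True


print(placement_ok({"contains_pii": True, "critical": True},
                   {"eu-west", "eu-central"}))   # True
print(placement_ok({"contains_pii": True}, {"us-east"}))  # False
```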

In summary, making your workloads portable and properly orchestrating them will help to create a flexible and elastic environment for your critical applications, convert fixed costs into variable ones, and provide a solid foundation for innovation.

Hello Cloud!

For the last few years I’ve blogged on Retail IT Architecture.  Now my job role has changed to be more focused on cloud, and less on retail.  So from now on I’ll be blogging here, at Architecting the Cloud, bringing you my thoughts, observations and general comments on what’s happening in the world of cloud and IT innovation.  Enjoy!