Public cloud is now mainstream, and more businesses are going “all in” with multi-cloud, hybrid or blended approaches – yet among those who have not started, many misunderstandings remain and many cloud myths persist.

For businesses that are early on their journey to cloud, here are the common beliefs that I need to puncture:

Cloud is not secure

The first assumption that people make is that cloud is inherently insecure. The truth is that cloud technologies are considerably more capable and functional – yet it is very easy to misconfigure them and leave everything wide open.

There is an assumption that cloud datacentres are realms of neatly labelled racks, where each customer’s systems are identified so that a malicious person could grab a hard drive full of their data. Or that all their data sits in a public pool, where a wandering script kiddie can simply search for it and scrape off specific files.

In reality, even the operators of cloud datacentres are unlikely to be able to pinpoint which hard drives contain a specific customer’s data, or even which physical server it sits on – as data and servers are distributed and replicated widely across racks, rows and halls.

Cloud security has different paradigms, which need to be considered differently from physical infrastructure – network perimeters and trust layers are no longer the method of protection. Security for cloud changes direction to AAA (authentication, authorisation and accounting), encryption and signatures, protecting data and applications instead of making a single firewall the focus. There is a move to “zero trust” models, which are inherently more secure than office networks – nothing is trusted as safe, and everything is checked and verified at every stage. In recent years there have been hacks that targeted only the remaining on-premises systems.
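
To make “checked and verified at every stage” concrete, here is a minimal sketch of per-request verification using an HMAC signature, assuming a shared secret distributed out of band – the names and secret handling are illustrative, not any particular provider’s API:

```python
import hashlib
import hmac
import time

SECRET = b"example-shared-secret"  # illustrative only; use a real secret store

def sign_request(method: str, path: str, body: bytes, timestamp: int) -> str:
    """Produce an HMAC-SHA256 signature over the request contents."""
    message = f"{method}\n{path}\n{timestamp}\n".encode() + body
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify_request(method: str, path: str, body: bytes,
                   timestamp: int, signature: str, max_skew: int = 300) -> bool:
    """Zero-trust style check: every request is verified, nothing is assumed safe."""
    if abs(time.time() - timestamp) > max_skew:  # reject stale or replayed requests
        return False
    expected = sign_request(method, path, body, timestamp)
    return hmac.compare_digest(expected, signature)  # constant-time comparison

```

This is the same pattern the big providers use for API request signing (AWS Signature Version 4, for example), scaled down to its essentials.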

Cloud is cheaper

Unfortunately, people get burned by this one – they assume that they can simply transition their servers to a public cloud service (in a “lift and shift” approach) and save on costs. Well, they might be able to save – if they completely get rid of their physical datacentre, move everything to public cloud, right-size their systems, evolve applications to SaaS, and reduce both staff and complexity.

However, many people get “bill shock” from their first use of public cloud, with higher than expected costs. Common causes include development or test systems left running 24×7, over-sized systems (because they “need” a 16 CPU server), and the multiple standby systems they are used to designing for physical infrastructure.

Cost management is a new responsibility when you move to cloud – monitoring that systems are not costing more than expected, and that no new costs have popped up from an engineer self-serving resources.
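
As a concrete example of that monitoring, here is a minimal sketch using boto3 (AWS’s Python SDK) that stops running instances tagged as dev or test – the tag convention and region are assumptions for illustration, and you would run something like this off-hours via a scheduler:

```python
import boto3

# Assumption: non-production instances carry a tag such as env=dev or env=test.
TAG_KEY = "env"
TAG_VALUES = ["dev", "test"]

def stop_idle_dev_instances(region: str = "eu-west-2") -> list[str]:
    """Find running dev/test instances and stop them to avoid 24x7 charges."""
    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.describe_instances(
        Filters=[
            {"Name": f"tag:{TAG_KEY}", "Values": TAG_VALUES},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids
```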

The better option is to transform applications to take advantage of the new technologies that work better in cloud, such as microservices, containers, SaaS, auto-scaling and elasticity. After all, IaaS is dead.
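
To show what elasticity means in practice, here is a hedged boto3 sketch that attaches a target-tracking scaling policy to an Auto Scaling group (the group name is assumed to already exist), so capacity follows demand instead of being sized for peak:

```python
import boto3

def attach_cpu_target_policy(group_name: str = "web-asg") -> None:
    """Scale the group to hold average CPU near 50% - capacity follows load."""
    autoscaling = boto3.client("autoscaling")
    autoscaling.put_scaling_policy(
        AutoScalingGroupName=group_name,  # assumed pre-existing group
        PolicyName="cpu-target-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        },
    )
```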

The point of cloud is not to be a cheap location to run your servers; it is a new approach that is more flexible, convenient and rapid to deploy. To get the best from cloud, you need to use the best of cloud – instead of dumping your legacy VMs on it and expecting it to save you money.

Server backups

This is an interesting one that people get wrong in both directions – they either don’t do backups at all, or they back up everything.

Some assume that backup is not required, because cloud service providers promise multiple copies and datacentre availability zones. But these measures protect the cloud platform, not your data. Backups are not just for DR; they are also needed for data corruption, deletion and compliance.
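
Replication will faithfully replicate a deletion or a corrupted write, so a separate protection is needed to recover from one. A minimal boto3 sketch that enables object versioning on an existing S3 bucket (the bucket name is hypothetical) – one such layer, complementing rather than replacing backups:

```python
import boto3

def enable_versioning(bucket: str) -> None:
    """Keep prior object versions so deletions and corrupt overwrites can be undone."""
    s3 = boto3.client("s3")
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

enable_versioning("example-data-bucket")  # hypothetical bucket name
```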

The alternative that I see is businesses backing up everything, sometimes to a different cloud provider (a good idea), or to on-premises datacentres (not such a good idea) – which effectively transfers every byte over the wire, consuming egress bandwidth (a horrible idea). The result is a massive bill for data transfer, plus the need to store cloud VMs that cannot be powered on outside their source cloud.

Again, this is the legacy of old-world thinking that does not take into account the technologies of cloud. The operating system is no longer something that needs to be backed up – stateless systems with automated deployment are the way to go. The focus of cloud backup should switch to the data and configuration, instead of the operating system and full application install of IaaS servers. This makes backups faster, smaller and more effective – after all, a backup’s function is to be restored.
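
As a sketch of that data-and-configuration focus, assuming the application’s state lives in a data directory and a config file (the paths and bucket name here are hypothetical), a backup can be as small as an archive pushed to object storage:

```python
import datetime
import tarfile

import boto3

# Hypothetical paths: only state and configuration are captured,
# never the operating system or the application binaries.
PATHS_TO_BACK_UP = ["/var/lib/myapp/data", "/etc/myapp/config.yaml"]
BACKUP_BUCKET = "example-backup-bucket"  # assumed to exist

def back_up_state() -> str:
    """Archive data + config and upload it; the OS is rebuilt, not restored."""
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = f"/tmp/state-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for path in PATHS_TO_BACK_UP:
            tar.add(path)
    boto3.client("s3").upload_file(archive, BACKUP_BUCKET, f"backups/state-{stamp}.tar.gz")
    return archive
```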

Too hard to manage

At a later stage in the cloud journey, a business may have started to consume microservices, loosely coupled message-based systems, stateless auto-scaling systems and serverless architectures. The perceived result is that the final combination is sprawling and difficult to manage, with multiple interfaces and paradigms.

This may be true in some cases, but this is where an effective architecture and documentation are essential, supported by consistent tagging and naming. Third-party tools can help, and the public cloud providers are rolling out better tools to manage this multi-level product use.
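
Consistent tagging is only useful if it is enforced, so a simple audit helps. Here is a minimal boto3 sketch that flags EC2 instances missing a set of required tags – the tag keys are an assumed naming convention, not a standard:

```python
import boto3

REQUIRED_TAGS = {"owner", "project", "env"}  # assumed naming convention

def find_untagged_instances(region: str = "eu-west-2") -> dict[str, set[str]]:
    """Return instance IDs mapped to the required tags they are missing."""
    ec2 = boto3.client("ec2", region_name=region)
    missing: dict[str, set[str]] = {}
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                gap = REQUIRED_TAGS - tags
                if gap:
                    missing[instance["InstanceId"]] = gap
    return missing
```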

Locked in

There is a concern that many raise when the discussion changes from a lift-and-shift IaaS deployment towards an evolution to cloud services such as PaaS and serverless technologies: that by re-architecting servers into applications that use cloud platforms, you lock yourself in to that one cloud provider.

This is probably largely true, but if it is a concern then you can factor it into your design. Using containers and multi-cloud designs can relieve this concern.
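
One common way to factor it in is to keep provider-specific calls behind a thin interface, so only an adapter changes per cloud. A minimal sketch – the class and method names are my own, not any provider’s API:

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Provider-neutral interface: application code depends only on this."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3BlobStore(BlobStore):
    """AWS adapter - the only place boto3 appears."""

    def __init__(self, bucket: str) -> None:
        import boto3
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

# A GCS or Azure adapter would implement the same interface, so switching
# providers touches the adapter, not the application.
```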

Cloud is unreliable

This concern is often driven by two things – news of major outages, or experience tainted by a poor Internet connection. In fact, reliability is a major focus of the cloud providers – AWS, Azure and GCP invest millions into making their services more reliable and able to tolerate failures.

It is far more likely that the unreliable service is your own application – unless you re-design it for failure. Something as simple as a small delay in your application’s communication with another component can be seen as a “cloud failure”.
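
“Designing for failure” starts with small habits, such as retrying transient errors with backoff instead of treating the first timeout as an outage. A minimal, dependency-free sketch (the wrapped function is hypothetical):

```python
import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def call_with_retries(operation: Callable[[], T],
                      attempts: int = 5,
                      base_delay: float = 0.2) -> T:
    """Retry a flaky call with exponential backoff and jitter.

    A brief network blip is absorbed here rather than surfacing
    to users as an apparent "cloud failure".
    """
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:  # in real code, catch only transient error types
            if attempt == attempts - 1:
                raise  # out of attempts: let the caller handle it
            delay = base_delay * (2 ** attempt) * (0.5 + random.random())
            time.sleep(delay)
    raise RuntimeError("unreachable")

# Usage (hypothetical function): call_with_retries(lambda: fetch_inventory())
```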

This is, again, about re-focusing on the capabilities of cloud, instead of just dumping your servers in the cloud and expecting them to be cheaper, more secure, faster, more reliable and portable on a whim.

Cloud myths that need to die

These are just a few of the cloud myths that need to die: that cloud is insecure, cheap, unreliable, difficult to manage, and that you are locked in. If you approach cloud with the same mentality as physical servers and datacentres, then you will fall for these misunderstandings, just as people did with virtualisation.
