AWS Series: Kubernetes & The Portability Illusion

In Practise Weekly Analysis

This is the first article in a short series from our recent dive into AWS. Ultimately, we are attempting to understand the durability of Amazon Web Services' cash flow.

We will explore three areas that we feel are particularly interesting and somewhat less well covered.

This first piece centres on the growth of Kubernetes and the risk that open-source technologies could commoditise Amazon's EC2 and S3 services.

In the second piece, we explore how and why Amazon created Graviton, its proprietary ARM-based processor, and the potential competitive advantage of being fully vertically integrated.

Finally, we will attempt to break out the ROIC and unit economics of both AWS and 'non-AWS' as reported by Amazon.

Kubernetes & The Portability Illusion

In the early 2010s, just as AWS was gaining mass adoption, investors feared that infrastructure as a service could become commoditised. The logic was that companies could easily switch their virtual machines between public cloud providers.

At the time, this rationale was hard to argue with. In the prior decade, VMware's vSphere enabled companies to easily shift workloads between virtual machines on different host servers in data centres. This effectively neutralised IBM and HP's role in the value chain, commoditising the infrastructure layer.

In 2015, Docker drove the mainstream adoption of “containers”, a potential vSphere for cloud computing. A container is a standardised unit of software that enables developers to run and deploy applications across multiple computing platforms.

Containers let you shortcut that whole process because it was this wrapper around your application code that included all of the things that needed to run. That meant it would always run regardless of what system you are running it on. Docker provided this little bubble for your application to run within and it just was not disturbed by the settings of the app or the operating system layer around the application - Former Managing Director at AWS
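
To make this concrete, here is a minimal sketch using the Docker SDK for Python; the image and command are purely illustrative. The same packaged application runs identically on any machine with a container runtime, regardless of what is installed on the host.

```python
# Minimal sketch of container portability using the Docker SDK for Python.
# The image and command are illustrative; any host running Docker executes
# the same packaged application identically, regardless of the host's own
# libraries or OS configuration.
import docker

client = docker.from_env()  # connects to the local Docker daemon

# Run a small, self-contained image; nothing on the host needs to match
# the Python version or libraries baked into the image.
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('same behaviour on any host')"],
    remove=True,  # clean up the container after it exits
)
print(output.decode())
```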

Developers could now rapidly deploy software across thousands of container instances. This created complexity: it was hard to manage and optimise that many containers simultaneously. This led to Kubernetes, the open-source container orchestration platform launched by Google.
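
As a rough illustration of what 'orchestration' means in practice, the sketch below uses the Kubernetes Python client to declare a deployment of 1,000 container replicas and leave the cluster to keep that many running. The names, image and counts are hypothetical.

```python
# Illustrative sketch (not a production manifest): declare a Deployment and
# let Kubernetes keep the desired number of container replicas running.
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubeconfig on this machine
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),           # hypothetical name
    spec=client.V1DeploymentSpec(
        replicas=1000,                                   # desired container count
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

# Kubernetes reconciles the cluster toward this declared state: scheduling
# containers, restarting failed ones, and spreading them across nodes.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```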

One of the promises of Kubernetes, just like vSphere, was cloud portability. However, this hasn’t been the case: the IaaS layer for AWS and the major public cloud companies is proving far stickier than some expected.

This is because both workload and data portability are difficult.

Workload portability effectively means a company can seamlessly switch a workload from one cloud to another. Because each public cloud vendor's platform is proprietary, this is difficult. Each provider has different APIs, semantics, syntax and other nuances that make portability expensive and time-consuming.

To give a more tangible example, you might need a different configuration in Kubernetes to use a GCP load balancer versus an AWS load balancer, depending on which load balancer type and how you want it configured. There is also a very common pattern where, within Kubernetes, you integrate with managed cloud services that your cloud provider offers, and those integrations look different depending on which cloud you are running in - Former Director, EKS at Amazon Web Services
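
The sketch below illustrates the kind of divergence described above: the 'same' Kubernetes Service needs provider-specific annotations to obtain an internal load balancer on EKS versus GKE. The annotation keys are examples and vary by controller version; the point is that a supposedly portable manifest still encodes which cloud it runs on.

```python
# Sketch of provider-specific Service annotations for an internal load
# balancer. Keys are examples and depend on the controller version in use;
# the point is that the "same" Kubernetes object diverges per cloud.
aws_internal_lb_annotations = {
    # interpreted by the AWS load balancer integration on EKS
    "service.beta.kubernetes.io/aws-load-balancer-type": "nlb",
    "service.beta.kubernetes.io/aws-load-balancer-internal": "true",
}

gcp_internal_lb_annotations = {
    # interpreted by GKE's load balancer integration
    "networking.gke.io/load-balancer-type": "Internal",
}

def service_manifest(annotations: dict) -> dict:
    """Build a minimal Service manifest; only the annotations differ per cloud."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": "web", "annotations": annotations},
        "spec": {
            "type": "LoadBalancer",
            "selector": {"app": "web"},
            "ports": [{"port": 80, "targetPort": 8080}],
        },
    }
```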

Although Kubernetes allowed developers to build and manage containers efficiently on any cloud, it was still hard to switch providers.

This also highlights the value of AMZN's PaaS layer. Any company that moves from on-prem to AWS will typically 'lift and shift' code via Amazon's managed Kubernetes service (EKS). Inside EKS, Amazon offers many AWS-native microservices to ease the transition: databases, load balancers, controllers, provisioning services, and so on. These middleware 'add-ons' are the sticking point: any customer who wishes to leave would have to rewrite the code to integrate with Google's or Azure's microservice APIs.

AWS have got a database available; they have got a load balancer option available; they have got a security solution available. Some of these things are native AWS services; some are using these Kubernetes controllers to call and provision AWS services. These platform teams are building these workflows for developers to use. A lot of it just has logic built in that is specific to EKS or does have a dependency on specific AWS services. The same applies to GCP and GKE. - Former Director, EKS at Amazon Web Services
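
To illustrate that dependency, consider application code running on EKS that calls AWS-native services directly. The sketch below, using boto3 with hypothetical table and queue names, is the kind of integration that would need rewriting against Google's or Azure's client libraries, even though the container itself is portable.

```python
# Hedged sketch: application code inside an EKS-hosted service calling
# AWS-native managed services directly. Table/queue names are hypothetical.
# Moving this workload to another cloud means rewriting these integrations
# (e.g. DynamoDB -> Firestore/Bigtable, SQS -> Pub/Sub), not just moving
# the container image.
import boto3

dynamodb = boto3.resource("dynamodb")
sqs = boto3.client("sqs")

def record_order(order_id: str, total: float) -> None:
    # Persist to an AWS-managed database...
    table = dynamodb.Table("orders")                      # hypothetical table
    table.put_item(Item={"order_id": order_id, "total": str(total)})

    # ...and publish to an AWS-managed queue.
    queue_url = sqs.get_queue_url(QueueName="order-events")["QueueUrl"]
    sqs.send_message(QueueUrl=queue_url, MessageBody=order_id)
```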

And then there is data portability. Most AWS customers have large amounts of data stored in S3. Moving that data requires huge amounts of bandwidth, takes time, and costs money. On top of this, AWS charges significant egress fees to move data out to another provider. 'Data gravity' is real.
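
A back-of-the-envelope sketch shows why. The per-GB rate and bandwidth below are assumptions for illustration (AWS internet egress pricing is tiered and changes over time), but moving a petabyte out of S3 plausibly costs tens of thousands of dollars and takes days of sustained transfer before any re-engineering work begins.

```python
# Back-of-the-envelope sketch of egress cost and transfer time.
# Rates and bandwidth are assumptions for illustration only; actual AWS
# internet egress pricing is tiered (roughly $0.05-0.09 per GB at the time
# of writing) and negotiated rates differ.
DATA_TB = 1_000                 # 1 PB of data in S3 (hypothetical customer)
EGRESS_PER_GB = 0.07            # assumed blended egress rate, USD per GB
BANDWIDTH_GBPS = 10             # assumed sustained network throughput

data_gb = DATA_TB * 1_000
egress_cost = data_gb * EGRESS_PER_GB
transfer_days = (data_gb * 8) / BANDWIDTH_GBPS / 86_400  # GB -> gigabits -> days

print(f"Egress cost: ~${egress_cost:,.0f}")              # ~$70,000
print(f"Transfer time at 10 Gbps: ~{transfer_days:.0f} days")  # ~9 days
```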
