Content Published Last Week

1. AWS: Containers, Kubernetes, & EC2 Stickiness

2. ACV Auctions, Backlot, CarOffer, & US Wholesale Auto Competition

3. Oxford Cryosystems: Selling to Judges Scientific PLC

4. AWS: Culture, EC2, EBIT Margins & Market Share Outlook

5. MaxCyte's Technology, Pricing, & Growth Outlook

6. Boozt Assortment and Strategic Positioning

Amazon Web Services

This is the first article in a short series from our recent dive into AWS. Ultimately, we attempt to understand the durability of Amazon Web Services' cash flow.

We will explore three areas that we feel are particularly interesting and somewhat less well covered.

This first piece centres on the growth of Kubernetes and the risk that open source technologies could commoditise Amazon's EC2 and S3 services.

In the second piece, we explore how and why Amazon created Graviton, its proprietary ARM-based processor, and the potential competitive advantage of being fully vertically integrated.

Finally, we will attempt to break out the ROIC and unit economics of both AWS and 'non-AWS' as reported by Amazon.

Kubernetes & The Portability Illusion

In the early 2010s, just as AWS was gaining mass adoption, investors feared infrastructure as a service could become commoditised. The logic was that companies could easily switch their virtual machines between public cloud providers.

At the time, this rationale was hard to argue with. In the prior decade, VMware's vSphere enabled companies to easily shift workloads between virtual machines on different host servers in data centers. This effectively neutralized IBM and HP's role in the value chain, commoditising the infrastructure layer.

In 2015, Docker drove the mainstream adoption of "containers", a potential vSphere for cloud computing. A container is a standardised unit of software that packages application code together with its dependencies, so it can be deployed and run consistently across computing environments.

Containers let you shortcut that whole process because it was this wrapper around your application code that included all of the things that needed to run. That meant that it would always run regardless of what system you are running it on. Docker provided this little bubble for your application to run within and it just was not disturbed by the settings of the app or the operating system layer around the application - Former Director EKS, at Amazon Web Services
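To make the idea concrete, here is a minimal sketch using the Docker SDK for Python; the image name and command are purely illustrative, and it assumes a local Docker daemon is running.

```python
import docker  # Docker SDK for Python (pip install docker)

# Connect to the local Docker daemon.
client = docker.from_env()

# Run a command inside a container. The image bundles the OS libraries,
# language runtime, and dependencies, so the same image behaves the same
# way whether the host is a laptop, an on-prem server, or an EC2 instance.
output = client.containers.run(
    image="python:3.11-slim",  # illustrative base image
    command=["python", "-c", "print('hello from inside the container')"],
    remove=True,               # clean up the container afterwards
)
print(output.decode())
```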

Developers could now rapidly deploy software across thousands of container instances. That created new complexity: managing and optimizing thousands of containers simultaneously was hard. This led to Kubernetes, the open source container orchestration platform launched by Google.

One of the promises of Kubernetes, just like vSphere, was cloud portability. However, it hasn't played out that way: the IaaS layer of AWS and the other major public cloud providers is proving far stickier than some expected.

This is because both workload and data portability are difficult.

Workload portability effectively means a company can seamlessly switch a workload from one cloud to another. Because each public cloud vendor's stack is proprietary, this is difficult: each provider has different APIs, semantics, syntax, and other nuances that make portability expensive and time-consuming.

To give a more tangible example, you might need a different configuration in Kubernetes to use a GCP load balancer versus an AWS load balancer, depending on which load balancer type and how you want it configured. There is also a very common pattern of, within Kubernetes, you integrate with managed cloud services that your cloud provider offers and those integrations look different depending on which cloud you are running in - Former Director EKS, at Amazon Web Services
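A hedged sketch of what that looks like in practice: the same Kubernetes Service spec needs different, provider-specific annotations to provision an internal load balancer on AWS versus GCP. The manifests are shown here as plain Python dictionaries; the annotation keys are real examples of provider-specific configuration, but the exact keys and values vary by controller and version, so treat them as illustrative.

```python
def internal_lb_service(app: str, cloud: str) -> dict:
    """Build a Kubernetes Service manifest for an internal load balancer.

    The core spec is identical everywhere; only the cloud-specific
    annotations change, which is exactly the kind of detail a platform
    team must rewrite when moving a cluster between providers.
    """
    # Provider-specific annotations (illustrative; actual keys depend on
    # the load balancer controller in use and its version).
    annotations = {
        "aws": {"service.beta.kubernetes.io/aws-load-balancer-internal": "true"},
        "gcp": {"networking.gke.io/load-balancer-type": "Internal"},
    }[cloud]

    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": f"{app}-lb", "annotations": annotations},
        "spec": {
            "type": "LoadBalancer",
            "selector": {"app": app},
            "ports": [{"port": 80, "targetPort": 8080}],
        },
    }


# The portable-looking spec still diverges per cloud:
print(internal_lb_service("checkout", "aws")["metadata"]["annotations"])
print(internal_lb_service("checkout", "gcp")["metadata"]["annotations"])
```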

Although Kubernetes allowed developers to build and manage containers efficiently on any cloud, it was still hard to switch providers.

This also highlights the value of AMZN's PaaS layer. Any company that moves from on-prem to AWS will typically 'lift and shift' code via Amazon's managed Kubernetes service (EKS). Inside EKS, Amazon offers many AWS-native microservices to ease the transition: databases, load balancers, controllers, provisioning services, and so on. These middleware 'add-ons' are the sticking point: any customer who wishes to leave would have to rewrite the code to integrate with Google's or Azure's microservice APIs.

AWS have got a database available; they have got a load balancer option available; they have got a security solution available. Some of these things are native AWS services; some are using these Kubernetes controllers to call and provision AWS services. These platform teams are building these workflows for developers to use. A lot of it just has logic built in that is specific to EKS or does have a dependency on specific AWS services. The same applies to GCP and GKE. - Former Director EKS, at Amazon Web Services
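For a sense of why this is sticky at the application layer, here is a small, hypothetical sketch of a workload wired directly to AWS managed services through boto3. The bucket, table, and queue names are made up; the point is that each of these calls would have to be rewritten against Google Cloud or Azure SDKs, often against services with different data models and semantics, before the workload could move.

```python
import json

import boto3  # AWS SDK for Python

# Clients for AWS-native managed services; the application is now coupled
# to AWS APIs rather than to "a database" or "a queue" in the abstract.
s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
sqs = boto3.client("sqs")


def process_order(bucket: str, key: str, table_name: str, queue_url: str) -> None:
    # Read the raw order document from S3.
    raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    order = json.loads(raw)

    # Persist it to DynamoDB, an AWS-proprietary NoSQL service whose data
    # model (partition keys, capacity modes) has no drop-in equivalent elsewhere.
    dynamodb.Table(table_name).put_item(Item=order)

    # Notify downstream consumers via SQS.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({"order_id": order["id"]}),
    )
```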

And then there is data portability. Every customer using AWS has huge amounts of data stored in S3. Moving that data requires significant bandwidth, takes time, and costs money. On top of this, AWS charges substantial egress fees to move data to another provider. 'Data gravity' is real.
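A rough back-of-the-envelope sketch shows why; the egress rate and link speed below are assumptions for illustration, not AWS's published pricing. Even a mid-sized data estate takes days to move and runs up a meaningful egress bill.

```python
# Back-of-the-envelope estimate of moving data out of a cloud provider.
# All inputs below are assumptions for illustration only.
DATA_TB = 500                 # size of the data estate to migrate
EGRESS_USD_PER_GB = 0.09      # assumed internet egress rate
LINK_GBPS = 10                # assumed sustained network throughput

data_gb = DATA_TB * 1_000
egress_cost = data_gb * EGRESS_USD_PER_GB          # ~$45,000

data_bits = data_gb * 1e9 * 8
transfer_days = data_bits / (LINK_GBPS * 1e9) / 86_400   # ~4.6 days

print(f"Egress cost:   ~${egress_cost:,.0f}")
print(f"Transfer time: ~{transfer_days:.1f} days at a sustained {LINK_GBPS} Gbps")
```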

The challenge of workload and data portability highlights why few large enterprises are running a truly portable production-scale multi-cloud infrastructure. Once a workload is within a cloud, churn is very low.

Customers shift specific workloads to the public cloud with the best infrastructure for the use case. For example, a large customer like Delta Air Lines may shift its on-prem contact center workload to Amazon Connect but its operations and MRO workload to Azure.

Also, performance, reliability, and the in-house expertise to operate the infrastructure matter just as much as the value of any PaaS layer on top:

I think that there's still a lot of brand recognition and this sense of stability and reliability with AWS that goes very far and some of the additional value with very niche services that fill a specific requirement I have or maybe that's the tip of the spear. One thing that I think can't be understated is when you're building a team of developers, and you have this operations base of talent in your organization. I think a lot of the bulk of the reasoning still comes from those operational and maturity components. - Former Director EKS, at Amazon Web Services

With Graviton, Amazon has taken performance and value to another level. It wasn't enough for Amazon to run the compute and storage for customers; it now integrates the IaaS layer down to the processor itself. AWS' EC2 C7g instances powered by Graviton3 promise ~30% lower costs and higher performance than comparable x86-based instances.

Any company running on Graviton instances is now using proprietary AWS cores. Many of AWS' managed services also now run on Graviton, whether the customer realizes it or not. This makes it even more difficult to port workloads or data to other clouds once they are inside AWS.
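To show how a price-performance claim of this kind translates into a bill, here is a small sketch of the arithmetic; the price and throughput inputs are hypothetical placeholders rather than quoted AWS rates, and only the calculation itself is the point.

```python
def price_performance_gain(price_x86: float, price_graviton: float,
                           throughput_x86: float, throughput_graviton: float) -> float:
    """Return the % reduction in cost per unit of work when moving from an
    x86 instance to a Graviton instance.

    Inputs are hourly price and relative throughput; the example values
    below are hypothetical, purely to demonstrate the arithmetic.
    """
    cost_per_unit_x86 = price_x86 / throughput_x86
    cost_per_unit_graviton = price_graviton / throughput_graviton
    return (1 - cost_per_unit_graviton / cost_per_unit_x86) * 100


# Hypothetical inputs: a Graviton instance priced ~15% lower and delivering
# ~20% more throughput than a comparable x86 instance.
gain = price_performance_gain(price_x86=1.00, price_graviton=0.85,
                              throughput_x86=1.00, throughput_graviton=1.20)
print(f"Cost per unit of work falls by ~{gain:.0f}%")  # ~29%
```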

From HashiCorp to Fivetran, there are now more and more software companies built to help customers port and manage workloads across clouds. This will always be a risk for an infrastructure vendor like AWS.

Could there be a future open-source platform that enables customers to easily switch workloads across clouds with little cost?

Possibly.

But even if this were the case, the market would still be an oligopoly. Workloads still need to run on a layer of compute, and the operational scale, expertise, and overall performance required to meet the demands of large enterprises seem achievable for only the top three players.

The future "unknown unknown" risk of a new technology that seamlessly ports workloads across clouds may remain a major hurdle for some investors. But with AWS' market share and Amazon's culture of pushing the boundaries to delight customers, any future open source project is likely to offer only the same illusion of portability that Kubernetes does.