AWS and Amazon Bedrock: AI Growth Strategy
Interview Transcript
Moving on to AWS, I'll start with some high-level questions, and then some more detailed questions about the offerings. I'm curious about your perspective on AWS's strategy in the AI world. What are the key points of that strategy at a high level? Are they aiming for the broadest range of offerings? Is it more about cost competition? Or are they focusing on serving specific customers? I'd love to hear your thoughts on these angles.
It feels like AWS took a calculated risk, taking their time to ensure the service they launched was not haphazard but done the right way. The strategy is working out because customers today aren't asking for just one model, like ChatGPT. They want options, whether it's an open-source model, a smaller model, or a larger model. We started with five models, and today there are 15 different models, including small and large language models. Because of the abstraction AWS provides, they can swap hardware behind the scenes. Many services and LLMs are running on AWS's own hardware, like Trainium. Regarding strategy, I think it's right because Amazon doesn't think short-term. That's something I learned in my five years there: never think just for a quarter or even just for a year; think long-term and plan backward from there.
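To make the abstraction point concrete, here is a minimal sketch of how the same Bedrock call can be pointed at different models by changing only the model ID. The boto3 bedrock-runtime client and its Converse API are real AWS SDK surface; the specific model IDs and parameters below are illustrative assumptions, not a list of what any account has enabled.

```python
# Minimal sketch: one Converse call shape works across model families, so
# customers can swap models (and AWS can swap hardware behind the endpoint)
# without client-side changes. Model IDs are illustrative examples only.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

MODEL_IDS = [
    "anthropic.claude-3-haiku-20240307-v1:0",  # smaller, faster closed model
    "meta.llama3-8b-instruct-v1:0",            # smaller open-weights model
]

def ask(model_id: str, prompt: str) -> str:
    """Send one user message through the model-agnostic Converse API."""
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

for model_id in MODEL_IDS:
    print(model_id, "->", ask(model_id, "Summarize what Amazon Bedrock does."))
```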
I'm curious, when you say customers want options for models, how is that evolving in terms of what they value most in that assortment? Is it cost, open-source availability, larger models, or more frontier models? What is the latest state of play there?
The decision between using Llama or Claude depends on whether the model is open-source or closed-source, small or large. Sometimes low latency and a fast response matter more than accuracy, leading customers to use smaller models; where greater accuracy is needed, larger models are preferred. With Bedrock, AWS is becoming a marketplace similar to Amazon.com, where Amazon sells its own services and products but 95% of the products come from marketplace sellers. They are applying the same concept to the AI ecosystem, and they are mastering and expanding it significantly.
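As an illustration of the latency-versus-accuracy trade-off described above, here is a hypothetical routing sketch: it picks a smaller model under a tight latency budget and a larger one when accuracy is the priority. The threshold, routing function, and model IDs are assumptions made up for this example, not AWS guidance.

```python
# Hypothetical sketch of model routing on Bedrock: small/fast model for tight
# latency budgets, large/accurate model otherwise. All thresholds and model
# IDs here are illustrative assumptions.
from dataclasses import dataclass

SMALL_FAST = "anthropic.claude-3-haiku-20240307-v1:0"     # example small model
LARGE_ACCURATE = "anthropic.claude-3-opus-20240229-v1:0"  # example large model

@dataclass
class ModelChoice:
    model_id: str
    rationale: str

def route(latency_budget_ms: int, needs_high_accuracy: bool) -> ModelChoice:
    """Prefer accuracy when requested and the latency budget allows it."""
    if needs_high_accuracy and latency_budget_ms >= 2000:
        return ModelChoice(LARGE_ACCURATE, "accuracy prioritized, budget allows it")
    return ModelChoice(SMALL_FAST, "tight budget or no accuracy need favors small")

print(route(latency_budget_ms=500, needs_high_accuracy=False))
print(route(latency_budget_ms=5000, needs_high_accuracy=True))
```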
Let's switch a bit to the impact of AI workloads. How much is the AI era affecting the traditional workloads that still run in these clouds? And how important is being the best in AI when customers are choosing AWS?
The impact on cloud providers is that GPUs and other accelerator chips are more expensive than CPUs, which drives higher revenue. The margin depends on their negotiations with Nvidia and AMD, or on the economics of their own chips. If, in two years, the shift moves toward non-CPU workloads, AWS is well-positioned with Trainium and Google with TPU. Microsoft, however, may face challenges because they do not have their own chips. They are reportedly working on Maia, possibly launching in 2026, but they are still behind Trainium and TPU. This could hurt their margins unless they can maintain a close relationship with Nvidia that preserves healthy margins.
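To show the margin mechanics the speaker is pointing at, here is a back-of-envelope sketch in which every number is made up purely for illustration: owning the accelerator removes the chip vendor's markup from the cost line.

```python
# Back-of-envelope illustration of the margin argument. Every number below is
# hypothetical, chosen only to show the mechanics, not real AWS economics.
def gross_margin(revenue: float, cost: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cost) / revenue

instance_revenue = 100.0   # hypothetical revenue per accelerator-instance-hour
merchant_chip_cost = 60.0  # hypothetical cost buying chips at a vendor's markup
in_house_chip_cost = 35.0  # hypothetical cost with an in-house chip (Trainium-style)

print(f"Merchant silicon margin: {gross_margin(instance_revenue, merchant_chip_cost):.0%}")
print(f"In-house silicon margin: {gross_margin(instance_revenue, in_house_chip_cost):.0%}")
```

The gap between the two printed margins is the strategic point: providers with their own accelerators keep the markup that would otherwise go to the chip vendor.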
© 2024 In Practise. All rights reserved. This material is for informational purposes only and should not be considered as investment advice.