Partner Interview
Published November 30, 2025

AWS and Amazon Bedrock: AI Growth Strategy

Executive Bio

Former Generative AI Director at Amazon Web Services


Interview Transcript

Disclaimer: This interview is for informational purposes only and should not be relied upon as a basis for investment decisions. In Practise is an independent publisher and all opinions expressed by guests are solely their own opinions and do not reflect the opinion of In Practise.

This is a snippet of the transcript.

Moving on to AWS, I'll start with some high-level questions, and then some more detailed questions about the offerings. I'm curious about your perspective on AWS's strategy in the AI world. What are the key points of that strategy at a high level? Are they aiming for the broadest range of offerings? Is it more about cost competition? Or are they focusing on serving specific customers? I'd love to hear your thoughts on these angles.

It feels like AWS took a calculated risk, taking their time to ensure the service they launched was not haphazard but done the right way. The strategy is working out because customers today aren't asking for just one model, like ChatGPT. They want options, whether it's an open-source model, a smaller model, or a larger model. We started with five models, and today there are 15 different models, including small and large language models. Because of the abstraction AWS provides, they can swap hardware behind the scenes. Many services and LLMs are running on AWS's own hardware, like Trainium. Regarding strategy, I think it's right because Amazon doesn't think short-term. That's something I learned in my five years there: never think just for a quarter or even just for a year; think long-term and plan backward from there.

I'm curious, when you say customers want options for models, how is that evolving in terms of what they value most in that assortment? Is it cost, open-source availability, larger models, or more frontier models? What is the latest state of that?

The decision between using Llama or Claude depends on whether the model is open-source, closed-source, small, or large. Sometimes, low latency and fast response are more important than accuracy, leading customers to use smaller models. For greater accuracy, larger models are preferred. AWS, with Bedrock, is becoming a marketplace, similar to Amazon.com, where they sell their own services and products, but 95% of the products are from the marketplace. This concept is applied in the AI ecosystem, and they are mastering and expanding it significantly.
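The trade-off described above, smaller models for low latency, larger models for accuracy, open versus closed weights, can be sketched as a simple routing rule. This is a minimal illustration, not AWS's implementation: the mapping and priority names are hypothetical, and the model IDs are examples of the kind Bedrock exposes (check the Bedrock console for current IDs). A real invocation would then go through boto3's bedrock-runtime client, e.g. `client.converse(modelId=..., messages=...)`.

```python
# Hypothetical mapping from workload priority to a Bedrock model ID.
# IDs are illustrative examples of Bedrock's naming scheme, not a
# recommendation; verify current IDs in the Bedrock documentation.
MODEL_BY_PRIORITY = {
    "low_latency": "anthropic.claude-3-haiku-20240307-v1:0",   # small, fast
    "high_accuracy": "anthropic.claude-3-opus-20240229-v1:0",  # large, slower
    "open_weights": "meta.llama3-70b-instruct-v1:0",           # open model
}

def choose_model(priority: str) -> str:
    """Return a Bedrock model ID for the given workload priority."""
    try:
        return MODEL_BY_PRIORITY[priority]
    except KeyError:
        raise ValueError(f"unknown priority: {priority!r}")
```

Because Bedrock keeps the API surface uniform across vendors, swapping the `modelId` string is, in principle, the only change needed to move a workload between Llama and Claude, which is what makes the marketplace framing above work.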

Let's switch a bit to the impact of AI workflows. How much is the AI era affecting the traditional workloads that still run in these clouds? How important is being best-in-class in AI to customers choosing AWS?

The impact on cloud providers is that GPUs and other accelerators are more expensive than CPUs, which drives higher revenue. The margin depends on their negotiations with Nvidia, AMD, or the economics of their own chips. If, over the next two years, the shift moves toward non-CPU workloads, AWS is well positioned with Trainium, and Google is with its TPU. Microsoft, however, may face challenges, as they do not have their own chips. They are reportedly working on Maia, possibly launching in 2026, but they are still behind Trainium and TPU. This could squeeze their margins unless they maintain a close relationship with Nvidia on favorable terms.


© 2024 In Practise. All rights reserved. This material is for informational purposes only and should not be considered as investment advice.