Partner Interview
Published December 19, 2023
CoreWeave, Lambda & GPU-Specific Providers vs Hyperscalers
inpractise.com/articles/lambda-coreweave
Interview Transcript
Disclaimer: This interview is for informational purposes only and should not be relied upon as a basis for investment decisions. In Practise is an independent publisher and all opinions expressed by guests are solely their own opinions and do not reflect the opinion of In Practise.
This is a snippet of the transcript. Sign up to get full access.
For which customers would accessing GPU compute through a hyperscaler make more sense?
Primarily, larger enterprises like Intel, Nvidia, and Cadence would prefer hyperscalers. These companies are deploying ChatGPT-equivalent generative AI for internal use only, not for external availability; applications include coding and validation, among other internal uses. These enterprises prefer hyperscalers because of existing relationships and a desire to avoid disruption, and they also appreciate the common interface. So larger companies, in terms of size and revenue, still go with hyperscalers. Smaller companies, which are making partial increments to the models and don't have as much funding, tend to choose CoreWeave.
© 2024 In Practise. All rights reserved. This material is for informational purposes only and should not be considered as investment advice.