Partner Interview
Published December 19, 2023
CoreWeave, Lambda & GPU-Specific Providers vs Hyperscalers
inpractise.com/articles/lambda-coreweave
Executive Bio
Data Center Product Architect and Senior Director at Nvidia
Interview Transcript
Disclaimer: This interview is for informational purposes only and should not be relied upon as a basis for investment decisions. In Practise is an independent publisher and all opinions expressed by guests are solely their own opinions and do not reflect the opinion of In Practise.
This is a snippet of the transcript.
For which customers would accessing GPU compute through a hyperscaler make more sense?
Primarily, larger enterprises like Intel, Nvidia, and Cadence would prefer hyperscalers. These companies are deploying ChatGPT-equivalent generative AI for internal use only, not for external customers; internal applications include coding assistance and validation. These enterprises prefer hyperscalers because of existing relationships, a desire to avoid disruption, and the familiarity of a common interface. So larger companies, in terms of size and revenue, still go with hyperscalers. Smaller companies, which are making partial increments to the models and don't have as much funding, tend to choose CoreWeave.
© 2024 In Practise. All rights reserved.