Partner Interview
Published December 19, 2023

CoreWeave, Lambda & GPU-Specific Providers vs Hyperscalers

Executive Bio

Data Center Product Architect and Senior Director at Nvidia

Interview Transcript

Disclaimer: This interview is for informational purposes only and should not be relied upon as a basis for investment decisions. In Practise is an independent publisher and all opinions expressed by guests are solely their own opinions and do not reflect the opinion of In Practise.

This is a snippet of the transcript.

For which customers would accessing GPU compute through a hyperscaler make more sense?

Primarily, larger enterprises like Intel, Nvidia, and Cadence would prefer hyperscalers. These companies are deploying ChatGPT-equivalent generative AI for internal use only, not for external customers; internal applications include coding and validation. These enterprises prefer hyperscalers because of existing relationships, a desire to avoid disruption, and the familiarity of a common interface. So larger companies, in terms of size and revenue, still go with hyperscalers. Smaller companies, which make partial increments to the models and have less funding, tend to choose CoreWeave.



© 2024 In Practise. All rights reserved. This material is for informational purposes only and should not be considered as investment advice.