Interview Transcript

Can you run through what those costs are for providing the AI service? I mean, obviously the chips, I'm guessing, but what else goes into it?

Setting up an AI data center or cloud is complex. Many people focus just on the GPUs or accelerators; Nvidia, for example, controls 90% of the market for supplying these GPUs. But beyond that, you need networking infrastructure and memory to store and pass data to the GPUs, which is costly. Then there's the power to run these GPUs, which are power-hungry devices drawing 500 watts, 800 watts, or even 1,000 watts each, and that is per GPU, with many GPUs in a rack. Then there are all the cables to connect these devices, the storage boxes, and robust networking components, with cabling that can be optical or copper. Everything sits in a rack, and there are many such racks in a big hall or room in a data center, all of which needs cooling. So there are many variables here. It's not just about the compute; everything that touches or passes through the GPUs adds to the cost of your data center: power, cooling, racks, cabling, networking, storage, general-purpose processors, GPUs, and memory.
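
To make those line items concrete, here is a rough back-of-the-envelope sketch in Python of how the cost of a single rack might be tallied. The components come from the answer above; every dollar figure, GPU count, and wattage is an illustrative assumption, not a quoted number.

```python
# Rough cost sketch for one AI rack. Every figure below is an
# illustrative assumption, not a quoted price.

GPUS_PER_RACK = 32
HOURS_PER_YEAR = 24 * 365

capex = {
    "gpus": GPUS_PER_RACK * 30_000,     # the accelerators themselves
    "cpus_and_memory": 80_000,          # host processors plus the memory feeding the GPUs
    "networking": 60_000,               # switches, NICs, optical/copper cabling
    "storage": 40_000,                  # storage boxes attached to the rack
    "rack_power_cooling_gear": 25_000,  # rack, power distribution, share of cooling hardware
}

# Power: assume ~700 W per GPU plus ~50% overhead for CPUs, networking and cooling.
rack_kw = GPUS_PER_RACK * 0.7 * 1.5
power_cost_per_year = rack_kw * HOURS_PER_YEAR * 0.10  # assuming $0.10 per kWh

total_capex = sum(capex.values())
print(f"Up-front hardware per rack: ${total_capex:,.0f}")
print(f"GPU share of that capex:    {capex['gpus'] / total_capex:.0%}")
print(f"Power per rack per year:    ${power_cost_per_year:,.0f}")
```

The point of the sketch is simply that everything surrounding the GPUs shows up as its own line item, so the accelerator price alone understates the cost of providing the service.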

On the topic of chips and GPUs, as an investor, what concerns me are the depreciation periods, which are around five to six years. Amazon recently reduced the useful life of GPUs from six to five years. Is that a reasonable timeframe for depreciating AI chips, given how much they're going to change over the next three to five years?

The trends I'm seeing with the rollout of new chips suggest that five to six years is very optimistic; a chip doesn't even stay current for three years. Just look at Nvidia's chips: the H100, then the H200, and they're already discussing the GB200. In four years we've seen many generations of chips, each promising better price performance. Who wouldn't want to use a newer chip that saves money and power and delivers output faster? Given how data centers operate and what customers expect, the useful lifecycle of a chip is certainly much shorter. When we talk about inferencing rather than the training phase, the lifecycle of a chip might be longer, but based on current trends and realities, the chip lifecycle is much shorter.
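
The accounting impact of that shorter lifecycle is easy to illustrate. The sketch below applies straight-line depreciation to a hypothetical GPU fleet under a six-, five-, and three-year useful life; the fleet cost is an assumed figure chosen only to make the arithmetic readable.

```python
# Straight-line annual depreciation for a hypothetical GPU fleet.
# The $100M fleet cost is an illustrative assumption.
fleet_cost = 100_000_000


def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation with no salvage value."""
    return cost / useful_life_years


for life_years in (6, 5, 3):
    expense = annual_depreciation(fleet_cost, life_years)
    print(f"{life_years}-year schedule: ${expense:,.0f} expensed per year")
```

Shortening the assumed life from six years to three doubles the annual expense, which is why the depreciation assumption matters so much for how profitable these AI services look on paper.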

From what you've said, it sounds like the cloud providers are subsidizing their customers with their pricing if they are basing their assumptions on a five to six-year depreciation cycle. If they aren't, then maybe not. We'll see. Regarding Nvidia's lead in AI, there seems to be more competition now. How do you see it evolving?

When we look at Nvidia's biggest competitor, AMD, it has been releasing very smart chips. However, Nvidia excels not only in producing top-tier chips but also in the entire infrastructure around them, and by infrastructure I mean the software and software libraries. Nvidia has excelled in this area, and no competitor has matched it so far. For example, AMD chips like the MI350 and MI400 are powerful, but AMD lacks an equivalent of Nvidia's CUDA and NCCL libraries. That software ecosystem is what keeps Nvidia ahead.
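
To show what those libraries do in practice, here is a minimal sketch of multi-GPU communication through PyTorch's distributed API, which dispatches to NCCL on Nvidia hardware. The script is generic framework boilerplate rather than anything specific to this interview, and it assumes a machine with multiple Nvidia GPUs launched via torchrun.

```python
# Minimal multi-GPU all-reduce through PyTorch's distributed API, which
# uses NCCL for the collective communication on Nvidia GPUs.
# Launch with:  torchrun --nproc_per_node=<num_gpus> allreduce_demo.py
import os

import torch
import torch.distributed as dist


def main():
    # torchrun sets RANK, WORLD_SIZE and LOCAL_RANK for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each GPU starts with its own tensor; all_reduce sums them across GPUs,
    # with the heavy lifting done by NCCL's collective kernels.
    x = torch.ones(4, device=f"cuda:{local_rank}") * (dist.get_rank() + 1)
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print("summed tensor:", x)

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Frameworks lean on these libraries for every collective operation during training and inference, which is why the software ecosystem around the chips matters as much as the silicon itself.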
