Setting up an AI data center or cloud is complex. Many people focus only on the GPUs or accelerators; Nvidia, for example, controls 90% of the market for supplying these GPUs. But beyond that, you need networking infrastructure and memory to store and pass data to the GPUs, which is costly. Then there is the power to run these GPUs, which are power-hungry devices drawing 500 watts, 800 watts, or even 1,000 watts each, and that is just a single GPU. Then there are the many cables to connect all these devices, storage boxes, robust networking components, and cabling, whether optical or copper. Everything resides in a rack, and there are multiple such racks in a big hall or room in a data center, and you need cooling for all of that. So there are many variables here. It's not just about the compute; everything that touches or passes through the GPUs adds to the cost of your data center, including power, cooling, racks, cabling, networking, storage, general-purpose processors, GPUs, and memory.
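As a rough illustration of the power point above, the sketch below turns the per-GPU wattage figures from the transcript (500 W to 1,000 W per GPU) into a per-rack power estimate. The GPUs-per-rack count and the PUE overhead factor (capturing cooling and other facility load) are illustrative assumptions, not figures from the source.

```python
# Back-of-the-envelope rack power estimate. Per-GPU wattages (500/800/1,000 W)
# come from the transcript; gpus_per_rack and the PUE overhead factor are
# hypothetical assumptions for illustration only.

def rack_power_kw(gpus_per_rack: int, watts_per_gpu: float, pue: float = 1.4) -> float:
    """Total facility power for one rack in kW, with cooling/overhead folded in via PUE."""
    it_power_w = gpus_per_rack * watts_per_gpu
    return it_power_w * pue / 1000

for watts in (500, 800, 1000):
    print(f"{watts} W/GPU -> {rack_power_kw(8, watts):.1f} kW per 8-GPU rack")
```

The point of the sketch is simply that facility power scales with everything around the GPU, not just the accelerator's own draw, which is why the speaker counts power and cooling among the major cost variables.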
The trends I'm seeing with the rollout of new chips suggest that five to six years is very optimistic; a chip doesn't even stay relevant for three years. Just look at Nvidia's chips: the H100, the H200, and they're already discussing the GB200. In four years, we've seen many generations of chips, each promising better price performance. Who wouldn't want to use a newer chip that saves money and power and delivers output quickly? Given how data centers operate and what customers expect, the useful lifecycle of a chip is certainly much shorter. When we talk about inferencing beyond the training phase, the lifecycle of a chip might be longer, but based on current trends and realities, the chip lifecycle is much shorter.
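To make the useful-life argument concrete, here is a minimal straight-line depreciation sketch. The three-year versus five-to-six-year lives are the ones discussed above; the per-accelerator purchase price is a hypothetical placeholder, not a figure from the source.

```python
# Straight-line annual depreciation for different assumed useful lives.
# The $30,000 price is a hypothetical assumption for illustration only;
# the 3- vs 5-6-year lives are the ones debated in the transcript.

def annual_depreciation(price: float, useful_life_years: int) -> float:
    """Annual depreciation expense under straight-line accounting."""
    return price / useful_life_years

price = 30_000  # hypothetical per-accelerator price
for years in (3, 5, 6):
    print(f"{years}-year life -> ${annual_depreciation(price, years):,.0f}/year")
```

A shorter assumed life roughly doubles the annual expense versus a six-year schedule, which is why the lifecycle question matters so much to data center economics.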
When we look at Nvidia's biggest competitor, AMD has been releasing very smart chips. However, Nvidia excels not only in producing top-tier chips but also in the entire infrastructure around them; by infrastructure, I mean the software and software libraries. Nvidia has excelled in this area, and no competitor has matched it so far. For example, AMD chips like the MI350 and MI400 are powerful, but AMD lacks an equivalent of Nvidia's CUDA and NCCL libraries. That software ecosystem keeps Nvidia ahead.
This document may not be reproduced, distributed, or transmitted in any form or by any means including resale of any part, unauthorised distribution to a third party or other electronic methods, without the prior written permission of IP 1 Ltd.
IP 1 Ltd, trading as In Practise (herein referred to as "IP") is a company registered in England and Wales and is not a registered investment advisor or broker-dealer, and is not licensed nor qualified to provide investment advice.
In Practise reserves all copyright, intellectual and other property rights in the Content. The information published in this transcript ("Content") is for information purposes only and should not be used as the sole basis for making any investment decision. Information provided by IP is to be used as an educational tool and nothing in this Content shall be construed as an offer, recommendation or solicitation regarding any financial product, service or management of investments or securities. The views of the executive expressed in the Content are those of the expert and they are not endorsed by, nor do they represent the opinion of, In Practise. In Practise makes no representations and accepts no liability for the Content or for any errors, omissions, or inaccuracies, and will in no way be held liable for any potential or actual violations of laws, including without limitation any securities laws, based on information sent to you by In Practise.
© 2025 IP 1 Ltd. All rights reserved.