Interview Transcript

Because the utilization is naturally higher, you can better match capacity to usage.

Exactly. In many cases, specific hardware serves a particular application or user in a specific location. We need to consider where the edge is happening. In the cloud, it's somewhere undefined. For example, think about a highway monitoring system that reads license plates for payment. In a rural area, you might have multiple cameras around the highway, and every time you get a frame, you send it to the same server. This single server serves all cameras across miles.

However, in a dense urban area, where every second, or even several times a second, you get ten different cars whose license plates need reading, utilization is much higher. It's also localized rather than spread along the highway. There, it makes more sense to put the computation near the camera, whether in the camera itself or on a nearby server or pole, rather than far away in the cloud. The total cost of ownership makes more sense from that perspective.

Can you talk us through Hailo's perspective about why they wanted this collaboration?

We identified a gap in the industrial sector. When people wanted AI, they usually opted for high-end CPUs, typically x86, or sometimes high-end Arm or TI parts. But lower-cost industrial IoT solutions don't need a very strong CPU or SoC, so if they want to add AI, the balance isn't right: they don't want an expensive SoC, but they still want significant AI capabilities.
