Partner Interview
Published October 16, 2022
Confluent, Amazon MSK, & Apache Kafka
inpractise.com/articles/confluent-amazon-msk-and-apache-kafka
Executive Bio
Former Director, Engineering at Confluent
Interview Transcript
Disclaimer: This interview is for informational purposes only and should not be relied upon as a basis for investment decisions. In Practise is an independent publisher and all opinions expressed by guests are solely their own opinions and do not reflect the opinion of In Practise.
This is a snippet of the transcript.
This gets into the other part of the discussion, which is the monetization of open source. I don't want to call it a silver bullet for monetizing open source, but do you feel that's a step change for Confluent in accessing that open-source base?
You have a larger pie to sell into: the existing open-source Kafka users, especially the organizations that realize it's actually pretty hard to run Kafka; many places have teams dedicated to that. Maybe they want to free up those engineers to work on something else. I think that's pretty compelling, and the ease of getting going with Confluent Cloud is the thing that will drive growth there.
Could you compare Confluent Cloud to Kinesis? I'd love to hear your thoughts on it.
Kinesis is more in that SaaS model. You turn it on in the AWS console, point your client code at their API, fire messages, and it works. The difference, of course, is that if you're doing a lot of volume, Kinesis is egregiously expensive. At a smaller scale, Kinesis makes a ton of sense and is definitely compelling. That is the comparison. In some sense, Kinesis is closer to Confluent Cloud than MSK is, even though MSK is the natural comparison because they're both Kafka.
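To make the "point your client at their API and fire messages" point concrete, here is a minimal sketch of a Kinesis producer using boto3, the AWS SDK for Python. The stream name and payload shape are hypothetical, and real use requires AWS credentials; the point is only how thin the client-side integration is compared with running Kafka brokers yourself.

```python
# Minimal sketch of producing to Amazon Kinesis with boto3.
# "example-stream" is a hypothetical stream name, not from the interview.
import json


def build_record(payload: dict, partition_key: str) -> dict:
    """Shape a record the way kinesis.put_record expects its arguments."""
    return {
        "StreamName": "example-stream",  # hypothetical stream
        "Data": json.dumps(payload).encode("utf-8"),
        "PartitionKey": partition_key,  # determines which shard receives the record
    }


def send(payload: dict, partition_key: str = "default") -> None:
    """Fire one message at the Kinesis API -- that is the whole integration."""
    import boto3  # requires AWS credentials to be configured

    kinesis = boto3.client("kinesis")
    kinesis.put_record(**build_record(payload, partition_key))
```

There are no brokers to provision or rebalance here; the trade-off the interview describes is that this convenience is priced per request and per shard-hour, which is what becomes expensive at high volume.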
© 2024 In Practise. All rights reserved. This material is for informational purposes only and should not be considered as investment advice.