
Google Cloud & Datadog: Cloud Providers vs DevOps SaaS

Former Product Leader at Google Cloud Platform

Learning outcomes

  • Core drivers and challenges of the shift to multi-cloud and hybrid
  • Layers of the DevOps stack and the trend towards one vertical solution
  • How GCP is the most open, innovative and technically astute cloud provider
  • Why third-party SaaS solutions like Datadog or Splunk have an advantage over public cloud solutions
  • The impact of serverless technologies on the SaaS DevOps layer
  • The long-run risk of cloud providers entering PaaS and SaaS more aggressively as IaaS growth matures

Executive Bio

Deepak Tiwari

Former Product Leader at Google Cloud Platform

Deepak is a computer engineer with experience building products for public clouds and using DevOps stacks as a customer. Deepak spent six years as a Product Management Leader on Google Cloud Platform, where he built enterprise DevOps products, working on Stackdriver, Cloud Logging and integration with BigQuery. In 2017, Deepak joined Lyft as Head of Product Management, where he led the core platform for machine learning and was responsible for building out the DevOps infrastructure. Deepak currently works at Facebook as the Group Leader of Core ML and Ads Platform.


Could you tell us about your background and experience working at Google, Lyft and Facebook?

Over the past 10 years, I have primarily worked in product development and management. I started at Google Cloud, after which I moved to Lyft, where I led the product team building internal infrastructure and platforms related to data and machine learning. At Google I worked in DevOps, specifically building tooling for the Google Cloud Platform. Today I am at Facebook, working on a machine learning platform for the advertising product.

How has the level of commoditization between the underlying compute layer and the platform evolved over the past decade?

It has been very interesting. There are three different layers: IaaS or infrastructure as a service, PaaS or platform as a service, and SaaS. The overall cloud market, including SaaS, is growing quickly. The core difference with infrastructure as a service is control over the underlying machines, allowing you to manage the operating system, the hardware layer and the management layer, such as virtual machines. Platform as a service focuses on the application and everything above it, such as data and users, without worrying about managing the operating system kernel and other underlying components.

SaaS removes the need to worry about the application layer as well, allowing you to focus on the administration and control of your company. Today there is more focus on platform as a service and SaaS because that is what companies need. They are not experts in all areas and increasingly prefer to outsource. The reason SaaS has been on fire is that organizations are now comfortable using multiple SaaS solutions. I read recently that companies with over 1,000 employees manage more than 100 SaaS applications, spending on average almost half a million dollars on them.

Infrastructure will eventually be commoditized and people will move towards platform as a service and fully managed, serverless SaaS solutions. I have also seen counter-patterns, such as GCP, which started as a platform as a service. Eight years ago, Google Cloud Platform skipped the infrastructure as a service space and went directly to platform as a service with App Engine. But developers want flexibility and want to do things on their own terms; they need that level of access to configure the infrastructure the way they prefer. So Google ended up focusing more on infrastructure as a service, which is where most of its revenue comes from today.

How does the shift to multi-cloud or hybrid change cloud infrastructure?

The future is multi-cloud or hybrid, meaning multiple public clouds, on-prem plus a public cloud, or on-prem plus multi-cloud. Several years ago there was a lot of hype around hybrid, but it has not materialized that much. We felt many companies would choose the best solutions from different clouds and stitch them together in a truly hybrid way, which would offer failover mechanisms and allow them to avoid running all their applications on one cloud. What if that single cloud fails, as happened with S3 a few years ago?

Everything went down, including the service which monitors all these services, because it was also dependent on S3. There are many reasons to choose multi-cloud, but it has not materialized because it is hard to manage. This is why GCP is building tooling around Anthos, to offer a seamless hybrid solution. Multi-cloud is the future and will happen, but currently managing an application across different clouds is very difficult. Multi-cloud is being adopted by large companies such as Disney, who have different divisions running on AWS, GCP and Azure.

If divisions are autonomous, it is possible for a ride-hailing company to run on one cloud while its bikes and scooters unit runs on something else because of better tooling. For the same application, though, companies or divisions are not yet truly multi-cloud. Some of it is also sales muscle: if you sign a good deal with AWS – the more you spend, the higher your discount – you create an incentive to keep spending on one cloud. It ends up more of an 80/20 split than a 60/40 or 50/50.

Why is it technically so difficult to have multi-cloud?

If you deploy an application on machines or clusters, even if you create a load balancer and have multiple clusters across different clouds, managing that is difficult. The APIs and machine operations are different. And if you move data from one cloud to another, there are egress and ingress charges you have to pay.
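To make the egress point concrete, here is a rough back-of-the-envelope sketch in Python; the per-GB rate and the daily volume are illustrative assumptions, not any provider's published pricing.

    # Rough illustration of why cross-cloud data movement adds cost.
    # The per-GB rate below is a placeholder assumption, not a list price.
    EGRESS_USD_PER_GB = 0.09  # hypothetical internet egress rate

    def monthly_egress_cost(gb_per_day: float, rate: float = EGRESS_USD_PER_GB) -> float:
        """Estimate the monthly bill for data leaving one cloud for another."""
        return gb_per_day * 30 * rate

    # An application shipping 500 GB per day of logs or features to another cloud:
    print(f"~${monthly_egress_cost(500):,.0f} per month")  # prints ~$1,350 per month

Even at modest volumes the transfer cost compounds, which is one reason teams tend to keep an application's compute and data in the same cloud.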

Security also plays a role – whether there is a strong interconnect between GCP and AWS or Azure – to ensure secure data transfer. Cloud users do not understand this and are often surprised by the consequences of operating in a multi-cloud environment. Interesting things happen when you choose different point solutions. For example, if I put all my compute in AWS but use BigQuery for data, an interconnect needs to be created to send all my application data to BigQuery.
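As a minimal sketch of that pattern, the snippet below assumes an application whose compute runs in AWS streaming events into BigQuery using Google's Python client; the project, dataset and table names are hypothetical, and the call only works once credentials and network connectivity to GCP are in place.

    # Sketch: compute lives in AWS, analytics data lands in BigQuery on GCP.
    # Every insert below crosses the AWS/GCP boundary over an interconnect or
    # the public internet, with the security and egress implications above.
    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client(project="my-gcp-project")  # hypothetical project

    rows = [
        {"event": "ride_requested", "latency_ms": 42},
        {"event": "ride_matched", "latency_ms": 118},
    ]

    errors = client.insert_rows_json("my-gcp-project.app_events.raw", rows)
    if errors:
        raise RuntimeError(f"BigQuery insert failed: {errors}")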

That is more doable than spreading compute for a single application across different clouds. It is also relatively easy if I am running five different applications, three on one cloud and two on the other, as long as they are independent and do not need to communicate. If they do, it adds latency. This is true even within the same cloud, where you can improve it by deploying in the same zone or region. Across clouds you lose that option, because traffic crosses a boundary and goes over the public internet. Multi-cloud may sound good and have advantages, but the tooling and technical understanding are not there yet.

Do you think this shift to multi-cloud or hybrid is the end game or is it more transitory as companies shift from on-prem to the cloud?

It is a long-term trend, because companies prefer not being locked into one solution or cloud provider. Innovation is happening in many places, and it is possible to put compute and S3 on AWS but use Spanner, or Google's amazing ML and big data solutions. There will be point solutions suited to a company's specific needs. Multi-cloud is a reality and people will choose the best they can find. Cloud providers should enable this by providing the right tooling and interconnects.

Is that what GCP are trying to do with Anthos?

Yes, that is a part of their strategy: to be open and provide a good solution to organizations that do not want to be locked in. It is also practical, because Google understands that people will have different use cases and needs. When the need arises to work across boundaries, like on-prem plus multi-cloud, Google have the tooling and a place where customers can do it. That also works as a good channel to entice new customers who are thinking about it, if Google are first to provide tooling which works well across clouds.

Does that make independent third-party software players, like Datadog and Splunk, more attractive?

Independent solutions are attractive, but that is less about how the cloud providers' native solutions evolve. There is a willingness to pay for key services and to do the extra work of managing another third-party solution rather than staying native, which is why they are doing so well. Datadog has a stock market valuation of $25 billion and Splunk $35 billion. They add a lot of value because they can run on, and monitor, any cloud, whereas native public cloud solutions are built for their own services.

September 2, 2020