Partner Interview
Published December 12, 2023

Datadog, Snowflake, & DevOps Tooling: A Customer's Workflow

Executive Bio

Former Head of DevOps at Capital One

Interview Transcript

Disclaimer: This interview is for informational purposes only and should not be relied upon as a basis for investment decisions. In Practise is an independent publisher and all opinions expressed by guests are solely their own opinions and do not reflect the opinion of In Practise.


One aspect of your profile that caught my attention is your involvement with Capital One Software. I've had numerous conversations with individuals from Capital One's technology and IT departments. However, could you clarify what Capital One Software is in this context?

In the process, we developed numerous tools for internal use. Capital One Software was our attempt to commercialize many of those products. We had built the tools for our regulated environment, and we wondered whether they would be valuable to the public and whether we could monetize them. Capital One Software was established with that goal in mind. Our first launch was a data tool called Slingshot, which grew out of a suite of backend tools we used internally. We devised a go-to-market strategy for these tools, and that's what Capital One Software is all about - bringing those internal tools to market.


Let's discuss that toolchain. Could you walk me through the components you managed? This could range from development tools to operations and observability. What was under your purview?

I was in charge of the DevOps team and responsible for building it out. We built most of this internally. When we decided to become a software engineering firm, we didn't turn to vendors; we even had an explicit no-vendor policy. We hired engineers and built everything in-house. We did reuse and modify some things, like our secret management tool, Chamber of Secrets, and some HashiCorp tooling was integrated into the pipeline. A lot of the pipeline was defined as DSL in Jenkins. It was a home-built application, tailored to our specific needs. We didn't seek out tooling; in fact, we were explicitly directed not to. We were told to build it internally to foster our behavior as an engineering shop. I wanted to clarify that from the start. Does that resonate with you?
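To make that workflow concrete, here is a minimal sketch of a pipeline step fetching a credential from a HashiCorp Vault KV v2 store using the open-source hvac Python client. The interview names HashiCorp and the internal Chamber of Secrets tool but no specific APIs, so the mount point, secret path, and key below are hypothetical.

```python
# Minimal sketch: a CI/CD pipeline step pulling a secret from HashiCorp Vault.
# The mount point, secret path, and key are illustrative; the interview does
# not describe the actual Vault layout used at Capital One.
import os

import hvac  # open-source HashiCorp Vault client: pip install hvac


def fetch_db_password() -> str:
    # Authenticate with a token injected by the CI system (illustrative).
    client = hvac.Client(
        url=os.environ["VAULT_ADDR"],
        token=os.environ["VAULT_TOKEN"],
    )
    # Read from a KV v2 secrets engine; returns the latest version.
    secret = client.secrets.kv.v2.read_secret_version(
        path="ci/deploy-credentials",  # hypothetical path
        mount_point="secret",
    )
    # KV v2 responses nest the payload under data -> data.
    return secret["data"]["data"]["db_password"]  # hypothetical key


if __name__ == "__main__":
    password = fetch_db_password()
    print("fetched secret of length", len(password))
```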


What did the DevOps pipeline look like overall when you left? Which technologies were used for source code management, CI/CD, observability, and so on?

For monitoring and alerting, we went through a series of options in the organization and ultimately settled on Datadog. We used New Relic for a while, which gave us strong application monitoring capabilities. However, Datadog offered the consistency we needed for container visibility. As we transitioned from a predominantly server-based to a more containerized and serverless environment, we found Datadog to be the best fit for monitoring and alerting. We also integrated it with PagerDuty and Slack for our notification services.
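As an illustration of that wiring, here is a hedged sketch of creating a Datadog metric monitor whose alert message routes to PagerDuty and Slack through Datadog's @-handle integrations, using the official datadog Python client. The query, threshold, and handle names are assumptions for illustration; the interview confirms only that Datadog was integrated with PagerDuty and Slack.

```python
# Sketch: creating a Datadog monitor that notifies PagerDuty and Slack.
# The query, threshold, and @-handles are illustrative, not Capital One's.
from datadog import initialize, api  # official client: pip install datadog

initialize(
    api_key="YOUR_DATADOG_API_KEY",  # placeholder credentials
    app_key="YOUR_DATADOG_APP_KEY",
)

# Alert when average container CPU for a hypothetical service exceeds
# 90% over the last five minutes.
api.Monitor.create(
    type="metric alert",
    query="avg(last_5m):avg:docker.cpu.usage{service:checkout} > 90",
    name="High container CPU on checkout service",
    # Datadog expands @pagerduty-... and @slack-... handles into pages and
    # channel posts, assuming those integrations are configured.
    message=(
        "Container CPU has been above 90% for 5 minutes. "
        "@pagerduty-Checkout-Oncall @slack-ops-alerts"
    ),
    options={"thresholds": {"critical": 90}},
)
```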


