Interview Transcript

This is a snippet of the transcript, sign up to read more.

One aspect of your profile that caught my attention is your involvement with Capital One Software. I've had numerous conversations with individuals from Capital One's technology and IT departments. However, could you clarify what Capital One Software is in this context?

In the process, we developed numerous tools for internal use. Capital One Software was our attempt to commercialize many of these products. We had built these tools for our regulated environment, and we wondered whether there was value in selling them to the public and whether we could monetize them. Capital One Software was established with that goal in mind. Our first product, a data tool called Slingshot, was developed from a suite of backend tools we used internally. We devised a go-to-market strategy for these tools, and that's what Capital One Software is all about: bringing these tools to market.

Let's discuss that toolchain. Could you guide me through the components of the toolchain you managed? This could range from development tools to operations and observability. What was under your purview in that toolchain?

I was in charge of the DevOps team and responsible for building it out. We built most of this internally. When we decided to become a software engineering firm, we didn't turn to vendors; we even had an explicit no-vendor policy. We hired engineers and built everything in-house. We did reuse and modify some things, like our secrets management tool, Chamber of Secrets, and some HashiCorp tooling was integrated into the pipeline. A lot of DSL was set up in Jenkins. It was a home-built application, tailored to our specific needs. We didn't seek out vendor tooling; in fact, we were explicitly directed not to. We were told to build this internally to foster our behavior as an engineering shop. I wanted to clarify that from the start. Does that resonate with you?
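To make the "declare jobs as DSL, then execute them" idea concrete, here is a minimal sketch of that pattern in Python. This is a hypothetical illustration of the approach the speaker describes, not Capital One's actual Jenkins DSL; the stage names and `run_pipeline` helper are invented for the example.

```python
# Hypothetical sketch of a declarative pipeline DSL: stages are declared
# as data, then executed in order, the way a Jenkins DSL setup declares
# jobs. All names and stages here are illustrative, not from the interview.
from typing import Callable, Dict, List


def run_pipeline(stages: List[Dict]) -> List[str]:
    """Execute declared stages in order, stopping at the first failure."""
    results = []
    for stage in stages:
        name, action = stage["name"], stage["action"]
        if not action():  # each action returns True on success
            results.append(f"{name}: FAILED")
            break
        results.append(f"{name}: OK")
    return results


# Example declaration, loosely mirroring build -> test -> deploy stages.
pipeline = [
    {"name": "build",  "action": lambda: True},
    {"name": "test",   "action": lambda: True},
    {"name": "deploy", "action": lambda: True},
]

print(run_pipeline(pipeline))  # ['build: OK', 'test: OK', 'deploy: OK']
```

The point of the pattern is that the pipeline definition is data, so it can be versioned, reviewed, and tailored to the shop's own conventions rather than a vendor's.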

What did the DevOps pipeline look like overall when you left? Which technologies were used for source code management, CI/CD, observability, and so on?

For monitoring and alerting, we went through a series of options in the organization and ultimately settled on Datadog. We used New Relic for a while, which offered strong application monitoring capabilities. However, Datadog gave us the consistency we needed for container visibility. As we transitioned from a predominantly server-based environment to a more containerized and serverless one, we found Datadog to be the best fit for monitoring and alerting. We also integrated it with PagerDuty and Slack for notifications.
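For readers unfamiliar with how Datadog fans alerts out to PagerDuty and Slack: a monitor's message can contain `@pagerduty-...` and `@slack-...` handles, which Datadog expands into notifications when the monitor triggers. Below is a hedged sketch of a monitor payload in that shape; the metric, threshold, and handle names are hypothetical, not taken from the interview.

```python
# Sketch of a Datadog-style metric-alert monitor definition with PagerDuty
# and Slack notification handles embedded in the message. The metric,
# threshold, service, and channel names are illustrative assumptions.

def build_monitor(metric: str, threshold: float,
                  pagerduty_service: str, slack_channel: str) -> dict:
    """Build a monitor definition routed to PagerDuty and Slack."""
    return {
        "name": f"High {metric}",
        "type": "metric alert",
        "query": f"avg(last_5m):avg:{metric}{{*}} > {threshold}",
        # Datadog expands @-handles in the message into notifications.
        "message": (
            f"{metric} is above {threshold}. "
            f"@pagerduty-{pagerduty_service} @slack-{slack_channel}"
        ),
        "options": {"thresholds": {"critical": threshold}},
    }


monitor = build_monitor("system.cpu.user", 90.0, "platform-oncall", "ops-alerts")
print(monitor["query"])  # avg(last_5m):avg:system.cpu.user{*} > 90.0
```

In practice such a payload would be sent to Datadog's monitor-creation API or managed as code via Terraform; the key idea is that routing to PagerDuty and Slack lives in the monitor definition itself.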
