Current manager of Identity; previously led Customer & Client Data and Auth
Integrated Experian data into Rokt's session pipeline (~6K RPS), boosting data quality and ML precision and contributing to a broader ~6% YoY lift in value per transaction
Raised Identity's operational excellence by standardizing on-call rotations, introducing DRIs and swimlanes, and aligning with VP-level stakeholders, reducing on-call pages by 90% (50+ to ~5 per week)
Migrated from a fragile, homegrown JWT system to Auth0, enabling SSO, MFA, and faster client onboarding; leveraged a SEV-1 incident to align 80+ engineers across all teams on a modern architecture, reducing incidents and saving ~$2M/year in engineering capacity
Software Engineer
February 2021
—
January 2023
Google
Engineering Productivity (EngProd) in Google Cloud Platform
Conceived and built a Go load-testing tool that triggers three-day mass-creation events of 50M encryption keys for regular system capacity verification, helping unblock a Google Cloud deal valued at $1.1B
Instituted an integration-testing suite that decreased load-test complexity by 90% per project, cutting required files from 40+ to four and average line count from 4,324 to 430
Co-orchestrated a 253-team pilot of a code-quality framework enforcing best-practice metrics, leading to adoption across all 3,000+ teams in Google Cloud
Senior Software Engineer
August 2019
—
January 2021
Foundry.ai
Architected a multi-tier data lake with AWS S3, Glue, Athena/Presto, and Terraform, cutting data scientists' toil for ML model development from over a week to under 10 minutes
Built a serverless ETL pipeline with AWS Glue, Spark, and Delta Lake, reducing data latency from four days of manual work to 30 minutes and the associated monthly cloud costs from over $20,000 to $3,000
Senior Data Engineer
July 2017
—
August 2019
Capital One Financial Corporation
Spearheaded productionization of a new microservice-based credit-review product that met complex internal quality standards and enabled production deployment of changes within 20 minutes
Engineered a Spark data-detection application to replace an underperforming vendor product, increasing daily throughput from 1.5 TB to 30 TB while cutting average cost per TB from $532 to $2.88
Redesigned AWS infrastructure for a critical data-scanning stack, adding automatic scaling and cutting disaster-recovery time and point objectives from three hours to under 30 seconds
Education
The Georgia Institute of Technology
August 2019
—
May 2022
Master of Science in Computer Science
The University of North Carolina at Chapel Hill
August 2013
—
May 2017
Bachelor of Science in Computer Science & Economics
Publications
Container assembly with electronic transaction component