I design, scale, and operate cloud-native systems — building reliable, observable platforms that power enterprise products at scale.
I'm a Lead SRE and Platform Engineer based in Sterling Heights, MI, with 8+ years building and operating cloud-native infrastructure across AWS, GCP, Azure, and Kubernetes at enterprise scale.
At Ford Motor Company, I own service reliability architecture for large-scale customer-facing platforms — defining SLOs, SLIs, and error budgets, and leading enterprise-wide DevSecOps transformation across 30+ engineering teams.
I built Ford's centralized Internal Developer Platform using Backstage, standardizing golden paths for CI/CD, secrets management, and service onboarding across the organization. I also designed the enterprise observability stack — full-stack metrics, logs, traces, RUM, and synthetic monitoring using Datadog and Dynatrace.
I'm passionate about developer experience, incident response philosophy, and building platforms that reduce cognitive load — turning operational complexity into reliable, automated systems that teams can trust and own.
I hold a B.S. in Computer Science from Wayne State University.
AWS, Google Cloud Platform, Microsoft Azure
Kubernetes, OpenShift, Docker, ECS, EKS
Terraform, Ansible
GitHub Actions, Jenkins, Tekton, Cloud Build
Datadog, Dynatrace, OpenTelemetry — metrics, logs, traces, RUM, synthetics
IAM, Policy Enforcement, Fossa, SonarQube, Checkmarx
Go, Python, Bash, JavaScript, TypeScript, C#
SLOs, SLIs, Error Budgets, On-call Optimization, Capacity Planning
Redesigned alerting strategies, runbooks, and observability-driven triage workflows, cutting mean time to resolution by 35%.
Led Sev-1 incident mitigation with cross-functional coordination and permanent corrective actions, reducing repeat Sev-1 incidents by 40%.
Led the enterprise VM-to-Kubernetes migration at Ford, accelerating cloud-native adoption by 50% across the organization.
Standardized CI/CD build and release workflows across 30+ engineering teams, reducing deployment inconsistencies.
Designed full-stack observability platforms using Datadog & Dynatrace covering metrics, logs, traces, RUM, and synthetics.
Embedded security controls — IAM, artifact scanning, policy validation — directly into CI/CD pipelines across the enterprise.
Designed and deployed a Backstage-powered IDP at Ford, giving 30+ teams golden path templates for CI/CD, service onboarding, and observability — reducing new service time-to-production from weeks to days.
Led large-scale containerization and migration to AWS ECS and EKS, with standardized pipelines and DevSecOps controls embedded at every stage of the delivery lifecycle.
Built automation for cluster upgrades, incident recovery, and capacity scaling — reducing manual operational effort by 35% and enabling the SRE team to focus on higher-leverage reliability work.
SLOs aren't just metrics — they're a contract with your users. I design reliability systems that translate technical signals into business outcomes, giving teams the data to make confident risk decisions instead of reactive ones.
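To make that contract concrete, here's the back-of-the-envelope arithmetic behind an error budget; the SLO targets below are illustrative examples, not any particular team's.

```python
# Error-budget arithmetic: how much unreliability an SLO target allows.
# Targets below are illustrative examples, not production values.

def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Allowed downtime (in minutes) for a given SLO over a rolling window."""
    window_minutes = window_days * 24 * 60
    return (1.0 - slo_target) * window_minutes

if __name__ == "__main__":
    for slo in (0.99, 0.999, 0.9999):
        print(f"{slo:.2%} SLO over 30 days -> "
              f"{error_budget_minutes(slo):.1f} min of error budget")
```

That single number reframes the conversation: 43 minutes a month at 99.9% is a budget you can spend on launches and experiments, not just a threshold to fear.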
Every manual runbook step, every click-to-deploy, every alert that requires human interpretation is debt. I build platforms that automate the predictable so engineers can focus on the novel.
The best infrastructure work is invisible to developers. I design internal platforms with golden paths — opinionated defaults that make the right thing the easy thing, reducing cognitive load at scale.
Ford's customer-facing platforms lacked unified observability. Teams were operating in silos with inconsistent alerting, no SLO definitions, and MTTR averaging over 45 minutes for Sev-1 incidents.
Designed and implemented a full-stack observability platform using Datadog and Dynatrace — standardizing metrics, structured logging, distributed tracing, RUM, and synthetic monitoring. Rebuilt alerting from noise-based to signal-based using error-budget burn-rate alerts (sketched below). Authored org-wide SLO/SLI frameworks and runbook standards.
Reduced MTTR by 35%. Reduced repeat Sev-1 incidents by 40%. Gave leadership real-time SLO dashboards for business-critical user journeys.
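As a rough illustration of the burn-rate approach above, here's a minimal sketch of the multiwindow pattern from the Google SRE Workbook. The SLO target, thresholds, and windows are illustrative assumptions, and in production the error rates come from monitoring queries rather than hard-coded values.

```python
# Multiwindow burn-rate check (Google SRE Workbook pattern).
# burn_rate = observed error rate / error rate the SLO allows.
# A burn rate of 1.0 consumes exactly the full budget over the SLO window;
# 14.4 sustained for 1 hour consumes ~2% of a 30-day budget in that hour.

SLO_TARGET = 0.999                  # illustrative 99.9% availability SLO
BUDGET_FRACTION = 1.0 - SLO_TARGET  # error rate the SLO allows

def burn_rate(error_rate: float) -> float:
    return error_rate / BUDGET_FRACTION

def should_page(error_rate_1h: float, error_rate_5m: float) -> bool:
    """Page only when both the long and short windows burn fast.

    The short window confirms the problem is still happening, which
    suppresses pages for incidents that have already recovered.
    """
    return burn_rate(error_rate_1h) > 14.4 and burn_rate(error_rate_5m) > 14.4

# Example: a 2% error rate against a 99.9% SLO is a burn rate of ~20 -> page.
print(should_page(error_rate_1h=0.02, error_rate_5m=0.02))  # True
```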
30+ engineering teams at Ford operated with inconsistent CI/CD pipelines, duplicated infrastructure boilerplate, and no standardized service onboarding. New services took weeks to reach production-ready state.
Architected and deployed a centralized Internal Developer Platform using Backstage as the developer portal. Defined golden path templates for service creation, CI/CD pipeline setup, secrets management, and observability integration. Embedded DevSecOps controls — IAM enforcement, artifact scanning with Fossa/SonarQube/Checkmarx, and policy validation — directly into the platform (a gate of this kind is sketched below).
Standardized build and release workflows across 30+ teams. Reduced new service time-to-production from weeks to days. Increased cloud-native adoption by 40%.
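For a flavor of what an embedded gate looks like, here's a hypothetical sketch; the JSON report schema and severity thresholds are invented for illustration, since the real controls consume Fossa, SonarQube, and Checkmarx output through their own integrations.

```python
# Hypothetical CI policy gate: fail the pipeline when a scan report
# exceeds the allowed severity counts. The report schema and thresholds
# are invented for illustration only.
import json
import sys

MAX_ALLOWED = {"critical": 0, "high": 0, "medium": 5}  # illustrative policy

def gate(report_path: str) -> int:
    with open(report_path) as f:
        findings = json.load(f)  # e.g. [{"id": "...", "severity": "high"}, ...]

    counts: dict[str, int] = {}
    for finding in findings:
        sev = finding["severity"].lower()
        counts[sev] = counts.get(sev, 0) + 1

    violations = {sev: n for sev, n in counts.items() if n > MAX_ALLOWED.get(sev, 0)}
    if violations:
        print(f"Policy gate FAILED: {violations}")
        return 1
    print("Policy gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```

A non-zero exit code is all the pipeline needs; the same pattern drops into GitHub Actions, Jenkins, or Tekton steps unchanged.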
Ford's workloads were running on aging VM-based infrastructure with poor resource utilization, slow deployment cycles, and limited scalability for bursty traffic patterns.
Led the enterprise migration from VM-based workloads to OpenShift/Kubernetes. Designed Tekton-based Kubernetes-native CI/CD pipelines to replace legacy Jenkins workflows. Implemented RBAC, network policies, namespace isolation, autoscaling, and quota management. Built automation for cluster upgrades and incident recovery (the cordon step is sketched below).
Accelerated cloud-native adoption by 50%. Achieved 99.9% uptime for critical production workloads. Reduced manual operational effort by 35% through automation.
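Here's a minimal sketch of one piece of that upgrade automation, the cordon step, using the official kubernetes Python client. Selecting nodes via an upgrade=pending label is an assumption for illustration; real automation would also drain pods, verify workload health, and uncordon afterwards.

```python
# Cordon step in a rolling node-upgrade workflow, using the official
# kubernetes Python client. Node selection by label is illustrative.
from kubernetes import client, config

def cordon_node(v1: client.CoreV1Api, node_name: str) -> None:
    """Mark a node unschedulable so new pods land elsewhere before upgrade."""
    v1.patch_node(node_name, {"spec": {"unschedulable": True}})
    print(f"Cordoned {node_name}")

if __name__ == "__main__":
    config.load_kube_config()  # or load_incluster_config() when run in-cluster
    v1 = client.CoreV1Api()
    nodes = v1.list_node(label_selector="upgrade=pending")  # illustrative label
    for node in nodes.items:
        cordon_node(v1, node.metadata.name)
```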
I'm open to senior SRE, platform engineering, and DevSecOps opportunities. Whether you want to discuss a role, a project, or just connect — my inbox is open.