Data Engineer with 4 years building reliable data pipelines, warehouses, and analytics infrastructure at scale. I bridge the gap between raw data and business decisions — working closely with analytics, ML, and product teams.
I write pipelines like I write code — modular, tested, and documented. If an analyst can't trust the data, the pipeline doesn't matter. Observability and data quality checks are first-class citizens, not afterthoughts.
I build and maintain data infrastructure at scale, working primarily with Python, Airflow, dbt, and Snowflake.
I started in analytics at a mid-size e-commerce company and kept getting pulled into the infrastructure side — fixing broken pipelines, rebuilding unreliable ingestion jobs, and wondering why the data was always wrong. Eventually I made it official. I've worked across retail, fintech, and SaaS, and I've learned that good data engineering is mostly about trust — making sure downstream teams can rely on what you ship.
418 contributions in 2026 · longest streak: 11 days
Topics
data engineering (3)
data warehousing (4)
databases (4)
tools & devops (4)
data quality (5)
data & ML (2)
AI dev tools (3)
CLI tool for scaffolding production-ready Apache Airflow DAGs from YAML specs, with built-in retry logic, SLA alerting, and data quality checks via Great Expectations.
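A minimal sketch of the kind of DAG the scaffolder emits, assuming the YAML spec maps roughly onto Airflow's `default_args`; the DAG id, task names, and field mapping here are illustrative, not the tool's actual schema:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

# Defaults the scaffolder could derive from the YAML spec:
# retries, retry backoff, and a task-level SLA for alerting.
default_args = {
    "owner": "data-platform",
    "retries": 3,
    "retry_delay": timedelta(minutes=5),
    "sla": timedelta(hours=1),  # missed SLAs trigger Airflow's SLA-miss alerts
}

def run_quality_checks():
    """Placeholder for the Great Expectations validation step."""

with DAG(
    dag_id="orders_daily",  # illustrative name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    default_args=default_args,
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=lambda: None)
    validate = PythonOperator(task_id="validate", python_callable=run_quality_checks)
    extract >> validate
```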
Reference architecture for real-time financial event processing using Kafka and Flink, with a live React dashboard. Built to explore sub-second streaming pipelines.
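The Flink job itself is more than a snippet, but as a rough single-process stand-in, this is the shape of the windowed aggregation over a Kafka topic; the topic name, broker address, and event fields are assumptions, not the repo's actual schema:

```python
import json
from collections import defaultdict

from kafka import KafkaConsumer  # kafka-python

# One-second tumbling-window sum of trade amounts per symbol,
# standing in for what the Flink job computes with real windowing,
# watermarks, and parallelism.
consumer = KafkaConsumer(
    "trades",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

windows: dict[tuple[str, int], float] = defaultdict(float)
for msg in consumer:
    event = msg.value
    window_start = event["ts_ms"] // 1000  # bucket by 1-second window
    windows[(event["symbol"], window_start)] += event["amount"]
```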
A collection of dbt macros for automated audit logging, freshness checks, and row-count reconciliation across Snowflake and BigQuery models. (2025)
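The macros themselves are SQL/Jinja; as a plain-Python illustration of the row-count reconciliation idea (connection details and table names are placeholders):

```python
import snowflake.connector
from google.cloud import bigquery

def snowflake_count(table: str) -> int:
    # Placeholder credentials; a real deployment would use a secrets manager.
    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="...", database="ANALYTICS"
    )
    try:
        cur = conn.cursor()
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]
    finally:
        conn.close()

def bigquery_count(table: str) -> int:
    client = bigquery.Client()
    rows = client.query(f"SELECT COUNT(*) AS n FROM `{table}`").result()
    return next(iter(rows)).n

sf, bq = snowflake_count("PUBLIC.ORDERS"), bigquery_count("analytics.orders")
if sf != bq:
    print(f"Row-count mismatch: Snowflake={sf:,} vs BigQuery={bq:,}")
```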
Python library for CDC-aware incremental data loading into Snowflake — supports PostgreSQL, MySQL, S3, and REST API sources with automatic schema evolution. (2025)
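A minimal sketch of the core loop, assuming a watermark column and a hypothetical source_rows() helper; none of these names are the library's real API, and a production version would validate identifiers, use MERGE, and handle deletes:

```python
import snowflake.connector

def source_rows(since):
    """Hypothetical helper: yields changed rows (as dicts) from the CDC source."""
    yield from []

conn = snowflake.connector.connect(account="...", user="...", password="...")
cur = conn.cursor()

# 1. High watermark: the newest change already loaded into the target.
cur.execute("SELECT COALESCE(MAX(updated_at), '1970-01-01') FROM raw.orders")
watermark = cur.fetchone()[0]

# 2. Current target schema, so new source columns can be added on the fly.
cur.execute(
    "SELECT column_name FROM information_schema.columns "
    "WHERE table_schema = 'RAW' AND table_name = 'ORDERS'"
)
known = {c[0].lower() for c in cur.fetchall()}

for row in source_rows(since=watermark):
    # 3. Naive schema evolution: add any column the target lacks.
    for col in row.keys() - known:
        cur.execute(f"ALTER TABLE raw.orders ADD COLUMN {col} VARCHAR")
        known.add(col)
    # 4. Load only the changed rows (a real version would MERGE on a key).
    cols = ", ".join(row)
    params = ", ".join(["%s"] * len(row))
    cur.execute(f"INSERT INTO raw.orders ({cols}) VALUES ({params})", list(row.values()))
```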
Headless metrics store built on dbt and Snowflake — define business metrics once in YAML and query them via REST API or a lightweight React explorer. Eliminates duplicated metric logic across BI tools. (2026)
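A sketch of the "define once, query anywhere" idea: a YAML metric definition compiled to SQL and served over REST. The metric fields and the FastAPI route are illustrative, not the project's actual API:

```python
import yaml
from fastapi import FastAPI

# A single metric defined once in YAML (hypothetical schema).
METRICS_YAML = """
revenue:
  table: analytics.orders
  expression: SUM(amount)
  dimensions: [order_date, region]
"""

METRICS = yaml.safe_load(METRICS_YAML)
app = FastAPI()

def compile_sql(name: str, group_by: str) -> str:
    """Compile a metric definition into a GROUP BY query."""
    m = METRICS[name]
    return (
        f"SELECT {group_by}, {m['expression']} AS {name} "
        f"FROM {m['table']} GROUP BY {group_by}"
    )

@app.get("/metrics/{name}")
def get_metric(name: str, group_by: str = "order_date"):
    # A real implementation would validate group_by against the declared
    # dimensions and execute the query against Snowflake.
    return {"sql": compile_sql(name, group_by)}
```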
Meridian Analytics — Seattle, WA
Senior data engineer on a 6-person data platform team at a Series B fintech company. Own core ingestion pipelines, the Snowflake data warehouse, and data quality infrastructure used by 30+ analysts and 3 ML engineers.
Novu Commerce — Portland, OR
Data engineer on a two-person data team at a mid-size e-commerce company. Built and maintained pipelines for marketing, finance, and operations analytics.
University of Washington — Seattle
B.S. Information Systems
Sep 2016 – Jun 2020
Udacity
Data Engineering Nanodegree
Oct 2020 – Mar 2021
Coursera — Google Cloud
Online Specialization: Data Engineering with Google Cloud
Apr 2022 – Aug 2022
Amazon Web Services
dbt Labs
Snowflake
Google Cloud
Databricks
I'm currently open to senior data engineering roles at data-driven companies. If you're looking for someone who can build reliable pipelines, improve data quality, and work closely with analytics and ML teams — feel free to reach out.
Location
Seattle, WA