Responsibilities:
Own and improve data quality across multiple pipelines, datasets, and integrations
Design and maintain automated data validation frameworks (freshness, completeness, schema checks, anomaly detection)
Build monitoring and alerting systems to ensure the reliability of ETL/ELT pipelines
Investigate data inconsistencies and work cross-functionally to resolve root causes
Develop and maintain data SLAs, incident response playbooks, and documentation
Partner with Data Engineering, Analytics, and Customer Success teams to ensure the data used by clients is always accurate, timely, and trustworthy
Improve internal tools and workflows related to data ingestion, observability, lineage, and testing
Contribute to the continuous improvement of our data platform and operational excellence
Requirements:
3+ years of experience in Data Quality, DataOps, Analytics Engineering, or Data Engineering
Solid SQL and Python skills
Experience implementing data testing frameworks (e.g., dbt tests, Great Expectations, Soda, or custom tooling)
Strong understanding of ETL/ELT pipelines and data warehousing concepts
Hands-on experience with orchestration tools (Airflow or equivalents)
Experience with AWS cloud services (S3, Lambda, ECS, etc.)
Understanding of schema design, data modeling, and data lineage
Strong analytical mindset and exceptional attention to detail
Excellent written and verbal communication skills
Nice to Have:
Experience with Snowflake and/or ClickHouse
Knowledge of monitoring/observability tools (e.g., Prometheus, Grafana, OpenTelemetry)
Familiarity with event-based architectures and webhook ingestion
Experience supporting ML pipelines from a data reliability standpoint
We Offer:
Flexible schedule and remote work
Opportunity to work on interesting, large-scale projects
Friendly and professional team atmosphere
Competitive salary