Senior Data Platform Engineer

  • Coinbase
  • San Francisco, CA
  • Aug 11, 2018

Job Description


Our vision is to bring more innovation, efficiency, and equality of opportunity to the world by creating an open financial system. Our first step on that journey is making digital currency accessible and approachable for everyone. To achieve that, it is critical to have timely and reliable access to all of our data, from user clicks on our website down to blockchain transactions.


As a Data Platform Engineer, you will build our next generation data platform and accompanying services. Our data pipelines are growing rapidly, currently processing several terabytes of data from production databases and external providers to our data warehouse. We build foundational self-service systems that allow end users to create ETL flows and consume data in batch and streaming fashion for machine learning, fraud prevention, A/B testing and analytics purposes.


What you'll be doing:
  • Data ingestion pipeline: Build our next generation streaming ingestion pipeline for scale (10x data) and speed (<1 …).
  • Self-service transformation engine: Build and maintain our self-service tooling that allows anybody at Coinbase to transform complex JSON and create dimensional models. Specific challenges include supporting type 2 slowly changing dimensions, end-to-end testability, cell-level security, validation/monitoring/alerting, and efficient execution. Today we do this in Apache Airflow.
  • Anomaly detection: Build a comprehensive anomaly detection service that allows anybody at Coinbase to quickly set up notifications in order to detect process breakage.
  • Exhibit our core cultural values: positive energy, clear communication, efficient execution, continuous learning
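For context on the dimensional-modeling work above: a type 2 slowly changing dimension keeps full history by closing out the current row and appending a new version whenever an attribute changes, rather than overwriting in place. A minimal sketch in Python (the schema and field names here are illustrative assumptions, not Coinbase's actual models):

```python
from dataclasses import dataclass, replace
from datetime import date
from typing import List, Optional

@dataclass
class DimRow:
    """One version of a dimension record (illustrative schema)."""
    user_id: int
    country: str
    valid_from: date
    valid_to: Optional[date]  # None means "current version"

def apply_scd2_update(history: List[DimRow], user_id: int,
                      new_country: str, as_of: date) -> List[DimRow]:
    """Type 2 update: close the current row, then append a new version."""
    updated = []
    for row in history:
        if row.user_id == user_id and row.valid_to is None:
            if row.country == new_country:
                return history  # attribute unchanged: keep history as-is
            row = replace(row, valid_to=as_of)  # close the current version
        updated.append(row)
    # Append the new current version with an open-ended validity window.
    updated.append(DimRow(user_id, new_country, as_of, None))
    return updated
```

Each update preserves the prior row with its validity window intact, which is what makes point-in-time queries ("what was this user's country last March?") possible.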
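The anomaly detection bullet describes flagging process breakage in metric streams. One common baseline for this kind of check is a rolling z-score: flag any point that deviates too far from the recent mean. A hedged sketch (the window size and threshold are illustrative assumptions, not a description of Coinbase's service):

```python
import statistics

def zscore_anomalies(values, window=24, threshold=3.0):
    """Return indices whose value lies more than `threshold` standard
    deviations from the mean of the preceding `window` points.
    Illustrative baseline only; window/threshold are assumptions."""
    flagged = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        # Skip flat history (stdev == 0) to avoid division by zero.
        if stdev > 0 and abs(values[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged
```

In a real pipeline this check would run per metric with alert routing on top; seasonality-aware models usually replace the plain z-score once traffic patterns matter.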
What we look for in you:
  • Experience building (data) backend systems at scale with parallel/distributed compute
  • Experience building microservices
  • Experience with Python and/or Java/Scala
  • Knowledge of SQL
  • A data-oriented mindset
Preferred (not required):
  • Computer Science or related engineering degree
  • Deep knowledge of Apache Airflow, Spark, Hadoop, Hive, Kafka/Kinesis
What to send
  • A resume that describes scalable systems you've built