Our client is working to revolutionize restaurants through innovative technology and design. They believe everyone deserves access to better food, better dining experiences, and a little more magic in their day. They are focused on an industry ripe for change: the $900 billion restaurant industry. Their world-class team combines software and hardware engineers, designers, and restaurant operations experts to re-engineer every aspect of the business. They've created a one-of-a-kind product, customer experience, and back-of-house operations platform that empowers and delights customers and operators alike.
They are looking for a Data Engineer to join their growing team of analytics experts. The hire will be responsible for expanding and optimizing the company's data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up.
The Data Engineer will support the company's software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that optimal data delivery architecture remains consistent across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
What You'll Do:
- Design, implement, test, deploy, and operate the data infrastructure and platform
- Develop data pipelines and data infrastructure hands-on to support product development and data scientists
- Develop data set processes for data modeling, mining, and production
- Collaborate with other teams to design architectures to collect the right data
- Recommend ways to improve data reliability, efficiency, and quality
- Collaborate with Product Management and Engineering colleagues on technical vision and design
What You'll Need:
- Experience building and optimizing data pipelines, architectures, and data sets
- Build processes supporting data transformation, data structures, metadata, dependency and workload management
- Working knowledge of message queuing, stream processing, and highly scalable data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift, Athena
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Experience with object-oriented/scripting languages: Python, Java, Scala, etc.
- Experience solving complex canonicalization and standardization problems across diverse data inputs and sources
What You'll Get:
- Competitive Salary
- Equity/Stock Options
- Health, vision, and dental insurance coverage
- Life insurance, short-term disability, and long-term disability coverage
- Pre-tax Commuter, Healthcare and Dependent Care Benefits
- Flexible PTO + 10 Company Paid Holidays
- Pet Insurance
- ARC Fertility Program
- Regular company outings and events