About This Role
Join our data team to build and maintain scalable data pipelines and infrastructure. You'll work with cutting-edge technologies to transform raw data into actionable insights.
Responsibilities
- Design, build, and maintain efficient data pipelines
- Develop ETL processes to extract, transform, and load data from various sources
- Optimize data delivery and design scalable data models
- Implement data quality monitoring and validation
- Collaborate with data scientists and analysts to understand data requirements
- Maintain documentation for data architecture and processes
Requirements
- 3+ years of experience in data engineering
- Proficiency in Python and SQL
- Experience with Apache Spark, Airflow, or similar technologies
- Knowledge of cloud platforms (AWS, GCP, or Azure)
- Experience with data warehousing solutions (Snowflake, Redshift, BigQuery)
- Strong problem-solving and analytical skills
Benefits
- Competitive compensation package
- Remote-first culture
- Health and wellness programs
- Learning and development budget
- Stock options
- Unlimited PTO
Interested in this role?
Submit your application and our team will review your profile within 48 hours.