Delivery Hero

Data Engineer II

Role

Data Engineer II

Job type

Full-time

Posted

16 hours ago

Salary

Not disclosed by employer

Job description

About the Role

We’re looking for a Data Engineer who’s passionate about building reliable, scalable, and cost-efficient data systems. You’ll work with a modern stack, including Kafka, Google Cloud Platform (GCP), and AWS, to design and maintain the pipelines that power analytics, machine learning, and product insights.

This is an ideal role for someone with solid foundational skills in data engineering who’s ready to deepen their expertise, take ownership of workflows, and collaborate across teams.

If you don’t know every tool in our stack yet, that’s okay. We value curiosity, problem-solving, and a willingness to learn just as much as existing technical skills.

What's On Your Plate? 

  • Design, build, and maintain data pipelines and workflows for batch and streaming use cases.

  • Work with Kafka to manage real-time data ingestion and event-driven architectures.

  • Leverage GCP and AWS services for storage, processing, and orchestration (e.g., BigQuery, Dataflow, S3, Lambda).

  • Orchestrate workflows using tools like Airflow or similar schedulers.

  • Ensure data quality and reliability through monitoring, alerting, and automated validation.

  • Collaborate with analysts, data scientists, and product teams to understand requirements and deliver data solutions that drive business impact.

  • Optimize for cost and performance across cloud environments.

  • Participate in code reviews, documentation, and knowledge sharing to raise the bar for the team.
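To give a concrete flavor of the automated-validation responsibility above, here is a minimal sketch of a data-quality check as it might run inside a pipeline. It is illustrative only; the field names (`order_id`, `amount`) and the alert threshold are hypothetical, not part of this role’s actual codebase:

```python
from dataclasses import dataclass


@dataclass
class ValidationResult:
    """Summary of a batch-level data-quality check."""
    total: int
    invalid: int

    @property
    def invalid_ratio(self) -> float:
        return self.invalid / self.total if self.total else 0.0


def validate_orders(records, max_invalid_ratio=0.05):
    """Count records missing required fields and decide whether to alert.

    `order_id` and `amount` are hypothetical field names used for
    illustration; a real pipeline would validate its own schema.
    """
    invalid = sum(
        1
        for r in records
        if r.get("order_id") is None
        or not isinstance(r.get("amount"), (int, float))
    )
    result = ValidationResult(total=len(records), invalid=invalid)
    # Alert when the share of bad records exceeds the tolerated threshold.
    should_alert = result.invalid_ratio > max_invalid_ratio
    return result, should_alert
```

In practice a check like this would be wired into an orchestrator task, with the alert routed to monitoring tooling rather than returned to the caller.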


Our Tech Stack

  • Data Ingestion & Streaming: Apache Kafka, Kafka Connect

  • Cloud Platforms: Google Cloud Platform (BigQuery, Dataflow, Pub/Sub, Cloud Storage), AWS (S3, Lambda, Glue)

  • Workflow Orchestration: Apache Airflow

  • Programming Languages: Python, SQL (bonus: Java/Scala)

  • Infrastructure & DevOps: Terraform, CI/CD pipelines, Docker

  • Monitoring & Observability: Grafana, Prometheus, Cloud-native tools

What Did We Order?

  • Experience (1-3 years) in data engineering, software engineering, or a related field.

  • Proficiency in SQL and at least one programming language (Python preferred).

  • Understanding of data modeling, ETL/ELT concepts, and cloud-based data warehouses.

  • Familiarity with streaming platforms (Kafka, Kinesis, or similar).

  • Comfort working in cloud environments (GCP, AWS, or Azure).

  • Strong communication skills, with the ability to explain technical concepts to non-technical audiences.

  • A growth mindset: eager to learn, adapt, and take on new challenges.

Nice-to-Have (Not Required, as Long as You’re Willing to Learn)

  • Experience with infrastructure-as-code (Terraform, CloudFormation).

  • Exposure to containerization (Docker, Kubernetes).

  • Knowledge of data governance, security, and compliance best practices.

Why You’ll Love Working Here

  • Impact: Your work will directly influence how data powers decisions across the company.

  • Learning culture: We invest in your growth — from mentorship to training budgets.

  • Modern stack: Work with cutting-edge tools and cloud platforms.

  • Collaboration: Partner with talented engineers, analysts, and product managers.

  • Flexibility: We care about outcomes, not where you work from.

Our Hiring Philosophy

We know that a great data engineer isn’t defined by checking every box. If you’re excited about data engineering, have a solid foundation, and are eager to grow, we want to hear from you.
