
Data Engineer

Xebia • 🌐 In Person

Posted 6 days, 5 hours ago

Job Description

About us

For more than 20 years, our global network of passionate technologists and pioneering craftspeople has delivered cutting-edge technology and game-changing consulting to companies on the brink of AI-driven digital transformation. Since 2001, we have grown into a full-service digital consulting company with 5,500+ professionals working on a worldwide ambition.

Driven by the desire to make a difference, we keep innovating, fuelling the growth of our company with our knowledge-worker culture. When teaming up with Xebia, expect in-depth expertise based on an authentic, value-led, and high-quality way of working that inspires all we do. At Xebia, we put ‘People First’—committed to attracting diverse talent and fostering an inclusive, respectful workplace where everyone is valued for their contributions. We welcome all individuals and evaluate solely on the quality of their work and teamwork.

About the Role

As a Data Engineer at Xebia, you will partner with engineering, product, and data teams to design and deliver scalable, high-performance data solutions. You will build and optimize data platforms and pipelines, implement cloud-native architectures, and mentor junior engineers while promoting engineering excellence across teams.

Responsibilities

Deliver scalable and robust data systems for clients worldwide.

Engineer data platforms with a strong focus on performance, security, and reliability.

Design and develop data processing pipelines (batch and streaming).

Integrate heterogeneous data sources and optimize processing workflows.

Collaborate with Platform & Reliability engineers to build production-grade infrastructure.

Work closely with analysts to enhance data accessibility and insights.

Onboard and mentor new engineers within client and internal teams.

What you bring

6+ years of experience in a senior data engineering or backend data role.

Strong proficiency in Python with hands-on experience building data processing pipelines.

Expertise with GCP services, especially GCS, Cloud Run, GKE, BigQuery, and Dataflow.

Advanced SQL skills and solid understanding of relational and NoSQL databases.

Experience with data orchestration/workflow tools (Nextflow, Airflow, etc.).

Strong experience building cloud-native data solutions, particularly on GCP.

Proficiency in Docker and experience deploying workloads on Kubernetes.

Experience with version control (Git) and collaborative engineering workflows.

Excellent English communication (written and spoken).

Ability to work effectively within distributed teams and drive technical alignment.

Nice to have

Knowledge of Terraform for infrastructure as code (IaC).

Experience with Power BI or similar BI tools.

Understanding of ML operationalization (e.g., running models in Docker/Kubernetes).

Basic exposure to Langfuse for observability in LLM-related pipelines.
