👨🏻‍💻 postech.work

Sr. Data Engineer that loves AI - You'll shape Data Engineering from scratch and build AI solutions on top - Greenfield - Global transformation - Good comp + permanent contract - Hybrid - Amsterdam

Sprint and Partners • 🌐 Remote

Posted 3 days, 10 hours ago

Job Description

If you are a Data Engineer who loves AI, this is it! You either already know AI Engineering, or you'd like to grow toward AI Engineering while still making an impact with your Data Engineering skills. You need to be proactive and able to help shape Data & AI for the new platform. You should know what good looks like.

The role is a combination of greenfield Data Engineering (ETL, pipelines, orchestration, etc.), and on top of that, you will build AI solutions from scratch. There is a proof of concept (POC) ready for you to operationalize that will have a significant commercial impact, but as you know, we first need the data layer to be solid.

This is a newly created role where you will be the first dedicated Data Engineer in the company. We'll hire your right (or left) hand as soon as possible, so you'll get help soon, of course! You will take ownership of designing and building our data infrastructure from the ground up, enabling reliable AI and data-driven solutions across the business.

You'll work in an Engineering department of ~40 Engineers, managed by 5 EMs and an awesome Head of Engineering. Most Engineers work remotely from several locations. We are now setting up a core hub in Amsterdam with the most critical positions for a major transformation that will completely rebuild and modernize all systems to support further global expansion. The product you'll work on is high-load and operates globally.

Key Responsibilities

Build Data Infrastructure: Design and implement the company’s first production-grade data pipelines, workflows, and infrastructure.

AI Integration: Develop the pipelines needed to operationalize an existing AI proof-of-concept and support scalable AI solutions.

Data Engineering: Create robust ETL processes to deliver clean, structured, and production-ready data.

Cloud-Native Development: Work primarily with Python and Azure services (Data Factory, Blob Storage), while helping shape the broader technology stack.

Automation & Orchestration: Implement tools such as Apache Airflow (or similar) for workflow automation and orchestration.

Collaboration: Partner closely with the BI team (data analysts and data scientists) and other stakeholders to ensure high-quality, actionable data outputs.
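To give a flavor of the pipeline work described above, here is a minimal, stdlib-only extract-transform-load sketch. It is purely illustrative: the data, field names, and functions are hypothetical, not the company's actual stack, and in practice this logic would run inside an orchestrated task (e.g. an Airflow DAG) reading from Blob Storage rather than an inline string.

```python
import csv
import io
import json

# Hypothetical raw input standing in for a file landed in Blob Storage.
RAW_CSV = """order_id,amount,currency
1001,19.99,EUR
1002,5.50,EUR
1003,not_a_number,EUR
"""

def extract(raw: str) -> list[dict]:
    """Parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Keep only rows with valid numeric fields; cast types."""
    clean = []
    for row in rows:
        try:
            clean.append({
                "order_id": int(row["order_id"]),
                "amount": float(row["amount"]),
                "currency": row["currency"],
            })
        except ValueError:
            # Drop malformed rows; a real pipeline would log or quarantine them.
            continue
    return clean

def load(rows: list[dict]) -> str:
    """Serialize clean rows; stands in for writing to a warehouse table."""
    return json.dumps(rows)

if __name__ == "__main__":
    print(load(transform(extract(RAW_CSV))))
```

The point of the sketch is the shape of the job, not the code itself: delivering clean, typed, production-ready data that downstream AI and BI work can rely on.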

Requirements

Strong Data Engineering Background: Hands-on experience with building pipelines, ETL processes, and infrastructure.

AI/ML Exposure: Familiarity with integrating and scaling AI/ML features into production environments.

Advanced Python Skills: Ability to write clean, efficient, and maintainable code.

SQL Expertise: Proficiency in working with large datasets and optimizing queries.

Cloud & Orchestration Knowledge: Experience with Azure (or similar cloud platforms), orchestration frameworks, and version control.

Proactive & Independent: Comfortable working with a high level of ownership in a role without pre-existing frameworks or guidance.

Communication Skills: Ability to collaborate effectively with technical and non-technical colleagues.

What We Offer

Solid compensation and benefits, with a permanent contract from the start.

Hybrid work in Amsterdam with flexibility to work across international offices, and an environment that values innovation and practical solutions.

Impactful Role: Opportunity to set up and define the company’s data engineering function from the ground up.

Professional Growth: Support for continuous learning and development.
