Contract Duration: 6 months, with a view to extension
Location: Brussels (1 day per week on-site)
Rate: €850 per day, depending on experience
The Role
We are looking for a skilled Data Engineer to design, build, and optimize enterprise-scale data pipelines and cloud platforms. You will translate business and AI/ML requirements into robust, scalable solutions while collaborating across multi-disciplinary teams and external vendors.
As a key member of the data architecture team, you will:
Build and orchestrate data pipelines across Snowflake and AWS environments.
Apply data modeling, warehousing, and architecture principles (Kimball/Inmon).
Develop pipelines in Python, Spark, and SQL; integrate APIs for seamless workflows.
Support Machine Learning and AI initiatives, including NLP, Computer Vision, Time Series, and LLMs.
Implement MLOps, CI/CD pipelines, data testing, and quality frameworks.
Act as an AI super-user, applying prompt engineering and creating AI artifacts.
Work independently while providing clear justification for technical decisions.
Key Skills & Experience
Strong experience in data pipeline development and orchestration.
Proficiency with cloud data platforms (Snowflake, AWS fundamentals).
Solid understanding of data architecture, warehousing, and modeling.
Programming expertise: Python, Spark, SQL, API integration.
Knowledge of ML/AI frameworks, MLOps, and advanced analytics concepts.
Experience with CI/CD, data testing frameworks, and versioning strategies.
Ability to work effectively in multi-team, vendor-integrated environments.