Build and maintain scalable data pipelines and ETL processes in cloud environments (AWS, Azure).
Integrate data from multiple sources to support analytics, business intelligence, and ICS2/SSA project needs.
Ensure data quality, security, and compliance with project and EU requirements.
Work with data scientists and analysts to deliver clean, well-structured datasets.
Optimize data storage and retrieval for performance and cost efficiency on cloud platforms.
Support the design, implementation, and daily operations of the data and metadata processing parts of the cloud solution.
Ensure that all stakeholders can access data securely and efficiently according to their roles.
Provide support and training on data processing tools and workflows.
Develop and implement data cleansing, data preparation, and other data processing solutions using project tools.
Contribute to data modelling, interface definitions, and technical documentation.
Support the design and use of the SSA platform, focusing on data integration, storage, accessibility, processing, and security.
Participate in analytics use cases, testing scenarios, and platform tool deployment.
Document operational procedures and technical specifications (interfaces, data models, etc.).
Design or support the design of data processing algorithms for specific use cases.
KNOWLEDGE AND SKILLS:
Strong knowledge of Kubernetes, Docker, Cloudera, Spark, Kafka, microservices, relational DBMS, REST APIs, AWS, and Azure.
Excellent understanding of ETL processes and data quality best practices.
Ability to take on responsibilities quickly, even with limited documentation.
Strong communication skills for both technical and non‑technical audiences.
Ability to produce clear, structured technical documentation.
Strong analytical and problem‑solving skills.
Ability to work in fast‑changing big‑data environments.
Level: Intermediate
Delivery mode: Near Site (Brussels, Belgium)