Our client in the Education sector is looking for a Data Engineer to join them on a large data uplift project.
Initial contract until October 2026 + strong chance of extension
No set budget, but rate guidance is around $1000 inc. super
3 days on site & 2 days WFH per week
As the Data Engineer, you must have:
3+ years’ experience designing and implementing data pipelines and integrations in complex enterprise environments.
Hands‑on experience with Azure Data Lake, Databricks, DBT Cloud, Postgres, and Dataverse.
Strong understanding of data integration and migration processes, including transformation optimisation, validation, and reconciliation.
Proficiency in SQL, Python, and PySpark for analytics and operational systems.
Experience developing and supporting integrations using Boomi and Kafka.
Familiarity with version control tools such as Git or Azure DevOps.
Demonstrated ability to collaborate effectively with business and technical stakeholders, including Enterprise Architecture, Information Security and Governance Groups.
Strong documentation, communication, and presentation skills for diverse audiences.
Experience working in Agile or hybrid delivery environments.
Not mandatory, but nice to have:
Experience with metadata and governance platforms (e.g., Alation, Collibra, Purview).
Knowledge of API‑based data exchange patterns relevant to modern data integrations.
Familiarity with data governance frameworks and implementation of data quality processes.
Experience embedding change management and version control of data pipelines in large‑scale migration programs.
Exposure to incident response and operational support practices.
If this role is of interest, please apply or send your resume to matt@retained.com.au