Job Title: Data Engineer – Azure & Databricks
Location: Sydney/Melbourne
About the Role
We are looking for an experienced Data Engineer with strong expertise in Microsoft Azure and Databricks to design, develop, and optimize data solutions. You will be responsible for building scalable data pipelines, managing big data environments, and enabling advanced analytics for business insights.
Key Responsibilities
- Design and implement data pipelines using Azure Data Factory, Databricks, and related services.
- Develop and optimize ETL/ELT workflows for structured and unstructured data.
- Build and maintain data lakes and data warehouses on Azure.
- Implement data governance, security, and compliance standards.
- Collaborate with data scientists and analysts to deliver high-quality datasets for analytics and machine learning.
- Monitor and troubleshoot data workflows for performance and reliability.
- Work with Spark and PySpark for big data processing in Databricks.
- Integrate data from multiple sources, including APIs, streaming data, and on-prem systems.
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Hands-on experience with Azure services: Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage, Azure SQL Database.
- Strong experience with Databricks, Apache Spark, and PySpark.
- Proficiency in Python and SQL for data engineering tasks.
- Knowledge of data modeling, data governance, and security best practices.
- Familiarity with CI/CD pipelines and DevOps practices for data solutions.
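The Python and SQL proficiency listed above is exercised even in small daily tasks. A minimal, self-contained sketch using Python's built-in sqlite3 module (hypothetical table and column names; on the job this would be Azure SQL Database or Synapse rather than SQLite):

```python
import sqlite3

# Hypothetical staging table: load raw rows with Python, aggregate with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_date TEXT, store TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [
        ("2024-01-01", "A", 10.0),
        ("2024-01-01", "B", 5.0),
        ("2024-01-02", "A", None),  # bad record, filtered out below
    ],
)

# Cleanse (drop NULL amounts) and aggregate per day — a tiny ELT step.
daily = conn.execute(
    """
    SELECT order_date, SUM(amount) AS total_amount
    FROM orders
    WHERE amount IS NOT NULL
    GROUP BY order_date
    ORDER BY order_date
    """
).fetchall()
print(daily)  # [('2024-01-01', 15.0)]
```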
Preferred Qualifications
- Experience with real-time data streaming (Azure Event Hubs, Kafka).
- Knowledge of Delta Lake architecture and Lakehouse concepts.
- Familiarity with Power BI or other visualization tools.
- Understanding of machine learning workflows in Databricks.