Role: Data Engineer (Azure Databricks)
Location: Melbourne
Job Description:
- Design and develop scalable data pipelines using Azure Databricks, Delta Lake, and PySpark.
- Build and optimize ETL/ELT workflows across Azure Data Lake Storage (ADLS) and other data sources.
- Integrate Databricks with Azure services such as Azure Data Factory, Azure Synapse, ADLS, Key Vault, and Event Hub.
- Develop and maintain PySpark notebooks, jobs, and workflows for batch and streaming data.
- Ensure data quality, reliability, and governance, including schema enforcement and validation.
- Monitor and optimize Databricks clusters for cost efficiency and performance.
- Implement CI/CD pipelines for Databricks workflows using Azure DevOps or GitHub Actions.
Interested candidates can share their resume at shalini.tomar@carecone.com.au or reach me on +61 2 83195549.