Role: Data Engineer (PySpark)
Location: Sydney
Full-time (Permanent)
Job Description
- Design and develop data pipelines using PySpark for large-scale data processing (a brief illustrative sketch of this kind of work follows this list).
- Build and optimize ETL workflows to integrate data from multiple sources.
- Ensure data quality, accuracy, and consistency across all systems.
- Write efficient PySpark and SQL code for data transformation and analysis.
- Work with data lakes, warehouses, and cloud platforms (AWS, Azure, GCP).
- Collaborate with data scientists and analysts to provide clean, usable datasets.
- Implement data governance, lineage, and validation best practices.
- Automate data workflows using Airflow, Jenkins, or similar orchestration tools (see the second sketch below).
- Monitor and optimize Spark job performance and resource utilization.
- Support production pipelines, troubleshoot issues, and continuously improve data processes.
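For candidates wondering what the day-to-day looks like, here is a minimal sketch of the kind of PySpark ETL described above. It is illustrative only; the bucket paths, dataset name, and column names are hypothetical placeholders, not details of any actual client pipeline.

    # Illustrative PySpark ETL: extract, clean, validate, and load a dataset.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: read raw records from a hypothetical data-lake path.
    raw = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Transform: deduplicate, apply a basic quality gate, derive a column.
    orders = (
        raw.dropDuplicates(["order_id"])                     # enforce key uniqueness
           .filter(F.col("amount").isNotNull())              # drop incomplete rows
           .withColumn("order_date", F.to_date("order_ts"))  # normalise timestamps
    )

    # Validate: fail fast rather than writing an empty curated table.
    if orders.count() == 0:
        raise ValueError("orders ETL produced an empty dataset")

    # Load: write a partitioned, analytics-ready table back to the lake.
    (orders.write
           .mode("overwrite")
           .partitionBy("order_date")
           .parquet("s3://example-bucket/curated/orders/"))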
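And a second sketch showing how such a job might be scheduled with Airflow, one of the orchestration tools named above. Again purely illustrative and assuming Airflow 2.x; the DAG id, schedule, and spark-submit command are hypothetical.

    # Illustrative Airflow DAG: run the PySpark ETL once a day.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="orders_etl_daily",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        run_etl = BashOperator(
            task_id="run_orders_etl",
            bash_command="spark-submit orders_etl.py",
        )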
Interested candidates can share their updated resume at sourabh.sood@carecone.com.au or reach me on +61 251 103 879.