We are looking for an experienced Data Engineer with strong expertise in Azure Databricks and PySpark to build large-scale data platforms and modern data engineering solutions.
Key Responsibilities
Design and implement scalable data engineering solutions using Azure Databricks.
Build and manage data processing pipelines for large-scale data transformation.
Develop efficient PySpark and SQL scripts for data processing and analytics.
Work with Azure Data Lake, Azure Synapse, and other Azure data services.
Manage and process large datasets, ensuring data quality and governance.
Implement version control and CI/CD practices for data pipelines using tools such as Git.
Collaborate with cross-functional teams to deliver data-driven insights.
Required Skills
Strong experience in Data Engineering
Hands-on expertise with Azure Databricks
Proficiency in PySpark and advanced SQL
Experience with Azure Data Lake, Azure Synapse, and Azure ecosystem services
Familiarity with Apache Spark, Hadoop, and big data technologies
Experience with Git and CI/CD practices
Pay: $94,111.56 – $150,901.49 per year
Work Location: Hybrid remote in Sydney NSW