
Data Engineer (PySpark)

CareCone Group • In Person

Job Description

Role- Data Engineer (PySpark & Scala)

Type- Permanent

Location- Sydney

Notice Period- Immediate start to 2 weeks

Job Summary:

We are seeking a skilled Data Engineer with 5 to 8 years of experience to join our dynamic software development team. The ideal candidate will have a strong background in data engineering, particularly with PySpark, SQL, and data migration processes.

You will be responsible for designing, implementing, and maintaining data pipelines and ensuring the integrity and availability of data across large systems.

Responsibilities:

Design, develop, and maintain scalable data pipelines using Apache PySpark and SQL.

Collaborate with cross-functional teams to support data migration projects and ensure data quality.

Utilize Azure Databricks and Teradata/Cloudera for data processing and analytics.

Implement DevOps practices to automate data workflows and improve deployment processes.

Monitor and optimize data systems for performance and reliability.

Document data engineering processes and maintain clear communication with stakeholders.

Work independently and as part of a team to meet project deadlines.

Mandatory Skills:

3 to 5 years of experience in data engineering with a focus on PySpark and SQL.

Proficiency in PySpark, Scala, and Azure Databricks, and experience with Teradata or Cloudera.

Hands-on experience with data migration projects.

Strong understanding of large systems architecture and data flow.

Excellent oral and written communication skills.

Interested candidates can share their resume at sakshi.tyagi@carecone.com.au
