Senior Data Engineer

SKL Technology • 🌐 Remote

Posted 6 days ago

Job Description

About the job

SKL Technology is currently searching for a motivated Senior Data Engineer to join one of our clients.

The role involves designing, building, and maintaining scalable data pipelines and real-time data streaming solutions using modern Microsoft data technologies. You will work closely with data architects, engineering teams, and project stakeholders to deliver secure, high-performing data platforms that support analytics, reporting, and AI-driven initiatives.

Location:

Sydney, NSW (Macquarie Park)

Type:

Permanent, Full-time

Citizenship:

Australian citizen or PR

WFH:

Hybrid working environment

Key Responsibilities

Design, develop, and maintain scalable data pipelines using Microsoft Fabric and Azure data services.

Integrate data from multiple sources (databases, APIs, file systems) using streaming and batch approaches.

Build and support real-time data processing solutions using Kafka, Spark Streaming, or Flink.

Monitor, troubleshoot, and optimise data pipelines to ensure performance, reliability, and low latency.

Enforce data quality, governance, security, and privacy standards across data platforms.

Collaborate with data architects and project teams to evolve the data infrastructure.

Provide technical guidance and mentorship to junior data engineers.

Maintain clear technical documentation for data pipelines and processes.

Essential Criteria

Extensive experience as a Senior Data Engineer or similar role in enterprise environments.

Strong hands-on experience with Microsoft Fabric, Azure Data Lake, Azure Data Factory, Azure Synapse Analytics (formerly Azure SQL Data Warehouse), and SQL Server.

Proven experience with real-time data streaming and event-driven architectures.

Strong programming skills in Python and SQL.

Experience working with distributed systems and data architecture principles.

Familiarity with CI/CD pipelines and Azure DevOps.

Excellent communication and problem-solving skills.

Desirable Criteria

Experience with Kafka, Kafka Streams, Apache Spark Streaming, or Apache Flink.

Exposure to machine learning pipelines and AI model integration.

Power BI certification or strong reporting/visualisation experience.

Experience working with microservices-based architectures.

Ability to thrive in fast-paced, evolving data environments.
