
Data Engineer

J RAM IT Consulting • 🌐 In Person

Posted 4 days, 3 hours ago

Job Description

We are seeking a Data Engineer with 7 to 10 years of experience to design and optimize scalable big data solutions using Databricks and Apache Spark across cloud platforms (Azure, AWS, or GCP). You will build high-performance ETL pipelines, manage large-scale data workflows, and ensure data quality, security, and cost efficiency.

Key Responsibilities

Design, build, and optimize Databricks ETL pipelines.

Develop scalable Spark-based ingestion and transformation workflows.

Optimize performance, scalability, and cost of big data systems.

Implement CI/CD practices and version control for data pipelines.

Collaborate with cross-functional teams to deliver data-driven solutions.

Enforce security, governance, and access controls.

Required Skills

Strong experience with Databricks and Apache Spark (PySpark, Scala, or Java).

Advanced SQL expertise.

Experience with cloud platforms and storage solutions (ADLS, S3, BigQuery).

Hands-on experience with ETL/ELT pipelines and data warehousing.

Knowledge of Delta Lake, CI/CD, and distributed data processing.
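The skills above center on transform-and-validate ETL work. As a rough illustration of that pattern, here is a minimal, Spark-free sketch in plain Python; in practice the same shape would be expressed with PySpark DataFrames on a Databricks cluster. All field names and quality rules here are illustrative assumptions, not taken from the posting.

```python
# Hedged sketch of an ETL "transform, then gate on quality" step.
# Field names (user_id, amount) and rules are illustrative only.

def transform(rows):
    """Normalize raw records: strip whitespace, cast amount to float."""
    return [
        {"user_id": r["user_id"].strip(), "amount": float(r["amount"])}
        for r in rows
    ]

def quality_gate(rows):
    """Reject the whole batch if any record violates basic quality rules."""
    bad = [r for r in rows if not r["user_id"] or r["amount"] < 0]
    if bad:
        raise ValueError(f"{len(bad)} record(s) failed quality checks")
    return rows

# Example batch: messy raw input passes through transform, then the gate.
raw = [
    {"user_id": " u1 ", "amount": "19.99"},
    {"user_id": "u2", "amount": "5"},
]
clean = quality_gate(transform(raw))
```

In a Spark version, `transform` would become DataFrame column expressions and `quality_gate` a filter-and-count check before writing to Delta Lake.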
