6–9 years of hands-on data engineering experience.
Strong expertise in Apache Spark (batch + streaming) and Hive.
Proficiency in Python, Scala, or Java.
Knowledge of orchestration tools (Airflow/Control-M) and SQL transformation frameworks (dbt preferred).
Experience working with Kafka, Solace, and object stores (S3, MinIO).
Exposure to Docker/Kubernetes for deployment.
Hands-on experience with data lakehouse table formats (Iceberg, Delta Lake, Hudi).
Job Type: Full-time