Roles & Responsibilities
Job Summary:
We are looking for an experienced Big Data Engineer with at least 5 years of experience managing data pipelines and data processing in Big Data environments (e.g. Cloudera Data Platform). The role involves designing, developing, and maintaining data ingestion and transformation jobs to support analytics and reporting needs.
Key Responsibilities:
Design and develop data ingestion, processing, and integration pipelines using Python, PySpark, and Informatica.
Analyse data requirements and build scalable data solutions.
Support testing, deployment, and production operations.
Collaborate with business and technical teams to ensure smooth delivery.
Drive automation, standardization, and performance optimization.
Requirements:
Bachelor’s degree in IT, Computer Science, or a related field.
Minimum 5 years’ experience in Big Data Engineering.
Hands-on skills in Python, PySpark, Linux, SQL, and ETL tools (Informatica preferred).
Experience with Cloudera Data Platform is an advantage.
Knowledge of data warehousing, Denodo, and reporting tools (SAP BO, Tableau) preferred.
Strong analytical, problem-solving, and communication skills.
Job Type: Contract