Job Title: Data Engineer
Job Overview:
We are seeking an experienced Data Engineer to join our Data team in Singapore. The ideal candidate will have a proven track record of hands-on data engineering experience, particularly within AWS and Azure. As a Data Engineer, you will be responsible for developing and maintaining data pipelines, ensuring the reliability, efficiency, and scalability of our data lake, and enabling data marts for AI models.
Responsibilities:
Develop robust ETL pipelines and frameworks for both batch and real-time data processing using Python and SQL.
Deploy and monitor ETL pipelines using orchestration tools such as Airflow and dbt, or AWS services such as Glue Workflows, Step Functions, and EventBridge.
Work with cloud-based data platforms such as Redshift and Snowflake, data ingestion tools such as DMS, and ELT tools such as dbt Cloud for effective data processing.
Work with Azure Data Factory to build data pipelines.
Implement CI/CD for ETL pipelines to automate builds and deployments.
Qualifications:
Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
3+ years of hands-on data engineering experience in AWS.
Should have delivered at least 2 programs into production as a data engineer.
Primary Skills:
Proficient in Python, SQL, and data warehousing concepts.
Experience developing ETL frameworks.
Proficient in AWS services such as S3, DMS, Redshift, Glue, Kinesis, Athena, Lambda, and Step Functions to implement scalable data solutions.
Proficient in Azure Data Factory.
Working experience in data warehousing using Snowflake, AWS, or Databricks.
Should understand data marts as the presentation layer for reporting.
Good-to-Have Skills:
ETL development using tools such as Informatica, Talend, or Fivetran.
CI/CD setup using GitHub or Bitbucket.
Good communication skills.
Good knowledge of data lake and data warehousing concepts.