Job Description
We are looking for a skilled Data Engineer to design, build, and maintain robust data systems that power analytics and decision-making across our organization. The ideal candidate will work across multiple workstreams, supporting data requirements through reports, dashboards, and end-to-end data pipeline development. You will collaborate with business and technical teams to translate requirements into scalable, data-driven solutions.
Key Responsibilities
Work collaboratively across teams to support data needs including reports, dashboards, and analytics.
Conduct data profiling and analysis to identify patterns, discrepancies, and quality issues in alignment with Data Quality and Data Management standards.
Design and develop end-to-end (E2E) data pipelines for data ingestion, transformation, processing, and surfacing in large-scale systems.
Automate data pipeline processes using technologies such as Azure (including Azure Data Factory), AWS, and Databricks.
Translate business requirements into detailed technical specifications for analysts and developers.
Perform data ingestion in both batch and real-time modes using methods such as file transfer, API, and data streaming (Kafka, Spark Streaming).
Develop ETL pipelines using Spark for data transformation and standardization.
Deliver data outputs via APIs, data exports, or visualization dashboards using tools like Power BI or Tableau.
Qualifications
Bachelor’s degree in Computer Science, Computer Engineering, Information Technology, or a related field.
Minimum 4 years of experience in Data Engineering or related roles.
Strong technical expertise in:
Python, SQL, Spark, Databricks, Azure, AWS
Cloud & Data Architecture, APIs, and ETL pipelines
Proficiency in data visualization tools such as Power BI (preferred) or Tableau, including DAX, data modeling, and data storytelling.
Understanding of Data Lakes, Data Warehousing, Big Data frameworks, RDBMS, NoSQL, and Knowledge Graphs.
Familiarity with business analysis, data profiling, data modeling, and requirement analysis.
Experience working in the Singapore public sector, consulting, or client-facing environments is advantageous.
Excellent analytical, communication, and problem-solving skills with a collaborative mindset.
Preferred Skills
Experience with real-time data streaming (Kafka, Spark Streaming).
Understanding of data governance, data quality frameworks, and metadata management.
Hands-on experience with automation and CI/CD for data workflows.