Role:
This team builds an enterprise data lake/warehouse powering product, operations, and analytics.
We’re hiring a Data Engineer with strong Big Data engineering and stakeholder-facing analytics skills to design data models, build reliable pipelines, and deliver trusted BI insights across functional teams.
You’ll partner with BSAs and business stakeholders to translate needs into scalable data solutions, while ensuring security, governance, and high data quality.
Responsibilities:
Design and build distributed ETL/ELT pipelines for large datasets (batch and/or near real-time).
Develop and optimize data models (dimensional/warehouse patterns) to enable self-serve analytics.
Implement data quality controls, reconciliation checks, and monitoring/alerting for pipeline reliability.
Build and maintain CI/CD for data pipelines (testing, deployment, and rollback practices).
Create high-impact BI dashboards and partner with stakeholders to drive decisions.
Troubleshoot performance and data-quality issues (skew, partitioning, late data, duplicates) and propose fixes at the source (see the sketch after this list).
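For flavor, here is a minimal PySpark sketch of the reconciliation and deduplication checks described above. The raw_events/curated_events table names and the load_date, event_id, and event_ts columns are hypothetical placeholders, not a prescribed schema:

from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("quality-checks").getOrCreate()

# Hypothetical source and curated tables; names are placeholders.
raw = spark.table("raw_events")
curated = spark.table("curated_events")

# Reconciliation: row counts per load date should match across layers.
raw_counts = raw.groupBy("load_date").agg(F.count("*").alias("raw_rows"))
cur_counts = curated.groupBy("load_date").agg(F.count("*").alias("curated_rows"))
drift = (raw_counts.join(cur_counts, "load_date", "full_outer")
         .where(F.coalesce(F.col("raw_rows"), F.lit(0))
                != F.coalesce(F.col("curated_rows"), F.lit(0))))

n_bad = drift.count()
if n_bad > 0:
    # A real pipeline would alert on-call rather than just fail the job.
    raise ValueError(f"Reconciliation failed for {n_bad} load dates")

# Deduplication: keep the latest record per event_id, which also absorbs
# late-arriving duplicates.
w = Window.partitionBy("event_id").orderBy(F.col("event_ts").desc())
deduped = (raw.withColumn("rn", F.row_number().over(w))
           .where(F.col("rn") == 1)
           .drop("rn"))

Keeping the latest row per key with row_number() is a common dedup pattern; ordering by event time also makes late-data handling deterministic.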
Required skills:
5–7 years in data engineering with hands-on Big Data exposure.
Strong SQL (window functions, tuning, complex transformations).
Hands-on production experience with Spark and at least one ecosystem component (e.g., Hive or Hadoop).
Experience with data modelling and warehousing patterns such as star schemas, slowly changing dimensions (SCD), and ETL design (a brief sketch follows).
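As one illustration of those patterns, a minimal sketch that derives SCD Type 2 validity intervals with a SQL window function, run here through PySpark; the customer_changes feed and its columns are assumptions for illustration only:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Hypothetical change feed for a customer dimension. LEAD() turns each
# change row into an SCD Type 2 validity interval; the current row stays
# open-ended with a far-future valid_to.
scd2 = spark.sql("""
    SELECT customer_id,
           address,
           change_ts AS valid_from,
           COALESCE(
               LEAD(change_ts) OVER (PARTITION BY customer_id ORDER BY change_ts),
               TIMESTAMP '9999-12-31 00:00:00'
           ) AS valid_to
    FROM customer_changes
""")

Each customer then carries one row per historical address value, with non-overlapping valid_from/valid_to ranges, which is the core of the SCD Type 2 pattern referenced above.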