postech.work

Data Engineer

HRI Việt Nam • In Person

Posted 1 week, 1 day ago

Job Description

Position: Data Engineer

We are seeking a Data Engineer with strong experience building scalable data solutions on AWS and Databricks. This role will be responsible for developing high-quality data pipelines, optimizing our Lakehouse environment, and enabling data-driven products across the bank. Experience in financial services or banking is a strong advantage.

Job description:

Data Pipelines & Processing

Develop and maintain ETL/ELT pipelines on Databricks using PySpark, Spark SQL, and Delta Lake.

Build reliable, scalable data ingestion frameworks integrating with AWS services such as:

S3, Glue, Lambda, Step Functions

Kafka/MSK or Kinesis (real-time ingestion)

RDS/Redshift or on-prem databases

Automate workflows using Databricks Workflows, Airflow, or similar orchestrators.
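The extract-transform-load flow described above can be sketched as follows. This is a toy, plain-Python illustration: in practice the transforms would be PySpark DataFrame operations on Databricks and the orchestration would come from Databricks Workflows or Airflow. All function and field names here (`extract`, `transform`, `load`, `account_id`, etc.) are hypothetical.

```python
# Illustrative only: a minimal extract-transform-load pipeline in plain Python.
# Real pipelines in this role would use PySpark + Delta Lake on Databricks.

def extract(raw_rows):
    """Simulate reading raw records (e.g., JSON events landed in S3)."""
    return [dict(r) for r in raw_rows]

def transform(rows):
    """Normalize amounts to cents; drop records missing an account id."""
    out = []
    for r in rows:
        if r.get("account_id") is None:
            continue  # bad record; a real pipeline would quarantine it
        r["amount_cents"] = int(round(r["amount"] * 100))
        out.append(r)
    return out

def load(rows, table):
    """Simulate an append to a Delta table with an in-memory list."""
    table.extend(rows)
    return len(rows)  # number of records loaded

def run_pipeline(raw_rows, table):
    return load(transform(extract(raw_rows)), table)
```

The same extract/transform/load boundaries map naturally onto separate tasks in an orchestrator, which is what makes the workflow automation mentioned above possible.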

Lakehouse Architecture

Implement and optimize Delta Lake–based storage, including:

Delta tables

Schema evolution

ACID transactions

Time travel and performance tuning
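The versioning behind ACID transactions and time travel can be illustrated with a toy in-memory model: every commit produces a new immutable snapshot, and readers can query any past version. This is a conceptual sketch only, not how Delta Lake is implemented; the class and method names are invented for the example.

```python
# Conceptual sketch of Delta-style versioning ("time travel"): each commit
# appends an immutable snapshot, and reads can target any past version.

class VersionedTable:
    def __init__(self):
        self._versions = [[]]  # version 0 is the empty table

    def commit(self, rows):
        """Atomic append: either the whole batch lands, or none of it."""
        snapshot = list(self._versions[-1]) + list(rows)
        self._versions.append(snapshot)
        return len(self._versions) - 1  # new version number

    def read(self, version=None):
        """Read the latest snapshot, or 'time travel' to an earlier one."""
        if version is None:
            version = len(self._versions) - 1
        return list(self._versions[version])
```

In real Delta Lake this is expressed with `VERSION AS OF` / `TIMESTAMP AS OF` queries against the transaction log.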

Support data modeling for analytics, dashboards, machine learning, and regulatory reporting.

Data Quality, Security & Governance

Enforce data quality checks using Delta expectations, unit tests, and validation frameworks.

Implement metadata management, lineage, and governance via AWS Glue Catalog, Unity Catalog (preferred), or similar.

Ensure compliance with banking standards: PII protection, access control, auditability.
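The expectation-style quality checks above can be sketched in plain Python, loosely in the spirit of Delta expectations or Great Expectations. The `validate` helper and the rule names are illustrative, not any framework's real API.

```python
# Minimal data-quality "expectations" check: split rows into passed/failed
# against a dict of named predicates. Illustrative only.

def validate(rows, expectations):
    """Return (passed, failed); each entry is (row, violated_rule_names)."""
    passed, failed = [], []
    for row in rows:
        violations = [name for name, check in expectations.items()
                      if not check(row)]
        (failed if violations else passed).append((row, violations))
    return passed, failed

# Hypothetical rules for a transactions feed:
expectations = {
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
    "has_customer_id": lambda r: r.get("customer_id") is not None,
}
```

Quarantining the failed rows (rather than silently dropping them) is what makes such checks auditable, which matters in the regulated banking context described below.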

Stakeholder Collaboration

Partner with Data Analysts, Data Scientists, Business Units, and Risk/Compliance teams to deliver business-ready datasets.

Translate business requirements into technical solutions that align with enterprise data strategy.

Operational Excellence

Monitor pipeline performance and optimize cost and compute (e.g., Databricks cluster policies).

Troubleshoot production issues, ensuring platform stability and SLAs.

Apply DevOps practices and CI/CD pipelines (Azure DevOps, GitHub Actions, Bitbucket, etc.).

Job requirements:

Bachelor’s degree in Computer Science, Information Systems, Engineering, or related field.

2–5+ years of experience in data engineering.

Strong hands-on skills with:

AWS: S3, Glue, Lambda, IAM, Step Functions, Kinesis/MSK

Databricks: PySpark, Spark SQL, Delta Lake, Workflows

Python & SQL

Spark

Solid understanding of data modeling (relational, dimensional, domain-driven).

Experience working in large-scale, distributed data environments.
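As a small illustration of the dimensional modeling mentioned above: analytics queries typically join a fact table to its dimension tables (a star schema). The table and column names below are made up for the example.

```python
# Toy star schema: a fact table of transactions joined to two dimensions.

dim_customer = {1: {"name": "An", "segment": "retail"}}
dim_branch = {10: {"city": "Hà Nội"}}

fact_txn = [
    {"customer_id": 1, "branch_id": 10, "amount": 250_000},
    {"customer_id": 1, "branch_id": 10, "amount": 100_000},
]

def star_join(facts, customers, branches):
    """Denormalize each fact row with its dimension attributes,
    as a BI query over a star schema would."""
    return [
        {**f, **customers[f["customer_id"]], **branches[f["branch_id"]]}
        for f in facts
    ]
```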

Preferred

Experience in banking or financial services (core banking, payments, lending, regulatory reporting).

Familiarity with:

Unity Catalog

Streaming architectures

Terraform or AWS CDK

Data quality frameworks (Great Expectations, Deequ, Databricks expectations)

Experience working in regulated environments with strict compliance and security controls.

Key Skills

Strong analytical and problem-solving abilities.

Ability to work in collaborative, cross-functional teams.

Clear communication skills with both technical and non-technical stakeholders.

Proactive mindset and willingness to improve existing systems.

Benefits/Welfare:

Salary: up to 40M.

Quarterly and annual performance-based bonuses, project completion bonuses, and annual salary review and adjustment.

In addition to base salary, other income includes overtime pay, lunch allowance, and business trip allowances.

Social insurance and health insurance in accordance with State regulations.

Periodic health check-up once a year.

Annual leave and public holidays in accordance with State regulations, with full pay.

Company covers expenses for job-related training courses required each year.

Rich extracurricular activities: football club, outings and excursions, birthday celebrations, and trade union activities.

Work location: onsite at Cầu Giấy/Hoàn Kiếm, Hà Nội.

Interested candidates, please contact:

Ms. Thanh - 0357076599, or send your CV to: thanh.hoang1@hri.com.vn
