Join our digital revolution in NatWest Digital X
In everything we do, we work to one aim: to make digital experiences that are effortless and secure.
So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter.
Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive.
This role is based in India, and as such all normal working days must be carried out in India.
Job Description
Join us as a Data Engineer (PySpark, SQL and DWH)
You'll be the voice of our customers, using data to tell their stories and put them at the heart of all decision-making
We'll look to you to drive the build of effortless, digital-first customer experiences
If you're ready for a new challenge and want to make a far-reaching impact through your work, this could be the opportunity you're looking for
We're offering this role at vice president level
What you'll do
As a Data Engineer, you'll be looking to simplify our organisation by developing innovative, data-driven solutions through data pipelines, modelling and ETL design, aspiring to be commercially successful while keeping our customers, and the bank's data, safe and secure.
You'll drive customer value by understanding complex business problems and requirements to correctly apply the most appropriate and reusable tools to gather and build data solutions. You'll support our strategic direction by engaging with the data engineering community to deliver opportunities, along with carrying out complex data engineering tasks to build a scalable data architecture.
Your responsibilities will also include:
Building advanced automation of data engineering pipelines through removal of manual stages
Embedding new data techniques into our business through role modelling, training, and experiment design oversight
Delivering a clear understanding of data platform costs to meet your department's cost-saving and income targets
Sourcing new data using the most appropriate tooling for the situation
Developing solutions for streaming data ingestion and transformations in line with our streaming strategy
The skills you'll need
To thrive in this role, you'll need a strong understanding of data usage and dependencies, and experience of extracting value and features from large-scale data. You'll need at least twelve years of experience with PySpark, Python, SQL and AWS, along with experience in SageMaker and Airflow.
You'll also need experience of AWS architecture using EMR, S3, Glue and serverless solutions, as well as experience in HLSD and FRD preparation.
Additionally, you'll need:
Experience of ETL technical design, data quality testing, cleansing and monitoring, data sourcing, and exploration and analysis
Data warehousing and data modelling capabilities
A good understanding of modern code development practices
Experience of working in a governed and regulated environment
Strong communication skills with the ability to proactively engage and manage a wide range of stakeholders