Join us as a Data Engineering Lead
This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences.
You'll be simplifying the bank by developing innovative, data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure.
Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank.
We're recruiting for multiple roles across a range of levels, up to and including experienced managers.
What you'll do
We'll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You'll lead a team of data engineers, working closely with core technology and architecture teams to deliver strategic data solutions while driving Agile and DevOps adoption in the delivery of data engineering.
We'll also expect you to be:
Working with Data Scientists and Analytics Labs to translate analytical model code into well-tested, production-ready code
Helping to define common coding standards and best practices for monitoring model performance
Owning and delivering the automation of data engineering pipelines through the removal of manual stages
Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development
Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
Leading and delivering data engineering strategies to build a scalable data architecture and a rich set of customer features for data scientists
Leading and developing solutions for streaming data ingestion and transformations in line with streaming strategy
The skills you'll need
To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data.
We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure.
You'll also demonstrate:
Knowledge of core computer science concepts such as common data structures and algorithms, profiling or optimisation
An understanding of machine learning, information retrieval or recommendation systems
Good working knowledge of CI/CD tools
Knowledge of data engineering programming languages such as Python, PySpark, SQL, Java and Scala
An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow
Knowledge of messaging, event or streaming technology such as Apache Kafka
Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL