Data Engineer

Aviva • In Person

Posted 2 days, 21 hours ago

Job Description

Job Title: Data Warehouse Engineer (SQL / Azure)

Location: 80 Fenchurch Street, Central London (2 days per week in the office)

Contract: 6 Months

Key Responsibilities

Design, build, and optimise SQL-based data warehouse solutions supporting underwriting, claims, actuarial, exposure management, and regulatory reporting.

Develop robust ETL/ELT pipelines using tools such as SSIS, Azure Data Factory (ADF), and Databricks.

Implement dimensional modelling (Kimball / star schema) for analytical workloads; an illustrative sketch follows this list.

Integrate data from internal and external London Market sources including Lloyd’s, DXC, TPAs, and brokers.

Build automated data quality checks, lineage tracking, and controlled data releases.

Collaborate with Product Owners, Underwriters, Actuaries, BI teams, and IT to translate business requirements into technical specifications.

Support migration of legacy SQL systems into Azure cloud environments, enabling self-service analytics for Power BI and actuarial platforms.
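
By way of illustration only, a minimal Kimball-style star schema of the kind these responsibilities describe might look as follows. All table and column names are hypothetical, not Aviva's actual schema:

```sql
-- Minimal star-schema sketch for a London Market premium mart (illustrative only):
-- one fact table keyed to conformed dimensions, per Kimball-style modelling.
CREATE TABLE dim_policy (
    policy_key        INT IDENTITY(1,1) PRIMARY KEY,
    policy_ref        VARCHAR(30) NOT NULL,   -- source policy reference
    syndicate_code    VARCHAR(10) NOT NULL,   -- e.g. a Lloyd's syndicate number
    class_of_business VARCHAR(50) NOT NULL
);

CREATE TABLE dim_date (
    date_key          INT PRIMARY KEY,        -- yyyymmdd surrogate key
    calendar_date     DATE NOT NULL,
    underwriting_year SMALLINT NOT NULL
);

CREATE TABLE fact_premium (
    policy_key        INT NOT NULL REFERENCES dim_policy (policy_key),
    written_date_key  INT NOT NULL REFERENCES dim_date (date_key),
    gross_premium     DECIMAL(18,2) NOT NULL,
    net_premium       DECIMAL(18,2) NOT NULL
);
```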

Candidate Profile

Technical Skills:

Strong SQL development skills (T-SQL preferred).

Data warehouse design, dimensional modelling, and star schema expertise.

Experience with ETL/ELT tools (SSIS, ADF, Databricks).

Knowledge of Azure cloud data stack (SQL Pool, Synapse, Data Lake, ADF).

Familiarity with APIs, REST, JSON, and flat-file ingestion.

Performance tuning and optimisation for large-scale data warehouses.
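
As a hedged sketch of the performance-tuning work this refers to, using the hypothetical fact_premium table from the earlier example:

```sql
-- Large fact tables in Azure SQL / Synapse are typically stored as
-- clustered columnstore for scan-heavy analytical queries.
CREATE CLUSTERED COLUMNSTORE INDEX ccix_fact_premium ON fact_premium;

-- Keep optimiser statistics current after large loads.
UPDATE STATISTICS fact_premium;

-- Inspect I/O for a typical aggregate before and after tuning.
SET STATISTICS IO ON;
SELECT d.underwriting_year, SUM(f.gross_premium) AS gross_written
FROM fact_premium AS f
JOIN dim_date AS d ON d.date_key = f.written_date_key
GROUP BY d.underwriting_year;
```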

London Market / Insurance Knowledge:

Bordereaux processing (premiums, risks, claims).

Policy, claims, and finance datasets, including Lloyd’s submissions & reporting.

Understanding of underwriting and actuarial processes.

Soft Skills:

Excellent communication with technical and non-technical stakeholders.

Strong problem-solving mindset with attention to detail.

Experience working in Agile / Scrum environments.

Essential Experience:

Previous experience with London Market insurers, MGAs, Lloyd’s syndicates, or brokers.

Data warehouse dimensional modelling and ETL/ELT development.

Data quality assessment and embedding automated checks with lineage and reconciliation.
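
As a minimal sketch of such an automated reconciliation check (the staging table stg_bordereau_premium is hypothetical, as is the fact table from the earlier sketch):

```sql
-- Row-count and premium-total reconciliation between a staged bordereau
-- and the warehouse fact table; non-zero differences fail the release gate.
SELECT
    (SELECT COUNT(*)           FROM stg_bordereau_premium) -
    (SELECT COUNT(*)           FROM fact_premium)          AS row_count_diff,
    (SELECT SUM(gross_premium) FROM stg_bordereau_premium) -
    (SELECT SUM(gross_premium) FROM fact_premium)          AS gross_premium_diff;
```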

Desirable Experience:

Python or PySpark for data transformation.

Exposure to BI Tools (Power BI).

Experience with CI/CD, DevOps, and automated data pipelines.

Master data management experience and familiarity with orchestration frameworks (Airflow/ADF).
