👨🏻‍💻 postech.work

Data Engineer

Broaden • 🌐 Remote

Posted 3 days, 10 hours ago

Job Description

Data Engineer | Remote (UK) | Full-time | 45-60K + bonus | Path to Head of Data or Head of Engineering

Ready to build the data "brain" of a fast-growing MGA? We're looking for a brilliant, high-energy hire with experience in insurance or underwriting data engineering to join our team.

We are looking for a hands-on Data Engineer/Scientist to take ownership of our underwriting infrastructure and analytics. This is a rare "greenfield" opportunity: we need someone to turn raw BDX, policy, and claims data into the actionable insights that drive our profitability. You'll be given the guidance and tools to build out our data function as we scale, and in time we'll look to you to take over the firm's data leadership function.

This is a hybrid role blending Data Engineering (building pipelines, cleaning BDX) with High-Level Analytics (pricing, portfolio strategy, and dashboards). Experience in the insurance world is key for this role; it can be across claims, underwriting, BDX, or other areas that pair deep domain insight with a solid understanding of data processes.

We're looking for someone with strong back-end development experience, particularly across Python, SQL, API management, Azure Data Factory, and Azure/Microsoft Fabric.

About the Opportunity:

You will report directly to the Head of Underwriting and help leadership test hypotheses on new products and pricing. We're targeting 3x growth this year, so this role is crucial in making sure our data and product understanding evolves with the firm.

You will choose the tools and build out the infrastructure (Azure/SQL/Python) required to make data flow.

You'll work alongside our expert Data Analytics team.

As we scale, this role is designed to evolve into a Lead Data or Head of Data & Engineering position.

Key Responsibilities:

Analyse loss ratios, rate changes, and retention to tell us why the portfolio is performing the way it is.

Build frameworks and dashboards to track aggregates and concentrations (by geography, industry, peril).

Design and maintain a lightweight but robust data stack (e.g., Azure SQL, Data Lake, Fabric).

Use Python/SQL to move data from messy sources (BDX, finance systems) into a clean, usable analytics layer.

Be the guardian of "the number." Ensure data is reconciled, version-controlled, and accurate.

Own the data side of BDX including ingestion, validation, and mapping from multiple partners.

Kill the manual spreadsheets. Automate monthly MI packs and portfolio deep dives.

What We’re Looking For:

You can write complex queries, joins, and window functions in your sleep.

Strong knowledge of Python, SQL, and API management.

Experience with Azure Data Factory and Azure/Microsoft Fabric.

Experience in insurance or underwriting would be a big plus.

You are comfortable using pandas/NumPy for data wrangling and automation.

Experience with Azure (SQL/Data Lake/Fabric) is a must.

You understand (or can quickly learn) concepts like Loss Ratios, Earned Premium, IBNR, and Delegated Authority.

You aren't afraid of messy, real-world data. You know how to clean it, validate it, and fix it.

You can explain complex data insights to non-technical underwriters and finance teams in plain English.

You treat the data as your own product. You care about accuracy and traceability.

Qualifications:

Experience in a Data Engineer / Scientist role (Insurance, Banking, Credit Risk, or similar regulated sector).

Experience specifically in the MGA / Lloyd's / Specialty market.

Experience with predictive modelling (GLMs, Gradient Boosting) for pricing insights.
