Who we are
At Metris, we're on a mission to maximise every kWh of renewable capacity. Our platform delivers real-time data and automates time-consuming tasks involved in renewable operations — monitoring, fault detection, reporting, and billing.
We've reached early product-market fit with MetrisOS deployed to 20 customers and 5,000+ solar projects across the UK. We've just raised a Seed round from top VCs in our sector, and we're ready to throw fuel on the fire.
Our core product is a data product. We integrate with a wide range of third-party APIs — inverters, meters, batteries — to pull data from thousands of solar sites. On top of that, we build management tools and analytics dashboards that operators use every day to run their portfolios. Everything we build is downstream of the data we ingest, transform, and serve. Getting that right is the job.
We're a small, fast-moving team building the digital backbone of our future energy grid. Commercially, we're aiming to scale our ARR 6x in the next 24 months. We need data engineers who care about the product their data powers — not just the pipeline that feeds it.
What you'll do
Own data as a product, not a pipeline.
You'll think about the data you build and maintain in terms of what it enables: the dashboards it powers, the alerts it triggers, the decisions it informs. You'll understand what "good" looks like from a user's perspective, not just a schema perspective.
Integrate across a growing landscape of APIs.
We pull data from inverter manufacturers, monitoring platforms, and metering systems — each with their own quirks, limitations, and failure modes. You'll build and maintain reliable ingestion pipelines that handle the messiness of the real world at scale.
Design data structures that serve the product.
Data modelling decisions have long-term consequences. You'll think carefully about how data is structured, labelled, and related — not just for correctness, but for usability by the engineers and product surfaces downstream.
Identify and fix problems with real customer impact.
Bad data reaches customers fast. You'll develop a sharp instinct for data quality issues — whether they originate in ingestion, transformation, or architecture — and you'll drive them to resolution with urgency.
Think at scale.
We cover thousands of sites and that number is growing. The approaches that work for 500 sites break at 5,000. You'll design with scale in mind from the start and flag where existing patterns won't hold.
Care about the lifecycle of your data.
From ingestion to transformation to consumption, you'll take ownership of the data you're responsible for — knowing what's flowing, what's stale, what's missing, and what that means for the product.
Collaborate closely with product and engineering.
You'll work directly with frontend and backend engineers to understand what data is needed, surface constraints early, and make sure data feeds translate into real product value.
What you bring
A product lens on data.
You care about what the data is for, not just whether the pipeline runs. You've worked in environments where data quality directly affected users, and you take that seriously.
Solid experience building and maintaining data pipelines.
You're comfortable with the full lifecycle — API integration, ingestion, transformation, storage, serving — and you know where things typically go wrong.
Strong SQL and data modelling instincts.
You write clean, well-structured queries and you think carefully about schema design. You know how modelling decisions upstream ripple through to the product downstream.
Experience integrating with third-party APIs.
You've navigated under-documented APIs, dealt with inconsistent data formats, handled rate limits and failures, and built pipelines robust enough to survive them.
Comfort with ambiguity and problem-solving at pace.
Not everything has a clear spec. You're used to investigating, forming hypotheses, and moving forward with incomplete information.
An eye for data quality.
You notice when something looks wrong. You don't just fix the symptom — you understand the root cause and address it properly.
Bonus
Experience with time-series data or energy/IoT telemetry workloads
Familiarity with Airflow or similar orchestration tools
Experience with AWS (S3, RDS, MWAA, or similar)
Exposure to analytics or BI tooling (dashboards, metrics layers)
Personal interest in energy, climate tech, or infrastructure
5+ years of experience in data engineering
This role isn't for you if
You just want to keep the pipes running.
We need someone who thinks beyond pipeline health. If you're not interested in how the data is used or what it powers, this won't be the right fit.
You treat data quality as someone else's problem.
In a product this data-dependent, everyone owns data quality. You need to care when something's wrong and have the drive to fix it.
You prefer a slow, predictable environment.
We're a startup scaling fast. Integrations break, APIs change, new data sources appear. You need to thrive in that environment, not endure it.
Why join
Join an exciting, high-performing team that values substance over theatre
Massive ownership from day one — the data you build is the foundation everything else runs on
Work on technically interesting problems at the intersection of data engineering, product, and energy infrastructure
Competitive salary + meaningful equity in a company that's just raised its Seed round
Hybrid working in London (3 days/week in Farringdon)
Budget for learning and personal development