Software Engineer – Python / Data / Cloud (Mid-Level)
London OR Manchester (hybrid)
Salary £50k-£80k
We're looking for a Mid-Level Software Engineer to join a high-performing squad building large-scale, data-intensive systems primarily in Python. You'll work on distributed data pipelines that process hundreds of millions to billions of rows, contributing to backend development, data engineering workflows, and system performance.
You'll collaborate closely with senior engineers and an Engineering Manager. The product is an AI/ML-driven SaaS platform: your contributions will help train new models and enable new features, with significant greenfield work involved.
What You'll Be Working On
Build and maintain backend services and data processing components in Python.
Work on large-scale data pipelines operating over huge datasets (hundreds of millions to billions of records).
Write performant SQL for data transformations, ETL workflows, and analytical use cases.
Contribute to discussions on architecture and design, focusing on scalability, cost, reliability, and performance.
Improve observability, testing, and overall system robustness.
Participate in incident reviews and continuous improvement initiatives within the squad.
Tech You'll Work With
Python (primary language)
SQL
Large-scale data workflows (ETL, transformation, analytics)
Parquet and columnar data formats
Cloud environments – experience with any major cloud provider is great
AWS experience (Redshift, Lambda, ECS, S3) is nice to have, not required
GCP / Azure backgrounds are equally welcome
What You'll Bring
Solid professional experience developing in Python.
Strong SQL skills and comfort working with large or complex datasets.
Experience with any major cloud platform (AWS, GCP, Azure, etc.).
Exposure to data pipelines, distributed processing, or analytical data systems.
A focus on code quality, testing, and reliability.
Curiosity, problem-solving ability, and a collaborative approach.
What Success Looks Like
You deliver clean, scalable Python code that handles large data volumes effectively.
You contribute to improving data pipelines, performance, and system reliability.
You participate actively in design discussions, planning, and squad rituals.
You help strengthen testing, observability, and operational excellence.
You continually learn and take on more ownership as part of a tight, high-performing squad.