Are you a world-class Python engineer ready to design high-throughput, low-latency systems that support some of the most advanced quantitative research pipelines in finance?
Our client, a top-tier global quantitative investment fund, is expanding its core engineering group. This team builds the foundational systems that power research, data engineering, simulation, and production trading. If you enjoy distributed computing, performance optimisation, and building scalable platforms for complex analytical workloads, this role is engineered for you.
The Opportunity
You’ll work at the intersection of large-scale data processing, compute orchestration, and production-grade model deployment. Expect to design distributed pipelines, manage multi-terabyte datasets, optimise numerical workloads, and develop high-performance services used by quants and PMs daily.
This is greenfield, architecture-heavy work — no legacy systems slowing you down.
What You’ll Build & Own
End-to-end Python services for data ingestion, ETL, feature generation, and research workflows
Distributed compute systems using frameworks like Ray, Dask, Spark, Airflow, Prefect, Kubernetes, Argo Workflows
High-performance numerical components leveraging NumPy, Pandas, PyArrow, Numba, Cython, Polars
Scalable APIs and microservices using FastAPI, gRPC, message buses (Kafka/Redpanda)
Cloud-native and on-prem hybrid infrastructure using AWS/GCP, Terraform, Docker, Kubernetes
High-throughput storage and data tooling such as Parquet, Arrow, S3, Delta Lake, HDFS, Redis, ClickHouse
Tooling for compute optimisation: vectorisation, asyncio-based concurrency, multiprocessing, profiling, and caching layers (see the illustrative sketch below)
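To give a flavour of the kind of work the last item describes, here is a minimal, purely illustrative sketch: a vectorised NumPy feature computation fanned out across worker processes with the standard-library ProcessPoolExecutor. The function and constant names (compute_features, N_PARTITIONS) and the toy rolling-volatility calculation are invented for this example and do not come from the client's codebase.

# Illustrative sketch only: vectorised inner loop, parallelised across partitions.
from concurrent.futures import ProcessPoolExecutor

import numpy as np

N_PARTITIONS = 8  # hypothetical partition count for the example


def compute_features(seed: int) -> np.ndarray:
    """Toy per-partition feature computation, kept fully vectorised."""
    rng = np.random.default_rng(seed)
    prices = rng.lognormal(mean=0.0, sigma=0.01, size=100_000).cumprod()
    returns = np.diff(np.log(prices))  # log returns, no Python loop
    # Rolling variance over a fixed window via a cumulative-sum trick.
    window = 20
    sq = returns ** 2
    csum = np.concatenate(([0.0], np.cumsum(sq)))
    rolling_var = (csum[window:] - csum[:-window]) / window
    return np.sqrt(rolling_var)  # rolling volatility per partition


if __name__ == "__main__":
    # Fan the partitions out across processes, then gather the results.
    with ProcessPoolExecutor() as pool:
        feature_blocks = list(pool.map(compute_features, range(N_PARTITIONS)))
    print(sum(block.size for block in feature_blocks), "feature values computed")

In practice, frameworks listed above such as Ray, Dask, or Spark would replace the bare process pool, but the principle carries over: keep the inner loop vectorised and parallelise across data partitions.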
What They’re Looking For
4+ years of professional Python engineering in high-performance, data-heavy environments
Strong fundamentals in algorithms, distributed systems, and parallel compute
Experience with large-scale data frameworks: PySpark, Ray, Dask, Apache Arrow, Numba, Cython
Exposure to containerised, cloud-native environments (Docker, K8s, Terraform)
Familiarity with CI/CD, observability, and diagnostics: GitLab CI, Prometheus, Grafana, OpenTelemetry
Bonus: experience with model deployment, simulation engines, time-series databases (kdb+/QuestDB), or C++ integration layers
Why This Role
Build highly technical infrastructure that directly supports quant innovation and trading performance
Work alongside elite engineers and researchers from top tech firms and leading academic labs
Solve genuinely hard problems: distributed orchestration, compute scaling, low-latency data access, system reliability
Competitive compensation with significant performance-driven upside
Modern engineering culture — fast iteration, high autonomy, minimal bureaucracy
If you want to work on deep technical systems where engineering excellence is the differentiator, this is the role.