đŸ‘šđŸ»â€đŸ’» postech.work

Analytics Services Platform Engineer

G-Research ‱ In Person

Posted 1 day, 12 hours ago

Job Description

We tackle the most complex problems in quantitative finance by bringing scientific clarity to financial complexity.

From our London HQ, we unite world-class researchers and engineers in an environment that values deep exploration and methodical execution - because the best ideas take time to evolve. Together we’re building a world-class platform to amplify our teams’ most powerful ideas.

As part of our engineering team, you’ll shape the platforms and tools that drive high-impact research - designing systems that scale, accelerate discovery and support innovation across the firm.

The role

At G-Research, we thrive on innovation and cutting-edge technology to drive world-class research and business capabilities. As our operations continue to grow, we’re seeking an experienced Platform Engineer to join our Analytics Services team.

In this role, you’ll help shape the next generation of our analytics infrastructure — designing, building and operating large-scale distributed platforms that power our research, trading and engineering teams. You’ll deliver highly available, secure and high-performance analytics services across on-premises and AWS environments, using technologies such as Spark, Trino, Kafka, ClickHouse and Airflow.

This is a highly collaborative position, working directly with researchers and engineers to evolve platform capabilities, automate operations and explore emerging technologies that drive innovation across the business.

Key responsibilities of the role include:

Building, operating and scaling distributed analytics platforms across on-premises and AWS environments

Designing and implementing new platform features that enhance usability, scalability and the developer experience

Collaborating with research, data and engineering teams to accelerate time-to-insight through modern analytics solutions

Driving improvements in automation, observability and resilience across analytics services

Evaluating and adopting emerging technologies such as AI assistants, data mesh and cloud-native analytics solutions

Defining SLAs, KPIs and monitoring strategies to ensure reliability, security and service excellence

Participating in the out-of-hours rota to support critical systems

Who are we looking for?

Core skills and technologies

Experience running distributed data and analytics systems at scale using tools such as Spark, Kafka, Trino or Airflow

Strong Linux skills and proficiency in Python for automation and integration

Familiarity with infrastructure as code using Terraform or Ansible

Deep understanding of AWS analytics technologies including EMR, MSK, Athena, Redshift, Glue and MWAA

Experience with CI/CD and observability tools such as Jenkins, ArgoCD, Prometheus, Grafana and OpenTelemetry

Strong problem-solving skills and a systematic approach to diagnosing and resolving issues

Highly desirable skills

Experience with streaming frameworks such as Flink, Kafka Streams and Kafka Connect

Knowledge of modern data lake technologies including Delta Lake, Iceberg and Glue Data Catalog

Exposure to DataOps practices and collaboration with Data Engineering teams

Familiarity with GPU-accelerated analytics using Spark with GPUs or RAPIDS

Programming experience with Java, Scala, C#, Python or Go

Why join us?

Highly competitive compensation plus annual discretionary bonus

Lunch provided (via Just Eat for Business) and dedicated barista bar

30 days’ annual leave

9% company pension contributions

Informal dress code and excellent work/life balance

Comprehensive healthcare and life assurance

Cycle-to-work scheme

Monthly company events
