Location
Dublin
Business Area
Engineering and CTO
Ref #
10049069
Description & Requirements
Our core product, the Bloomberg Terminal, is used by 400,000 financial professionals around the world. It is continuously being developed and improved by about 6,000 engineers who are experts in their field.
The Composite Pricing Ingestion Analytics team is a high-impact, cross-asset engineering group responsible for safeguarding the quality, stability, and integrity of Bloomberg’s composite pricing data contributions across derivatives: FX, commodities, credit derivatives, and related asset classes. Our systems are designed for scale, performance, and reliability, processing billions of multi-dimensional time-series data points daily from a diverse set of global contributors at the core of Bloomberg’s pricing infrastructure.
As part of this team, you will design and build scalable, high-performance, real-time cross-asset anomaly detection and monitoring systems, leveraging distributed streaming and data science platforms in Python and Java. This role offers the opportunity to work at the intersection of large-scale market data engineering, distributed systems and applied machine learning, solving complex cross-asset problems at global scale while shaping the future of Bloomberg’s pricing quality platform.
What’s in it for you?
Build scalable infrastructure used by global financial institutions daily
The chance to work with distributed streaming, applied machine learning, and performance-critical distributed systems
Collaborate with experienced engineers, data analysts, and product teams across London, New York & San Francisco on a high-visibility product
Grow quickly in a team that values mentorship, ownership, and technical excellence
We’ll trust you to
Design and implement distributed systems that deliver scalable and high-performance market data solutions
Build APIs, services, and tooling to enable downstream applications to consume data efficiently and reliably
Optimise codebases and system performance to handle billions of time-series data points daily with low latency
Ship clean, maintainable code in iterative development cycles with a collaborative team
You’ll Need to Have
A degree in Computer Science, Engineering, Mathematics, or a similar field of study, or equivalent work experience
Demonstrated experience developing production-ready applications in an OOP language (ideally Python or Java)
Experience building or supporting distributed systems and infrastructure
Familiarity with distributed messaging and streaming frameworks such as Kafka or Spark
Comfort with debugging and optimising performance-critical code
We’d Love to See
Experience working in financial services or large-scale data infrastructure
Experience with containerised development using Docker
Exposure to enterprise clients or B2B platform integration
If indicated, please note that years of experience are a guide; we will consider applications from all candidates who can demonstrate the skills necessary for the role.
Discover what makes Bloomberg unique - watch our video for an inside look at our culture, values, and the people behind our success.