We are seeking a Data Systems Engineer to join our engineering team and play a key role in building and scaling our core Data Systems.
Working closely with a small, highly collaborative team of engineers and data professionals, you will own and enhance critical components of our data ingestion, processing, storage, and backend infrastructure. This role involves designing and scaling production systems, including ingestion pipelines, data models, and backend services that power both internal workflows and customer-facing products.
This is a deeply technical, engineering-first role. The successful candidate must be highly proficient at writing production-quality code and comfortable solving complex technical problems independently. You will be expected to design systems, debug complex issues, optimize performance, and write clean, maintainable code that operates reliably at scale.
The ideal candidate is a strong programmer who is comfortable working with large datasets and complex data workflows, and who is motivated by building reliable, scalable systems that support data-driven products. This role is for engineers who can write code independently, without relying on AI tools, and who operate to a high technical standard with minimal oversight.
About Us
Launched in 2023, Institutional Link operates the most followed news and market intelligence platform dedicated to the global alternative asset secondary market (“SecondaryLink”). We deliver daily market news and research, proprietary secondary pricing data, evergreen fund data, and deal-sourcing tools used by top secondary funds, investment banks, institutional limited partners, and asset managers.
Our platform operates as both a media business and a data platform, tracking more than 75,000 private funds. By combining structured datasets with real-time market intelligence, SecondaryLink provides institutional investors with unparalleled visibility into liquidity, pricing, and deal activity.
Responsibilities
Design, build, and maintain backend systems responsible for data ingestion, transformation, storage, and retrieval.
Write complex and optimized queries across relational and non-relational databases to support analytics, pricing, and product features.
Build and maintain data pipelines involving web scraping, data mining, parsing of structured and unstructured sources (including HTML and PDFs), and data matching.
Ensure data quality, integrity, and consistency across datasets used internally and exposed through platform features.
Monitor, troubleshoot, and optimize database performance and data workflows.
Collaborate with engineers, data analysts, and product stakeholders to translate data and platform requirements into scalable technical solutions.
Support the development and deployment of AI-driven and data-enhanced features across the platform.
Note that this role may evolve beyond this description. There is no limitation on the scope or impact the role may have over time.
Requirements
Backend & Data Engineering Expertise
Strong experience handling large datasets, data pipelines, and data-driven services.
Proficiency in working with SQL and MongoDB, including writing complex queries and optimizing performance.
Experience with data ingestion and transformation, including web scraping, data mining, parsing unstructured data, and data matching.
Hands-on experience with TypeScript.
Familiarity with Python and relevant libraries for data processing is strongly preferred.
Experience using GitHub and standard version control workflows.
Strong ability to reason about systems and write production-quality code independently, with a clear understanding of how the underlying logic and architecture work.
Data Systems & Platform Skills
Understanding of data warehousing concepts, analytics pipelines, and data lifecycle management.
Ability to reason about data modeling, schema design, and long-term maintainability of datasets.
Education & Experience
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
1–3 years of professional experience in data engineering, backend engineering, or a hybrid data/backend role.
Nice to Have Experience and Skills
Exposure to AI usage, LLMs, or API-driven AI workflows applied to data processing or product features.
Familiarity with data lakes, cloud data infrastructure, or large-scale distributed systems.
Understanding of best practices around data privacy, security, and access controls.
Why Join SecondaryLink?
Be part of the world’s fastest-growing alternative investments intelligence platform.
Work at the intersection of finance, data, and technology.
Join lean, growing, and innovative Engineering and Data teams with room for impact and ownership.
Gain exposure to large-scale data challenges, modern frameworks, and end-to-end development.
Enjoy regular social events, including our company book club, “Second Shelf”.
Casual dress, full benefits, and paid time off.
If you’re excited about building scalable products that transform how financial intelligence is delivered — and want to make an impact in a fast-moving, startup environment — we’d love to hear from you.
Job Type: Full-time
Pay: $65,000.00-$85,000.00 per year
Ability to commute/relocate:
Toronto, ON M5V 2G5: reliably commute or plan to relocate before starting work (required)
Education:
Bachelor's Degree (required)
Experience:
data engineering or back-end development: 1 year (required)
Language:
English (required)
Work Location: In person