Senior Data Acquisition Engineer

Daloopa, Inc. • 🌐 Remote • 💵 $124,430 - $168,471

Posted 1 month, 4 weeks ago

Job Description

About Daloopa, Inc.

Daloopa accelerates decision-making for investment professionals by transforming complex financial data into actionable intelligence through AI-powered infrastructure. Founded by Thomas Li, Daloopa automates the extraction, organization, and delivery of deeply sourced, audit-ready financial datasets, enabling clients to update models faster, reduce errors, and focus on higher-value analysis. With innovations like MCP for Financial Services, which connects verified data directly to AI agents and industry workflows, Daloopa is trusted by leading global institutions such as Morgan Stanley and supported by prominent investors. Their robust, scalable platform offers accurate, traceable, and seamlessly integrated data solutions, giving financial teams a decisive edge in research and compliance and redefining the future of fundamental data for buy-side and sell-side professionals worldwide.

Daloopa’s mission is to become the market leader in high-quality, actionable data for the world’s top investment professionals. As a Senior Python Engineer on our Crawler Team, you will play a pivotal role in acquiring and transforming the raw data that serves as the backbone of every Daloopa product and decision.

Our data acquisition systems are critical to our ability to provide unique, up-to-date, and reliable datasets for our clients. In this role, you will help design, develop, and sustain the advanced infrastructure that brings vast, diverse data sources into the Daloopa ecosystem. Your work directly enables investment research, automation, and strategic insights across our organization and client base.

What You’ll Lead and Transform

Architect and implement systems to acquire large volumes of structured and unstructured data from a wide array of sources, ensuring completeness and quality.

Collaborate with product, engineering, and operations teams to understand evolving data requirements and turn them into robust, automated data collection solutions.

Enhance and optimize data ingestion pipelines to support timely delivery of critical datasets for business and client needs.

Monitor changing data landscapes and be proactive in sourcing new types of data to maintain Daloopa’s competitive edge.

Address challenges related to scale, data freshness, and reliability to ensure data pipelines drive business value.

Mentor peers on best practices in data acquisition and data engineering.

What Sets You Up for Success

Demonstrated success building, maintaining, and evolving data pipelines at scale, such as architecting ETL workflows that routinely process millions of records per day or integrating data from dozens of disparate sources into production.

Ownership of end-to-end data acquisition workflows, with direct experience managing and improving data quality metrics (for example, reducing error rates or increasing completeness by specific percentages).

A proven track record designing resilient and efficient systems, able to adapt quickly to new requirements, handle complex edge cases, and maintain data integrity and completeness at all times.

Hands-on expertise with Python and frameworks like Scrapy (or equivalents), creating automated processes for data ingestion, cleaning, and enrichment (a brief illustrative sketch follows this section).

Skilled at cross-functional collaboration, driving ambitious data goals and iterating rapidly with engineering, data, product teams, and other internal stakeholders.

Countless stories of optimizing, troubleshooting, and scaling data infrastructure for reliability, freshness, and performance, reflecting practical expertise rather than just theory.

Known for leveling up those around you: you share knowledge freely, break down silos, and elevate team standards through clear documentation, communication, and mentorship.
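To make the ingestion work above concrete, here is a minimal, hedged sketch of a Scrapy spider; the spider name, start URL, and CSS selectors are hypothetical placeholders, not actual Daloopa sources.

```python
# Minimal illustrative Scrapy spider. All names, URLs, and selectors below
# are hypothetical placeholders used only to sketch the ingestion pattern.
import scrapy


class FilingsSpider(scrapy.Spider):
    name = "filings"  # hypothetical spider name
    start_urls = ["https://example.com/filings"]  # placeholder source URL

    def parse(self, response):
        # Yield one record per table row; selectors are assumptions.
        for row in response.css("table.filings tr"):
            yield {
                "company": row.css("td.company::text").get(),
                "period": row.css("td.period::text").get(),
                "document_url": response.urljoin(
                    row.css("a::attr(href)").get() or ""
                ),
            }

        # Follow pagination until no "next" link remains.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

In practice, downstream cleaning and enrichment would typically live in Scrapy item pipelines or a separate processing stage rather than in the spider itself.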

Bonus Points for

Pioneered solutions for crawling or parsing data from notoriously difficult sources (e.g., highly dynamic, anti-bot-protected, or multilingual sites), or have open-sourced tools that raised the industry bar.

Integrated adaptive systems, such as machine learning-driven extraction, automated throttling, or self-healing pipelines, to accelerate data freshness, boost extraction quality, or dramatically reduce human intervention in high-complexity scraping workflows (a small backoff sketch follows this section).

Built or managed crawler platforms ingesting millions of financial records daily, ensuring not only continuous uptime and automated fault tolerance, but also rigorous adherence to data privacy, regulatory, and compliance requirements unique to financial data pipelines.

Developed or integrated data infrastructure for capital markets, portfolio management, or regulatory analytics, enabling clients to gain actionable insights, achieve stricter data SLAs, or advance compliance. Especially if you’ve worked directly with investment or product teams to translate business-critical requirements into scalable technical solutions, or have presented your work to industry groups focused on financial data innovation.
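As one hedged illustration of what “automated throttling” and self-healing fetch loops can look like, here is a small retry-with-exponential-backoff sketch; the function name, parameters, and status-code handling are illustrative assumptions, not a description of Daloopa’s systems. (Scrapy users often reach for the built-in AutoThrottle extension, enabled via the AUTOTHROTTLE_ENABLED setting, for a similar effect.)

```python
# Generic retry-with-backoff sketch; names and defaults are illustrative
# assumptions, not part of any specific production pipeline.
import random
import time

import requests


def fetch_with_backoff(url, max_attempts=5, base_delay=1.0, timeout=30):
    """Fetch a URL, backing off exponentially on transient failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            response = requests.get(url, timeout=timeout)
            # Treat rate-limiting and temporary unavailability as retryable.
            if response.status_code in (429, 503):
                raise requests.HTTPError(f"retryable status {response.status_code}")
            response.raise_for_status()
            return response
        except requests.RequestException:
            if attempt == max_attempts:
                raise
            # Exponential backoff with jitter to avoid synchronized retries.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 1)
            time.sleep(delay)
```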

What You Can Look Forward To

Initial Conversation (30 min)

  • Introductory call to discuss your background, interests, and the team at Daloopa.

Hiring Manager Screening (30 min)

  • Speak with the hiring manager about your experience, motivations, and fit for the Senior Python Engineer, Crawler Team role.

Technical Deep Dive (60 min)

  • Tackle real-world technical questions and coding problems relevant to large-scale data pipelines and web crawling.

Systems & Architecture Interview (60 min)

  • Design and discuss solutions for end-to-end data workflows, scalability, and reliability challenges.

Meet Your Colleagues (45 min)

  • Connect with future team members to explore collaboration, culture, and how you approach teamwork.
