About Baselayer:
Trusted by 2,200+ financial institutions, Baselayer is the intelligent business identity platform that helps verify any business, automate KYB, and monitor real-time risk. Baselayer's B2B risk solutions & identity graph network leverage state & federal government filings and proprietary data sources to prevent fraud, accelerate onboarding, and lower credit losses.
About You:
You want to learn from the best of the best, get your hands dirty, and put in the work to hit your full potential. You're not just doing it for the win; you're doing it because you have something to prove and want to be great. You're hungry to become an elite data engineer, designing rock-solid infrastructure that powers cutting-edge AI/ML products.
You have 1–3 years of experience in data engineering, working with Python, SQL, and cloud-native data platforms
You've built and maintained ETL/ELT pipelines, and you know what clean, scalable data architecture looks like
You're comfortable with structured and unstructured data, and you thrive on building systems that transform chaos into clarity
You think in DAGs, love automating things with Airflow or dbt, and sweat the details when it comes to data integrity and reliability
You're curious about AI/ML infrastructure, and you want to be close to the action: feeding the models, not just cleaning up after them
You value ethical data practices, especially when dealing with sensitive information in environments like KYC/KYB or financial services
You're a translator between technical and non-technical stakeholders, aligning infrastructure with business outcomes
Highly feedback-oriented. We believe in radical candor and using feedback to get to the next level
Proactive, ownership-driven, and unafraid of complexity, especially when there's no playbook
Responsibilities:
Pipeline Development: Design, build, and maintain robust, scalable ETL/ELT pipelines that power analytics and ML use cases
Data Infrastructure: Own the architecture and tooling for storing, processing, and querying large-scale datasets using cloud-based solutions (e.g., Snowflake, BigQuery, Redshift)
Collaboration: Work closely with data scientists, ML engineers, and product teams to ensure reliable data delivery and feature readiness for modeling
Monitoring & Quality: Implement rigorous data quality checks, observability tooling, and alerting systems to ensure data integrity across environments
Data Modeling: Create efficient, reusable data models using tools like dbt, enabling self-service analytics and faster experimentation
Security & Governance: Partner with security and compliance teams to ensure data pipelines adhere to regulatory standards (e.g., SOC 2, GDPR, KYC/KYB)
Performance Optimization: Continuously optimize query performance and cost in cloud data warehouses
Documentation \& Communication: Maintain clear documentation and proactively share knowledge across teams
Innovation & R&D: Stay on the cutting edge of data engineering tools, workflows, and best practices, bringing back what works and leveling up the team
Benefits:
Hybrid in SF. In office 3 days/week
Flexible PTO
Healthcare, 401(k)
Smart, genuine, ambitious team
Salary Range:
$135k – $220k + Equity (0.05% – 0.25%)