đŸ‘šđŸ»â€đŸ’» postech.work

Senior Software Engineer (Data)

DocuPet ‱ 🌐 Remote ‱ đŸ’” $115,000 - $135,000


Job Description

About DocuPet

As the official pet registration provider for more than 250 jurisdictions, DocuPet is the largest and fastest-growing pet registration platform in North America.

Our proprietary platform consolidates all pet information into a single place and provides the services pet owners, community members, and animal shelters need to reunite lost pets with their owners quickly.

Beyond our platform, DocuPet offers specialized pet tags, an AI-powered pet tracker, and a lost pet alert system, and will soon launch a first-of-its-kind pet parenting mobile app, all aimed at ensuring every pet in North America is registered and has a safe and happy home.

Our work is very important. More than 6 million pets enter animal shelters every year, and just 10% of those animals are returned to their owners. Effective registration, pet identification, reunification tools, and animal shelter resources, all provided by DocuPet, are the solution that will measurably reduce shelter intakes while providing significant new funding for animal welfare organizations.

A new National Pet Record Search Tool, available for free to all shelters joining our National Animal Shelter Network, will launch in Q2 of 2025. DocuPet has the support of the animal welfare industry and, with astute strategic leadership, will become the de facto National Pet Registry program serving tens of millions of pet owners by 2027.

About this role

DocuPet is looking for a Senior Software Engineer with a strong focus on data engineering and backend systems to contribute to the development and scalability of our data software solutions. You will be a key player in designing, building, and maintaining scalable data pipelines, data warehousing solutions, and real-time analytics capabilities that support product initiatives and business intelligence needs.

You’ll work within the Data Engineering team, a cross-functional team of DataOps Engineers and Data Software Engineers that delivers software solutions and supports requests to create reports and alter production data. You’ll also collaborate closely with the Licensing Data team, which is responsible for supporting partner launches into DocuPet and for regularly importing data into our system from third parties such as veterinarians. You will play a crucial role in developing tools and workflows that validate, cleanse, structure, and ingest partner data efficiently into our system (sketched below), ensuring high-quality data integration that supports a seamless onboarding experience for new partners.
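
As a rough sketch of what that validate/cleanse/structure/ingest step can look like in Python (the field names, rules, and record shape here are hypothetical illustrations, not DocuPet's actual schema or pipeline):

```python
"""Hypothetical sketch of a partner-data cleansing step.

Field names, rules, and the record shape are illustrative only,
not DocuPet's actual schema or pipeline.
"""
from dataclasses import dataclass


@dataclass
class PartnerPetRecord:
    pet_name: str
    species: str
    owner_email: str


VALID_SPECIES = {"dog", "cat"}


def cleanse(raw: dict) -> PartnerPetRecord | None:
    """Normalize one raw partner row; return None if it fails validation."""
    species = str(raw.get("species", "")).strip().lower()
    email = str(raw.get("owner_email", "")).strip().lower()
    name = str(raw.get("pet_name", "")).strip().title()
    if species not in VALID_SPECIES or "@" not in email or not name:
        return None  # a real pipeline would quarantine this row for review
    return PartnerPetRecord(pet_name=name, species=species, owner_email=email)


rows = [
    {"pet_name": " rex ", "species": "DOG", "owner_email": "A@B.COM"},
    {"pet_name": "", "species": "ferret", "owner_email": "not-an-email"},
]
clean = [rec for raw in rows if (rec := cleanse(raw)) is not None]
print(clean)  # only the first row survives validation
```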

The ideal candidate has a passion for writing efficient, high-quality code while ensuring the security, performance, and integrity of our data systems. This role will work closely with the DataOps team, DBAs, and product stakeholders to optimize data models, automate workflows, and improve overall data governance and quality. This position reports to the Software Engineering Manager, Data.

What You Will Be Doing

Data Engineering & Pipelines: Design, build, and maintain software to extract, transform, and load (ETL) data at scale, ensuring accurate and timely ingestion across the platform. This includes automating ingestion, transformation, and validation processes for high availability and consistency across platforms. You will also be responsible for building real-time and batch data processing solutions; a toy batch example follows.
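
A minimal, self-contained illustration of the batch extract/transform/load shape (the file, columns, and table are hypothetical; a production pipeline would add scheduling, retries, schema checks, and monitoring):

```python
"""Toy batch ETL run: extract a CSV, transform rows, load into SQLite."""
import csv
import sqlite3


def extract(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def transform(rows: list[dict]) -> list[tuple]:
    # Drop rows missing a registration id and normalize city casing.
    return [
        (r["registration_id"], r["city"].strip().title())
        for r in rows
        if r.get("registration_id")
    ]


def load(records: list[tuple], conn: sqlite3.Connection) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS registrations (id TEXT PRIMARY KEY, city TEXT)"
    )
    conn.executemany("INSERT OR REPLACE INTO registrations VALUES (?, ?)", records)
    conn.commit()


if __name__ == "__main__":
    # Fake partner export so the sketch runs end to end.
    with open("partner_export.csv", "w", newline="") as f:
        f.write("registration_id,city\nR-100,syracuse\n,toronto\n")
    conn = sqlite3.connect(":memory:")
    load(transform(extract("partner_export.csv")), conn)
    print(conn.execute("SELECT * FROM registrations").fetchall())  # [('R-100', 'Syracuse')]
```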

Database Architecture & Optimization: Architect and optimize relational databases to ensure scalability, performance, and reliability as our platform grows. Collaborate closely with the Platform team and DBAs to align on database best practices, performance tuning, and capacity planning. Ensure that database design supports operational needs, data integrity, and long-term maintainability while leveraging the DBAs' expertise to enhance monitoring, security, scalability, and optimization strategies. A small tuning illustration follows.
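
A tiny query-tuning sketch, using stdlib sqlite3 so it runs anywhere; the role's stack is MySQL, but the principle shown (inspect the plan, add the index the query needs) carries over to MySQL's EXPLAIN. Table and column names are hypothetical:

```python
"""Before/after query plans for an indexed lookup, in stdlib sqlite3."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pets (id INTEGER PRIMARY KEY, owner_email TEXT)")
conn.executemany(
    "INSERT INTO pets (owner_email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(1000)],
)

QUERY = "SELECT id FROM pets WHERE owner_email = ?"


def plan() -> str:
    # The last column of EXPLAIN QUERY PLAN output is the human-readable detail.
    return conn.execute("EXPLAIN QUERY PLAN " + QUERY, ("user5@example.com",)).fetchone()[-1]


print("before:", plan())  # SCAN pets  (full table scan)
conn.execute("CREATE INDEX idx_pets_owner_email ON pets (owner_email)")
print("after: ", plan())  # SEARCH pets USING ... idx_pets_owner_email
```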

Collaboration & Feature Development: You will collaborate with peers on technical design, work estimation, and feature implementation across data models, business processes, and data logic. You will work closely with product managers, business stakeholders, and engineering teams to translate business needs into scalable data solutions, driving discussions and decision-making in tech architecture, design, and estimation meetings.

Code Quality & Best Practices: You will write elegant, maintainable, and efficient code with a focus on consistency and best practices. You will contribute to improving code quality through peer code reviews, collaborating with fellow engineers to enhance the overall system while mentoring junior developers.

Scrum & Agile Participation: You will actively participate in Scrum ceremonies, including daily stand-ups, sprint estimation and planning, retrospectives, and epic review meetings. You will help identify bottlenecks and performance implications, contributing to discussions that weigh the cost of technical debt against business impact.

Continuous Improvement & Innovation: You will contribute ideas to iteratively improve the engineering team's job enjoyment, processes, and productivity. You will help identify inefficiencies in data operations and propose innovative solutions, improving system reliability, maintainability, and scalability over time.

What You Should Have

A college or university degree in computer science or a related field (an equivalent combination of education and experience is also fine!).

5+ years of experience in software engineering with a data specialization and a focus on large-scale applications or data infrastructure.

5+ years of relational database experience (MySQL or similar preferred), including schema design, performance tuning, and query optimization.

5+ years of experience designing, building, and scaling batch and real-time data pipelines using tools like AWS Glue, Apache Spark, or similar frameworks.

5+ years of experience with data warehousing and data lakes, including designing and maintaining structured and unstructured data storage solutions (e.g., Amazon Redshift, Snowflake, Delta Lake, or Amazon S3-based data lakes).

5+ years of experience with open-source and managed big data services (e.g., Kinesis, Kafka, DynamoDB, or Spark) to support high-scale, real-time data processing.

Hands-on experience with event-driven architectures and streaming technologies (e.g., Apache Kafka or AWS Kinesis); a minimal consumer sketch follows this list.

Experience working with Agile methodologies and tools, such as JIRA.

Strong programming skills in Python (preferred) or a similar language for data processing and backend development.

Familiarity with data visualization tools like Metabase, Tableau, or Power BI for reporting and analytics.

Experience working with other engineers, QA analysts, product managers, and designers.

You are a strong communicator and a seasoned architect who can lead discussions or constructive debate and help drive technical decision-making.

A sense of ownership and a strong desire to solve problems rather than simply shipping solutions.

Hunger to have an impact on our team and the business.
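
For the event-driven/streaming experience above, here is a minimal consumer sketch using the kafka-python client (`pip install kafka-python`). The topic name, broker address, consumer group, and payload shape are all hypothetical; a Kinesis equivalent would use boto3 instead:

```python
"""Hypothetical Kafka consumer for a pet-event stream."""
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "pet-registration-events",             # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # assumes a local broker
    group_id="lost-pet-alerts",            # hypothetical consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # A real pipeline would validate, enrich, and route to downstream sinks.
    if event.get("type") == "pet_reported_lost":
        print(f"alert: pet {event.get('pet_id')} reported lost")
```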

Bonus Points If You Have

Proven experience building highly scalable backend web applications, particularly in PHP

Expertise in writing Python software for importing, validating, and transforming data

Familiarity with containerization and virtualization technologies, such as Docker

Benefits

Comprehensive medical insurance including Health, Dental and Vision

Flexible PTO

Fully remote

Our Mission and Values

Each of us at DocuPet comes to work each day to move our organization closer to its ultimate mission: to provide a safe and happy home for every pet. We take our core values very seriously knowing that we only work well with those who see the working world as we do.

Go Big - We aim to do big things. We don’t aim to impress ourselves, or those around us, we aim to be the very best anywhere. We accept all challenges and we intend to win.

Whatever It Takes - We finish whatever we start. No excuses. It often means a lot of work, but it’s worth it because we are the types who don’t rest until the job is done.

Inspire - Our people and our business inspire those around us. Each employee has a job to do, and they do it with excellence and grace. They bring joy to everyone they meet.

Believe - Each of us is responsible for selling ourselves, our projects, our outcomes, and our efforts. We must act with individual and collective conviction. We sell our ideas, our services, and our products at every opportunity.

Respect - We work as a team. We treat each other the way we expect to be treated. We listen to all opinions and voices, taking time to ensure that those with quieter personalities, and those who need time to collect and share their ideas, are heard. We accept differing viewpoints and are an inclusive company.

Job Types: Full-time, Permanent

Pay: $115,000.00-$135,000.00 per year

Benefits:

Casual dress

Company events

Dental care

Extended health care

Flexible schedule

Paid time off

Vision care

Work from home

Experience:

Relational databases: 5 years (required)

Data software engineering: 5 years (required)

ETL/ELT: 5 years (required)

Batch and real-time data pipelines: 5 years (required)

Data warehouse: 5 years (required)

Data lake: 5 years (required)

Big data: 5 years (required)

Language:

English (required)

Work Location: Remote
