postech.work

Foundational Data Engineer (Worldwide Remote)

HelpGrid • 🌐 Remote

Posted 2 days, 22 hours ago

Job Description

About HelpGrid

HelpGrid is a performance-driven growth partner for eCommerce brands in the health and wellness space. We specialize in abandoned cart recovery, post-sale engagement, and customer support, helping our clients recover 10–15% more revenue from existing traffic. In fact, our work drove over $35 million in additional revenue for our clients last year alone.

What started as a small but mighty support team has grown into a full-scale revenue engine, and now, a tech company. We've built custom internal tools to optimize conversations, scale faster, and deepen customer connection, and we're now turning those tools into software products powered by AI and automation.

Whether you're a closer, a creative, a builder, or a systems thinker, there's room to grow with us. We care about performance, but we care just as much about people, and we’re building a place where both can thrive.

Why This Role Exists

This role exists to build the technical backbone that finally unifies our data. The Data Engineer will establish the structures, standards, and pipelines that turn scattered information into a trusted, governed, and scalable ecosystem. By creating proper schemas, mapping logic, and automated ETL processes, this role ensures that every team, from Operations to Finance to Analytics, can work with accurate, dependable data.

We’re looking for a builder: someone technical, meticulous, and systems‑driven who can architect a long‑term foundation for how data flows, transforms, and supports our business decisions.

What You’ll Do

Data Architecture & Schema Design

Evaluate and restructure existing data architecture to support scalability and performance.

Design new schemas, relationships, and data models that align with business logic and analytics needs.

Build and maintain a HelpGrid-centric data layer that consolidates fragmented sources into a central structure.

Provide strategic guidance on how data should be organized, named, and modeled for long-term sustainability.

Establish best practices for schema versioning, documentation, and change control.
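To make the schema-design work above concrete, here is a minimal sketch of a star schema using Python's built-in sqlite3 module. All table and column names are invented for illustration; they do not reflect HelpGrid's actual data model.

```python
import sqlite3

# A minimal, hypothetical star schema: one fact table (orders) surrounded by
# dimension tables (customer, date). Names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    email        TEXT NOT NULL
);
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,   -- surrogate key, e.g. 20240115
    iso_date TEXT NOT NULL
);
CREATE TABLE fact_orders (
    order_key    INTEGER PRIMARY KEY,
    customer_key INTEGER NOT NULL REFERENCES dim_customer,
    date_key     INTEGER NOT NULL REFERENCES dim_date,
    revenue_usd  REAL NOT NULL
);
""")
conn.execute("INSERT INTO dim_customer VALUES (1, 'a@example.com')")
conn.execute("INSERT INTO dim_date VALUES (20240115, '2024-01-15')")
conn.execute("INSERT INTO fact_orders VALUES (10, 1, 20240115, 49.99)")

# Analysts query the fact table joined to its dimensions:
row = conn.execute("""
    SELECT c.email, d.iso_date, f.revenue_usd
    FROM fact_orders f
    JOIN dim_customer c USING (customer_key)
    JOIN dim_date d USING (date_key)
""").fetchone()
print(row)  # ('a@example.com', '2024-01-15', 49.99)
```

The design choice the sketch illustrates: measures live in a narrow fact table, descriptive attributes live in dimensions, and analytics queries are simple joins over surrogate keys.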

ETL / ELT Development

Design and implement the company's first ETL framework, defining how data is extracted, transformed, and loaded from multiple sources.

Build automated, reliable pipelines that move data from the centralized database and external tools (ClickCRM, Twilio, LeadLoop, Finance, etc.) into analytics-ready structures.

Standardize transformation logic to clean, normalize, and enrich data for business use.

Implement pipeline monitoring, error handling, and validation for data quality assurance.
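As a hypothetical sketch of the pipeline step described above — extract, validate and transform, then load, with failures routed to a reject list rather than silently dropped — using only the Python standard library. All source names, field names, and records are invented for illustration.

```python
import sqlite3

# Hypothetical raw records as they might arrive from an external source
# such as a CRM export. Field names and values are invented.
raw_records = [
    {"id": "1", "email": "  Alice@Example.COM ", "amount": "19.99"},
    {"id": "2", "email": "not-an-email",         "amount": "5.00"},
    {"id": "3", "email": "bob@example.com",      "amount": "oops"},
]

def transform(record):
    """Clean and validate one record; raise ValueError on bad data."""
    email = record["email"].strip().lower()
    if "@" not in email:
        raise ValueError(f"invalid email: {email!r}")
    return {"id": int(record["id"]), "email": email,
            "amount": float(record["amount"])}  # raises on non-numeric input

# Load into an analytics-ready table; rejected rows are kept with a reason
# so data-quality issues are visible instead of disappearing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, email TEXT, amount REAL)")
rejected = []
for rec in raw_records:
    try:
        clean = transform(rec)
        conn.execute("INSERT INTO orders VALUES (:id, :email, :amount)", clean)
    except (ValueError, KeyError) as exc:
        rejected.append((rec.get("id"), str(exc)))

loaded = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(loaded, len(rejected))  # 1 2
```

In a production pipeline the reject list would typically feed monitoring and alerting; the point of the sketch is only the shape of the step: validate per record, load what passes, quarantine what fails.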

Workflow Optimization & Advisory

Provide architectural and workflow recommendations for how data should flow between systems and teams.

Define how analysts should access, refresh, and use data safely and consistently.

Partner with the Data & Analytics Manager to align the engineering roadmap with BI and reporting priorities.

Develop scalable, reusable scripts and frameworks that simplify ongoing data management.

Integration \& Performance Optimization

Integrate data from internal and third-party platforms into a centralized environment.

Optimize query and pipeline performance for high-volume operations.

Build APIs or microservices for data synchronization and access.

Collaborate with DevOps to ensure infrastructure stability, backup policies, and monitoring.

Governance, Security, & Documentation

Document data lineage, schema definitions, and system dependencies.

Implement data access controls, validation checks, and compliance standards.

Maintain transparent documentation for analysts, developers, and leadership.

Promote data stewardship and governance best practices across departments.

Who You Are

A systems thinker who enjoys building from the ground up.

Comfortable making architectural decisions that balance long-term scalability with immediate needs.

Detail-oriented, reliable, and committed to producing high-integrity data structures.

Able to partner well with both technical teams and business stakeholders.

Calm, clear, and collaborative, even when juggling competing priorities.

Required Qualifications

3–5 years of experience in data engineering, architecture, or backend development.

Proven success building or restructuring data environments for scalability and reliability.

Strong command of SQL and at least one programming language (Python, Node.js, or similar).

Experience designing and implementing ETL/ELT frameworks from scratch.

Hands-on experience with data warehouses or data lakes (Azure, Microsoft Fabric, Snowflake, or AWS Redshift).

Solid understanding of data modeling principles, including star schema and normalization.

Familiarity with APIs, data validation, and version control (Git).

Preferred Skills

Experience with ClickHouse, Redis, or Cassandra.

Familiarity with BI tools like Power BI, Looker, or Tableau.

Experience with orchestration and containerization tools (Airflow, Docker, Kubernetes).

Strong documentation habits and systems-thinking skills.

Ability to translate technical concepts into clear recommendations.

Location: Remote

Department: Data & Analytics

Reports To: CEO

Note: During onboarding and probation, this role will work under the guidance of the current Data / Product Manager (interim supervisor).

Employment Type: Full-Time
