
Senior Data Engineer

Dinero Solutions • 🌐 Remote


Job Description

Location: Remote

Position: Full-time, with a 4-5 hour PST overlap

Department: Engineering

Position Overview

We are seeking a Senior Data Engineer to join our technology team. In this role, you will build data-driven products that sit atop cloud-based data infrastructure. The ideal candidate is passionate about technology, software design, and data engineering, and is excited about blending these disciplines to create innovative solutions.

You will be responsible for one or more software and data products, participating in architectural decisions, design, development, and evaluation of new technical solutions. This role also involves leading a small team of developers.

Key Responsibilities

Manage and optimize data and compute environments to enable efficient use of data assets

Work with modern data stack technologies such as dbt and Airbyte

Build scalable data pipelines (experience with Airflow is a plus)

Work extensively with AWS and/or Google Cloud Platform (GCP)

Design, build, test, and deploy scalable and reusable systems capable of handling large volumes of data

Lead a small team of developers; conduct code reviews and mentor junior engineers

Continuously learn and help teach emerging technologies to the team

Collaborate with cross-functional teams to integrate data into broader applications

Required Skills

Proven experience designing and managing data flows

Expertise in designing systems and APIs for data integration

8+ years of hands-on experience with Linux, Bash, Python, and SQL

4+ years working with Spark and other components in the Hadoop ecosystem

4+ years of experience using AWS cloud services, particularly:

EMR

Glue

Athena

Redshift

4+ years of experience managing a development team

Deep passion for technology and enthusiasm for solving complex, real-world problems using cutting-edge tools

Additional Skills (Preferred)

BS, MS, or PhD in Computer Science, Engineering, or equivalent practical experience

Strong experience with Python, C++, or other widely used languages

Experience working with petabyte-scale data infrastructure

Solid understanding of:

Data organization: partitioning, clustering, file sizes, file formats

Data cataloging: Hive/Hive Metastore, AWS Glue, or similar

Background working with relational databases

Proficiency with Hadoop, Hive, Spark, or similar tools

Demonstrated ability to independently lead projects from concept through launch and ongoing operation

Job Types: Full-time, Permanent

Pay: ₹3,000,000.00 - ₹3,500,000.00 per year

Benefits:

Health insurance

Provident Fund

Work from home

Work Location: Remote
