👨🏻‍💻 postech.work

Data Engineer (Remote)

Augusta Hitech Soft Solutions • 🌐 Remote • 💵 $380,000 - $800,000

Posted 2 days, 6 hours ago

Job Description

Industry: IT

Qualification: Any Degree

Required Skills: SQL, AWS Redshift, Glue, ETL/ELT

Working Shift: 2PM to 11PM IST

City: Coimbatore / Chennai / Bangalore

Country: India

Name of the position: Data Engineer

Location: Remote

No. of resources needed: 1

Mode: Contract (3 to 6 months)

Years of experience: 10+ Years

Shift: UK shift (2 PM to 11 PM IST)

Overview

We are seeking an experienced Senior Data Engineer with strong expertise in AWS Redshift and AWS Glue to design, develop, and maintain large-scale data solutions. The ideal candidate will have deep hands-on experience across the AWS data ecosystem and will play a critical role in building scalable, reliable, high-performance data pipelines. They will also be able to understand business requirements and translate them into effective data models and SQL solutions.

Key Responsibilities

Design, develop, and maintain end-to-end ETL/ELT pipelines using AWS Glue, Redshift, and related AWS services

Develop and maintain robust data models by understanding business requirements and analytics needs

Write and optimize complex SQL queries to support reporting, analytics, and downstream data consumers

Optimize data warehouse schemas, data models, and queries for performance, scalability, and cost efficiency

Collaborate with data architects, analysts, and business stakeholders to translate business needs into technical data solutions

Implement data quality, governance, and security best practices across data pipelines

Automate workflows, monitoring, alerting, and performance tuning in AWS environments

Manage data ingestion from multiple structured and unstructured data sources

Troubleshoot and resolve production data issues, ensuring high availability and reliability
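As a rough illustration of the ETL/ELT pattern the responsibilities above describe (not the team's actual pipeline), the sketch below shows the extract, transform, and load stages with a basic data-quality gate. All table and field names are invented for the example, and sqlite3 stands in for Redshift so the snippet is self-contained; in practice the transform would run as an AWS Glue PySpark job.

```python
import sqlite3


def extract(raw_rows):
    """Extract step: yield raw source records (here, in-memory dicts)."""
    yield from raw_rows


def transform(rows):
    """Transform step: reject incomplete records and normalize fields."""
    for row in rows:
        if row.get("order_id") is None or row.get("amount") is None:
            continue  # basic data-quality gate: skip incomplete records
        yield (row["order_id"], row["customer"].strip().lower(), float(row["amount"]))


def load(conn, rows):
    """Load step: bulk-insert cleaned rows into the warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, customer TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()


raw = [
    {"order_id": 1, "customer": " Alice ", "amount": "19.99"},
    {"order_id": None, "customer": "Bob", "amount": "5.00"},  # rejected: no id
    {"order_id": 3, "customer": "Carol", "amount": 42},
]
conn = sqlite3.connect(":memory:")
load(conn, transform(extract(raw)))
```

The same three-stage shape carries over to Glue: the extract reads from the Glue Catalog, the transform is a DynamicFrame/DataFrame mapping, and the load writes to Redshift.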

Required Skills & Qualifications

Strong hands-on experience with AWS Redshift, including performance tuning, schema design, and data warehouse optimization

Expertise in AWS Glue (Glue Studio, Glue Catalog) and PySpark for ETL development

Solid expertise in data modeling, with the ability to understand business processes and translate them into efficient data models

Strong proficiency in SQL, with proven ability to write complex, high-performance queries

Solid understanding of data warehousing concepts, ETL design patterns, and data modeling techniques (Star / Snowflake schemas)

Experience with AWS services such as S3, Lambda, Athena, Step Functions, and CloudWatch

Proficiency in Python for data transformation and automation

Experience with CI/CD pipelines, Git version control, and Infrastructure as Code (CloudFormation / Terraform) is a plus

Excellent problem-solving, analytical, and communication skills
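To make the star-schema and complex-SQL requirements above concrete, here is a toy example of the kind of query involved: a fact table joined to two dimension tables with an aggregate per dimension attribute. The schema and names are invented for illustration, and sqlite3 stands in for Redshift so the snippet runs anywhere.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Toy star schema: one fact table, two dimension tables.
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE dim_date     (date_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_sales   (customer_id INTEGER, date_id INTEGER, amount REAL);

INSERT INTO dim_customer VALUES (1, 'APAC'), (2, 'EMEA');
INSERT INTO dim_date     VALUES (10, 2024), (11, 2025);
INSERT INTO fact_sales   VALUES (1, 10, 100.0), (1, 11, 50.0), (2, 11, 75.0);
""")

# A typical analytics query: revenue per region per year,
# joining the fact table to both dimensions.
query = """
SELECT c.region, d.year, SUM(f.amount) AS revenue
FROM fact_sales f
JOIN dim_customer c ON c.customer_id = f.customer_id
JOIN dim_date     d ON d.date_id     = f.date_id
GROUP BY c.region, d.year
ORDER BY c.region, d.year
"""
for row in conn.execute(query):
    print(row)
```

On Redshift, performance tuning for this pattern typically means choosing distribution keys and sort keys so the fact-to-dimension joins avoid cross-node data shuffles.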
