Lead Data Software Engineer

EPAM Systems • 🌐 Remote

Posted 2 days, 17 hours ago

Job Description

We are seeking an experienced Lead Data Software Engineer for a remote role to drive the success of our team and advance our Data Science initiatives.

As a Lead Data Software Engineer, you will be responsible for designing and managing datamarts while delivering scalable and efficient data solutions tailored to client needs. This role includes building and maintaining data pipelines, providing operational support, and collaborating with Data Science teams to achieve high-quality results. You will also lead a team of skilled professionals, offering mentorship and guidance to encourage innovation and deliver impactful solutions.
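
To give a concrete flavor of the pipeline work described above, here is a minimal sketch of a scheduled pipeline feeding a datamart, written against Apache Airflow 2.x. The DAG name, task names, and the extract_orders/build_datamart callables are hypothetical placeholders, not part of the posting.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    """Hypothetical extract step: pull raw records from a source system."""
    ...


def build_datamart():
    """Hypothetical load step: aggregate raw records into a datamart table."""
    ...


# A daily two-task pipeline: extract raw data, then refresh the datamart.
with DAG(
    dag_id="orders_datamart",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",         # Airflow >= 2.4; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="build_datamart", python_callable=build_datamart)
    extract >> load  # the datamart refresh runs only after extraction succeeds
```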

Responsibilities

Lead and mentor the Data Software Engineering team, fostering professional development and continuous improvement

Collaborate with multi-disciplinary teams to deliver high-quality data solutions that meet project goals and deadlines

Establish and refine Data Software Engineering processes that prioritize automation, efficiency, and innovation

Develop and maintain scalable data solutions using AWS infrastructure

Optimize workflows and data processing with tools such as Apache Airflow and Apache Spark (a minimal PySpark sketch follows this list)

Monitor industry trends and incorporate best practices to enhance engineering strategies

Provide operational support for data pipelines and datamarts to ensure seamless functionality

Direct the creation and deployment of REST APIs for effective data integration and communication

Partner with clients to understand requirements and deliver tailored solutions

Manage team structure and workflow to ensure timely and efficient project delivery

Work closely with stakeholders, demonstrating strong leadership and communication skills
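
As referenced in the Airflow/Spark item above, this is a minimal PySpark sketch of the kind of batch aggregation such a role optimizes. The input path, column names, and output location are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

# Hypothetical input: raw order events landed as Parquet on S3.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")  # placeholder path

# Typical datamart-style rollup: one row per customer per day.
daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("customer_id", "order_date")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# A partitioned write keeps downstream reads cheap.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/datamart/daily_orders/"  # placeholder path
)
```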

Requirements

At least 5 years of experience as a Data Software Engineer, with expertise in managing complex and large-scale data infrastructures

Minimum of 1 year of experience leading and managing a team of engineers

Deep knowledge of AWS services, including the design and implementation of scalable data solutions

Advanced expertise in Apache Airflow and Apache Spark for workflow optimization and data processing

Strong proficiency with CI/CD tools such as Jenkins for efficient pipeline delivery

Solid skills in Python and SQL for creating and maintaining data pipelines and ETL processes

Experience with Databricks and PySpark for data analysis and processing

Familiarity with REST APIs for smooth data integration and communication (see the endpoint sketch after this list)

Excellent problem-solving and analytical capabilities for complex environments

Strong client-facing skills to ensure collaboration and alignment with project objectives

Outstanding organizational and leadership abilities to ensure efficient project execution

Upper-intermediate English proficiency for effective communication with stakeholders and teams
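
To illustrate the REST API requirement above, here is a minimal sketch of an endpoint exposing datamart results. FastAPI is an assumption (the posting names no framework), and the route, datamart name, and in-memory data are hypothetical stand-ins for a real query layer.

```python
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Datamart API (sketch)")

# Hypothetical in-memory stand-in for a datamart query layer.
FAKE_DATAMART = {
    "daily_orders": [
        {"customer_id": 1, "order_date": "2024-01-01", "total_amount": 42.0},
    ],
}


@app.get("/datamarts/{name}/rows")
def read_rows(name: str, limit: int = 100):
    """Return up to `limit` rows from the named datamart."""
    if name not in FAKE_DATAMART:
        raise HTTPException(status_code=404, detail="unknown datamart")
    return FAKE_DATAMART[name][:limit]
```

Served locally with uvicorn (for example, uvicorn main:app), the endpoint returns JSON that downstream consumers can integrate directly.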

Nice to have

Experience with Redshift for data warehousing and analysis
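
For the Redshift item, a sketch of the standard warehouse load path: bulk-copying Parquet from S3 into a table with COPY. The connection details, table, S3 path, and IAM role are placeholders, and redshift_connector is one of several client libraries that would work here.

```python
import redshift_connector

# Placeholder credentials; in practice these come from a secrets manager.
conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="analytics",
    user="etl_user",
    password="...",
)

cursor = conn.cursor()
# COPY is Redshift's bulk-load path; loading from S3 beats row-by-row INSERTs.
cursor.execute("""
    COPY daily_orders
    FROM 's3://example-bucket/datamart/daily_orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load'
    FORMAT AS PARQUET
""")
conn.commit()
conn.close()
```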
