
Data Engineer

Synechron • 🌐 In Person

Posted 3 days, 20 hours ago

Job Description

We are

At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron’s progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, serving an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and 58 offices in 21 countries within key global markets.

Our challenge:

We are seeking a talented and experienced Data Engineer with expertise in OpenShift and a strong background in Python, PySpark, SQL, and data pipeline orchestration. The ideal candidate will have a solid understanding of Data Lakes, Data Factory concepts, and experience working within the banking or financial services domain. As a key member of our data engineering team, you will design, develop, and maintain scalable data pipelines and infrastructure to support business insights and decision-making.

The Role

Key Responsibilities:

Develop, deploy, and manage data pipelines using Python, PySpark, and Data Factory.

Design and optimize data lakes and data warehouse solutions, particularly leveraging Snowflake.

Build and maintain scalable ETL/ELT processes for large datasets.

Collaborate with data scientists, analysts, and stakeholders to understand data requirements.

Ensure data quality, integrity, and security across all data systems.

Manage deployment and orchestration of data workloads on OpenShift container platform.

Automate data workflows and improve data processing efficiency through scripting and orchestration tools.

Monitor system performance and troubleshoot issues related to data pipelines and infrastructure.
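As a rough illustration of the pipeline work described above, here is a minimal, stdlib-only Python sketch of an ETL-style quality gate that validates and normalizes records before loading. All names (fields, function) are hypothetical; in this role the equivalent logic would typically run on PySpark DataFrames, orchestrated via Data Factory, with Snowflake as the target warehouse.

```python
from datetime import date

# Hypothetical ETL quality gate: validate and normalize raw transaction
# records before loading them into a warehouse table. A production version
# would operate on PySpark DataFrames rather than Python dicts.

REQUIRED_FIELDS = {"account_id", "amount", "booked_on"}

def clean_records(rows):
    """Split rows into (loaded, rejected): coerce types, drop bad rows."""
    loaded, rejected = [], []
    for row in rows:
        # Reject rows missing any required field.
        if not REQUIRED_FIELDS <= row.keys():
            rejected.append(row)
            continue
        try:
            cleaned = {
                "account_id": str(row["account_id"]).strip(),
                "amount": round(float(row["amount"]), 2),
                "booked_on": date.fromisoformat(row["booked_on"]),
            }
        except (TypeError, ValueError):
            # Unparseable amount or date: quarantine instead of loading.
            rejected.append(row)
            continue
        loaded.append(cleaned)
    return loaded, rejected
```

Keeping rejected rows in a quarantine set, rather than silently dropping them, is one common way to meet the data-quality and auditability expectations typical of banking data platforms.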

Requirements

4+ years of experience as a Data Engineer.

Proven experience with Python programming for data engineering tasks.

Hands-on experience with PySpark and SQL for data transformations.

Strong knowledge of Snowflake data platform.

Experience with cloud data tools such as Data Factory or equivalent.

Familiarity with Data Lakes architecture and best practices.

Solid understanding of containerization and orchestration using OpenShift.

Familiarity with DevOps principles, CI/CD pipelines, and automation.

Strong problem-solving and communication skills.

Knowledge of banking/finance is mandatory.

We offer:

A multinational organization with 58 offices in 21 countries and the possibility to work abroad.

15 days (3 weeks) of paid annual leave plus an additional 10 days of personal leave (floating days and sick days).

A comprehensive insurance plan including medical, dental, vision, life insurance, and long-term disability.

Flexible hybrid policy.

RRSP with employer’s contribution up to 4%.

A higher education certification policy.

On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses.

Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups.

Cutting-edge projects at the world’s leading tier-one banks, financial institutions, and insurance firms.

A truly diverse, fun-loving and global work culture.
