Job Description
**Your Growth Unleashed**
Where your journey begins and your potential shines
At Synpulse, we don't just consult; we transform. As a leading global management and technology consultancy with Swiss roots, we empower financial institutions to navigate change and seize new opportunities. Since 1996, we've been shaping the future of financial services by delivering cutting-edge expertise in strategy, operations, and technology to banks and insurers worldwide.
What sets us apart is our people. At Synpulse, we cultivate a collaborative, high-impact culture where initiative and creativity are valued. With 21 offices across Europe, the Americas, and Asia-Pacific, we bring together diverse perspectives and talents. We believe innovation thrives where everyone feels they belong and can contribute.
Our core values, **Embrace, Drive, Achieve**, shape how we work and evolve. We don't just offer jobs; we offer the chance to develop, make an impact, and be part of a team that's redefining the future of financial services.
Are you excited about transforming the way we bank by using the latest technology? We are too!
Note: The selection process will involve in-person interviews.
As a Data Engineer, you'll play a pivotal role in designing, building, and optimizing data pipelines and infrastructure for our financial services clients. You'll work closely with consultants and client stakeholders to enable scalable, secure, and high-performance data solutions that power strategic decision-making, while thriving in a dynamic, multi-project environment where initiative and ownership are key.
Job Requirements
About You:
5+ years of professional experience in data engineering, with a strong focus on designing, developing, and optimizing scalable data pipelines, ETL/ELT workflows, and data integration solutions.
Proficiency in SQL and Python; knowledge of Spark, PySpark, Scala, or Java is a strong asset.
Hands-on experience with cloud platforms (AWS, Azure, or GCP) and cloud-native data tools (e.g., Snowflake, Databricks, Airflow, AWS Glue, EMR, AWS Lambda).
Experience working with ML models and LLM frameworks, and with deploying GenAI pipelines (prompt chains, embeddings, etc.).
Experience with containerization (Docker) and orchestration (Kubernetes) in cloud environments.
Familiarity with data mesh architecture principles and decentralized data ownership models.
Experience with relational and NoSQL databases (PostgreSQL, Sybase, MySQL, DynamoDB) and storage/integration systems (S3, SFTP).
Experience with scheduling and orchestration tools (CRON, Apache Airflow, AWS Step Functions).
Strong understanding of data modeling, warehousing concepts, and real-time processing architectures.
Experience with version control (Git/Bitbucket) and CI/CD practices using tools such as Jenkins; familiarity with Infrastructure as Code solutions like Terraform and CloudFormation.
Knowledge of observability, monitoring, and data lineage frameworks to ensure pipeline transparency.
Experience in financial services data domains (risk, compliance, trading, wealth/asset management, customer analytics).
Excellent communication skills and ability to work with both technical and non-technical stakeholders.
Job Responsibilities
About The Job:
Develop ETL/ELT workflows to enable seamless extraction, transformation, and loading of data from diverse sources into warehouses, data lakes, and client-facing applications.
Contribute to full-stack platform projects and infrastructure initiatives, adapting to new technologies and platforms as needed to deliver comprehensive solutions that meet diverse client requirements across consulting engagements.
Build and maintain data assets and models to support analytics, reporting, visualization, and operational use cases.
Architect and implement data mesh patterns to enable domain-oriented data ownership.
Collaborate with business teams, consultants, and analysts to gather and understand requirements, ensuring alignment with client objectives and data needs.
Implement observability, monitoring, lineage, and compliance best practices for transparency and regulatory alignment.
Contribute to migration projects from legacy platforms to modern data ecosystems (e.g., Snowflake, AWS).
Build and maintain CI/CD pipelines for data applications, ensuring automated testing, deployment, and rollback capabilities in containerized environments.
Collaborate with QA and Support teams to troubleshoot and resolve production issues, ensuring data quality and stability.
Participate in client workshops, solution design sessions, and the preparation of technical documentation.
Job Benefits
Why us:
Global transformation company with expertise in the financial space and the latest technologies
A comprehensive onboarding program that offers you time and resources to broaden your skillset and orientate yourself to Synpulse's values and methods
Continual and comprehensive learning and development through our Global Academy Program
Internal and external events to drive our DE&I mission "Free To Be Me"
Opportunities to transfer between practices and to our locations across the world
Hybrid working environment
Alongside a competitive salary, you'll get lots of other great benefits too:
20 days annual leave plus public holidays
Contribution to retirement account
Health and Life insurance coverage from day 1
Employee Assistance Program with 24/7 mental health support
Extensive Perks Program
Your Documents To Start The Process
Resume
Do you approach your tasks with commitment and enjoyment, and are you convinced that teamwork achieves better results than working alone? Are you proactive and willing to go the extra mile for your clients? Are you motivated not only to design solutions but also to implement them? As a flexible, goal-oriented person, you will quickly take on entrepreneurial responsibility with us.
Do you appreciate the spirit of a growing international company with Swiss roots and a strong corporate culture?
Then we look forward to receiving your online application at Synpulse Careers.