Now Hiring: Platform Engineer (Big Data System Engineer) – Zurich, Switzerland
Role: Platform Engineer (Big Data System Engineer)
Location: Zurich, Switzerland
Experience: Minimum 5 years
Type: Permanent
Work Mode: Hybrid (3 days onsite, 2 days remote)
Languages: English (Advanced), German (Optional)
Start Date: Immediate
Eligibility: EU Nationals only
About the Role
We are looking for an experienced Platform Engineer (Big Data System Engineer) to join a global data team in Zurich. The ideal candidate will have strong expertise in Big Data platforms, DevOps, Kubernetes, Kafka, and data pipeline automation. This role focuses on operating and optimizing enterprise data systems and implementing automation to ensure scalability, reliability, and performance.
The Platform Engineer will work closely with data engineering, security, and cloud operations teams to design and deliver resilient, secure, and high-performing data platforms.
Key Responsibilities
- Operate and manage Global Data Platform components such as VM servers, Kubernetes, and Kafka.
- Administer and support data applications including the Apache stack, Collibra, and Dataiku.
- Automate infrastructure, security, and CI/CD pipelines for efficient data pipeline execution (ETL/ELT).
- Build and enhance data pipeline resiliency through monitoring, alerting, and quality checks.
- Apply DevSecOps and Agile methodologies to deliver integrated platform solutions.
- Collaborate with enterprise security, digital engineering, and cloud operations teams to align on architecture frameworks.
- Troubleshoot and resolve system issues, incidents, and alerts, continuously improving performance.
- Stay current with industry developments and Big Data technology trends to design new features and capabilities.
Experience Required
- Minimum 5 years of experience building or designing large-scale, distributed data systems.
- Strong background in Big Data platform operations, data pipeline development, and DevOps automation.
- Hands-on experience with Kafka, Control-M, AWA, and other streaming/file-based ingestion tools.
- Experience with DevOps tools such as Jenkins, Octopus, Ansible, Chef, or XL Deploy.
- Knowledge of on-prem Big Data architecture; exposure to cloud migration is a plus.
- Experience integrating Data Science Workbench platforms (e.g., Dataiku).
- Familiarity with Agile methodologies such as Scrum or SAFe.
- Experience supporting enterprise reporting (Tableau) and Data Science / MLOps environments.
Skills & Technical Competencies
- Deep expertise in large data solutions such as data lakes, delta lakes, data meshes, lakehouses, and streaming platforms.
- Strong hands-on knowledge of distributed technologies: Kafka, Kubernetes, Spark, S3, Parquet.
- Proficiency in Python and Java (or Scala/R), along with Linux/Unix scripting, Jinja templates, and Puppet scripts.
- Experience with VM setup, K8s scaling, Docker/Harbor management, and CI/CD pipelines.
- Understanding of firewall configurations, network setup, and platform security.
- Excellent communication skills in English; German language knowledge is an advantage.
- Background in the financial services domain is desirable.
- Higher education in Computer Science, Information Systems, or Engineering (e.g., Fachhochschule, Wirtschaftsinformatik).
If you are a skilled Platform Engineer with expertise in Big Data systems, Kubernetes, Kafka, and DevOps, and are looking for a challenging role in Zurich, we’d love to hear from you.
#PlatformEngineer #BigDataEngineer #DataPlatform #Kafka #Kubernetes #Spark #S3 #Python #Java #Scala #Dataiku #DevOps #DataEngineering #CI_CD #HybridJobs #ZurichJobs #SwitzerlandJobs #BigData #DataPipelines #ETL #DataAutomation #HiringNow #TechJobs #EuropeCareers