Job Description
Candidates should bring 3–5+ years of practical experience in some of the following areas:
Working extensively with SQL Server.
Creating and maintaining data pipelines using Azure technologies, ideally Azure Databricks, but also Azure Data Factory, Azure Functions, Azure Stream Analytics, Azure Log Analytics, and Azure DevOps.
Developing data workflows or enrichment processes in Python, particularly within a Databricks environment.
A willingness to explore and adopt new platforms or tools in the data ecosystem, such as Redis, RabbitMQ, Neo4j, and Apache Arrow.
Comfortable collaborating with business analysts and various stakeholders to clarify requirements and shape solutions that are reusable and well-integrated.
Ready to participate in thorough testing, in line with DevOps practices and the team's shared quality standards.
Capable of producing clear, well-structured documentation for the solutions delivered.
Knowledge of Power BI is required.
Experience working with SAP data sources is beneficial but not essential.
Able to engage in constructive discussions, apply logical reasoning, and provide grounded, straightforward feedback when needed.
Languages: strong command of English is mandatory; fluency in German, French, or Dutch is considered an advantage.
Expected to approach data from a cross-organizational perspective, avoiding siloed thinking and promoting coherence and reuse across the enterprise.
Demonstrated ability to continuously improve processes and drive measurable outcomes.
Customer-focused mindset with an iterative and analytical approach to problem-solving and delivery.
A genuine team contributor who values respect, open communication, constructive challenge, innovation, and unified collaboration.
Product-oriented attitude and enthusiasm for sharing insights with the team or with other groups to contribute to overall product success.