Senior Data Engineer
Hybrid (Dublin, Ireland)
About the Company
Our client is a global technology-driven organization focused on improving lives through data, innovation, and digital transformation. With a culture rooted in collaboration, diversity, and continuous learning, the company builds scalable solutions that drive real-world impact across industries. Employees enjoy strong professional growth opportunities, flexible hybrid work arrangements, and a people-first environment that values initiative, creativity, and excellence.
About the Position
We are seeking an experienced Senior Data Engineer to join a growing analytics and engineering team. In this role, you will design, build, and maintain modern data pipelines and architectures that support analytics, reporting, and machine learning initiatives. You’ll work with cloud-based technologies, develop efficient ETL processes, and ensure the highest standards of data quality, performance, and security.
Key Responsibilities
Design, build, and optimize data pipelines that extract, transform, and load data from multiple sources into centralized repositories (data lakes, data warehouses, etc.)
Manage data integration across systems, APIs, logs, and external sources, ensuring data is consistent and reliable
Implement data transformation and cleaning routines to ensure data integrity and usability for analytics, reporting, and machine learning initiatives
Collaborate with cross-functional teams to establish data architecture standards and best practices in pipeline automation, deployment, and monitoring
Contribute to frameworks that support data governance and compliance with organizational and regulatory standards
Work closely with analytics, product, and infrastructure teams to design scalable data solutions on cloud platforms and services such as Azure and Snowflake
Implement monitoring and alerting mechanisms to ensure high availability, accuracy, and reliability of data systems
Participate in continuous improvement efforts by exploring new technologies and optimizing existing data processes
Experience/Requirements
Required:
Bachelor’s degree (or equivalent experience) in Computer Science, Information Technology, Data Management, or a related field
Proven experience designing and implementing data solutions, including data modeling
Strong hands-on experience with PySpark and SQL, including advanced queries and window functions
Experience with Azure Data Factory or Apache Airflow for pipeline orchestration
Proficiency with Azure Databricks and data pipeline development in cloud environments
Familiarity with CI/CD and DevOps tools for automated deployment and testing
Strong understanding of data governance, security, and compliance best practices
Advanced Python skills for data manipulation and transformation
Preferred:
Experience working in agile/scrum environments
Knowledge of Snowflake or similar modern data warehouse technologies
Exposure to machine learning (ML) or AI model deployment in production settings
Self-motivated and proactive, with the ability to manage priorities and deliver results independently
Excellent communication skills and a collaborative mindset