6-month daily rate contract
Dublin - Hybrid
As a Data Engineer, you will be responsible for designing and implementing data solutions using Microsoft Azure and Databricks, ensuring scalable, efficient, and secure data pipelines that meet the needs of both technical and business stakeholders. You will work closely with cross-functional teams to understand data requirements and to design and develop cloud-based solutions that support real-time and batch data processing, advanced analytics, and machine learning workflows.
The role
Design, develop and implement robust, scalable, and efficient data pipelines on our Azure/Databricks cloud platform for real-time and batch workloads (a minimal batch example is sketched after this list)
Implement ELT processes to integrate data from various sources into the CDP
Implement data quality checks, monitoring systems, and alerts for pipeline health
Manage and optimize data workloads in the data platform, ensuring performance, cost-efficiency, resiliency and security
Collaborate with data scientists, analysts, and other engineers to understand data needs and deliver optimal solutions from ingestion through to visualisation
Ensure data solutions comply with data governance, security, and privacy requirements, maintaining data integrity and adhering to industry best practices and regulations
Understand cloud infrastructure provisioning using Infrastructure as Code (IaC) tools such as Terraform
Mentor and guide junior data engineers, leveraging industry-wide best practices and technical excellence
Troubleshoot and resolve data-related issues, ensuring data accuracy and pipeline performance
Stay current with the latest technologies and trends in cloud computing and data engineering, particularly within the Microsoft Azure and Databricks ecosystems
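For illustration only, here is a minimal PySpark sketch of the kind of batch ELT step with a data quality gate described above; every table name and path is a hypothetical placeholder, not part of the actual platform.

# Minimal sketch of a batch ELT step with a data quality gate (PySpark on Databricks).
# All table names and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("elt-sketch").getOrCreate()

# Extract-Load: land raw source data unmodified in the bronze layer.
raw = spark.read.option("header", "true").csv("/mnt/landing/orders/")  # hypothetical path
raw.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Transform: deduplicate, type, and filter into the silver layer.
silver = (spark.table("bronze.orders")
          .dropDuplicates(["order_id"])
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .filter(F.col("order_id").isNotNull()))

# Data quality gate: abort the run if the cleaned dataset is empty.
if silver.count() == 0:
    raise ValueError("DQ check failed: no valid rows for silver.orders")

silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

In practice, a check like this would typically be wired into the platform's monitoring so that a failed run raises an alert.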
Skills Required
Strong experience in Microsoft Azure, including Azure Data Factory, Azure SQL Database, Azure Storage, Azure Key Vault, Function and Logic Apps, Cost Analysis and other related Azure cloud services
Experience with Databricks, including data engineering, data processing, and analytics development
Strong understanding of performance optimisation best practices and data governance frameworks (e.g. Databricks Unity Catalog)
Strong knowledge of ETL/ELT processes, data modelling, data warehousing concepts and medallion architecture
Experience with real-time data processing frameworks (e.g., Apache Kafka; see the streaming sketch after this list)
Proficiency in Python programming language
Expertise in PySpark, SQL and experience working with large-scale datasets
Knowledge of CI/CD and DevOps practices and tools (Git, Azure DevOps)
Experience working with Agile methods of delivery
Experience provisioning cloud infrastructure with Terraform Infrastructure as Code (IaC) is desirable
Knowledge of machine learning integration and MLOps is desirable
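Again purely as an illustration of the real-time side of the stack, a minimal PySpark Structured Streaming sketch reading from Kafka into a Delta bronze table; the broker address, topic, and checkpoint location are hypothetical placeholders.

# Minimal Structured Streaming sketch: Kafka -> Delta bronze table.
# Broker address, topic, and checkpoint location are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "orders")                     # hypothetical topic
          .option("startingOffsets", "latest")
          .load()
          .select(F.col("value").cast("string").alias("payload"), F.col("timestamp")))

(events.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/orders")    # hypothetical path
 .outputMode("append")
 .toTable("bronze.order_events"))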
For more information call Michael on 01-6146058 or [email protected]