Job Title: Data Engineer – Azure & Databricks
Requisition ID: 53660
Location: Oshawa, ON (Hybrid, 3 days remote)

Job Overview
We are seeking a hands-on Data Engineer – Azure & Databricks to design, build, and maintain scalable data solutions that enable data-driven decision-making across the organization. You will work in a cross-functional agile team to deliver high-quality data pipelines, data lakes, and data warehouse products that support analytics, reporting, and innovative customer experiences.
This role requires expertise in Azure Data Factory, Azure Data Lake, Azure Synapse, and Databricks, as well as strong programming skills in Python, PySpark, and SQL. The ideal candidate is experienced in building performant, secure, and maintainable data pipelines, collaborating with stakeholders, and applying best practices in data engineering and governance.

Key Responsibilities
* Build and productionize modular, scalable ELT/ETL pipelines using Azure Data Factory, Databricks, PySpark, and SparkSQL.
* Implement data ingestion and curation pipelines to create a single source of truth for analytics, reporting, and downstream systems.
* Collaborate with Data Architects, infrastructure, and cybersecurity teams to ensure data security and compliance.
* Clean, optimize, and prepare datasets, ensuring data quality, lineage, and performance standards.
* Support Business Intelligence Analysts in dimensional data modeling and visualization.
* Provide production support for data pipelines and troubleshoot data-related issues.
* Collaborate with data engineers, analysts, architects, and scientists to build a centralized data marketplace.
* Automate manual processes and optimize infrastructure for scalability and efficiency.
* Work within an agile SCRUM framework, contributing to backlog items and sprint deliverables.
* Maintain documentation, metadata, and source control for data products.
* Implement CI/CD and DevOps pipelines for data infrastructure and product deployment.
* Monitor production solutions and provide Tier 2 support as needed.
* Enforce role-based access control for data products.
* Participate in peer code reviews, testing, and quality assurance.
Qualifications

Education:
* Four-year university degree in Computer Science, Software Engineering, Data Engineering, AI, or a related field.
Experience:
* Proven experience as a Data Engineer building data pipelines, data lakes, and data warehouses.
* Strong programming skills in Python, PySpark, SparkSQL, and SQL.
* Hands-on experience with Azure Data Factory, ADLS, Synapse Analytics, and Databricks.
* Understanding of data structures, data processing frameworks, and governance principles.
* Ability to communicate technical concepts to non-technical stakeholders effectively.
Spirit Omega is committed to a diverse and inclusive workplace. We welcome applications from anyone, including members of Indigenous peoples, women, visible minorities, persons with disabilities, persons of minority sexual orientations and gender identities, and others with the skills and knowledge to productively engage with diverse communities.
Looking for more opportunities? Check out our website at jobs.spiritomega.com
#INDSPO