
Sr. Data Engineer (Databricks)

Blumetra Solutions

Job Description

Job Title: Senior Data Engineer

Experience: 6–8 years

Location: Hybrid

Employment Type: Full-time

Mandatory skills: AWS, Python, SQL, Databricks

About The Role

We are looking for a highly skilled Senior Data Engineer who is passionate about building robust, scalable, and high-performance data systems. The ideal candidate will have deep expertise in SQL, Python, AWS, and Databricks, with a proven track record of designing and implementing modern data pipelines and analytical frameworks.

Key Responsibilities

Design, develop, and maintain scalable data pipelines and ETL processes for data ingestion, transformation, and storage.

Work with cross-functional teams to define and deliver data solutions supporting business and analytics needs.

Optimize SQL queries, data models, and overall pipeline performance.

Build and manage data workflows in Databricks and integrate with AWS data services (S3, Redshift, Glue, Lambda, etc.).

Ensure data accuracy, consistency, and reliability through data quality checks and monitoring frameworks.

Collaborate with Data Scientists, Analysts, and Product teams to enable self-service analytics and advanced data-driven insights.

Follow best practices for data governance, security, and compliance.

Continuously evaluate emerging data technologies and propose innovative solutions for process improvement.

Required Skills & Qualifications

Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

5+ years of hands-on experience in Data Engineering or related roles.

Strong proficiency in SQL for complex query development and data manipulation.

Expertise in Python for building data processing and automation scripts.

Experience with the AWS ecosystem, especially S3, Glue, Redshift, Lambda, and EMR.

Hands-on experience with Databricks for data processing, transformation, and analytics.

Experience working with structured and unstructured datasets in large-scale environments.

Solid understanding of ETL frameworks, data modeling, and data warehousing concepts.

Excellent problem-solving, debugging, and communication skills.

Good to Have

Experience with Airflow, Snowflake, or Kafka.

Knowledge of CI/CD pipelines and Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.

Exposure to data governance, metadata management, and data cataloguing tools.

