Key Responsibilities:
Data Pipeline Development: Design, develop, and maintain efficient and scalable data
pipelines using Python and the Azure tech stack.
Collaboration: Work closely with data scientists, analysts, and other stakeholders to
understand data requirements and deliver solutions that meet business needs.
Documentation: Create and maintain comprehensive documentation for data pipelines,
processes, and solutions.
Troubleshooting: Identify and resolve data-related issues and bottlenecks.
Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information
Technology, or a related field.
Experience: 6-8 years of experience in data engineering, with a strong focus on Python,
Azure, CI/CD, and deployment.
Technical Skills:
Proficiency in Python and APIs for data processing and automation.
Extensive experience with Azure cloud services (Azure Data Factory, Azure
Databricks, Azure SQL Database, etc.).
Understanding of CI/CD principles and tools (e.g., Jenkins, GitLab CI/CD).
Understanding of containerization and orchestration tools (e.g., Docker,
Kubernetes).
Knowledge of SQL and NoSQL databases.
Familiarity with data warehousing concepts and technologies.
Soft Skills:
Excellent problem-solving and analytical skills.
Strong communication and collaboration abilities.
Ability to work independently and as part of a team.
Detail-oriented with a focus on quality and accuracy.
Job Types: Full-time, Permanent
Pay: ₹2,500,000.00 - ₹2,700,000.00 per year
Benefits:
Work from home
Experience:
Total Work: 4 years (Preferred)
Agentic AI: 1 year (Preferred)
Work Location: Remote