Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 3 years of experience required
Educational Qualification: 15 years of full-time education
Summary: Provide operational support for the Azure Databricks platform, covering cluster, job, notebook, and workspace administration, incident management, and production troubleshooting.

Roles & Responsibilities:
- Provide operational support for Azure Databricks clusters, jobs, notebooks, and workspaces.
- Monitor platform health and job performance; troubleshoot issues related to pipelines, clusters, and data latency.
- Handle incident management, root cause analysis (RCA), and L2/L3 problem resolution for Databricks-related issues.
- Support Delta Lake operations, including schema evolution, versioning, and data consistency checks.
- Manage and troubleshoot connections to ADLS Gen2, Azure SQL, Synapse, ADF, Event Hubs, Key Vault, and other external data sources.
- Support data sharing, including Delta Sharing, Unity Catalog, cross-workspace data access, and data governance.
- Troubleshoot cross-tenant/workspace data access via private endpoints, credential passthrough, and security policies.
- Support real-time streaming data pipelines and resolve issues such as checkpoint corruption, consumer lag, and schema drift.
- Manage access controls, permissions, and entitlements for Databricks users, groups, and shared data assets.
- Apply performance tuning to PySpark jobs, SQL queries, and cluster configurations.
- Automate monitoring and housekeeping activities using Python, PySpark, or PowerShell scripts (see the sketch after the Additional Information section).
- Maintain runbooks, troubleshooting guides, and operational best practices for recurring issues.
- Collaborate with data engineers, architects, and business teams to ensure smooth execution of ETL pipelines, data flows, and the data platform overall.
- Participate in the on-call rotation for critical production support.

Professional & Technical Skills:

Must Have Skills:
- Hands-on experience with the Databricks Unified Data Analytics Platform (administration, job scheduling, troubleshooting).
- Strong knowledge of PySpark and SQL for debugging production workloads.
- Solid understanding of Delta Lake concepts, schema evolution, and optimization.
- Proficiency in Azure cloud services: ADLS Gen2, ADF, Synapse, Key Vault.
- Experience with cluster management, autoscaling, job retries, and performance tuning.
- Familiarity with monitoring tools (Log Analytics, Azure Monitor, or third-party tools).
- Knowledge of CI/CD integration for Databricks (Repos, Git, pipeline triggers).

Good to Have Skills:
- Exposure to DataOps practices and automation using PowerShell, Python, or Terraform.
- Experience with incident and change management tools (ServiceNow, Jira).
- Understanding of Lakehouse architecture governance and security best practices.
- Knowledge of networking aspects in Databricks (VNets, private endpoints, firewall rules).

Additional Information:
- This position is based at our Hyderabad office.
- Requires 15 years of full-time education.
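To illustrate the kind of monitoring and housekeeping automation this role involves, here is a minimal PySpark sketch of a routine Delta Lake maintenance job. It is a sketch only, assuming a Databricks notebook or job context where the spark session is already provided; the table names are hypothetical placeholders.

    # Routine Delta Lake housekeeping: compact files, enforce retention,
    # and log the most recent commit per table as a simple health check.
    from delta.tables import DeltaTable

    TABLES = ["ops.events", "ops.orders"]  # hypothetical table names

    for name in TABLES:
        # Compact small files so read performance stays stable.
        spark.sql(f"OPTIMIZE {name}")
        # Delete data files outside the default 7-day retention window.
        DeltaTable.forName(spark, name).vacuum()
        # Capture the latest commit (timestamp and operation) for monitoring.
        last = spark.sql(f"DESCRIBE HISTORY {name} LIMIT 1").collect()[0]
        print(name, last["timestamp"], last["operation"])

In practice, a script like this would be scheduled as a Databricks job and its output routed into the team's monitoring tooling.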
Candidate should have strong problem-solving skills and be comfortable working in a production support environment with cross-functional teams.