Databricks Developer
Location: Amsterdam, Netherlands
Duration: Long Term (Contract)
Experience Required: 10+ Years
Required Skills
2–4 years of hands-on experience with Databricks, including Notebooks, Jobs, and Workflows.
Strong proficiency in PySpark and Spark SQL, with experience building distributed data processing pipelines.
Practical experience with Delta Lake (ACID transactions, MERGE, schema evolution, OPTIMIZE/VACUUM); see the PySpark sketch after this list.
6–7 years of experience in data engineering, ETL/ELT pipeline development, and cloud-based data processing.
Strong coding skills in Python and SQL.
Experience with at least one major cloud platform (Azure preferred, AWS/GCP acceptable).
Hands-on experience with Azure Data Lake, AWS S3, or Google Cloud Storage.
Familiarity with data modeling concepts: fact/dimension models, star and snowflake schemas, and lakehouse architecture.
Experience with Git and version control workflows.
Experience supporting CI/CD pipelines (Azure DevOps, GitHub Actions, GitLab CI).
Strong understanding of data validation, schema checks, data quality controls, and error handling.
Experience with orchestration tools such as Databricks Jobs, Azure Data Factory (ADF), Airflow, or Prefect.
Understanding of cloud security practices: IAM, RBAC, and secrets management (e.g., Azure Key Vault, Databricks secrets).
Strong documentation, communication, and cross-team collaboration skills.
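
For flavor, here is a minimal PySpark sketch of the Delta Lake and data-quality work described above: an upsert (MERGE) guarded by a simple validation step, followed by routine OPTIMIZE/VACUUM maintenance. The paths and the customer_id column are hypothetical, and the session configuration assumes open-source Delta Lake; a Databricks cluster comes pre-configured.

    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    # Session config assumes open-source Delta Lake; on a Databricks
    # cluster these extensions are already set.
    spark = (
        SparkSession.builder.appName("delta-merge-sketch")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        .getOrCreate()
    )

    # Hypothetical bronze (raw) and silver (curated) locations.
    updates = spark.read.json("/mnt/lake/bronze/customers_incoming")

    # Basic data-quality gate: drop rows missing the merge key.
    clean = updates.filter(updates["customer_id"].isNotNull())

    # ACID upsert (MERGE) into the curated Delta table.
    target = DeltaTable.forPath(spark, "/mnt/lake/silver/customers")
    (
        target.alias("t")
        .merge(clean.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )

    # Routine maintenance: compact small files (OPTIMIZE) and remove
    # files outside the default 7-day retention window (VACUUM).
    target.optimize().executeCompaction()
    target.vacuum()

In practice, a pipeline like this is typically scheduled as a Databricks Job or Workflow task, which ties together several of the required skills above.
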
Nice-to-Have Skills
Experience with Unity Catalog, Databricks SQL endpoints, or MLflow.
Knowledge of streaming technologies: Kafka, Azure Event Hubs, or AWS Kinesis.
Exposure to modern cloud data warehouses such as Snowflake, Azure Synapse, Amazon Redshift, or Google BigQuery.
Familiarity with Terraform, ARM templates, or other IaC tools.
Experience with monitoring and logging tools (CloudWatch, Azure Log Analytics, Datadog).
Understanding of advanced Spark tuning concepts such as adaptive query execution, autoscaling, and cluster optimization; see the configuration sketch below.
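
As a small illustration of the tuning concepts in the final item, the sketch below enables Spark 3.x adaptive query execution (AQE) features on a session. The configuration keys are standard Spark properties; the values are placeholder examples, not recommendations.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("aqe-sketch").getOrCreate()

    # Adaptive query execution: re-optimize plans at runtime from
    # shuffle statistics (on by default in recent Spark releases).
    spark.conf.set("spark.sql.adaptive.enabled", "true")

    # Coalesce many small post-shuffle partitions into fewer larger ones.
    spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")

    # Split skewed join partitions so one hot key cannot stall a stage.
    spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

    # Target partition size AQE aims for when coalescing/splitting
    # (example value, not a recommendation).
    spark.conf.set("spark.sql.adaptive.advisoryPartitionSizeInBytes", "128m")

Autoscaling, by contrast, is configured on the cluster definition (min/max workers) rather than on the Spark session.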