Title: AWS Data Engineer
Location: Sydney
Permanent Role
Key Responsibilities:
1. Design and implement data pipelines for seamless data ingestion.
2. Utilize AWS technologies, including S3, Glue, and Lambda, for effective data management.
3. Collaborate with cross-functional teams to enhance data processes using Python, PySpark, and SQL.
4. Implement CI/CD deployments for streamlined development processes.
5. Apply data warehouse concepts to ensure efficient data storage and retrieval.
6. Orchestrate data workflows using AWS Step Functions and Apache Airflow.
7. Create data pipelines using AWS services and Snowflake with DBT.
8. Apply Snowflake concepts to streamline data processing and reduce processing time and variable compute cost.
Essential Qualifications:
1. Minimum of 6 years of hands-on experience in data engineering.
2. Proficient in AWS services (S3, Glue, Lambda, Redshift, Airflow, Step Functions).
3. Strong programming skills in Python and PySpark.
4. Excellent SQL skills for data manipulation and analysis.
5. Minimum of 2 years of hands-on experience with Snowflake and good knowledge of core Snowflake concepts.
6. Hands-on experience with DBT is required; candidates without it should have a strong command of DBT fundamentals and core concepts.
Interested candidates can share their CVs at kranthi.kumar@carecone.com.au