The Opportunity
Future Fluent has been exclusively engaged by a leading infrastructure services organisation to appoint a Data Engineer to a newly created role in Brisbane.
This is a pivotal opportunity to design, build, and maintain scalable, high-performance data infrastructure that underpins the organisation’s analytics, machine learning initiatives, and core operational capabilities.
As a key enabler of data-driven transformation, you’ll convert raw data into actionable intelligence — ensuring data quality, accessibility, and reliability across the enterprise. Your work will empower analysts and the broader data team to deliver insights that drive strategic growth, operational efficiency, and innovation.
This position directly connects data engineering outcomes to business goals, supporting smarter decision-making and enabling product innovation across the supply chain.
What’s in it for you
Competitive salary package
Flexible work arrangements to support work-life balance
Collaborative, supportive team environment
Exposure to diverse projects across multiple business sectors
Ongoing training and professional development to support your career growth
A range of great employee benefits, including paid parental leave, discounted private health insurance, and more
Key Accountabilities
Design and maintain scalable ETL/ELT pipelines using modern data stack tools
Build optimised data models in warehouses (e.g., Snowflake, BigQuery) and lakes (e.g., Delta Lake)
Ingest data from APIs, ERP systems, and streaming sources with high integrity
Automate processes, monitor pipeline health, and ensure performance and availability
Continuously tune pipelines for efficiency and cost-effectiveness
Architect scalable solutions to support growing data demands
Ensure compliance with data privacy regulations (e.g., GDPR, CCPA)
Stay current with emerging technologies and drive platform enhancements
Could You Be Our Next Data Engineer?
We’re seeking a technically brilliant and forward-thinking Data Engineer to help shape the future of data in the supply chain.
You’ll bring:
A Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field
Industry certifications (e.g., AWS Certified Data Analytics, Google Cloud Professional Data Engineer) are a strong advantage
Extensive experience in data engineering, with a proven track record of building scalable ETL/ELT pipelines using modern data stack technologies
Hands-on expertise in AI/ML, Databricks, and cloud platforms
Familiarity with agile methodologies (Scrum, Kanban)
A portfolio of projects or open-source contributions (highly valued)
Technical Expertise
Programming: Python and SQL expertise; Java/Scala a plus
Databases: PostgreSQL, Snowflake, BigQuery, and Delta Lake
Cloud: AWS, Azure, or GCP (e.g., S3, EMR, Glue, Data Factory, BigQuery)
Big Data: Apache Spark, Airflow, and Kafka
ERP Systems: Experience with SAP or similar P2P/S2C platforms
Tooling: dbt, Docker, and Git