We are seeking a skilled Senior Data Software Engineer to play a critical role in building reliable, data-centric applications.
In this role, you will apply your expertise in Big Data technologies, cloud platforms, and engineering best practices to design and implement cutting-edge systems. You will work closely with multidisciplinary teams to align solutions with organizational goals and deliver measurable impact.
Responsibilities
Develop and support data software applications designed for Data Integration Engineers
Design and implement advanced analytics platforms using technologies such as Spark, PySpark, and NoSQL databases
Leverage cloud solutions like AWS to optimize workflows and improve system efficiency
Collaborate with product and engineering teams to gather requirements and contribute to strategic decision-making
Work with architects, technical leads, and other stakeholders to ensure seamless integration of solutions
Analyze technical environments and business needs to deliver scalable, high-performance implementations
Perform code reviews to uphold development standards and ensure quality control
Test and validate systems to verify they meet functional, technical, and performance benchmarks
Prepare and maintain detailed documentation to support both development and operational processes
Engage directly with clients to understand their needs and provide customized technical recommendations
Requirements
Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related discipline
Minimum of 3 years of experience in Data Software Engineering with expertise in Big Data technologies
Thorough understanding of data engineering principles, including storage, management, security, and visualization
Experience building data pipelines, plus familiarity with Data Warehouse and Data Lake concepts
Proficiency in programming languages such as Python, Java, Scala, or Kotlin
Extensive experience with SQL and NoSQL database systems
Practical knowledge of Big Data tools, particularly Spark and PySpark
Proven experience designing and deploying cloud-based solutions using AWS services such as Glue and Redshift
Familiarity with CI/CD pipelines to streamline integration and deployment processes
Understanding of containerization and resource management tools such as Docker, Kubernetes, and YARN
Experience working with Databricks for advanced data engineering and analytics projects
Strong English communication skills, both written and verbal, at a B2 level or above
Nice to have
Hands-on experience with additional Big Data tools, including Hadoop, Hive, and Flink
Knowledge of SDLC methodologies, with an emphasis on Agile frameworks, and a demonstrated ability to manage the full software development lifecycle