Job Description:
We are looking for a highly motivated and detail-oriented Data Engineer to design, build, and maintain data infrastructure that powers our analytics and business intelligence initiatives. This role is ideal for individuals with 1–3 years of experience in data engineering, data warehousing, or ETL development who are passionate about transforming raw data into reliable, scalable pipelines and systems.
As a Data Engineer, you will collaborate with data scientists, analysts, and software developers to ensure data availability, integrity, and performance. You will contribute to the development of efficient data models, pipelines, and APIs to support data-driven decision-making across the organization.
Key Responsibilities:
Design, develop, and maintain scalable ETL/ELT pipelines for data ingestion, transformation, and integration
Build and manage data warehouses, data lakes, and databases to support analytics and reporting
Collaborate with stakeholders to understand data requirements and translate them into technical solutions
Ensure data quality, consistency, and integrity through validation and monitoring processes
Optimize data workflows for performance, reliability, and cost-effectiveness
Automate data collection and processing tasks using scripts and workflow tools
Support data governance practices, including metadata management and documentation
Monitor production data pipelines and troubleshoot issues as needed
Assist in designing and implementing data models that align with business needs
Qualifications:
Education:
Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or a related field
(Master’s degree or certifications in data engineering are a plus)
Experience:
1–3 years of hands-on experience in data engineering, ETL development, or backend data systems
Internship or project-based experience in data infrastructure may also be considered
Technical Skills:
Proficient in SQL for data querying and manipulation
Experience with at least one programming language (e.g., Python, Java, or Scala)
Familiarity with data pipeline and transformation tools such as Apache Airflow, Luigi, or dbt
Experience with cloud platforms such as AWS, GCP, or Azure and their data services (e.g., Redshift, BigQuery, S3), or with Snowflake
Understanding of data warehousing concepts, normalization, and performance tuning
Exposure to version control tools (e.g., Git) and CI/CD practices is a plus
Soft Skills:
Strong analytical and problem-solving abilities
Excellent attention to detail and data accuracy
Good communication skills to work with technical and non-technical stakeholders
Ability to manage multiple tasks and deliver on time in a dynamic environment
Collaborative team player with a proactive mindset
Other Requirements:
Authorized to work without sponsorship
Strong interest in building reliable and scalable data solutions to support business growth