This is a remote position.
Job Summary
The Data Engineer designs, builds, and maintains scalable data pipelines and infrastructure that enable reliable data collection, storage, and processing. This role ensures high data quality, availability, and performance to support analytics, reporting, and data-driven decision-making across the organization.
Key Responsibilities
Design, develop, and maintain scalable data pipelines and ETL/ELT processes
Build and optimize data models, data warehouses, and data lakes
Integrate data from multiple sources, including databases, APIs, and streaming systems
Ensure data quality, reliability, security, and governance standards are met
Optimize data pipelines for performance, cost, and scalability
Collaborate with data analysts, data scientists, and software engineers to meet their data needs
Monitor, troubleshoot, and resolve data pipeline and infrastructure issues
Implement automation, monitoring, and logging for data workflows
Maintain technical documentation and data architecture standards
Required Qualifications
Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field
Strong proficiency in SQL and experience with relational and NoSQL databases
Experience with data pipeline tools and frameworks (e.g., Airflow, dbt, Spark)
Experience working with cloud data platforms (AWS, Azure, or GCP)
Preferred Qualifications
Proficiency in Python, Java, or Scala
Experience with big data technologies (e.g., Hadoop, Kafka)
Familiarity with data warehousing solutions (e.g., Snowflake, BigQuery, Redshift)
Experience with Infrastructure as Code and CI/CD for data pipelines
Skills & Competencies
Strong problem-solving and analytical skills
Attention to detail and data accuracy
Ability to design scalable and maintainable systems
Strong collaboration and communication skills
Continuous improvement and learning mindset