About Saarthee:
Saarthee is a Global Strategy, Analytics, Technology and AI consulting company, where our passion for helping others fuels our approach and our products and solutions. We are a one-stop shop for all things data and analytics. Unlike other analytics consulting firms that are technology- or platform-specific, Saarthee's holistic, tool-agnostic approach is unique in the marketplace. Our Consulting Value Chain framework meets our customers where they are in their data journey. Our diverse, global team works with one objective in mind: our customers' success. At Saarthee, we are passionate about guiding organizations toward insights-fueled success. That's why we call ourselves Saarthee, inspired by the Sanskrit word 'Saarthi', which means charioteer, trusted guide, or companion. Co-founded in 2015 by Mrinal Prasad and Shikha Miglani, Saarthee already encompasses all the components of data analytics consulting. Saarthee is headquartered in Philadelphia, USA, with offices in the UK and India.

Position Summary:
We are seeking a highly skilled Senior DevOps Engineer with strong expertise in core DevOps, AWS, and Python/GoLang to design, build, and operate scalable, secure, and highly available cloud infrastructure.
This role requires hands-on ownership of CI/CD pipelines, container orchestration platforms, infrastructure automation, and system observability. You will collaborate closely with application engineers and SRE teams to ensure reliability, scalability, and operational excellence across environments.

Role Responsibilities and Duties:
Design, implement, and manage scalable and highly available infrastructure on AWS.
Build, automate, and maintain CI/CD pipelines using Jenkins and/or GitHub Actions.
Deploy, operate, and optimize Kubernetes clusters for containerized workloads in production.
Develop and manage Infrastructure as Code (IaC) using Terraform or CloudFormation.
Create and manage Helm Charts for application deployments.
Automate operational and system tasks using Python or GoLang scripting.
Collaborate with developers and SREs to improve system reliability, performance, and scalability.
Implement monitoring, alerting, and observability using tools such as Datadog, Prometheus, and Grafana.
Troubleshoot production issues and drive root-cause analysis and continuous improvement.
Follow DevOps and security best practices across environments.
Required Skills and Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or a related field.
6+ years of hands-on experience as a DevOps Engineer, with a strong focus on AWS.
Strong experience with Docker and container-based application deployments.
Proven expertise in Kubernetes administration and production operations.
Hands-on experience creating and managing CI/CD workflows using Jenkins or GitHub Actions.
Strong proficiency in Infrastructure as Code tools such as Terraform or CloudFormation.
Automation and scripting experience using Python and/or GoLang.
Solid understanding of cloud networking, security, and reliability principles.
Strong analytical, troubleshooting, and problem-solving skills.
Excellent communication and cross-team collaboration abilities.
Kubernetes or Terraform certifications are a plus.
Good to Have / Preferred Skills:
Experience with configuration management tools such as Ansible.
Exposure to AI & ML concepts and ML infrastructure is an added advantage.
Experience working in consulting or enterprise-scale environments.
What We Offer:
Opportunity to work on enterprise-scale cloud and DevOps platforms.
Exposure to modern AWS-native architectures and Kubernetes-based ecosystems.
Collaborative, learning-driven culture with strong technical ownership.
Competitive compensation aligned with industry standards.
A role that combines deep technical impact with business relevance.