
Senior DevOps Engineer

TextLayer • 🌐 Remote


Job Description

About TextLayer

TextLayer helps enterprises and funded startups deploy advanced AI systems without rewriting their infrastructure. We work with organizations across fintech, healthtech, and other sectors to bridge the gap between AI potential and practical implementation.

Our approach combines deep technical expertise with proven frameworks like TextLayer Core to accelerate development and ensure production-ready results. From bespoke AI workflows to agentic systems, we help clients adopt AI that actually works in their existing tech stacks.

We're on a mission to close the implementation gap that over 85% of enterprise clients run into when adding AI to their operations and products. We're looking for sharp, curious people who want to meaningfully shape how we build, operate, and deliver.

If you're excited to work on foundational AI infrastructure, solve complex problems for diverse clients, and help define what agentic software looks like in practice, we'd love to meet you.

The Role

The Senior DevOps Engineer will architect production-grade monitoring, logging, and tracing systems designed specifically for AI workloads: implementing OpenTelemetry-based data collection pipelines, building robust deployment workflows with Infrastructure as Code (IaC), and creating resilient observability solutions that provide deep insight into LLM applications and conversational AI systems. This observability work is critical to the infrastructure that powers our client engagements and to an upcoming product launch.

Key Responsibilities

Design and maintain OpenTelemetry-based observability infrastructure for distributed AI systems and LLM applications

Build and scale ELK stack deployments (Elasticsearch, Logstash, Kibana) for log aggregation, search, and visualization of AI application data

Implement comprehensive tracing and monitoring solutions for LLM inference, RAG pipelines, and AI Agent workflows (see the tracing sketch after this list)

Develop and maintain data ingestion pipelines for processing high-volume telemetry data from AI applications

Configure and optimize OpenSearch clusters for real-time analytics and trace reconstruction of conversational flows

Deploy and manage LLM observability platforms such as Langfuse and OpenLLMetry, as well as custom monitoring solutions

Implement Infrastructure as Code (Terraform, CloudFormation) for reproducible observability and application stack deployments

Build automated alerting and incident response systems for AI application performance and reliability

Collaborate with engineering teams to instrument AI applications with proper telemetry and observability hooks

Optimize data retention policies, indexing strategies, and query performance for large-scale observability data
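
As a concrete illustration of the tracing and instrumentation work described above, here is a minimal sketch using the OpenTelemetry Python SDK to trace a hypothetical RAG request. The retriever and model client are placeholder functions, not an existing TextLayer API, and a production setup would export spans to a collector rather than the console.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer provider; a real deployment would use an OTLP exporter
# pointed at an OpenTelemetry Collector instead of ConsoleSpanExporter.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("rag.pipeline")


def retrieve_documents(question: str) -> list[str]:
    # Placeholder retriever; a real system would query a vector store.
    return ["example context passage"]


def call_model(question: str, context: list[str]) -> str:
    # Placeholder model call; a real system would call an LLM API.
    return f"Answer based on {len(context)} passage(s)."


def answer_question(question: str) -> str:
    # One span per request, with child spans for retrieval and generation,
    # so conversational flows can be reconstructed from the trace data.
    with tracer.start_as_current_span("rag.request") as span:
        span.set_attribute("rag.question_length", len(question))
        with tracer.start_as_current_span("rag.retrieve"):
            context = retrieve_documents(question)
        with tracer.start_as_current_span("llm.generate") as gen_span:
            answer = call_model(question, context)
            gen_span.set_attribute("llm.response_length", len(answer))
        return answer


if __name__ == "__main__":
    print(answer_question("What does the observability stack collect?"))
```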

What You Will Bring

To succeed in this role, you'll need deep expertise in observability infrastructure, hands-on experience with OpenTelemetry and ELK stack, and a strong understanding of AI/ML system monitoring challenges. You should be passionate about building scalable, reliable infrastructure that provides actionable insights into complex AI workloads.

Required Qualifications

4+ years of DevOps/Infrastructure engineering experience with focus on observability and monitoring

Expert-level experience with OpenTelemetry implementation, configuration, and custom instrumentation

Production experience with ELK stack (Elasticsearch, Logstash, Kibana) including cluster management and optimization

Strong knowledge of distributed tracing, metrics collection, and log aggregation architectures

Experience with container orchestration (Kubernetes, Docker) and cloud infrastructure (AWS/GCP/Azure)

Proficiency with Infrastructure as Code tools (Terraform, Ansible, CloudFormation)

Experience building high-throughput data ingestion pipelines and real-time analytics systems

Strong scripting skills (Python, Bash/Sh) for automation and tooling

Knowledge of observability best practices, SLI/SLO definitions, and incident response (a minimal SLI example follows this list)

Experience with monitoring tools like Prometheus, Grafana, or DataDog
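
In practice, the SLI/SLO and Prometheus experience listed above might look something like the following sketch: a hypothetical LLM inference service exposing a latency histogram and an error counter via prometheus_client. The metric names, thresholds, and the simulated workload are illustrative assumptions, not an existing service.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative SLI instruments for a hypothetical LLM inference service.
REQUEST_LATENCY = Histogram(
    "llm_request_duration_seconds",
    "End-to-end latency of LLM inference requests",
    buckets=(0.5, 1, 2, 5, 10, 30),
)
REQUEST_ERRORS = Counter(
    "llm_request_errors_total",
    "Failed LLM inference requests",
)


def handle_request() -> None:
    # Histogram.time() records the duration of the block as an observation.
    with REQUEST_LATENCY.time():
        try:
            time.sleep(random.uniform(0.1, 0.5))  # stand-in for model inference
            if random.random() < 0.05:
                raise RuntimeError("simulated inference failure")
        except RuntimeError:
            REQUEST_ERRORS.inc()


if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        handle_request()
```

An availability SLI could then be derived in PromQL as 1 - rate(llm_request_errors_total[5m]) / rate(llm_request_duration_seconds_count[5m]), which is the kind of expression an SLO-based alert would be built on.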

Bonus Points

Experience with LLMOps observability tools (Langfuse, LiteLLM, Weights & Biases, Phoenix, Braintrust)

Experience with Golang (Go), Rust, or C/C++

Knowledge of AI/ML system monitoring patterns and LLM application telemetry

Experience with OpenSearch and ClickHouse for analytics workloads

Familiarity with conversational AI analytics and trace reconstruction techniques

Experience instrumenting LLM applications, RAG systems, or AI Agent workflows

Background in time-series databases and vector search optimization

Contributions to open-source observability or LLMOps projects

Knowledge of eval-driven development and automated AI system testing frameworks
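
The eval-driven development mentioned in the last bullet could, as a rough sketch, look like a pytest suite run against a golden set of questions. Everything here (the answer_question pipeline, the golden cases, the keyword threshold) is a hypothetical stand-in rather than an established framework.

```python
import pytest

# Hypothetical golden dataset: question plus keywords the answer should contain.
GOLDEN_CASES = [
    ("What does OpenTelemetry collect?", ["traces", "metrics", "logs"]),
    ("Which engine backs log search?", ["elasticsearch"]),
]


def answer_question(question: str) -> str:
    # Stand-in for the real RAG/LLM pipeline under test.
    canned = {
        "What does OpenTelemetry collect?": "It collects traces, metrics, and logs.",
        "Which engine backs log search?": "Log search runs on Elasticsearch.",
    }
    return canned[question]


@pytest.mark.parametrize("question,expected_keywords", GOLDEN_CASES)
def test_answer_covers_expected_keywords(question, expected_keywords):
    answer = answer_question(question).lower()
    hits = sum(keyword in answer for keyword in expected_keywords)
    # Require most keywords rather than exact wording, since LLM output varies.
    assert hits / len(expected_keywords) >= 0.5
```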

Compensation Range: CA$200K - CA$220K
