đŸ‘šđŸ»â€đŸ’» postech.work

Backend MCP Engineer

LiteLLM ‱ In Person

Job Description

LiteLLM is the world’s most popular AI Gateway, used by some of the largest companies in the world (Adobe, Netflix, NASA, etc.) to give their developers access to LLMs and adjacent services (MCP servers, vector stores, etc.).

Why do companies use LiteLLM Enterprise

Companies adopt LiteLLM Enterprise once they put LiteLLM into production and need enterprise features such as Prometheus metrics (production monitoring), or need to give LLM access to a large number of people via SSO (single sign-on) or JWT (JSON Web Token) authentication.

What you will be working on

Skills: Python, MCP, AI infrastructure, FastAPI

As the Backend MCP Engineer, you'll be responsible for implementing MCP server support, building tool orchestration layers, designing the protocol for external tool integration, enabling function calling across multiple LLM providers, and creating an SDK for MCP server discovery and connection. You'll work directly with the CEO and CTO on critical projects including:

Adding MCP protocol support to the LiteLLM gateway

Building a unified tool-calling interface across providers (see the sketch after this list)

Implementing session management for stateful agents

Creating examples/docs for MCP + LiteLLM integration
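
To make "unified tool calling" concrete, here is a minimal sketch using LiteLLM's existing OpenAI-compatible completion API; the weather tool schema and model names are illustrative, and each provider needs its own API key configured:

```python
# A minimal sketch of a unified tool-calling surface via LiteLLM's
# OpenAI-compatible `completion` API. The tool schema and model names
# are illustrative, not part of the job posting.
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# The same request shape works across providers; only the model string changes.
for model in ["openai/gpt-4o", "anthropic/claude-3-5-sonnet-20240620"]:
    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=tools,
    )
    print(model, response.choices[0].message.tool_calls)
```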

What is our tech stack

Core: Python, FastAPI, MCP, Redis, Postgres

LLM Integration: OpenAI SDK, Anthropic SDK, AWS Bedrock, Vertex AI

Protocol Layer: JSON-RPC, WebSockets, Server-Sent Events (SSE); example MCP messages follow this list

Agent Tooling: Model Context Protocol (MCP), function calling, tool schemas

Infrastructure: Docker, Kubernetes, Prometheus, GitHub Actions
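
To make the protocol layer concrete: MCP runs over JSON-RPC 2.0, and tools/list and tools/call are method names from the public MCP spec. The tool name and arguments below are invented for illustration:

```python
# Illustrative JSON-RPC 2.0 messages as defined by the MCP spec.
# `tools/list` and `tools/call` are real MCP method names; the tool
# name and arguments are hypothetical.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_database",          # hypothetical tool
        "arguments": {"sql": "SELECT 1"},  # hypothetical arguments
    },
}
```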

You'll work with:

Multiple LLM provider APIs (Anthropic, OpenAI, Google, AWS)

MCP protocol implementation (client + server)

High-throughput async systems (10K+ req/sec; see the sketch after this list)

Open source community (34K+ GitHub stars)
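
As a rough sketch of what a high-throughput async path in this stack can look like (simplified for illustration, not LiteLLM's actual proxy code):

```python
# A simplified sketch of an async FastAPI endpoint in the spirit of the
# stack above; illustrative only, not LiteLLM's actual proxy implementation.
from fastapi import FastAPI
import litellm

app = FastAPI()

@app.post("/v1/chat/completions")
async def chat_completions(payload: dict):
    # litellm.acompletion is the async entrypoint; awaiting it keeps the
    # event loop free to serve other requests, which is what sustaining
    # thousands of concurrent requests depends on.
    response = await litellm.acompletion(**payload)
    return response.model_dump()
```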

What’s so exciting about this role?

LiteLLM is at the intersection of 3 critical AI infrastructure layers:

1. LLM Gateway - Call any LLM with one API (our core strength)

2. MCP Gateway - Give any LLM access to any tool (emerging need)

3. Agent Gateway - Enable agents to communicate with other agents/LLMs/tools

You'll help us become the unified infrastructure layer that connects:

Applications → LiteLLM → LLM Providers (OpenAI, Anthropic, Bedrock)

LLMs → LiteLLM → MCP Servers (databases, APIs, internal tools)

Agents → LiteLLM → MCP Servers (databases, APIs, internal tools) + LLMs

This means working on cutting-edge problems like:

How do we route tool calls across providers with different specs? (see the sketch below)

How do we make MCP servers work seamlessly with any LLM?

How do we build the "Stripe of AI infrastructure"?

If you're excited about building the foundational layer that every AI application will use, this is for you.
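
As a hedged sketch of the first question above: routing tool calls across providers means normalizing different response shapes (OpenAI wraps calls in a function object with JSON-string arguments; Anthropic emits tool_use blocks with an input dict) into a single record. The normalized field names below are invented for illustration:

```python
import json

# A hypothetical normalizer for the routing problem described above. The
# input shapes follow the public OpenAI and Anthropic response formats;
# the output record's field names are invented for illustration.
def normalize_tool_call(provider: str, raw: dict) -> dict:
    if provider == "openai":
        # OpenAI: {"function": {"name": ..., "arguments": "<json string>"}}
        fn = raw["function"]
        return {"name": fn["name"], "arguments": json.loads(fn["arguments"])}
    if provider == "anthropic":
        # Anthropic tool_use block: {"name": ..., "input": {...}}
        return {"name": raw["name"], "arguments": raw["input"]}
    raise ValueError(f"unsupported provider: {provider}")
```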

Who we are looking for

1-2 years of backend/full-stack experience with production systems

Passion for open source and user engagement

Experience working with the OpenAI API (you understand the difference between /chat/completions and /responses and can speak to API-specific nuances; see the example after this list)

Strong work ethic and ability to thrive in small teams

Eagerness to talk to users and help solve real problems
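
For reference on the /chat/completions vs /responses point above, an illustrative comparison using the official OpenAI Python SDK (the model name is just an example):

```python
# Illustrative calls to the two OpenAI endpoints mentioned above.
from openai import OpenAI

client = OpenAI()

# /chat/completions: stateless; the client sends the full message list each turn.
chat = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
print(chat.choices[0].message.content)

# /responses: the newer API, with built-in tools and optional server-side
# conversation state (e.g. chaining turns via previous_response_id).
resp = client.responses.create(model="gpt-4o", input="Hello")
print(resp.output_text)
```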

About the interview

Our interview process is:

Intro call - 30 min

Behavioral discussion about your working style, expectations, and the company’s direction.

Hackerrank - 1 hr

A HackerRank assessment covering basic Python questions

Virtual Onsite - 3 hrs

A virtual onsite with the founders, which involves solving an issue from LiteLLM’s GitHub together, presenting a technical project, and answering a system design question

About LiteLLM

LiteLLM (https://github.com/BerriAI/litellm) is a Python SDK and Proxy Server (LLM Gateway) for calling 100+ LLM APIs in the OpenAI format (Bedrock, Azure, OpenAI, Vertex AI, Cohere, etc.). It is used by companies like Rocket Money, Adobe, Twilio, and Siemens.
