LiteLLM is the world's most popular AI Gateway, used by some of the largest companies in the world (Adobe, Netflix, NASA, etc.) to give their developers access to LLMs and adjacent services (MCPs, vector stores, etc.).
LiteLLM provides an open-source Python SDK and a Python FastAPI server that allow calling 100+ LLM APIs (Bedrock, Azure, OpenAI, Vertex AI, Cohere, Anthropic) in the OpenAI format.
Why do companies use LiteLLM Enterprise?
Companies adopt LiteLLM Enterprise once they put LiteLLM into production and need enterprise features such as Prometheus metrics (production monitoring), or need to give LLM access to a large number of people via SSO (single sign-on) or JWT (JSON Web Token) authentication.
What you will be working on
Skills: Python, LLM APIs, FastAPI, High-throughput/low-latency
As the Backend LLM Engineer, you'll be responsible for ensuring LiteLLM unifies the format for calling LLM APIs in the broader OpenAI + Anthropic spec. This involves writing transformations to convert API requests from the OpenAI/Anthropic spec to various LLM provider formats, and building provider-agnostic unification functionality (e.g. session management across non-OpenAI models for the /v1/responses API). You'll work directly with the CEO and CTO on critical projects including:
Adding support for Anthropic and Bedrock Anthropic 'thinking' parameter
Handling provider-specific quirks like OpenAI o1 streaming limitations
Maintaining excellent unified APIs across /v1/messages, /v1/responses, and /chat/completions for OpenAI/Gemini/Anthropic models across Azure, the OpenAI API, Bedrock Invoke, Bedrock Converse, Vertex AI, and Google AI Studio
Implementing cost tracking and logging for the Anthropic API
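To give a flavor of the transformation work described above, here is a minimal, hypothetical sketch (the function name and defaults are illustrative, not LiteLLM's actual implementation) of converting an OpenAI-format /chat/completions payload into the shape the Anthropic Messages API expects, where the system prompt is a top-level field and max_tokens is required:

```python
# Hypothetical sketch of an OpenAI -> Anthropic request transformation.
# Real provider transformations handle far more: streaming, tool calls,
# images, the 'thinking' parameter, error mapping, etc.

def openai_to_anthropic(request: dict) -> dict:
    """Convert an OpenAI /chat/completions payload to an
    Anthropic /v1/messages payload."""
    messages = request["messages"]
    # Anthropic takes the system prompt as a top-level "system" field,
    # not as a message with role "system".
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    chat = [m for m in messages if m["role"] != "system"]
    payload = {
        "model": request["model"],
        "messages": chat,
        # max_tokens is required by the Anthropic API; fall back to an
        # arbitrary default when the caller omitted it.
        "max_tokens": request.get("max_tokens") or 1024,
    }
    if system_parts:
        payload["system"] = "\n".join(system_parts)
    return payload

out = openai_to_anthropic({
    "model": "claude-3-5-sonnet-20240620",
    "messages": [
        {"role": "system", "content": "You are terse."},
        {"role": "user", "content": "Hi"},
    ],
})
```

The same pattern, applied per provider, is what lets one OpenAI-format call fan out to many backends.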
What is our tech stack
Our tech stack includes Python, FastAPI, Redis, and Postgres.
Who we are looking for
1-2 years of backend/full-stack experience with production systems
Passion for open source and user engagement
Experience working with the OpenAI API (understand the difference between /chat/completions and /responses, and can speak to API-specific nuances)
Strong work ethic and ability to thrive in small teams
Eagerness to talk to users and help solve real problems
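As a quick illustration of the /chat/completions vs /responses distinction mentioned above: the two OpenAI endpoints take differently shaped payloads. /chat/completions takes the full conversation as a messages list each turn, while the Responses API takes an input and can chain turns via previous_response_id. The dicts below are request shapes only (no live calls; the id is a placeholder):

```python
# Shapes of the two OpenAI request payloads (illustrative, no network calls).

chat_completions_request = {
    "model": "gpt-4o",
    # /chat/completions: the caller resends the whole conversation
    # history as role/content messages on every turn.
    "messages": [
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "Hello"},
    ],
}

responses_request = {
    "model": "gpt-4o",
    # /v1/responses: input can be a plain string (or a list of items),
    "input": "Hello",
    # and the server can carry conversation state across turns when
    # the caller passes the previous response's id.
    "previous_response_id": "resp_123",  # placeholder id
}
```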
About the interview
Our interview process is:
Intro call - 30 min
Behavioral discussion about your working style, expectations, and the company's direction.
HackerRank - 1 hr
A HackerRank assessment covering basic Python questions.
Virtual Onsite - 3 hrs
Virtual onsite with the founders, which involves solving an issue on LiteLLM's GitHub together, a presentation of a technical project, and a system design question.
About LiteLLM
LiteLLM (https://github.com/BerriAI/litellm) is a Python SDK and Proxy Server (LLM Gateway) to call 100+ LLM APIs in the OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere] - and is used by companies like Rocket Money, Adobe, Twilio, and Siemens.