---
title: "Overview"
description: "Configure your AI model for Strix"
---

Strix uses [LiteLLM](https://docs.litellm.ai/docs/providers) for model compatibility, supporting 100+ LLM providers.

## Strix Router (Recommended)

The fastest way to get started. [Strix Router](/llm-providers/models) gives you access to tested models with the highest rate limits and zero data retention.

```bash
export STRIX_LLM="strix/gpt-5"
export LLM_API_KEY="your-strix-api-key"
```

Get your API key at [models.strix.ai](https://models.strix.ai).

## Bring Your Own Key

You can also use any LiteLLM-compatible provider with your own API keys:

| Model             | Provider      | Configuration                    |
| ----------------- | ------------- | -------------------------------- |
| GPT-5             | OpenAI        | `openai/gpt-5`                   |
| Claude Sonnet 4.6 | Anthropic     | `anthropic/claude-sonnet-4-6`    |
| Gemini 3 Pro      | Google Vertex | `vertex_ai/gemini-3-pro-preview` |

```bash
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="your-api-key"
```

## Local Models

Run models locally with [Ollama](https://ollama.com), [LM Studio](https://lmstudio.ai), or any OpenAI-compatible server:

```bash
export STRIX_LLM="ollama/llama4"
export LLM_API_BASE="http://localhost:11434"
```

See the [Local Models guide](/llm-providers/local) for setup instructions and recommended models.

## Provider Guides

- **Strix Router**: recommended model router with high rate limits.
- **OpenAI**: GPT-5 and Codex models.
- **Anthropic**: Claude Opus, Sonnet, and Haiku.
- **OpenRouter**: access 100+ models through a single API.
- **Google Vertex**: Gemini 3 models via Google Cloud.
- **AWS Bedrock**: Claude and Titan models via AWS.
- **Azure**: GPT-5 via Azure.
- **Local Models**: Llama 4, Mistral, and self-hosted models.

## Model Format

Use LiteLLM's `provider/model-name` format:

```
openai/gpt-5
anthropic/claude-sonnet-4-6
vertex_ai/gemini-3-pro-preview
bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0
ollama/llama4
```
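To make the `provider/model-name` format concrete: the provider is everything before the first `/`, and the model name is everything after it (which may itself contain `.`, `-`, or `:`, as in the Bedrock example). This is a minimal sketch; `split_model` is a hypothetical helper for illustration, not part of Strix or LiteLLM:

```python
def split_model(model_string: str) -> tuple[str, str]:
    """Split a LiteLLM-style model string into (provider, model name).

    Hypothetical helper for illustration only; splits on the FIRST "/",
    since the model name itself may contain further separators.
    """
    provider, _, name = model_string.partition("/")
    return provider, name


print(split_model("openai/gpt-5"))
# ('openai', 'gpt-5')
print(split_model("bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0"))
# ('bedrock', 'anthropic.claude-4-5-sonnet-20251022-v1:0')
```

Whatever value you export as `STRIX_LLM` should split cleanly this way, with the provider part matching one of LiteLLM's supported providers.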