chore: update default model to gpt-5.4 and remove Strix Router from docs
- Change default model from gpt-5 to gpt-5.4 across docs, tests, and examples
- Remove Strix Router references from docs, quickstart, overview, and README
- Delete models.mdx (Strix Router page) and its nav entry
- Simplify install script to suggest openai/ prefix directly
- Keep strix/ model routing support intact in code

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
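The change summarized above boils down to a single new default environment configuration, which can be sanity-checked directly (the API key value is a placeholder):

```shell
# New default model per this commit; LLM_API_KEY value is a placeholder
export STRIX_LLM="openai/gpt-5.4"
export LLM_API_KEY="your-api-key"
echo "STRIX_LLM=$STRIX_LLM"
```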
@@ -30,7 +30,7 @@ Thank you for your interest in contributing to Strix! This guide will help you g
 3. **Configure your LLM provider**

 ```bash
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="openai/gpt-5.4"
 export LLM_API_KEY="your-api-key"
 ```
README.md (10 lines changed)
@@ -73,9 +73,7 @@ Strix are autonomous AI agents that act just like real hackers - they run your c
 **Prerequisites:**

 - Docker (running)
-- An LLM API key:
-  - Any [supported provider](https://docs.strix.ai/llm-providers/overview) (OpenAI, Anthropic, Google, etc.)
-  - Or [Strix Router](https://models.strix.ai) — single API key for multiple providers
+- An LLM API key from any [supported provider](https://docs.strix.ai/llm-providers/overview) (OpenAI, Anthropic, Google, etc.)

 ### Installation & First Scan
@@ -84,7 +82,7 @@ Strix are autonomous AI agents that act just like real hackers - they run your c
 curl -sSL https://strix.ai/install | bash

 # Configure your AI provider
-export STRIX_LLM="openai/gpt-5" # or "strix/gpt-5" via Strix Router (https://models.strix.ai)
+export STRIX_LLM="openai/gpt-5.4"
 export LLM_API_KEY="your-api-key"

 # Run your first security assessment
@@ -215,7 +213,7 @@ jobs:
 ### Configuration

 ```bash
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="openai/gpt-5.4"
 export LLM_API_KEY="your-api-key"

 # Optional
@@ -229,7 +227,7 @@ export STRIX_REASONING_EFFORT="high" # control thinking effort (default: high,
 **Recommended models for best results:**

-- [OpenAI GPT-5](https://openai.com/api/) — `openai/gpt-5`
+- [OpenAI GPT-5.4](https://openai.com/api/) — `openai/gpt-5.4`
 - [Anthropic Claude Sonnet 4.6](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-6`
 - [Google Gemini 3 Pro Preview](https://cloud.google.com/vertex-ai) — `vertex_ai/gemini-3-pro-preview`
@@ -8,7 +8,7 @@ Configure Strix using environment variables or a config file.
 ## LLM Configuration

 <ParamField path="STRIX_LLM" type="string" required>
-Model name in LiteLLM format (e.g., `openai/gpt-5`, `anthropic/claude-sonnet-4-6`).
+Model name in LiteLLM format (e.g., `openai/gpt-5.4`, `anthropic/claude-sonnet-4-6`).
 </ParamField>

 <ParamField path="LLM_API_KEY" type="string">
@@ -114,7 +114,7 @@ strix --target ./app --config /path/to/config.json
 ```json
 {
   "env": {
-    "STRIX_LLM": "openai/gpt-5",
+    "STRIX_LLM": "openai/gpt-5.4",
     "LLM_API_KEY": "sk-...",
     "STRIX_REASONING_EFFORT": "high"
   }
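The JSON hunk above only swaps the model string. As a hedged sketch of how such an `env` block could be applied to the process environment (the actual Strix config loader is not shown in this diff; key values are placeholders):

```python
import json
import os

# Config fragment matching the updated example from the diff above.
CONFIG = """
{
  "env": {
    "STRIX_LLM": "openai/gpt-5.4",
    "LLM_API_KEY": "sk-placeholder",
    "STRIX_REASONING_EFFORT": "high"
  }
}
"""

# Apply each entry of the "env" block to the process environment.
for key, value in json.loads(CONFIG)["env"].items():
    os.environ[key] = value

print(os.environ["STRIX_LLM"])  # openai/gpt-5.4
```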
@@ -125,7 +125,7 @@ strix --target ./app --config /path/to/config.json

 ```bash
 # Required
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="openai/gpt-5.4"
 export LLM_API_KEY="sk-..."

 # Optional: Enable web search
@@ -32,7 +32,7 @@ description: "Contribute to Strix development"
 </Step>
 <Step title="Configure LLM">
 ```bash
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="openai/gpt-5.4"
 export LLM_API_KEY="your-api-key"
 ```
 </Step>
@@ -32,7 +32,6 @@
 "group": "LLM Providers",
 "pages": [
   "llm-providers/overview",
-  "llm-providers/models",
   "llm-providers/openai",
   "llm-providers/anthropic",
   "llm-providers/openrouter",
@@ -78,7 +78,7 @@ Strix uses a graph of specialized agents for comprehensive security testing:
 curl -sSL https://strix.ai/install | bash

 # Configure
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="openai/gpt-5.4"
 export LLM_API_KEY="your-api-key"

 # Scan
@@ -35,7 +35,7 @@ Add these secrets to your repository:

 | Secret | Description |
 |--------|-------------|
-| `STRIX_LLM` | Model name (e.g., `openai/gpt-5`) |
+| `STRIX_LLM` | Model name (e.g., `openai/gpt-5.4`) |
 | `LLM_API_KEY` | API key for your LLM provider |

 ## Exit Codes
@@ -6,7 +6,7 @@ description: "Configure Strix with Claude models"
 ## Setup

 ```bash
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="openai/gpt-5.4"
 export LLM_API_KEY="sk-ant-..."
 ```
@@ -24,7 +24,7 @@ export AZURE_API_VERSION="2025-11-01-preview"
 ## Example

 ```bash
-export STRIX_LLM="azure/gpt-5-deployment"
+export STRIX_LLM="azure/gpt-5.4-deployment"
 export AZURE_API_KEY="abc123..."
 export AZURE_API_BASE="https://mycompany.openai.azure.com"
 export AZURE_API_VERSION="2025-11-01-preview"
@@ -33,5 +33,5 @@ export AZURE_API_VERSION="2025-11-01-preview"
 ## Prerequisites

 1. Create an Azure OpenAI resource
-2. Deploy a model (e.g., GPT-5)
+2. Deploy a model (e.g., GPT-5.4)
 3. Get the endpoint URL and API key from the Azure portal
@@ -1,75 +0,0 @@
----
-title: "Strix Router"
-description: "Access top LLMs through a single API with high rate limits and zero data retention"
----
-
-Strix Router gives you access to the best LLMs through a single API key.
-
-<Note>
-Strix Router is currently in **beta**. It's completely optional — Strix works with any [LiteLLM-compatible provider](/llm-providers/overview) using your own API keys, or with [local models](/llm-providers/local). Strix Router is just the setup we test and optimize for.
-</Note>
-
-## Why Use Strix Router?
-
-- **High rate limits** — No throttling during long-running scans
-- **Zero data retention** — Routes to providers with zero data retention policies enabled
-- **Failover & load balancing** — Automatic fallback across providers for reliability
-- **Simple setup** — One API key, one environment variable, no provider accounts needed
-- **No markup** — Same token pricing as the underlying providers, no extra fees
-
-## Quick Start
-
-1. Get your API key at [models.strix.ai](https://models.strix.ai)
-2. Set your environment:
-
-```bash
-export LLM_API_KEY='your-strix-api-key'
-export STRIX_LLM='strix/gpt-5'
-```
-
-3. Run a scan:
-
-```bash
-strix --target ./your-app
-```
-
-## Available Models
-
-### Anthropic
-
-| Model | ID |
-|-------|-----|
-| Claude Sonnet 4.6 | `strix/claude-sonnet-4.6` |
-| Claude Opus 4.6 | `strix/claude-opus-4.6` |
-
-### OpenAI
-
-| Model | ID |
-|-------|-----|
-| GPT-5.2 | `strix/gpt-5.2` |
-| GPT-5.1 | `strix/gpt-5.1` |
-| GPT-5 | `strix/gpt-5` |
-
-### Google
-
-| Model | ID |
-|-------|-----|
-| Gemini 3 Pro | `strix/gemini-3-pro-preview` |
-| Gemini 3 Flash | `strix/gemini-3-flash-preview` |
-
-### Other
-
-| Model | ID |
-|-------|-----|
-| GLM-5 | `strix/glm-5` |
-| GLM-4.7 | `strix/glm-4.7` |
-
-## Configuration Reference
-
-<ParamField path="LLM_API_KEY" type="string" required>
-Your Strix API key from [models.strix.ai](https://models.strix.ai).
-</ParamField>
-
-<ParamField path="STRIX_LLM" type="string" required>
-Model ID from the tables above. Must be prefixed with `strix/`.
-</ParamField>
@@ -6,7 +6,7 @@ description: "Configure Strix with OpenAI models"
 ## Setup

 ```bash
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="openai/gpt-5.4"
 export LLM_API_KEY="sk-..."
 ```
@@ -25,7 +25,7 @@ See [OpenAI Models Documentation](https://platform.openai.com/docs/models) for t
 For OpenAI-compatible APIs:

 ```bash
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="openai/gpt-5.4"
 export LLM_API_KEY="your-key"
 export LLM_API_BASE="https://your-proxy.com/v1"
 ```
@@ -8,7 +8,7 @@ description: "Configure Strix with models via OpenRouter"
 ## Setup

 ```bash
-export STRIX_LLM="openrouter/openai/gpt-5"
+export STRIX_LLM="openrouter/openai/gpt-5.4"
 export LLM_API_KEY="sk-or-..."
 ```
@@ -18,7 +18,7 @@ Access any model on OpenRouter using the format `openrouter/<provider>/<model>`:

 | Model | Configuration |
 |-------|---------------|
-| GPT-5 | `openrouter/openai/gpt-5` |
+| GPT-5.4 | `openrouter/openai/gpt-5.4` |
 | Claude Sonnet 4.6 | `openrouter/anthropic/claude-sonnet-4.6` |
 | Gemini 3 Pro | `openrouter/google/gemini-3-pro-preview` |
 | GLM-4.7 | `openrouter/z-ai/glm-4.7` |
@@ -5,29 +5,18 @@ description: "Configure your AI model for Strix"

 Strix uses [LiteLLM](https://docs.litellm.ai/docs/providers) for model compatibility, supporting 100+ LLM providers.

-## Strix Router (Recommended)
+## Configuration

-The fastest way to get started. [Strix Router](/llm-providers/models) gives you access to tested models with the highest rate limits and zero data retention.
-
-```bash
-export STRIX_LLM="strix/gpt-5"
-export LLM_API_KEY="your-strix-api-key"
-```
-
-Get your API key at [models.strix.ai](https://models.strix.ai).
-
-## Bring Your Own Key
-
-You can also use any LiteLLM-compatible provider with your own API keys:
+Set your model and API key:

 | Model | Provider | Configuration |
 | ----------------- | ------------- | -------------------------------- |
-| GPT-5 | OpenAI | `openai/gpt-5` |
+| GPT-5.4 | OpenAI | `openai/gpt-5.4` |
 | Claude Sonnet 4.6 | Anthropic | `anthropic/claude-sonnet-4-6` |
 | Gemini 3 Pro | Google Vertex | `vertex_ai/gemini-3-pro-preview` |

 ```bash
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="openai/gpt-5.4"
 export LLM_API_KEY="your-api-key"
 ```
@@ -45,11 +34,8 @@ See the [Local Models guide](/llm-providers/local) for setup instructions and re
 ## Provider Guides

 <CardGroup cols={2}>
-  <Card title="Strix Router" href="/llm-providers/models">
-    Recommended models router with high rate limits.
-  </Card>
   <Card title="OpenAI" href="/llm-providers/openai">
-    GPT-5 models.
+    GPT-5.4 models.
   </Card>
   <Card title="Anthropic" href="/llm-providers/anthropic">
     Claude Opus, Sonnet, and Haiku.
@@ -64,7 +50,7 @@ See the [Local Models guide](/llm-providers/local) for setup instructions and re
     Claude and Titan models via AWS.
   </Card>
   <Card title="Azure OpenAI" href="/llm-providers/azure">
-    GPT-5 via Azure.
+    GPT-5.4 via Azure.
   </Card>
   <Card title="Local Models" href="/llm-providers/local">
     Llama 4, Mistral, and self-hosted models.
@@ -76,7 +62,7 @@ See the [Local Models guide](/llm-providers/local) for setup instructions and re
 Use LiteLLM's `provider/model-name` format:

 ```
-openai/gpt-5
+openai/gpt-5.4
 anthropic/claude-sonnet-4-6
 vertex_ai/gemini-3-pro-preview
 bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0
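The `provider/model-name` convention above splits on the first slash, so aggregator IDs like `openrouter/openai/gpt-5.4` keep the rest of their path as the model name. A small illustrative helper (`split_model_id` is hypothetical, not a Strix or LiteLLM API):

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    # Provider is everything before the first "/"; the remainder is the
    # model name, which may itself contain slashes (e.g., OpenRouter IDs).
    provider, _, model = model_id.partition("/")
    return provider, model

print(split_model_id("openai/gpt-5.4"))            # ('openai', 'gpt-5.4')
print(split_model_id("openrouter/openai/gpt-5.4"))  # ('openrouter', 'openai/gpt-5.4')
```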
@@ -6,7 +6,7 @@ description: "Install Strix and run your first security scan"
 ## Prerequisites

 - Docker (running)
-- An LLM API key — use [Strix Router](/llm-providers/models) for the easiest setup, or bring your own key from any [supported provider](/llm-providers/overview)
+- An LLM API key from any [supported provider](/llm-providers/overview) (OpenAI, Anthropic, Google, etc.)

 ## Installation
@@ -27,23 +27,13 @@ description: "Install Strix and run your first security scan"

 Set your LLM provider:

-<Tabs>
-<Tab title="Strix Router">
 ```bash
-export STRIX_LLM="strix/gpt-5"
+export STRIX_LLM="openai/gpt-5.4"
-export LLM_API_KEY="your-strix-api-key"
-```
-</Tab>
-<Tab title="Bring Your Own Key">
-```bash
-export STRIX_LLM="openai/gpt-5"
 export LLM_API_KEY="your-api-key"
 ```
-</Tab>
-</Tabs>

 <Tip>
-For best results, use `strix/gpt-5`, `strix/claude-opus-4.6`, or `strix/gpt-5.2`.
+For best results, use `openai/gpt-5.4`, `anthropic/claude-opus-4-6`, or `openai/gpt-5.2`.
 </Tip>

 ## Run Your First Scan
@@ -335,14 +335,11 @@ echo -e "${MUTED} AI Penetration Testing Agent${NC}"
 echo ""
 echo -e "${MUTED}To get started:${NC}"
 echo ""
-echo -e " ${CYAN}1.${NC} Get your Strix API key:"
-echo -e " ${MUTED}https://models.strix.ai${NC}"
-echo ""
-echo -e " ${CYAN}2.${NC} Set your environment:"
+echo -e " ${CYAN}1.${NC} Set your environment:"
 echo -e " ${MUTED}export LLM_API_KEY='your-api-key'${NC}"
-echo -e " ${MUTED}export STRIX_LLM='strix/gpt-5'${NC}"
+echo -e " ${MUTED}export STRIX_LLM='openai/gpt-5.4'${NC}"
 echo ""
-echo -e " ${CYAN}3.${NC} Run a penetration test:"
+echo -e " ${CYAN}2.${NC} Run a penetration test:"
 echo -e " ${MUTED}strix --target https://example.com${NC}"
 echo ""
 echo -e "${MUTED}For more information visit ${NC}https://strix.ai"
@@ -101,7 +101,7 @@ def validate_environment() -> None:  # noqa: PLR0912, PLR0915
 error_text.append("• ", style="white")
 error_text.append("STRIX_LLM", style="bold cyan")
 error_text.append(
-    " - Model name to use with litellm (e.g., 'openai/gpt-5')\n",
+    " - Model name to use with litellm (e.g., 'openai/gpt-5.4')\n",
     style="white",
 )
@@ -140,10 +140,7 @@ def validate_environment() -> None:  # noqa: PLR0912, PLR0915
 )

 error_text.append("\nExample setup:\n", style="white")
-if uses_strix_models:
-    error_text.append("export STRIX_LLM='strix/gpt-5'\n", style="dim white")
-else:
-    error_text.append("export STRIX_LLM='openai/gpt-5'\n", style="dim white")
+error_text.append("export STRIX_LLM='openai/gpt-5.4'\n", style="dim white")

 if missing_optional_vars:
     for var in missing_optional_vars:
@@ -36,7 +36,7 @@ STRIX_MODEL_MAP: dict[str, str] = {
     "claude-opus-4.6": "anthropic/claude-opus-4-6",
     "gpt-5.2": "openai/gpt-5.2",
     "gpt-5.1": "openai/gpt-5.1",
-    "gpt-5": "openai/gpt-5",
+    "gpt-5.4": "openai/gpt-5.4",
     "gemini-3-pro-preview": "gemini/gemini-3-pro-preview",
     "gemini-3-flash-preview": "gemini/gemini-3-flash-preview",
     "glm-5": "openrouter/z-ai/glm-5",
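The commit keeps `strix/` routing support intact in code, and the hunk above shows the map it would resolve through. A sketch of how a `strix/` model ID could be remapped (`resolve_model` is a hypothetical helper, not the actual Strix function; the map entries come from the diff):

```python
# Subset of STRIX_MODEL_MAP as shown in the diff above.
STRIX_MODEL_MAP: dict[str, str] = {
    "claude-opus-4.6": "anthropic/claude-opus-4-6",
    "gpt-5.2": "openai/gpt-5.2",
    "gpt-5.1": "openai/gpt-5.1",
    "gpt-5.4": "openai/gpt-5.4",
    "gemini-3-pro-preview": "gemini/gemini-3-pro-preview",
    "glm-5": "openrouter/z-ai/glm-5",
}

def resolve_model(name: str) -> str:
    # Only strix/-prefixed names are remapped; anything else passes through.
    if name.startswith("strix/"):
        short = name[len("strix/"):]
        return STRIX_MODEL_MAP.get(short, name)
    return name

print(resolve_model("strix/gpt-5.4"))   # openai/gpt-5.4
print(resolve_model("openai/gpt-5.4"))  # openai/gpt-5.4
```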
@@ -1,15 +1,16 @@
 import litellm
+import pytest

 from strix.llm.config import LLMConfig
 from strix.llm.llm import LLM


-def test_llm_does_not_modify_litellm_callbacks(monkeypatch) -> None:
+def test_llm_does_not_modify_litellm_callbacks(monkeypatch: pytest.MonkeyPatch) -> None:
     monkeypatch.setenv("STRIX_TELEMETRY", "1")
     monkeypatch.setenv("STRIX_OTEL_TELEMETRY", "1")
     monkeypatch.setattr(litellm, "callbacks", ["custom-callback"])

-    llm = LLM(LLMConfig(model_name="openai/gpt-5"), agent_name=None)
+    llm = LLM(LLMConfig(model_name="openai/gpt-5.4"), agent_name=None)

     assert llm is not None
     assert litellm.callbacks == ["custom-callback"]