Strix LLM Documentation and Config Changes (#315)
* feat: add new keys to the README
* feat: shout out Strix models in the docs
* fix: mypy error
* fix: base API
* docs: update quickstart and models
* fix: uniform api_key variable naming across the docs
* test: git commit hook
* nevermind, it was nothing
* docs: update the default model to claude-sonnet-4.6 and improve the Strix Router docs
  - Replace the gpt-5 and opus-4.6 defaults with claude-sonnet-4.6 across all docs and code
  - Rewrite the Strix Router page (models.mdx) with clearer structure and messaging
  - Add Strix Router as a recommended option in overview.mdx and the quickstart prerequisites
  - Update stale Claude 4.5 references to 4.6 in anthropic.mdx, openrouter.mdx, and bug_report.md
  - Fix install.sh links to point to models.strix.ai and correct the docs URLs
  - Update the error message examples in main.py to use claude-sonnet-4-6

Co-authored-by: 0xallam <ahmed39652003@gmail.com>
.github/ISSUE_TEMPLATE/bug_report.md (vendored, 2 changes)
@@ -27,7 +27,7 @@ If applicable, add screenshots to help explain your problem.
 - OS: [e.g. Ubuntu 22.04]
 - Strix Version or Commit: [e.g. 0.1.18]
 - Python Version: [e.g. 3.12]
-- LLM Used: [e.g. GPT-5, Claude Sonnet 4]
+- LLM Used: [e.g. GPT-5, Claude Sonnet 4.6]
 
 **Additional context**
 Add any other context about the problem here.
@@ -30,7 +30,7 @@ Thank you for your interest in contributing to Strix! This guide will help you g
 
 3. **Configure your LLM provider**
 ```bash
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="anthropic/claude-sonnet-4-6"
 export LLM_API_KEY="your-api-key"
 ```
 
README.md (10 changes)
@@ -72,7 +72,9 @@ Strix are autonomous AI agents that act just like real hackers - they run your c
 
 **Prerequisites:**
 - Docker (running)
-- An LLM provider key (e.g. [get OpenAI API key](https://platform.openai.com/api-keys) or use a local LLM)
+- An LLM API key:
+  - Any [supported provider](https://docs.strix.ai/llm-providers/overview) (OpenAI, Anthropic, Google, etc.)
+  - Or [Strix Router](https://models.strix.ai) — single API key for multiple providers with $10 free credit on signup
 
 ### Installation & First Scan
 
@@ -84,7 +86,7 @@ curl -sSL https://strix.ai/install | bash
 pipx install strix-agent
 
 # Configure your AI provider
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="anthropic/claude-sonnet-4-6" # or "strix/claude-sonnet-4.6" via Strix Router (https://models.strix.ai)
 export LLM_API_KEY="your-api-key"
 
 # Run your first security assessment
@@ -201,7 +203,7 @@ jobs:
 ### Configuration
 
 ```bash
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="anthropic/claude-sonnet-4-6"
 export LLM_API_KEY="your-api-key"
 
 # Optional
@@ -215,8 +217,8 @@ export STRIX_REASONING_EFFORT="high" # control thinking effort (default: high,
 
 **Recommended models for best results:**
 
+- [Anthropic Claude Sonnet 4.6](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-6`
 - [OpenAI GPT-5](https://openai.com/api/) — `openai/gpt-5`
-- [Anthropic Claude Sonnet 4.5](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-5`
 - [Google Gemini 3 Pro Preview](https://cloud.google.com/vertex-ai) — `vertex_ai/gemini-3-pro-preview`
 
 See the [LLM Providers documentation](https://docs.strix.ai/llm-providers/overview) for all supported providers including Vertex AI, Bedrock, Azure, and local models.
@@ -8,7 +8,7 @@ Configure Strix using environment variables or a config file.
 ## LLM Configuration
 
 <ParamField path="STRIX_LLM" type="string" required>
-Model name in LiteLLM format (e.g., `openai/gpt-5`, `anthropic/claude-sonnet-4-5`).
+Model name in LiteLLM format (e.g., `anthropic/claude-sonnet-4-6`, `openai/gpt-5`).
 </ParamField>
 
 <ParamField path="LLM_API_KEY" type="string">
@@ -86,7 +86,7 @@ strix --target ./app --config /path/to/config.json
 ```json
 {
   "env": {
-    "STRIX_LLM": "openai/gpt-5",
+    "STRIX_LLM": "anthropic/claude-sonnet-4-6",
     "LLM_API_KEY": "sk-...",
     "STRIX_REASONING_EFFORT": "high"
   }
@@ -97,7 +97,7 @@ strix --target ./app --config /path/to/config.json
 
 ```bash
 # Required
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="anthropic/claude-sonnet-4-6"
 export LLM_API_KEY="sk-..."
 
 # Optional: Enable web search
@@ -32,7 +32,7 @@ description: "Contribute to Strix development"
 </Step>
 <Step title="Configure LLM">
 ```bash
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="anthropic/claude-sonnet-4-6"
 export LLM_API_KEY="your-api-key"
 ```
 </Step>
@@ -78,7 +78,7 @@ Strix uses a graph of specialized agents for comprehensive security testing:
 curl -sSL https://strix.ai/install | bash
 
 # Configure
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="anthropic/claude-sonnet-4-6"
 export LLM_API_KEY="your-api-key"
 
 # Scan
@@ -35,7 +35,7 @@ Add these secrets to your repository:
 
 | Secret | Description |
 |--------|-------------|
-| `STRIX_LLM` | Model name (e.g., `openai/gpt-5`) |
+| `STRIX_LLM` | Model name (e.g., `anthropic/claude-sonnet-4-6`) |
 | `LLM_API_KEY` | API key for your LLM provider |
 
 ## Exit Codes
@@ -6,7 +6,7 @@ description: "Configure Strix with Claude models"
 ## Setup
 
 ```bash
-export STRIX_LLM="anthropic/claude-sonnet-4-5"
+export STRIX_LLM="anthropic/claude-sonnet-4-6"
 export LLM_API_KEY="sk-ant-..."
 ```
 
@@ -14,8 +14,8 @@ export LLM_API_KEY="sk-ant-..."
 
 | Model | Description |
 |-------|-------------|
-| `anthropic/claude-sonnet-4-5` | Best balance of intelligence and speed (recommended) |
-| `anthropic/claude-opus-4-5` | Maximum capability for deep analysis |
+| `anthropic/claude-sonnet-4-6` | Best balance of intelligence and speed (recommended) |
+| `anthropic/claude-opus-4-6` | Maximum capability for deep analysis |
 
 ## Get API Key
 
docs/llm-providers/models.mdx (new file, 80 lines)

@@ -0,0 +1,80 @@
+---
+title: "Strix Router"
+description: "Access top LLMs through a single API with high rate limits and zero data retention"
+---
+
+Strix Router gives you access to the best LLMs through a single API key.
+
+<Note>
+Strix Router is currently in **beta**. It's completely optional — Strix works with any [LiteLLM-compatible provider](/llm-providers/overview) using your own API keys, or with [local models](/llm-providers/local). Strix Router is just the setup we test and optimize for.
+</Note>
+
+## Why Use Strix Router?
+
+- **High rate limits** — No throttling during long-running scans
+- **Zero data retention** — Routes to providers with zero data retention policies enabled
+- **Failover & load balancing** — Automatic fallback across providers for reliability
+- **Simple setup** — One API key, one environment variable, no provider accounts needed
+- **No markup** — Same token pricing as the underlying providers, no extra fees
+- **$10 free credit** — Try it free on signup, no credit card required
+
+## Quick Start
+
+1. Get your API key at [models.strix.ai](https://models.strix.ai)
+2. Set your environment:
+
+```bash
+export LLM_API_KEY='your-strix-api-key'
+export STRIX_LLM='strix/claude-sonnet-4.6'
+```
+
+3. Run a scan:
+
+```bash
+strix --target ./your-app
+```
+
+## Available Models
+
+### Anthropic
+
+| Model | ID |
+|-------|-----|
+| Claude Sonnet 4.6 | `strix/claude-sonnet-4.6` |
+| Claude Opus 4.6 | `strix/claude-opus-4.6` |
+
+### OpenAI
+
+| Model | ID |
+|-------|-----|
+| GPT-5.2 | `strix/gpt-5.2` |
+| GPT-5.1 | `strix/gpt-5.1` |
+| GPT-5 | `strix/gpt-5` |
+| GPT-5.2 Codex | `strix/gpt-5.2-codex` |
+| GPT-5.1 Codex Max | `strix/gpt-5.1-codex-max` |
+| GPT-5.1 Codex | `strix/gpt-5.1-codex` |
+| GPT-5 Codex | `strix/gpt-5-codex` |
+
+### Google
+
+| Model | ID |
+|-------|-----|
+| Gemini 3 Pro | `strix/gemini-3-pro-preview` |
+| Gemini 3 Flash | `strix/gemini-3-flash-preview` |
+
+### Other
+
+| Model | ID |
+|-------|-----|
+| GLM-5 | `strix/glm-5` |
+| GLM-4.7 | `strix/glm-4.7` |
+
+## Configuration Reference
+
+<ParamField path="LLM_API_KEY" type="string" required>
+Your Strix API key from [models.strix.ai](https://models.strix.ai).
+</ParamField>
+
+<ParamField path="STRIX_LLM" type="string" required>
+Model ID from the tables above. Must be prefixed with `strix/`.
+</ParamField>
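The new page's Configuration Reference requires `STRIX_LLM` to carry the `strix/` prefix. A minimal validation sketch of that rule, using a hypothetical helper and an illustrative subset of the model tables (neither is part of Strix itself):

```python
# Illustrative subset of the router model IDs listed in models.mdx.
ROUTER_MODELS = {
    "strix/claude-sonnet-4.6",
    "strix/claude-opus-4.6",
    "strix/gpt-5.2",
    "strix/gemini-3-pro-preview",
    "strix/glm-4.7",
}


def is_valid_router_id(model_id: str) -> bool:
    """Check the documented constraint: router IDs are prefixed with 'strix/'."""
    return model_id.startswith("strix/") and model_id in ROUTER_MODELS
```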
@@ -19,7 +19,7 @@ Access any model on OpenRouter using the format `openrouter/<provider>/<model>`:
 | Model | Configuration |
 |-------|---------------|
 | GPT-5 | `openrouter/openai/gpt-5` |
-| Claude 4.5 Sonnet | `openrouter/anthropic/claude-sonnet-4.5` |
+| Claude Sonnet 4.6 | `openrouter/anthropic/claude-sonnet-4.6` |
 | Gemini 3 Pro | `openrouter/google/gemini-3-pro-preview` |
 | GLM-4.7 | `openrouter/z-ai/glm-4.7` |
 
@@ -5,31 +5,54 @@ description: "Configure your AI model for Strix"
 
 Strix uses [LiteLLM](https://docs.litellm.ai/docs/providers) for model compatibility, supporting 100+ LLM providers.
 
-## Recommended Models
+## Strix Router (Recommended)
 
-For best results, use one of these models:
+The fastest way to get started. [Strix Router](/llm-providers/models) gives you access to tested models with the highest rate limits and zero data retention.
+
+```bash
+export STRIX_LLM="strix/claude-sonnet-4.6"
+export LLM_API_KEY="your-strix-api-key"
+```
+
+Get your API key at [models.strix.ai](https://models.strix.ai).
+
+## Bring Your Own Key
+
+You can also use any LiteLLM-compatible provider with your own API keys:
 
 | Model | Provider | Configuration |
 | ----------------- | ------------- | -------------------------------- |
+| Claude Sonnet 4.6 | Anthropic | `anthropic/claude-sonnet-4-6` |
 | GPT-5 | OpenAI | `openai/gpt-5` |
-| Claude 4.5 Sonnet | Anthropic | `anthropic/claude-sonnet-4-5` |
 | Gemini 3 Pro | Google Vertex | `vertex_ai/gemini-3-pro-preview` |
 
-## Quick Setup
 
 ```bash
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="anthropic/claude-sonnet-4-6"
 export LLM_API_KEY="your-api-key"
 ```
 
+## Local Models
+
+Run models locally with [Ollama](https://ollama.com), [LM Studio](https://lmstudio.ai), or any OpenAI-compatible server:
+
+```bash
+export STRIX_LLM="ollama/llama4"
+export LLM_API_BASE="http://localhost:11434"
+```
+
+See the [Local Models guide](/llm-providers/local) for setup instructions and recommended models.
+
 ## Provider Guides
 
 <CardGroup cols={2}>
+<Card title="Strix Router" href="/llm-providers/models">
+Recommended models router with high rate limits.
+</Card>
 <Card title="OpenAI" href="/llm-providers/openai">
 GPT-5 and Codex models.
 </Card>
 <Card title="Anthropic" href="/llm-providers/anthropic">
-Claude 4.5 Sonnet, Opus, and Haiku.
+Claude Sonnet 4.6, Opus, and Haiku.
 </Card>
 <Card title="OpenRouter" href="/llm-providers/openrouter">
 Access 100+ models through a single API.
@@ -38,7 +61,7 @@ export LLM_API_KEY="your-api-key"
 Gemini 3 models via Google Cloud.
 </Card>
 <Card title="AWS Bedrock" href="/llm-providers/bedrock">
-Claude 4.5 and Titan models via AWS.
+Claude and Titan models via AWS.
 </Card>
 <Card title="Azure OpenAI" href="/llm-providers/azure">
 GPT-5 via Azure.
@@ -53,8 +76,8 @@ export LLM_API_KEY="your-api-key"
 Use LiteLLM's `provider/model-name` format:
 
 ```
+anthropic/claude-sonnet-4-6
 openai/gpt-5
-anthropic/claude-sonnet-4-5
 vertex_ai/gemini-3-pro-preview
 bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0
 ollama/llama4
@@ -6,7 +6,7 @@ description: "Install Strix and run your first security scan"
 ## Prerequisites
 
 - Docker (running)
-- An LLM provider API key (OpenAI, Anthropic, or local model)
+- An LLM API key — use [Strix Router](/llm-providers/models) for the easiest setup, or bring your own key from any [supported provider](/llm-providers/overview)
 
 ## Installation
 
@@ -27,13 +27,23 @@ description: "Install Strix and run your first security scan"
 
 Set your LLM provider:
 
+<Tabs>
+<Tab title="Strix Router">
 ```bash
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="strix/claude-sonnet-4.6"
+export LLM_API_KEY="your-strix-api-key"
+```
+</Tab>
+<Tab title="Bring Your Own Key">
+```bash
+export STRIX_LLM="anthropic/claude-sonnet-4-6"
 export LLM_API_KEY="your-api-key"
 ```
+</Tab>
+</Tabs>
 
 <Tip>
-For best results, use `openai/gpt-5`, `anthropic/claude-sonnet-4-5`, or `vertex_ai/gemini-3-pro-preview`.
+For best results, use `strix/claude-sonnet-4.6`, `strix/claude-opus-4.6`, or `strix/gpt-5.2`.
 </Tip>
 
 ## Run Your First Scan
@@ -335,14 +335,18 @@ echo -e "${MUTED} AI Penetration Testing Agent${NC}"
 echo ""
 echo -e "${MUTED}To get started:${NC}"
 echo ""
-echo -e " ${CYAN}1.${NC} Set your LLM provider:"
-echo -e " ${MUTED}export STRIX_LLM='openai/gpt-5'${NC}"
-echo -e " ${MUTED}export LLM_API_KEY='your-api-key'${NC}"
+echo -e " ${CYAN}1.${NC} Get your Strix API key:"
+echo -e " ${MUTED}https://models.strix.ai${NC}"
 echo ""
-echo -e " ${CYAN}2.${NC} Run a penetration test:"
+echo -e " ${CYAN}2.${NC} Set your environment:"
+echo -e " ${MUTED}export LLM_API_KEY='your-api-key'${NC}"
+echo -e " ${MUTED}export STRIX_LLM='strix/claude-sonnet-4.6'${NC}"
+echo ""
+echo -e " ${CYAN}3.${NC} Run a penetration test:"
 echo -e " ${MUTED}strix --target https://example.com${NC}"
 echo ""
 echo -e "${MUTED}For more information visit ${NC}https://strix.ai"
+echo -e "${MUTED}Supported models ${NC}https://docs.strix.ai/llm-providers/overview"
 echo -e "${MUTED}Join our community ${NC}https://discord.gg/strix-ai"
 echo ""
 
@@ -5,6 +5,9 @@ from pathlib import Path
 from typing import Any
 
 
+STRIX_API_BASE = "https://models.strix.ai/api/v1"
+
+
 class Config:
     """Configuration Manager for Strix."""
 
@@ -177,3 +180,30 @@ def apply_saved_config(force: bool = False) -> dict[str, str]:
 
 def save_current_config() -> bool:
     return Config.save_current()
+
+
+def resolve_llm_config() -> tuple[str | None, str | None, str | None]:
+    """Resolve LLM model, api_key, and api_base based on STRIX_LLM prefix.
+
+    Returns:
+        tuple: (model_name, api_key, api_base)
+    """
+    model = Config.get("strix_llm")
+    if not model:
+        return None, None, None
+
+    api_key = Config.get("llm_api_key")
+
+    if model.startswith("strix/"):
+        model_name = "openai/" + model[6:]
+        api_base: str | None = STRIX_API_BASE
+    else:
+        model_name = model
+        api_base = (
+            Config.get("llm_api_base")
+            or Config.get("openai_api_base")
+            or Config.get("litellm_base_url")
+            or Config.get("ollama_api_base")
+        )
+
+    return model_name, api_key, api_base
@@ -51,10 +51,13 @@ def validate_environment() -> None: # noqa: PLR0912, PLR0915
     missing_required_vars = []
     missing_optional_vars = []
 
-    if not Config.get("strix_llm"):
+    strix_llm = Config.get("strix_llm")
+    uses_strix_models = strix_llm and strix_llm.startswith("strix/")
+
+    if not strix_llm:
         missing_required_vars.append("STRIX_LLM")
 
-    has_base_url = any(
+    has_base_url = uses_strix_models or any(
         [
             Config.get("llm_api_base"),
             Config.get("openai_api_base"),
@@ -96,7 +99,7 @@ def validate_environment() -> None: # noqa: PLR0912, PLR0915
     error_text.append("• ", style="white")
     error_text.append("STRIX_LLM", style="bold cyan")
     error_text.append(
-        " - Model name to use with litellm (e.g., 'openai/gpt-5')\n",
+        " - Model name to use with litellm (e.g., 'anthropic/claude-sonnet-4-6')\n",
         style="white",
     )
 
@@ -135,7 +138,10 @@ def validate_environment() -> None: # noqa: PLR0912, PLR0915
     )
 
     error_text.append("\nExample setup:\n", style="white")
-    error_text.append("export STRIX_LLM='openai/gpt-5'\n", style="dim white")
+    if uses_strix_models:
+        error_text.append("export STRIX_LLM='strix/claude-sonnet-4.6'\n", style="dim white")
+    else:
+        error_text.append("export STRIX_LLM='anthropic/claude-sonnet-4-6'\n", style="dim white")
 
     if missing_optional_vars:
         for var in missing_optional_vars:
@@ -198,17 +204,12 @@ def check_docker_installed() -> None:
 
 
 async def warm_up_llm() -> None:
+    from strix.config.config import resolve_llm_config
+
     console = Console()
 
     try:
-        model_name = Config.get("strix_llm")
-        api_key = Config.get("llm_api_key")
-        api_base = (
-            Config.get("llm_api_base")
-            or Config.get("openai_api_base")
-            or Config.get("litellm_base_url")
-            or Config.get("ollama_api_base")
-        )
+        model_name, api_key, api_base = resolve_llm_config()
 
         test_messages = [
             {"role": "system", "content": "You are a helpful assistant."},
@@ -1,4 +1,5 @@
 from strix.config import Config
+from strix.config.config import resolve_llm_config
 
 
 class LLMConfig:
@@ -10,7 +11,8 @@ class LLMConfig:
         timeout: int | None = None,
         scan_mode: str = "deep",
     ):
-        self.model_name = model_name or Config.get("strix_llm")
+        resolved_model, self.api_key, self.api_base = resolve_llm_config()
+        self.model_name = model_name or resolved_model
 
         if not self.model_name:
             raise ValueError("STRIX_LLM environment variable must be set and not empty")
@@ -5,7 +5,7 @@ from typing import Any
 
 import litellm
 
-from strix.config import Config
+from strix.config.config import resolve_llm_config
 
 
 logger = logging.getLogger(__name__)
@@ -155,14 +155,7 @@ def check_duplicate(
 
     comparison_data = {"candidate": candidate_cleaned, "existing_reports": existing_cleaned}
 
-    model_name = Config.get("strix_llm")
-    api_key = Config.get("llm_api_key")
-    api_base = (
-        Config.get("llm_api_base")
-        or Config.get("openai_api_base")
-        or Config.get("litellm_base_url")
-        or Config.get("ollama_api_base")
-    )
+    model_name, api_key, api_base = resolve_llm_config()
 
     messages = [
         {"role": "system", "content": DEDUPE_SYSTEM_PROMPT},
@@ -200,15 +200,10 @@ class LLM:
             "stream_options": {"include_usage": True},
         }
 
-        if api_key := Config.get("llm_api_key"):
-            args["api_key"] = api_key
-        if api_base := (
-            Config.get("llm_api_base")
-            or Config.get("openai_api_base")
-            or Config.get("litellm_base_url")
-            or Config.get("ollama_api_base")
-        ):
-            args["api_base"] = api_base
+        if self.config.api_key:
+            args["api_key"] = self.config.api_key
+        if self.config.api_base:
+            args["api_base"] = self.config.api_base
         if self._supports_reasoning():
             args["reasoning_effort"] = self._reasoning_effort
 
@@ -3,7 +3,7 @@ from typing import Any
 
 import litellm
 
-from strix.config import Config
+from strix.config.config import Config, resolve_llm_config
 
 
 logger = logging.getLogger(__name__)
@@ -104,13 +104,7 @@ def _summarize_messages(
     conversation = "\n".join(formatted)
     prompt = SUMMARY_PROMPT_TEMPLATE.format(conversation=conversation)
 
-    api_key = Config.get("llm_api_key")
-    api_base = (
-        Config.get("llm_api_base")
-        or Config.get("openai_api_base")
-        or Config.get("litellm_base_url")
-        or Config.get("ollama_api_base")
-    )
+    _, api_key, api_base = resolve_llm_config()
 
     try:
         completion_args: dict[str, Any] = {