docs: add documentation to main repository
# docs/llm-providers/anthropic.mdx

---
title: "Anthropic"
description: "Configure Strix with Claude models"
---

## Setup

```bash
export STRIX_LLM="anthropic/claude-sonnet-4-5"
export LLM_API_KEY="sk-ant-..."
```

## Available Models

| Model | Description |
|-------|-------------|
| `anthropic/claude-sonnet-4-5` | Best balance of intelligence and speed (recommended) |
| `anthropic/claude-opus-4-5` | Maximum capability for deep analysis |

## Get API Key

1. Go to [console.anthropic.com](https://console.anthropic.com)
2. Navigate to API Keys
3. Create a new key
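Before launching a scan, it can help to confirm both variables are actually exported. A minimal preflight sketch, assuming only the two variable names from the setup snippet above (the helper name is hypothetical, not part of Strix):

```bash
#!/usr/bin/env bash
# Fail fast if the Anthropic configuration is incomplete.
check_strix_env() {
  if [ -z "${STRIX_LLM:-}" ] || [ -z "${LLM_API_KEY:-}" ]; then
    echo "missing: set STRIX_LLM and LLM_API_KEY" >&2
    return 1
  fi
  echo "ok: ${STRIX_LLM}"
}
```

Run it once before a long assessment; a missing key fails here instead of mid-scan.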
# docs/llm-providers/azure.mdx

---
title: "Azure OpenAI"
description: "Configure Strix with OpenAI models via Azure"
---

## Setup

```bash
export STRIX_LLM="azure/your-gpt5-deployment"
export AZURE_API_KEY="your-azure-api-key"
export AZURE_API_BASE="https://your-resource.openai.azure.com"
export AZURE_API_VERSION="2025-11-01-preview"
```

## Configuration

| Variable | Description |
|----------|-------------|
| `STRIX_LLM` | `azure/<your-deployment-name>` |
| `AZURE_API_KEY` | Your Azure OpenAI API key |
| `AZURE_API_BASE` | Your Azure OpenAI endpoint URL |
| `AZURE_API_VERSION` | API version (e.g., `2025-11-01-preview`) |

## Example

```bash
export STRIX_LLM="azure/gpt-5-deployment"
export AZURE_API_KEY="abc123..."
export AZURE_API_BASE="https://mycompany.openai.azure.com"
export AZURE_API_VERSION="2025-11-01-preview"
```

## Prerequisites

1. Create an Azure OpenAI resource
2. Deploy a model (e.g., GPT-5)
3. Get the endpoint URL and API key from the Azure portal
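To see how these variables fit together, the sketch below assembles the standard Azure OpenAI chat-completions URL from them: the deployment name is whatever follows `azure/` in `STRIX_LLM`. This is illustrative only; Strix builds its requests internally, and the helper name is hypothetical:

```bash
# Combine the Azure variables into the chat-completions request URL.
azure_chat_url() {
  local deployment="${STRIX_LLM#azure/}"  # "azure/gpt-5-deployment" -> "gpt-5-deployment"
  echo "${AZURE_API_BASE}/openai/deployments/${deployment}/chat/completions?api-version=${AZURE_API_VERSION}"
}
```

If a scan fails with a 404, printing this URL is a quick way to spot a wrong deployment name or endpoint.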
# docs/llm-providers/bedrock.mdx

---
title: "AWS Bedrock"
description: "Configure Strix with models via AWS Bedrock"
---

## Setup

```bash
export STRIX_LLM="bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0"
```

No API key is required; AWS credentials are read from the environment.

## Authentication

### Option 1: AWS CLI Profile

```bash
export AWS_PROFILE="your-profile"
export AWS_REGION="us-east-1"
```

### Option 2: Access Keys

```bash
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
```

### Option 3: IAM Role (EC2/ECS)

Automatically uses instance role credentials.

## Available Models

| Model | Description |
|-------|-------------|
| `bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0` | Claude 4.5 Sonnet |
| `bedrock/anthropic.claude-4-5-opus-20251022-v1:0` | Claude 4.5 Opus |
| `bedrock/anthropic.claude-4-5-haiku-20251022-v1:0` | Claude 4.5 Haiku |
| `bedrock/amazon.titan-text-premier-v2:0` | Amazon Titan Premier v2 |

## Prerequisites

1. Enable model access in the AWS Bedrock console
2. Ensure your IAM role/user has `bedrock:InvokeModel` permission
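The three authentication options above follow the AWS SDK's credential precedence: explicit access keys win over a named profile, and the instance role is the fallback. A sketch that mirrors (but does not replace) that resolution order, useful for debugging which source a scan will use:

```bash
# Report which credential source the AWS SDK would pick first.
bedrock_auth_source() {
  if [ -n "${AWS_ACCESS_KEY_ID:-}" ] && [ -n "${AWS_SECRET_ACCESS_KEY:-}" ]; then
    echo "access-keys"
  elif [ -n "${AWS_PROFILE:-}" ]; then
    echo "profile"
  else
    echo "instance-role"
  fi
}
```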
# docs/llm-providers/local.mdx

---
title: "Local Models"
description: "Run Strix with self-hosted LLMs for privacy and air-gapped testing"
---

Running Strix with local models allows for completely offline, privacy-first security assessments. Data never leaves your machine, making this ideal for sensitive internal networks or air-gapped environments.

## Privacy vs Performance

| Feature | Local Models | Cloud Models (GPT-5/Claude 4.5) |
|---------|--------------|--------------------------------|
| **Privacy** | 🔒 Data stays local | Data sent to provider |
| **Cost** | Free (hardware only) | Pay-per-token |
| **Reasoning** | Lower (struggles with agents) | State-of-the-art |
| **Setup** | Complex (GPU required) | Instant |

<Warning>
**Compatibility Note**: Strix relies on advanced agentic capabilities (tool use, multi-step planning, self-correction). Most local models, especially those under 70B parameters, struggle with these complex tasks.

For critical assessments, we strongly recommend using state-of-the-art cloud models like **Claude 4.5 Sonnet** or **GPT-5**. Use local models only when privacy is the absolute priority.
</Warning>

## Ollama

[Ollama](https://ollama.ai) is the easiest way to run local models on macOS, Linux, and Windows.

### Setup

1. Install Ollama from [ollama.ai](https://ollama.ai)
2. Pull a high-performance model:
   ```bash
   ollama pull qwen3-vl
   ```
3. Configure Strix:
   ```bash
   export STRIX_LLM="ollama/qwen3-vl"
   export LLM_API_BASE="http://localhost:11434"
   ```

### Recommended Models

These models offer the best balance of reasoning and tool use:

- **Qwen3 VL** (`ollama pull qwen3-vl`)
- **DeepSeek V3.1** (`ollama pull deepseek-v3.1`)
- **Devstral 2** (`ollama pull devstral-2`)

## LM Studio / OpenAI Compatible

If you use LM Studio, vLLM, or another OpenAI-compatible runner:

```bash
export STRIX_LLM="openai/local-model"
export LLM_API_BASE="http://localhost:1234/v1"  # Adjust port as needed
```
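Each runner listens on a different default port, which is the main thing to get right in `LLM_API_BASE`. A hypothetical helper capturing the usual defaults (Ollama uses 11434, LM Studio 1234, and vLLM commonly 8000; adjust if your installation differs):

```bash
# Map a runner name to its customary default base URL.
default_llm_api_base() {
  case "$1" in
    ollama)   echo "http://localhost:11434" ;;
    lmstudio) echo "http://localhost:1234/v1" ;;
    *)        echo "http://localhost:8000/v1" ;;  # vLLM's common default
  esac
}
```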
# docs/llm-providers/openai.mdx

---
title: "OpenAI"
description: "Configure Strix with OpenAI models"
---

## Setup

```bash
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="sk-..."
```

## Available Models

See [OpenAI Models Documentation](https://platform.openai.com/docs/models) for the full list of available models.

## Get API Key

1. Go to [platform.openai.com](https://platform.openai.com)
2. Navigate to API Keys
3. Create a new secret key

## Custom Base URL

For OpenAI-compatible APIs:

```bash
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="your-key"
export LLM_API_BASE="https://your-proxy.com/v1"
```
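When routing through a proxy it is easy to leak `LLM_API_KEY` into shell history or logs while debugging. A small, hypothetical helper for printing only a safe fingerprint of the key (the name and format are illustrative, not part of Strix):

```bash
# Print a key's prefix and length instead of the secret itself.
mask_key() {
  local k="$1"
  printf '%s... (%d chars)\n' "${k:0:5}" "${#k}"
}
```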
# docs/llm-providers/openrouter.mdx

---
title: "OpenRouter"
description: "Configure Strix with models via OpenRouter"
---

[OpenRouter](https://openrouter.ai) provides access to 100+ models from multiple providers through a single API.

## Setup

```bash
export STRIX_LLM="openrouter/openai/gpt-5"
export LLM_API_KEY="sk-or-..."
```

## Available Models

Access any model on OpenRouter using the format `openrouter/<provider>/<model>`:

| Model | Configuration |
|-------|---------------|
| GPT-5 | `openrouter/openai/gpt-5` |
| Claude 4.5 Sonnet | `openrouter/anthropic/claude-sonnet-4.5` |
| Gemini 3 Pro | `openrouter/google/gemini-3-pro-preview` |
| GLM-4.7 | `openrouter/z-ai/glm-4.7` |

## Get API Key

1. Go to [openrouter.ai](https://openrouter.ai)
2. Sign in and navigate to Keys
3. Create a new API key

## Benefits

- **Single API**: Access models from OpenAI, Anthropic, Google, Meta, and more
- **Fallback routing**: Automatic failover between providers
- **Cost tracking**: Monitor usage across all models
- **Higher rate limits**: OpenRouter handles provider limits for you
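The three-part `openrouter/<provider>/<model>` string splits cleanly on the first and second slash. A sketch of that parsing, for illustration only (Strix does its own parsing; these helper names are hypothetical):

```bash
# Extract the upstream provider from an OpenRouter model string.
openrouter_provider() {
  local rest="${1#openrouter/}"  # drop the "openrouter/" route prefix
  echo "${rest%%/*}"             # keep the upstream provider
}

# Extract the model id from an OpenRouter model string.
openrouter_model() {
  local rest="${1#openrouter/}"
  echo "${rest#*/}"              # keep everything after the provider
}
```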
# docs/llm-providers/overview.mdx

---
title: "Overview"
description: "Configure your AI model for Strix"
---

Strix uses [LiteLLM](https://docs.litellm.ai/docs/providers) for model compatibility, supporting 100+ LLM providers.

## Recommended Models

For best results, use one of these models:

| Model | Provider | Configuration |
|-------|----------|---------------|
| GPT-5 | OpenAI | `openai/gpt-5` |
| Claude 4.5 Sonnet | Anthropic | `anthropic/claude-sonnet-4-5` |
| Gemini 3 Pro | Google Vertex | `vertex_ai/gemini-3-pro-preview` |

## Quick Setup

```bash
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="your-api-key"
```

## Provider Guides

<CardGroup cols={2}>
  <Card title="OpenAI" href="/llm-providers/openai">
    GPT-5 and Codex models.
  </Card>
  <Card title="Anthropic" href="/llm-providers/anthropic">
    Claude 4.5 Sonnet, Opus, and Haiku.
  </Card>
  <Card title="OpenRouter" href="/llm-providers/openrouter">
    Access 100+ models through a single API.
  </Card>
  <Card title="Google Vertex AI" href="/llm-providers/vertex">
    Gemini 3 models via Google Cloud.
  </Card>
  <Card title="AWS Bedrock" href="/llm-providers/bedrock">
    Claude 4.5 and Titan models via AWS.
  </Card>
  <Card title="Azure OpenAI" href="/llm-providers/azure">
    GPT-5 via Azure.
  </Card>
  <Card title="Local Models" href="/llm-providers/local">
    Llama 4, Mistral, and self-hosted models.
  </Card>
</CardGroup>

## Model Format

Use LiteLLM's `provider/model-name` format:

```
openai/gpt-5
anthropic/claude-sonnet-4-5
vertex_ai/gemini-3-pro-preview
bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0
ollama/llama4
```
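A model string without the `provider/` prefix is a common misconfiguration. A minimal sanity check for the `provider/model-name` shape, under the assumption that any string containing a slash is acceptable (it does not verify the provider actually exists in LiteLLM):

```bash
# Succeed only if the model string contains a provider/ prefix.
valid_model_format() {
  case "$1" in
    */*) return 0 ;;
    *)   return 1 ;;
  esac
}
```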
# docs/llm-providers/vertex.mdx

---
title: "Google Vertex AI"
description: "Configure Strix with Gemini models via Google Cloud"
---

## Installation

Vertex AI requires the Google Cloud dependency. Install Strix with the vertex extra:

```bash
pipx install "strix-agent[vertex]"
```

## Setup

```bash
export STRIX_LLM="vertex_ai/gemini-3-pro-preview"
```

No API key is required; authentication uses Google Cloud Application Default Credentials.

## Authentication

### Option 1: gcloud CLI

```bash
gcloud auth application-default login
```

### Option 2: Service Account

```bash
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
```

## Available Models

| Model | Description |
|-------|-------------|
| `vertex_ai/gemini-3-pro-preview` | Best overall performance for security testing |
| `vertex_ai/gemini-3-flash-preview` | Faster and cheaper |

## Project Configuration

```bash
export VERTEXAI_PROJECT="your-project-id"
export VERTEXAI_LOCATION="us-central1"
```

## Prerequisites

1. Enable the Vertex AI API in your Google Cloud project
2. Ensure your account has the `Vertex AI User` role
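Application Default Credentials resolve in a fixed order: the `GOOGLE_APPLICATION_CREDENTIALS` file first, then the credentials written by `gcloud auth application-default login`, then the metadata server on GCP. A sketch that mirrors (but does not replace) that lookup, handy for checking which source a scan will authenticate with:

```bash
# Report which ADC source google-auth would find first.
adc_source() {
  if [ -n "${GOOGLE_APPLICATION_CREDENTIALS:-}" ]; then
    echo "service-account-file"
  elif [ -f "${HOME}/.config/gcloud/application_default_credentials.json" ]; then
    echo "gcloud-adc"
  else
    echo "metadata-server"
  fi
}
```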