---
title: "Configuration"
description: "Environment variables for Strix"
---
Configure Strix using environment variables.
## LLM Configuration

- `STRIX_LLM`: Model name in LiteLLM format (e.g., `openai/gpt-5`, `anthropic/claude-sonnet-4-5`).
- `LLM_API_KEY`: API key for your LLM provider. Not required for local models or for cloud-provider auth (Vertex AI, AWS Bedrock).
- Custom API base URL. `OPENAI_API_BASE`, `LITELLM_BASE_URL`, and `OLLAMA_API_BASE` are also accepted.
- `LLM_TIMEOUT`: Request timeout in seconds for LLM calls.
- Delay in seconds between LLM requests, for rate limiting.
- Maximum number of concurrent LLM requests.
- Reasoning effort for reasoning models. Valid values: `none`, `minimal`, `low`, `medium`, `high`, `xhigh`. Defaults to `medium` in quick scan mode.
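The base-URL override is most useful for local models. A minimal sketch, assuming a local Ollama server on its default port and a `llama3` model (both are illustrative; substitute your own endpoint and model):

```shell
# Point Strix at a local Ollama server instead of a hosted provider.
# Model name is in LiteLLM format: <provider>/<model>.
export STRIX_LLM="ollama/llama3"                  # assumed model; use any model you have pulled
export OLLAMA_API_BASE="http://localhost:11434"   # Ollama's default listen address
# No LLM_API_KEY is needed for local models.
```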
## Optional Features

- `PERPLEXITY_API_KEY`: API key for Perplexity AI. Enables real-time web search during scans for OSINT and vulnerability research.
## Docker Configuration

- `STRIX_IMAGE`: Docker image to use for the sandbox container.
- Docker daemon socket path. Set this for remote Docker hosts or custom configurations.
- Runtime backend for the sandbox environment.
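To run the sandbox against a remote Docker host, point the daemon socket at it. A hedged sketch: the socket variable is shown here as Docker's standard `DOCKER_HOST`, which is an assumption, and the host address is hypothetical; confirm the exact variable name for your Strix version:

```shell
# Remote Docker daemon over TCP. DOCKER_HOST is the standard Docker CLI
# convention; verify the variable name Strix expects in your version.
export DOCKER_HOST="tcp://build-host.internal:2375"   # hypothetical remote host
# Or a custom local socket path:
# export DOCKER_HOST="unix:///var/run/docker.sock"
export STRIX_IMAGE="ghcr.io/usestrix/strix-sandbox:latest"
```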
## Sandbox Configuration

- `STRIX_SANDBOX_EXECUTION_TIMEOUT`: Maximum execution time in seconds for sandbox operations.
- Timeout in seconds for connecting to the sandbox container.
- Disables the browser tool.
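Long-running scans may need a more generous execution limit than the default. A minimal sketch using the execution-timeout variable (the value itself is illustrative):

```shell
# Allow sandbox operations to run for up to 30 minutes (value is illustrative).
export STRIX_SANDBOX_EXECUTION_TIMEOUT="1800"
```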
## Example Setup
```bash
# Required
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="sk-..."
# Optional: Enable web search
export PERPLEXITY_API_KEY="pplx-..."
# Optional: Custom timeouts
export LLM_TIMEOUT="600"
export STRIX_SANDBOX_EXECUTION_TIMEOUT="1000"
# Optional: Use custom Docker image
export STRIX_IMAGE="ghcr.io/usestrix/strix-sandbox:latest"
```
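To confirm which of these variables are actually set in the current shell before launching a scan, a quick check (illustrative, not a Strix command):

```shell
export STRIX_LLM="openai/gpt-5"   # repeated here so the check is self-contained
# List every Strix-related variable currently exported.
env | grep -E '^(STRIX_|LLM_|PERPLEXITY_|OLLAMA_|OPENAI_|LITELLM_)' | sort
```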