20 Commits

Author SHA1 Message Date
0xallam
4384f5bff8 chore: Bump version to 0.8.2 2026-02-23 18:41:06 -08:00
0xallam
d84d72d986 feat: Expose Caido proxy port to host for human-in-the-loop interaction
Users can now access the Caido web UI from their browser to inspect traffic,
replay requests, and perform manual testing alongside the automated scan.

- Map Caido port (48080) to a random host port in DockerRuntime
- Add caido_port to SandboxInfo and track across container lifecycle
- Display Caido URL in TUI sidebar stats panel with selectable text
- Bind Caido to 0.0.0.0 in entrypoint (requires image rebuild)
- Bump sandbox image to 0.1.12
- Restore discord link in exit screen
2026-02-23 18:37:25 -08:00
mason5052
0ca9af3b3e docs: fix Discord badge expired invite code
The badge image URL used an invite code which had expired,
causing the badge to render 'Invalid invite' instead of the server info.
Updated to use the vanity URL, which resolves correctly.

Fixes #313
2026-02-22 20:52:03 -08:00
dependabot[bot]
939bc2a090 chore(deps): bump google-cloud-aiplatform from 1.129.0 to 1.133.0
Bumps [google-cloud-aiplatform](https://github.com/googleapis/python-aiplatform) from 1.129.0 to 1.133.0.
- [Release notes](https://github.com/googleapis/python-aiplatform/releases)
- [Changelog](https://github.com/googleapis/python-aiplatform/blob/main/CHANGELOG.md)
- [Commits](https://github.com/googleapis/python-aiplatform/compare/v1.129.0...v1.133.0)

---
updated-dependencies:
- dependency-name: google-cloud-aiplatform
  dependency-version: 1.133.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-22 20:51:29 -08:00
0xallam
00c571b2ca fix: Lower sidebar min width from 140 to 120 for smaller terminals 2026-02-22 09:28:52 -08:00
0xallam
522c010f6f fix: Update end screen to display models.strix.ai instead of strix.ai and discord 2026-02-22 09:03:56 -08:00
Ahmed Allam
551b780f52 Update installation instructions
Removed pipx installation instructions for strix-agent.
2026-02-22 00:10:06 +04:00
0xallam
643f6ba54a chore: Bump version to 0.8.1 2026-02-20 10:36:48 -08:00
0xallam
7fb4b63b96 fix: Change default model from claude-sonnet-4-6 to gpt-5 across docs and code 2026-02-20 10:35:58 -08:00
0xallam
027cea2f25 fix: Handle stray quotes in tag names and enforce parameter tags in prompt 2026-02-20 08:29:01 -08:00
0xallam
b9dcf7f63d fix: Address code review feedback on tool format normalization 2026-02-20 08:29:01 -08:00
0xallam
e09b5b42c1 fix: Prevent assistant-message prefill rejected by Claude 4.6 2026-02-20 08:29:01 -08:00
0xallam
e7970de6d2 fix: Handle single-quoted and whitespace-padded tool call tags 2026-02-20 08:29:01 -08:00
0xallam
7614fcc512 fix: Strip quotes from parameter/function names in tool calls 2026-02-20 08:29:01 -08:00
0xallam
f4d522164d feat: Normalize alternative tool call formats (invoke/function_calls) 2026-02-20 08:29:01 -08:00
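The quote- and whitespace-handling in the tool-call fixes above can be sketched roughly as follows; the regex and function names are hypothetical, not the project's actual parser:

```python
import re


def normalize_tag_name(raw: str) -> str:
    # Strip surrounding whitespace and stray single/double quotes,
    # e.g. "'terminal '" -> "terminal".
    return raw.strip().strip("'\"").strip()


def normalize_tool_call_tags(text: str) -> str:
    # Rewrite opening/closing tags whose names are quoted or padded:
    # <'run_command'> -> <run_command>, </ "invoke" > -> </invoke>.
    # Only bare tags (name, no attributes) match, so ordinary HTML
    # tags with attributes are left untouched.
    return re.sub(
        r"<(/?)\s*['\"]?\s*([\w.-]+)\s*['\"]?\s*>",
        lambda m: f"<{m.group(1)}{m.group(2)}>",
        text,
    )
```

A single pass over the model output with `normalize_tool_call_tags` would then make quoted or padded variants parse like the canonical form.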
Ahmed Allam
6166be841b Resolve LLM API Base and Models (#317) 2026-02-20 07:14:10 -08:00
0xallam
bf8020fafb fix: Strip custom_llm_provider before cost lookup for proxied models 2026-02-20 06:52:27 -08:00
0xallam
3b3576b024 refactor: Centralize strix model resolution with separate API and capability names
- Replace fragile prefix matching with explicit STRIX_MODEL_MAP
- Add resolve_strix_model() returning (api_model, canonical_model)
- api_model (openai/ prefix) for API calls to OpenAI-compatible Strix API
- canonical_model (actual provider name) for litellm capability lookups
- Centralize resolution in LLMConfig instead of scattered call sites
2026-02-20 04:40:04 -08:00
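The resolution scheme this commit describes might look roughly like this; the map entries and exact signature are assumptions based on the bullet points above, not the real code:

```python
# Hypothetical sketch of explicit strix/* model resolution.
# api_model: "openai/"-prefixed name for calls to the OpenAI-compatible
#            Strix API; canonical_model: the actual provider name used
#            for litellm capability lookups.
STRIX_MODEL_MAP: dict[str, tuple[str, str]] = {
    "strix/gpt-5": ("openai/gpt-5", "openai/gpt-5"),
    "strix/claude-sonnet-4.6": (
        "openai/claude-sonnet-4.6",
        "anthropic/claude-sonnet-4-6",
    ),
}


def resolve_strix_model(model: str) -> tuple[str, str]:
    """Return (api_model, canonical_model) for a configured model name."""
    if model in STRIX_MODEL_MAP:
        return STRIX_MODEL_MAP[model]
    # Non-strix models resolve to themselves for both purposes.
    return (model, model)
```

Keeping the lookup in one table avoids the fragile prefix matching the commit replaces: unknown `strix/*` names can fail loudly instead of silently matching a prefix.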
octovimmer
d2c99ea4df resolve: merge conflict resolution, llm api base resolution 2026-02-19 17:37:00 -08:00
octovimmer
06ae3d3860 fix: linting errors 2026-02-19 17:25:10 -08:00
30 changed files with 275 additions and 196 deletions

View File

@@ -30,7 +30,7 @@ Thank you for your interest in contributing to Strix! This guide will help you g
 3. **Configure your LLM provider**
 ```bash
-export STRIX_LLM="anthropic/claude-sonnet-4-6"
+export STRIX_LLM="openai/gpt-5"
 export LLM_API_KEY="your-api-key"
 ```

View File

@@ -15,7 +15,7 @@
 <a href="https://docs.strix.ai"><img src="https://img.shields.io/badge/Docs-docs.strix.ai-2b9246?style=for-the-badge&logo=gitbook&logoColor=white" alt="Docs"></a>
 <a href="https://strix.ai"><img src="https://img.shields.io/badge/Website-strix.ai-f0f0f0?style=for-the-badge&logoColor=000000" alt="Website"></a>
-[![](https://dcbadge.limes.pink/api/server/8Suzzd9z)](https://discord.gg/strix-ai)
+[![](https://dcbadge.limes.pink/api/server/strix-ai)](https://discord.gg/strix-ai)
 <a href="https://deepwiki.com/usestrix/strix"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></a>
 <a href="https://github.com/usestrix/strix"><img src="https://img.shields.io/github/stars/usestrix/strix?style=flat-square" alt="GitHub Stars"></a>
@@ -82,11 +82,8 @@ Strix are autonomous AI agents that act just like real hackers - they run your c
 # Install Strix
 curl -sSL https://strix.ai/install | bash
-# Or via pipx
-pipx install strix-agent
 # Configure your AI provider
-export STRIX_LLM="anthropic/claude-sonnet-4-6" # or "strix/claude-sonnet-4.6" via Strix Router (https://models.strix.ai)
+export STRIX_LLM="openai/gpt-5" # or "strix/gpt-5" via Strix Router (https://models.strix.ai)
 export LLM_API_KEY="your-api-key"
 # Run your first security assessment
@@ -203,7 +200,7 @@ jobs:
 ### Configuration
 ```bash
-export STRIX_LLM="anthropic/claude-sonnet-4-6"
+export STRIX_LLM="openai/gpt-5"
 export LLM_API_KEY="your-api-key"
 # Optional
@@ -217,8 +214,8 @@ export STRIX_REASONING_EFFORT="high" # control thinking effort (default: high,
 **Recommended models for best results:**
-- [Anthropic Claude Sonnet 4.6](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-6`
 - [OpenAI GPT-5](https://openai.com/api/) — `openai/gpt-5`
+- [Anthropic Claude Sonnet 4.6](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-6`
 - [Google Gemini 3 Pro Preview](https://cloud.google.com/vertex-ai) — `vertex_ai/gemini-3-pro-preview`
 See the [LLM Providers documentation](https://docs.strix.ai/llm-providers/overview) for all supported providers including Vertex AI, Bedrock, Azure, and local models.

View File

@@ -9,7 +9,7 @@ if [ ! -f /app/certs/ca.p12 ]; then
 exit 1
 fi
-caido-cli --listen 127.0.0.1:${CAIDO_PORT} \
+caido-cli --listen 0.0.0.0:${CAIDO_PORT} \
 --allow-guests \
 --no-logging \
 --no-open \

View File

@@ -8,7 +8,7 @@ Configure Strix using environment variables or a config file.
 ## LLM Configuration
 <ParamField path="STRIX_LLM" type="string" required>
-Model name in LiteLLM format (e.g., `anthropic/claude-sonnet-4-6`, `openai/gpt-5`).
+Model name in LiteLLM format (e.g., `openai/gpt-5`, `anthropic/claude-sonnet-4-6`).
 </ParamField>
 <ParamField path="LLM_API_KEY" type="string">
@@ -51,7 +51,7 @@ Configure Strix using environment variables or a config file.
 ## Docker Configuration
-<ParamField path="STRIX_IMAGE" default="ghcr.io/usestrix/strix-sandbox:0.1.11" type="string">
+<ParamField path="STRIX_IMAGE" default="ghcr.io/usestrix/strix-sandbox:0.1.12" type="string">
 Docker image to use for the sandbox container.
 </ParamField>
@@ -86,7 +86,7 @@ strix --target ./app --config /path/to/config.json
 ```json
 {
 "env": {
-"STRIX_LLM": "anthropic/claude-sonnet-4-6",
+"STRIX_LLM": "openai/gpt-5",
 "LLM_API_KEY": "sk-...",
 "STRIX_REASONING_EFFORT": "high"
 }
@@ -97,7 +97,7 @@ strix --target ./app --config /path/to/config.json
 ```bash
 # Required
-export STRIX_LLM="anthropic/claude-sonnet-4-6"
+export STRIX_LLM="openai/gpt-5"
 export LLM_API_KEY="sk-..."
 # Optional: Enable web search

View File

@@ -32,7 +32,7 @@ description: "Contribute to Strix development"
 </Step>
 <Step title="Configure LLM">
 ```bash
-export STRIX_LLM="anthropic/claude-sonnet-4-6"
+export STRIX_LLM="openai/gpt-5"
 export LLM_API_KEY="your-api-key"
 ```
 </Step>

View File

@@ -78,7 +78,7 @@ Strix uses a graph of specialized agents for comprehensive security testing:
 curl -sSL https://strix.ai/install | bash
 # Configure
-export STRIX_LLM="anthropic/claude-sonnet-4-6"
+export STRIX_LLM="openai/gpt-5"
 export LLM_API_KEY="your-api-key"
 # Scan

View File

@@ -35,7 +35,7 @@ Add these secrets to your repository:
 | Secret | Description |
 |--------|-------------|
-| `STRIX_LLM` | Model name (e.g., `anthropic/claude-sonnet-4-6`) |
+| `STRIX_LLM` | Model name (e.g., `openai/gpt-5`) |
 | `LLM_API_KEY` | API key for your LLM provider |
 ## Exit Codes

View File

@@ -6,7 +6,7 @@ description: "Configure Strix with Claude models"
 ## Setup
 ```bash
-export STRIX_LLM="anthropic/claude-sonnet-4-6"
+export STRIX_LLM="openai/gpt-5"
 export LLM_API_KEY="sk-ant-..."
 ```
@@ -14,7 +14,7 @@ export LLM_API_KEY="sk-ant-..."
 | Model | Description |
 |-------|-------------|
-| `anthropic/claude-sonnet-4-6` | Best balance of intelligence and speed (recommended) |
+| `anthropic/claude-sonnet-4-6` | Best balance of intelligence and speed |
 | `anthropic/claude-opus-4-6` | Maximum capability for deep analysis |
 ## Get API Key

View File

@@ -25,7 +25,7 @@ Strix Router is currently in **beta**. It's completely optional — Strix works
 ```bash
 export LLM_API_KEY='your-strix-api-key'
-export STRIX_LLM='strix/claude-sonnet-4.6'
+export STRIX_LLM='strix/gpt-5'
 ```
 3. Run a scan:

View File

@@ -10,7 +10,7 @@ Strix uses [LiteLLM](https://docs.litellm.ai/docs/providers) for model compatibi
 The fastest way to get started. [Strix Router](/llm-providers/models) gives you access to tested models with the highest rate limits and zero data retention.
 ```bash
-export STRIX_LLM="strix/claude-sonnet-4.6"
+export STRIX_LLM="strix/gpt-5"
 export LLM_API_KEY="your-strix-api-key"
 ```
@@ -22,12 +22,12 @@ You can also use any LiteLLM-compatible provider with your own API keys:
 | Model | Provider | Configuration |
 | ----------------- | ------------- | -------------------------------- |
-| Claude Sonnet 4.6 | Anthropic | `anthropic/claude-sonnet-4-6` |
 | GPT-5 | OpenAI | `openai/gpt-5` |
+| Claude Sonnet 4.6 | Anthropic | `anthropic/claude-sonnet-4-6` |
 | Gemini 3 Pro | Google Vertex | `vertex_ai/gemini-3-pro-preview` |
 ```bash
-export STRIX_LLM="anthropic/claude-sonnet-4-6"
+export STRIX_LLM="openai/gpt-5"
 export LLM_API_KEY="your-api-key"
 ```
@@ -52,7 +52,7 @@ See the [Local Models guide](/llm-providers/local) for setup instructions and re
 GPT-5 and Codex models.
 </Card>
 <Card title="Anthropic" href="/llm-providers/anthropic">
-Claude Sonnet 4.6, Opus, and Haiku.
+Claude Opus, Sonnet, and Haiku.
 </Card>
 <Card title="OpenRouter" href="/llm-providers/openrouter">
 Access 100+ models through a single API.
@@ -76,8 +76,8 @@ See the [Local Models guide](/llm-providers/local) for setup instructions and re
 Use LiteLLM's `provider/model-name` format:
 ```
-anthropic/claude-sonnet-4-6
 openai/gpt-5
+anthropic/claude-sonnet-4-6
 vertex_ai/gemini-3-pro-preview
 bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0
 ollama/llama4

View File

@@ -30,20 +30,20 @@ Set your LLM provider:
 <Tabs>
 <Tab title="Strix Router">
 ```bash
-export STRIX_LLM="strix/claude-sonnet-4.6"
+export STRIX_LLM="strix/gpt-5"
 export LLM_API_KEY="your-strix-api-key"
 ```
 </Tab>
 <Tab title="Bring Your Own Key">
 ```bash
-export STRIX_LLM="anthropic/claude-sonnet-4-6"
+export STRIX_LLM="openai/gpt-5"
 export LLM_API_KEY="your-api-key"
 ```
 </Tab>
 </Tabs>
 <Tip>
-For best results, use `strix/claude-sonnet-4.6`, `strix/claude-opus-4.6`, or `strix/gpt-5.2`.
+For best results, use `strix/gpt-5`, `strix/claude-opus-4.6`, or `strix/gpt-5.2`.
 </Tip>
 ## Run Your First Scan

poetry.lock generated
View File

@@ -190,7 +190,7 @@ description = "Python graph (network) package"
 optional = false
 python-versions = "*"
 groups = ["dev"]
-markers = "python_version <= \"3.14\""
+markers = "python_version < \"3.15\""
 files = [
 {file = "altgraph-0.17.5-py2.py3-none-any.whl", hash = "sha256:f3a22400bce1b0c701683820ac4f3b159cd301acab067c51c653e06961600597"},
 {file = "altgraph-0.17.5.tar.gz", hash = "sha256:c87b395dd12fabde9c99573a9749d67da8d29ef9de0125c7f536699b4a9bc9e7"},
@@ -324,7 +324,7 @@ description = "LTS Port of Python audioop"
 optional = true
 python-versions = ">=3.13"
 groups = ["main"]
-markers = "extra == \"sandbox\" and python_version >= \"3.13\""
+markers = "python_version >= \"3.13\" and extra == \"sandbox\""
 files = [
 {file = "audioop_lts-0.2.2-cp313-abi3-macosx_10_13_universal2.whl", hash = "sha256:fd3d4602dc64914d462924a08c1a9816435a2155d74f325853c1f1ac3b2d9800"},
 {file = "audioop_lts-0.2.2-cp313-abi3-macosx_10_13_x86_64.whl", hash = "sha256:550c114a8df0aafe9a05442a1162dfc8fec37e9af1d625ae6060fed6e756f303"},
@@ -622,7 +622,7 @@ description = "Extensible memoizing collections and decorators"
 optional = true
 python-versions = ">=3.7"
 groups = ["main"]
-markers = "extra == \"vertex\" or extra == \"sandbox\""
+markers = "extra == \"sandbox\""
 files = [
 {file = "cachetools-5.5.2-py3-none-any.whl", hash = "sha256:d26a22bcc62eb95c3beabd9f1ee5e820d3d2704fe2967cbe350e20c8ffcd3f0a"},
 {file = "cachetools-5.5.2.tar.gz", hash = "sha256:1a661caa9175d26759571b2e19580f9d6393969e5dfca11fdb1f947a23e640d4"},
@@ -890,7 +890,7 @@ files = [
 {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
 {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
 ]
-markers = {main = "sys_platform == \"win32\" and extra == \"sandbox\" or platform_system == \"Windows\"", dev = "platform_system == \"Windows\" or sys_platform == \"win32\""}
+markers = {main = "extra == \"sandbox\" and sys_platform == \"win32\" or platform_system == \"Windows\"", dev = "platform_system == \"Windows\" or sys_platform == \"win32\""}
 [[package]]
 name = "contourpy"
@@ -1850,50 +1850,51 @@ grpcio-gcp = ["grpcio-gcp (>=0.2.2,<1.0.0)"]
 [[package]]
 name = "google-auth"
-version = "2.43.0"
+version = "2.48.0"
 description = "Google Authentication Library"
 optional = true
-python-versions = ">=3.7"
+python-versions = ">=3.8"
 groups = ["main"]
 markers = "extra == \"vertex\""
 files = [
-{file = "google_auth-2.43.0-py2.py3-none-any.whl", hash = "sha256:af628ba6fa493f75c7e9dbe9373d148ca9f4399b5ea29976519e0a3848eddd16"},
-{file = "google_auth-2.43.0.tar.gz", hash = "sha256:88228eee5fc21b62a1b5fe773ca15e67778cb07dc8363adcb4a8827b52d81483"},
+{file = "google_auth-2.48.0-py3-none-any.whl", hash = "sha256:2e2a537873d449434252a9632c28bfc268b0adb1e53f9fb62afc5333a975903f"},
+{file = "google_auth-2.48.0.tar.gz", hash = "sha256:4f7e706b0cd3208a3d940a19a822c37a476ddba5450156c3e6624a71f7c841ce"},
 ]
 [package.dependencies]
-cachetools = ">=2.0.0,<7.0"
+cryptography = ">=38.0.3"
 pyasn1-modules = ">=0.2.1"
 requests = {version = ">=2.20.0,<3.0.0", optional = true, markers = "extra == \"requests\""}
 rsa = ">=3.1.4,<5"
 [package.extras]
 aiohttp = ["aiohttp (>=3.6.2,<4.0.0)", "requests (>=2.20.0,<3.0.0)"]
-enterprise-cert = ["cryptography", "pyopenssl"]
-pyjwt = ["cryptography (<39.0.0) ; python_version < \"3.8\"", "cryptography (>=38.0.3)", "pyjwt (>=2.0)"]
-pyopenssl = ["cryptography (<39.0.0) ; python_version < \"3.8\"", "cryptography (>=38.0.3)", "pyopenssl (>=20.0.0)"]
+cryptography = ["cryptography (>=38.0.3)"]
+enterprise-cert = ["pyopenssl"]
+pyjwt = ["pyjwt (>=2.0)"]
+pyopenssl = ["pyopenssl (>=20.0.0)"]
 reauth = ["pyu2f (>=0.1.5)"]
 requests = ["requests (>=2.20.0,<3.0.0)"]
-testing = ["aiohttp (<3.10.0)", "aiohttp (>=3.6.2,<4.0.0)", "aioresponses", "cryptography (<39.0.0) ; python_version < \"3.8\"", "cryptography (<39.0.0) ; python_version < \"3.8\"", "cryptography (>=38.0.3)", "cryptography (>=38.0.3)", "flask", "freezegun", "grpcio", "mock", "oauth2client", "packaging", "pyjwt (>=2.0)", "pyopenssl (<24.3.0)", "pyopenssl (>=20.0.0)", "pytest", "pytest-asyncio", "pytest-cov", "pytest-localserver", "pyu2f (>=0.1.5)", "requests (>=2.20.0,<3.0.0)", "responses", "urllib3"]
+testing = ["aiohttp (<3.10.0)", "aiohttp (>=3.6.2,<4.0.0)", "aioresponses", "flask", "freezegun", "grpcio", "oauth2client", "packaging", "pyjwt (>=2.0)", "pyopenssl (<24.3.0)", "pyopenssl (>=20.0.0)", "pytest", "pytest-asyncio", "pytest-cov", "pytest-localserver", "pyu2f (>=0.1.5)", "requests (>=2.20.0,<3.0.0)", "responses", "urllib3"]
 urllib3 = ["packaging", "urllib3"]
 [[package]]
 name = "google-cloud-aiplatform"
-version = "1.129.0"
+version = "1.133.0"
 description = "Vertex AI API client library"
 optional = true
 python-versions = ">=3.9"
 groups = ["main"]
 markers = "extra == \"vertex\""
 files = [
-{file = "google_cloud_aiplatform-1.129.0-py2.py3-none-any.whl", hash = "sha256:b0052143a1bc05894e59fc6f910e84c504e194fadf877f84fc790b38a2267739"},
-{file = "google_cloud_aiplatform-1.129.0.tar.gz", hash = "sha256:c53b9d6c529b4de2962b34425b0116f7a382a926b26e02c2196e372f9a31d196"},
+{file = "google_cloud_aiplatform-1.133.0-py2.py3-none-any.whl", hash = "sha256:dfc81228e987ca10d1c32c7204e2131b3c8d6b7c8e0b4e23bf7c56816bc4c566"},
+{file = "google_cloud_aiplatform-1.133.0.tar.gz", hash = "sha256:3a6540711956dd178daaab3c2c05db476e46d94ac25912b8cf4f59b00b058ae0"},
 ]
 [package.dependencies]
 docstring_parser = "<1"
 google-api-core = {version = ">=1.34.1,<2.0.dev0 || >=2.8.dev0,<3.0.0", extras = ["grpc"]}
-google-auth = ">=2.14.1,<3.0.0"
+google-auth = ">=2.47.0,<3.0.0"
 google-cloud-bigquery = ">=1.15.0,<3.20.0 || >3.20.0,<4.0.0"
 google-cloud-resource-manager = ">=1.3.3,<3.0.0"
 google-cloud-storage = [
@@ -1905,7 +1906,6 @@ packaging = ">=14.3"
 proto-plus = ">=1.22.3,<2.0.0"
 protobuf = ">=3.20.2,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<7.0.0"
 pydantic = "<3"
-shapely = "<3.0.0"
 typing_extensions = "*"
 [package.extras]
@@ -1918,21 +1918,21 @@ cloud-profiler = ["tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "werkzeug (>=2
 datasets = ["pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\""]
 endpoint = ["requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)"]
 evaluation = ["jsonschema", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "pandas (>=1.0.0)", "pyyaml", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "tqdm (>=4.23.0)"]
-full = ["docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "fastapi (>=0.71.0,<=0.114.0)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-vizier (>=0.1.6)", "httpx (>=0.23.0,<=0.28.1)", "immutabledict", "jsonschema", "lit-nlp (==0.4.0) ; python_version < \"3.14\"", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "mlflow (>=1.27.0) ; python_version >= \"3.13\"", "mlflow (>=1.27.0,<=2.16.0) ; python_version < \"3.13\"", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\"", "pyarrow (>=6.0.1)", "pyyaml", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "starlette (>=0.17.1)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<4.0.0)"]
+full = ["docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "fastapi (>=0.71.0,<=0.124.4)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-vizier (>=0.1.6)", "httpx (>=0.23.0,<=0.28.1)", "immutabledict", "jsonschema", "lit-nlp (==0.4.0) ; python_version < \"3.13\"", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "mlflow (>=1.27.0) ; python_version >= \"3.13\"", "mlflow (>=1.27.0,<=2.16.0) ; python_version < \"3.13\"", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\"", "pyarrow (>=6.0.1)", "pyyaml", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "starlette (>=0.17.1)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<4.0.0)"]
 langchain = ["langchain (>=0.3,<0.4)", "langchain-core (>=0.3,<0.4)", "langchain-google-vertexai (>=2.0.22,<3)", "langgraph (>=0.2.45,<0.4)", "openinference-instrumentation-langchain (>=0.1.19,<0.2)"]
 langchain-testing = ["absl-py", "cloudpickle (>=3.0,<4.0)", "google-cloud-trace (<2)", "langchain (>=0.3,<0.4)", "langchain-core (>=0.3,<0.4)", "langchain-google-vertexai (>=2.0.22,<3)", "langgraph (>=0.2.45,<0.4)", "openinference-instrumentation-langchain (>=0.1.19,<0.2)", "opentelemetry-exporter-gcp-logging (>=1.11.0a0,<2.0.0)", "opentelemetry-exporter-gcp-trace (<2)", "opentelemetry-exporter-otlp-proto-http (<2)", "opentelemetry-sdk (<2)", "pydantic (>=2.11.1,<3)", "pytest-xdist", "typing_extensions"]
-lit = ["explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "lit-nlp (==0.4.0) ; python_version < \"3.14\"", "pandas (>=1.0.0)", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\""]
+lit = ["explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "lit-nlp (==0.4.0) ; python_version < \"3.13\"", "pandas (>=1.0.0)", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\""]
 llama-index = ["llama-index", "llama-index-llms-google-genai", "openinference-instrumentation-llama-index (>=3.0,<4.0)"]
 llama-index-testing = ["absl-py", "cloudpickle (>=3.0,<4.0)", "google-cloud-trace (<2)", "llama-index", "llama-index-llms-google-genai", "openinference-instrumentation-llama-index (>=3.0,<4.0)", "opentelemetry-exporter-gcp-logging (>=1.11.0a0,<2.0.0)", "opentelemetry-exporter-gcp-trace (<2)", "opentelemetry-exporter-otlp-proto-http (<2)", "opentelemetry-sdk (<2)", "pydantic (>=2.11.1,<3)", "pytest-xdist", "typing_extensions"]
 metadata = ["numpy (>=1.15.0)", "pandas (>=1.0.0)"]
 pipelines = ["pyyaml (>=5.3.1,<7)"]
-prediction = ["docker (>=5.0.3)", "fastapi (>=0.71.0,<=0.114.0)", "httpx (>=0.23.0,<=0.28.1)", "starlette (>=0.17.1)", "uvicorn[standard] (>=0.16.0)"]
+prediction = ["docker (>=5.0.3)", "fastapi (>=0.71.0,<=0.124.4)", "httpx (>=0.23.0,<=0.28.1)", "starlette (>=0.17.1)", "uvicorn[standard] (>=0.16.0)"]
 private-endpoints = ["requests (>=2.28.1)", "urllib3 (>=1.21.1,<1.27)"]
 ray = ["google-cloud-bigquery", "google-cloud-bigquery-storage", "immutabledict", "pandas (>=1.0.0)", "pyarrow (>=6.0.1)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\""]
 ray-testing = ["google-cloud-bigquery", "google-cloud-bigquery-storage", "immutabledict", "pandas (>=1.0.0)", "pyarrow (>=6.0.1)", "pytest-xdist", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "ray[train]", "scikit-learn (<1.6.0)", "tensorflow ; python_version < \"3.13\"", "torch (>=2.0.0,<2.1.0)", "xgboost", "xgboost_ray"]
 reasoningengine = ["cloudpickle (>=3.0,<4.0)", "google-cloud-trace (<2)", "opentelemetry-exporter-gcp-logging (>=1.11.0a0,<2.0.0)", "opentelemetry-exporter-gcp-trace (<2)", "opentelemetry-exporter-otlp-proto-http (<2)", "opentelemetry-sdk (<2)", "pydantic (>=2.11.1,<3)", "typing_extensions"]
 tensorboard = ["tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "werkzeug (>=2.0.0,<4.0.0)"]
testing = ["Pillow", "aiohttp", "bigframes ; python_version >= \"3.10\" and python_version < \"3.14\"", "docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "fastapi (>=0.71.0,<=0.114.0)", "google-api-core (>=2.11,<3.0.0)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-vizier (>=0.1.6)", "google-vizier (>=0.1.6)", "grpcio-testing", "grpcio-tools (>=1.63.0) ; python_version >= \"3.13\"", "httpx (>=0.23.0,<=0.28.1)", "immutabledict", "immutabledict", "ipython", "jsonschema", "kfp (>=2.6.0,<3.0.0) ; python_version < \"3.13\"", "lit-nlp (==0.4.0) ; python_version < \"3.14\"", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "mlflow (>=1.27.0) ; python_version >= \"3.13\"", "mlflow (>=1.27.0,<=2.16.0) ; python_version < \"3.13\"", "mock", "nltk", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "protobuf (<=5.29.4)", "pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\"", "pyarrow (>=6.0.1)", "pytest-asyncio", "pytest-cov", "pytest-xdist", "pyyaml", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)", "requests-toolbelt (<=1.0.0)", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "sentencepiece (>=0.2.0)", "starlette (>=0.17.1)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorflow (==2.14.1) ; python_version <= \"3.11\"", "tensorflow (==2.19.0) ; python_version > \"3.11\" and python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; 
python_version < \"3.13\"", "torch (>=2.0.0,<2.1.0) ; python_version <= \"3.11\"", "torch (>=2.2.0) ; python_version > \"3.11\" and python_version < \"3.13\"", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<4.0.0)", "werkzeug (>=2.0.0,<4.0.0)", "xgboost"] testing = ["Pillow", "aiohttp", "bigframes ; python_version >= \"3.10\" and python_version < \"3.14\"", "docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "fastapi (>=0.71.0,<=0.124.4)", "google-api-core (>=2.11,<3.0.0)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-vizier (>=0.1.6)", "google-vizier (>=0.1.6)", "grpcio-testing", "grpcio-tools (>=1.63.0) ; python_version >= \"3.13\"", "httpx (>=0.23.0,<=0.28.1)", "immutabledict", "immutabledict", "ipython", "jsonschema", "kfp (>=2.6.0,<3.0.0) ; python_version < \"3.13\"", "lit-nlp (==0.4.0) ; python_version < \"3.13\"", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "mlflow (>=1.27.0) ; python_version >= \"3.13\"", "mlflow (>=1.27.0,<=2.16.0) ; python_version < \"3.13\"", "mock", "nltk", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "protobuf (<=5.29.4)", "pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\"", "pyarrow (>=6.0.1)", "pytest-asyncio", "pytest-cov", "pytest-xdist", "pyyaml", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)", "requests-toolbelt (<=1.0.0)", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "sentencepiece (>=0.2.0)", "starlette (>=0.17.1)", 
"tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorflow (==2.14.1) ; python_version <= \"3.11\"", "tensorflow (==2.19.0) ; python_version > \"3.11\" and python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "torch (>=2.0.0,<2.1.0) ; python_version <= \"3.11\"", "torch (>=2.2.0) ; python_version > \"3.11\" and python_version < \"3.13\"", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<4.0.0)", "werkzeug (>=2.0.0,<4.0.0)", "xgboost"]
tokenization = ["sentencepiece (>=0.2.0)"] tokenization = ["sentencepiece (>=0.2.0)"]
vizier = ["google-vizier (>=0.1.6)"] vizier = ["google-vizier (>=0.1.6)"]
xai = ["tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\""] xai = ["tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\""]
@@ -3298,7 +3298,7 @@ description = "Mach-O header analysis and editing"
optional = false
python-versions = "*"
groups = ["dev"]
-markers = "sys_platform == \"darwin\" and python_version <= \"3.14\""
+markers = "python_version < \"3.15\" and sys_platform == \"darwin\""
files = [
{file = "macholib-1.16.4-py2.py3-none-any.whl", hash = "sha256:da1a3fa8266e30f0ce7e97c6a54eefaae8edd1e5f86f3eb8b95457cae90265ea"},
{file = "macholib-1.16.4.tar.gz", hash = "sha256:f408c93ab2e995cd2c46e34fe328b130404be143469e41bc366c807448979362"},
@@ -3882,7 +3882,7 @@ description = "Fundamental package for array computing in Python"
optional = true
python-versions = ">=3.11"
groups = ["main"]
-markers = "extra == \"sandbox\" or extra == \"vertex\""
+markers = "extra == \"sandbox\""
files = [
{file = "numpy-2.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:852ae5bed3478b92f093e30f785c98e0cb62fa0a939ed057c31716e18a7a22b9"},
{file = "numpy-2.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7a0e27186e781a69959d0230dd9909b5e26024f8da10683bd6344baea1885168"},
@@ -4347,7 +4347,7 @@ description = "Python PE parsing module"
optional = false
python-versions = ">=3.6.0"
groups = ["dev"]
-markers = "sys_platform == \"win32\" and python_version <= \"3.14\""
+markers = "python_version < \"3.15\" and sys_platform == \"win32\""
files = [
{file = "pefile-2024.8.26-py3-none-any.whl", hash = "sha256:76f8b485dcd3b1bb8166f1128d395fa3d87af26360c2358fb75b80019b957c6f"},
{file = "pefile-2024.8.26.tar.gz", hash = "sha256:3ff6c5d8b43e8c37bb6e6dd5085658d658a7a0bdcd20b6a07b1fcfc1c4e9d632"},
@@ -4360,7 +4360,7 @@ description = "Pexpect allows easy control of interactive console applications."
optional = true
python-versions = "*"
groups = ["main"]
-markers = "sys_platform != \"win32\" and sys_platform != \"emscripten\" and extra == \"sandbox\""
+markers = "extra == \"sandbox\" and sys_platform != \"win32\" and sys_platform != \"emscripten\""
files = [
{file = "pexpect-4.9.0-py2.py3-none-any.whl", hash = "sha256:7236d1e080e4936be2dc3e326cec0af72acf9212a7e1d060210e70a47e253523"},
{file = "pexpect-4.9.0.tar.gz", hash = "sha256:ee7d41123f3c9911050ea2c2dac107568dc43b2d3b0c7557a33212c398ead30f"},
@@ -4769,7 +4769,7 @@ description = "Run a subprocess in a pseudo terminal"
optional = true
python-versions = "*"
groups = ["main"]
-markers = "sys_platform != \"win32\" and sys_platform != \"emscripten\" and extra == \"sandbox\""
+markers = "extra == \"sandbox\" and sys_platform != \"win32\" and sys_platform != \"emscripten\""
files = [
{file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"},
{file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"},
@@ -5085,7 +5085,7 @@ description = "PyInstaller bundles a Python application and all its dependencies
optional = false
python-versions = "<3.15,>=3.8"
groups = ["dev"]
-markers = "python_version <= \"3.14\""
+markers = "python_version < \"3.15\""
files = [
{file = "pyinstaller-6.17.0-py3-none-macosx_10_13_universal2.whl", hash = "sha256:4e446b8030c6e5a2f712e3f82011ecf6c7ead86008357b0d23a0ec4bcde31dac"},
{file = "pyinstaller-6.17.0-py3-none-manylinux2014_aarch64.whl", hash = "sha256:aa9fd87aaa28239c6f0d0210114029bd03f8cac316a90bab071a5092d7c85ad7"},
@@ -5121,7 +5121,7 @@ description = "Community maintained hooks for PyInstaller"
optional = false
python-versions = ">=3.8"
groups = ["dev"]
-markers = "python_version <= \"3.14\""
+markers = "python_version < \"3.15\""
files = [
{file = "pyinstaller_hooks_contrib-2025.10-py3-none-any.whl", hash = "sha256:aa7a378518772846221f63a84d6306d9827299323243db890851474dfd1231a9"},
{file = "pyinstaller_hooks_contrib-2025.10.tar.gz", hash = "sha256:a1a737e5c0dccf1cf6f19a25e2efd109b9fec9ddd625f97f553dac16ee884881"},
@@ -5239,9 +5239,10 @@ diagrams = ["jinja2", "railroad-diagrams"]
name = "pypdf"
version = "6.7.1"
description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files"
-optional = false
+optional = true
python-versions = ">=3.9"
groups = ["main"]
+markers = "extra == \"sandbox\""
files = [
{file = "pypdf-6.7.1-py3-none-any.whl", hash = "sha256:a02ccbb06463f7c334ce1612e91b3e68a8e827f3cee100b9941771e6066b094e"},
{file = "pypdf-6.7.1.tar.gz", hash = "sha256:6b7a63be5563a0a35d54c6d6b550d75c00b8ccf36384be96365355e296e6b3b0"},
@@ -5502,7 +5503,7 @@ description = "A (partial) reimplementation of pywin32 using ctypes/cffi"
optional = false
python-versions = ">=3.6"
groups = ["dev"]
-markers = "sys_platform == \"win32\" and python_version <= \"3.14\""
+markers = "python_version < \"3.15\" and sys_platform == \"win32\""
files = [
{file = "pywin32-ctypes-0.2.3.tar.gz", hash = "sha256:d162dc04946d704503b2edc4d55f3dba5c1d539ead017afa00142c38b9885755"},
{file = "pywin32_ctypes-0.2.3-py3-none-any.whl", hash = "sha256:8a1513379d709975552d202d942d9837758905c8d01eb82b8bcc30918929e7b8"},
@@ -6149,81 +6150,6 @@ enabler = ["pytest-enabler (>=2.2)"]
test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "ini2toml[lite] (>=0.14)", "jaraco.develop (>=7.21) ; python_version >= \"3.9\" and sys_platform != \"cygwin\"", "jaraco.envs (>=2.2)", "jaraco.path (>=3.7.2)", "jaraco.test (>=5.5)", "packaging (>=24.2)", "pip (>=19.1)", "pyproject-hooks (!=1.1)", "pytest (>=6,!=8.1.*)", "pytest-home (>=0.5)", "pytest-perf ; sys_platform != \"cygwin\"", "pytest-subprocess", "pytest-timeout", "pytest-xdist (>=3)", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel (>=0.44.0)"]
type = ["importlib_metadata (>=7.0.2) ; python_version < \"3.10\"", "jaraco.develop (>=7.21) ; sys_platform != \"cygwin\"", "mypy (==1.14.*)", "pytest-mypy"]
[[package]]
name = "shapely"
version = "2.1.2"
description = "Manipulation and analysis of geometric objects"
optional = true
python-versions = ">=3.10"
groups = ["main"]
markers = "extra == \"vertex\""
files = [
{file = "shapely-2.1.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7ae48c236c0324b4e139bea88a306a04ca630f49be66741b340729d380d8f52f"},
{file = "shapely-2.1.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:eba6710407f1daa8e7602c347dfc94adc02205ec27ed956346190d66579eb9ea"},
{file = "shapely-2.1.2-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ef4a456cc8b7b3d50ccec29642aa4aeda959e9da2fe9540a92754770d5f0cf1f"},
{file = "shapely-2.1.2-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e38a190442aacc67ff9f75ce60aec04893041f16f97d242209106d502486a142"},
{file = "shapely-2.1.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:40d784101f5d06a1fd30b55fc11ea58a61be23f930d934d86f19a180909908a4"},
{file = "shapely-2.1.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:f6f6cd5819c50d9bcf921882784586aab34a4bd53e7553e175dece6db513a6f0"},
{file = "shapely-2.1.2-cp310-cp310-win32.whl", hash = "sha256:fe9627c39c59e553c90f5bc3128252cb85dc3b3be8189710666d2f8bc3a5503e"},
{file = "shapely-2.1.2-cp310-cp310-win_amd64.whl", hash = "sha256:1d0bfb4b8f661b3b4ec3565fa36c340bfb1cda82087199711f86a88647d26b2f"},
{file = "shapely-2.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:91121757b0a36c9aac3427a651a7e6567110a4a67c97edf04f8d55d4765f6618"},
{file = "shapely-2.1.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:16a9c722ba774cf50b5d4541242b4cce05aafd44a015290c82ba8a16931ff63d"},
{file = "shapely-2.1.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cc4f7397459b12c0b196c9efe1f9d7e92463cbba142632b4cc6d8bbbbd3e2b09"},
{file = "shapely-2.1.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:136ab87b17e733e22f0961504d05e77e7be8c9b5a8184f685b4a91a84efe3c26"},
{file = "shapely-2.1.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:16c5d0fc45d3aa0a69074979f4f1928ca2734fb2e0dde8af9611e134e46774e7"},
{file = "shapely-2.1.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6ddc759f72b5b2b0f54a7e7cde44acef680a55019eb52ac63a7af2cf17cb9cd2"},
{file = "shapely-2.1.2-cp311-cp311-win32.whl", hash = "sha256:2fa78b49485391224755a856ed3b3bd91c8455f6121fee0db0e71cefb07d0ef6"},
{file = "shapely-2.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:c64d5c97b2f47e3cd9b712eaced3b061f2b71234b3fc263e0fcf7d889c6559dc"},
{file = "shapely-2.1.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:fe2533caae6a91a543dec62e8360fe86ffcdc42a7c55f9dfd0128a977a896b94"},
{file = "shapely-2.1.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ba4d1333cc0bc94381d6d4308d2e4e008e0bd128bdcff5573199742ee3634359"},
{file = "shapely-2.1.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0bd308103340030feef6c111d3eb98d50dc13feea33affc8a6f9fa549e9458a3"},
{file = "shapely-2.1.2-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:1e7d4d7ad262a48bb44277ca12c7c78cb1b0f56b32c10734ec9a1d30c0b0c54b"},
{file = "shapely-2.1.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e9eddfe513096a71896441a7c37db72da0687b34752c4e193577a145c71736fc"},
{file = "shapely-2.1.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:980c777c612514c0cf99bc8a9de6d286f5e186dcaf9091252fcd444e5638193d"},
{file = "shapely-2.1.2-cp312-cp312-win32.whl", hash = "sha256:9111274b88e4d7b54a95218e243282709b330ef52b7b86bc6aaf4f805306f454"},
{file = "shapely-2.1.2-cp312-cp312-win_amd64.whl", hash = "sha256:743044b4cfb34f9a67205cee9279feaf60ba7d02e69febc2afc609047cb49179"},
{file = "shapely-2.1.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:b510dda1a3672d6879beb319bc7c5fd302c6c354584690973c838f46ec3e0fa8"},
{file = "shapely-2.1.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:8cff473e81017594d20ec55d86b54bc635544897e13a7cfc12e36909c5309a2a"},
{file = "shapely-2.1.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fe7b77dc63d707c09726b7908f575fc04ff1d1ad0f3fb92aec212396bc6cfe5e"},
{file = "shapely-2.1.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:7ed1a5bbfb386ee8332713bf7508bc24e32d24b74fc9a7b9f8529a55db9f4ee6"},
{file = "shapely-2.1.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a84e0582858d841d54355246ddfcbd1fce3179f185da7470f41ce39d001ee1af"},
{file = "shapely-2.1.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:dc3487447a43d42adcdf52d7ac73804f2312cbfa5d433a7d2c506dcab0033dfd"},
{file = "shapely-2.1.2-cp313-cp313-win32.whl", hash = "sha256:9c3a3c648aedc9f99c09263b39f2d8252f199cb3ac154fadc173283d7d111350"},
{file = "shapely-2.1.2-cp313-cp313-win_amd64.whl", hash = "sha256:ca2591bff6645c216695bdf1614fca9c82ea1144d4a7591a466fef64f28f0715"},
{file = "shapely-2.1.2-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:2d93d23bdd2ed9dc157b46bc2f19b7da143ca8714464249bef6771c679d5ff40"},
{file = "shapely-2.1.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:01d0d304b25634d60bd7cf291828119ab55a3bab87dc4af1e44b07fb225f188b"},
{file = "shapely-2.1.2-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:8d8382dd120d64b03698b7298b89611a6ea6f55ada9d39942838b79c9bc89801"},
{file = "shapely-2.1.2-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:19efa3611eef966e776183e338b2d7ea43569ae99ab34f8d17c2c054d3205cc0"},
{file = "shapely-2.1.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:346ec0c1a0fcd32f57f00e4134d1200e14bf3f5ae12af87ba83ca275c502498c"},
{file = "shapely-2.1.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:6305993a35989391bd3476ee538a5c9a845861462327efe00dd11a5c8c709a99"},
{file = "shapely-2.1.2-cp313-cp313t-win32.whl", hash = "sha256:c8876673449f3401f278c86eb33224c5764582f72b653a415d0e6672fde887bf"},
{file = "shapely-2.1.2-cp313-cp313t-win_amd64.whl", hash = "sha256:4a44bc62a10d84c11a7a3d7c1c4fe857f7477c3506e24c9062da0db0ae0c449c"},
{file = "shapely-2.1.2-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:9a522f460d28e2bf4e12396240a5fc1518788b2fcd73535166d748399ef0c223"},
{file = "shapely-2.1.2-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1ff629e00818033b8d71139565527ced7d776c269a49bd78c9df84e8f852190c"},
{file = "shapely-2.1.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:f67b34271dedc3c653eba4e3d7111aa421d5be9b4c4c7d38d30907f796cb30df"},
{file = "shapely-2.1.2-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:21952dc00df38a2c28375659b07a3979d22641aeb104751e769c3ee825aadecf"},
{file = "shapely-2.1.2-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:1f2f33f486777456586948e333a56ae21f35ae273be99255a191f5c1fa302eb4"},
{file = "shapely-2.1.2-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:cf831a13e0d5a7eb519e96f58ec26e049b1fad411fc6fc23b162a7ce04d9cffc"},
{file = "shapely-2.1.2-cp314-cp314-win32.whl", hash = "sha256:61edcd8d0d17dd99075d320a1dd39c0cb9616f7572f10ef91b4b5b00c4aeb566"},
{file = "shapely-2.1.2-cp314-cp314-win_amd64.whl", hash = "sha256:a444e7afccdb0999e203b976adb37ea633725333e5b119ad40b1ca291ecf311c"},
{file = "shapely-2.1.2-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:5ebe3f84c6112ad3d4632b1fd2290665aa75d4cef5f6c5d77c4c95b324527c6a"},
{file = "shapely-2.1.2-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:5860eb9f00a1d49ebb14e881f5caf6c2cf472c7fd38bd7f253bbd34f934eb076"},
{file = "shapely-2.1.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b705c99c76695702656327b819c9660768ec33f5ce01fa32b2af62b56ba400a1"},
{file = "shapely-2.1.2-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:a1fd0ea855b2cf7c9cddaf25543e914dd75af9de08785f20ca3085f2c9ca60b0"},
{file = "shapely-2.1.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:df90e2db118c3671a0754f38e36802db75fe0920d211a27481daf50a711fdf26"},
{file = "shapely-2.1.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:361b6d45030b4ac64ddd0a26046906c8202eb60d0f9f53085f5179f1d23021a0"},
{file = "shapely-2.1.2-cp314-cp314t-win32.whl", hash = "sha256:b54df60f1fbdecc8ebc2c5b11870461a6417b3d617f555e5033f1505d36e5735"},
{file = "shapely-2.1.2-cp314-cp314t-win_amd64.whl", hash = "sha256:0036ac886e0923417932c2e6369b6c52e38e0ff5d9120b90eef5cd9a5fc5cae9"},
{file = "shapely-2.1.2.tar.gz", hash = "sha256:2ed4ecb28320a433db18a5bf029986aa8afcfd740745e78847e330d5d94922a9"},
]
[package.dependencies]
numpy = ">=1.21"
[package.extras]
docs = ["matplotlib", "numpydoc (==1.1.*)", "sphinx", "sphinx-book-theme", "sphinx-remove-toctrees"]
test = ["pytest", "pytest-cov", "scipy-doctest"]
[[package]]
name = "six"
version = "1.17.0"
@@ -6532,7 +6458,7 @@ description = "Standard library aifc redistribution. \"dead battery\"."
optional = true
python-versions = "*"
groups = ["main"]
-markers = "extra == \"sandbox\" and python_version >= \"3.13\""
+markers = "python_version >= \"3.13\" and extra == \"sandbox\""
files = [
{file = "standard_aifc-3.13.0-py3-none-any.whl", hash = "sha256:f7ae09cc57de1224a0dd8e3eb8f73830be7c3d0bc485de4c1f82b4a7f645ac66"},
{file = "standard_aifc-3.13.0.tar.gz", hash = "sha256:64e249c7cb4b3daf2fdba4e95721f811bde8bdfc43ad9f936589b7bb2fae2e43"},
@@ -6549,7 +6475,7 @@ description = "Standard library chunk redistribution. \"dead battery\"."
optional = true
python-versions = "*"
groups = ["main"]
-markers = "extra == \"sandbox\" and python_version >= \"3.13\""
+markers = "python_version >= \"3.13\" and extra == \"sandbox\""
files = [
{file = "standard_chunk-3.13.0-py3-none-any.whl", hash = "sha256:17880a26c285189c644bd5bd8f8ed2bdb795d216e3293e6dbe55bbd848e2982c"},
{file = "standard_chunk-3.13.0.tar.gz", hash = "sha256:4ac345d37d7e686d2755e01836b8d98eda0d1a3ee90375e597ae43aaf064d654"},

View File

@@ -1,6 +1,6 @@
[tool.poetry]
name = "strix-agent"
-version = "0.8.0"
+version = "0.8.2"
description = "Open-source AI Hackers for your apps"
authors = ["Strix <hi@usestrix.com>"]
readme = "README.md"

View File

@@ -4,7 +4,7 @@ set -euo pipefail
APP=strix
REPO="usestrix/strix"
-STRIX_IMAGE="ghcr.io/usestrix/strix-sandbox:0.1.11"
+STRIX_IMAGE="ghcr.io/usestrix/strix-sandbox:0.1.12"
MUTED='\033[0;2m'
RED='\033[0;31m'
@@ -340,7 +340,7 @@ echo -e " ${MUTED}https://models.strix.ai${NC}"
echo ""
echo -e " ${CYAN}2.${NC} Set your environment:"
echo -e " ${MUTED}export LLM_API_KEY='your-api-key'${NC}"
-echo -e " ${MUTED}export STRIX_LLM='strix/claude-sonnet-4.6'${NC}"
+echo -e " ${MUTED}export STRIX_LLM='strix/gpt-5'${NC}"
echo ""
echo -e " ${CYAN}3.${NC} Run a penetration test:"
echo -e " ${MUTED}strix --target https://example.com${NC}"

View File

@@ -314,13 +314,37 @@ CRITICAL RULES:
4. Use ONLY the exact format shown above. NEVER use JSON/YAML/INI or any other syntax for tools or parameters.
5. When sending ANY multi-line content in tool parameters, use real newlines (actual line breaks). Do NOT emit literal "\n" sequences. Literal "\n" instead of real line breaks will cause tools to fail.
6. Tool names must match exactly the tool "name" defined (no module prefixes, dots, or variants).
- Correct: <function=think> ... </function>
- Incorrect: <thinking_tools.think> ... </function>
- Incorrect: <think> ... </think>
- Incorrect: {"think": {...}}
7. Parameters must use <parameter=param_name>value</parameter> exactly. Do NOT pass parameters as JSON or key:value lines. Do NOT add quotes/braces around values.
8. Do NOT wrap tool calls in markdown/code fences or add any text before or after the tool block.
CORRECT format — use this EXACTLY:
<function=tool_name>
<parameter=param_name>value</parameter>
</function>
WRONG formats — NEVER use these:
- <invoke name="tool_name"><parameter name="param_name">value</parameter></invoke>
- <function_calls><invoke name="tool_name">...</invoke></function_calls>
- <tool_call><tool_name>...</tool_name></tool_call>
- {"tool_name": {"param_name": "value"}}
- ```<function=tool_name>...</function>```
- <function=tool_name>value_without_parameter_tags</function>
EVERY argument MUST be wrapped in <parameter=name>...</parameter> tags. NEVER put values directly in the function body without parameter tags. This WILL cause the tool call to fail.
Do NOT emit any extra XML tags in your output. In particular:
- NO <thinking>...</thinking> or <thought>...</thought> blocks
- NO <scratchpad>...</scratchpad> or <reasoning>...</reasoning> blocks
- NO <answer>...</answer> or <response>...</response> wrappers
If you need to reason, use the think tool. Your raw output must contain ONLY the tool call — no surrounding XML tags.
Notice: use <function=X> NOT <invoke name="X">, use <parameter=X> NOT <parameter name="X">, use </function> NOT </invoke>.
Example (terminal tool):
<function=terminal_execute>
<parameter=command>nmap -sV -p 1-1000 target.com</parameter>
</function>
Example (agent creation tool):
<function=create_agent>
<parameter=task>Perform targeted XSS testing on the search endpoint</parameter>

View File

@@ -333,6 +333,14 @@ class BaseAgent(metaclass=AgentMeta):
if "agent_id" in sandbox_info:
self.state.sandbox_info["agent_id"] = sandbox_info["agent_id"]
caido_port = sandbox_info.get("caido_port")
if caido_port:
from strix.telemetry.tracer import get_global_tracer
tracer = get_global_tracer()
if tracer:
tracer.caido_url = f"localhost:{caido_port}"
except Exception as e:
from strix.telemetry import posthog

View File

@@ -40,7 +40,7 @@ class Config:
strix_disable_browser = "false"
# Runtime Configuration
-strix_image = "ghcr.io/usestrix/strix-sandbox:0.1.11"
+strix_image = "ghcr.io/usestrix/strix-sandbox:0.1.12"
strix_runtime_backend = "docker"
strix_sandbox_execution_timeout = "120"
strix_sandbox_connect_timeout = "10"
@@ -187,6 +187,9 @@ def resolve_llm_config() -> tuple[str | None, str | None, str | None]:
Returns:
tuple: (model_name, api_key, api_base)
- model_name: Original model name (strix/ prefix preserved for display)
- api_key: LLM API key
- api_base: API base URL (auto-set to STRIX_API_BASE for strix/ models)
"""
model = Config.get("strix_llm")
if not model:
@@ -195,10 +198,8 @@ def resolve_llm_config() -> tuple[str | None, str | None, str | None]:
api_key = Config.get("llm_api_key")
if model.startswith("strix/"):
model_name = "openai/" + model[6:]
api_base: str | None = STRIX_API_BASE
else:
model_name = model
api_base = ( api_base = (
Config.get("llm_api_base") Config.get("llm_api_base")
or Config.get("openai_api_base") or Config.get("openai_api_base")
@@ -206,4 +207,4 @@ def resolve_llm_config() -> tuple[str | None, str | None, str | None]:
or Config.get("ollama_api_base") or Config.get("ollama_api_base")
) )
return model_name, api_key, api_base return model, api_key, api_base

View File

@@ -77,12 +77,21 @@ Toast.-information .toast--title {
     margin-bottom: 0;
 }

-#stats_display {
+#stats_scroll {
     height: auto;
     max-height: 15;
     background: transparent;
     padding: 0;
     margin: 0;
+    border: round #333333;
+    scrollbar-size: 0 0;
+}
+
+#stats_display {
+    height: auto;
+    background: transparent;
+    padding: 0 1;
+    margin: 0;
 }

 #vulnerabilities_panel {
#vulnerabilities_panel { #vulnerabilities_panel {

View File

@@ -18,6 +18,8 @@ from rich.panel import Panel
 from rich.text import Text

 from strix.config import Config, apply_saved_config, save_current_config
+from strix.config.config import resolve_llm_config
+from strix.llm.utils import resolve_strix_model

 apply_saved_config()

@@ -99,7 +101,7 @@ def validate_environment() -> None:  # noqa: PLR0912, PLR0915
         error_text.append("", style="white")
         error_text.append("STRIX_LLM", style="bold cyan")
         error_text.append(
-            " - Model name to use with litellm (e.g., 'anthropic/claude-sonnet-4-6')\n",
+            " - Model name to use with litellm (e.g., 'openai/gpt-5')\n",
             style="white",
         )

@@ -139,9 +141,9 @@ def validate_environment() -> None:  # noqa: PLR0912, PLR0915
         error_text.append("\nExample setup:\n", style="white")
         if uses_strix_models:
-            error_text.append("export STRIX_LLM='strix/claude-sonnet-4.6'\n", style="dim white")
+            error_text.append("export STRIX_LLM='strix/gpt-5'\n", style="dim white")
         else:
-            error_text.append("export STRIX_LLM='anthropic/claude-sonnet-4-6'\n", style="dim white")
+            error_text.append("export STRIX_LLM='openai/gpt-5'\n", style="dim white")

         if missing_optional_vars:
             for var in missing_optional_vars:

@@ -204,12 +206,12 @@ def check_docker_installed() -> None:


 async def warm_up_llm() -> None:
-    from strix.config.config import resolve_llm_config
-
     console = Console()
     try:
         model_name, api_key, api_base = resolve_llm_config()
+        litellm_model, _ = resolve_strix_model(model_name)
+        litellm_model = litellm_model or model_name

         test_messages = [
             {"role": "system", "content": "You are a helpful assistant."},

@@ -219,7 +221,7 @@ async def warm_up_llm() -> None:
         llm_timeout = int(Config.get("llm_timeout") or "300")
         completion_kwargs: dict[str, Any] = {
-            "model": model_name,
+            "model": litellm_model,
             "messages": test_messages,
             "timeout": llm_timeout,
         }

@@ -460,7 +462,7 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) -> None:
     console.print("\n")
     console.print(panel)
     console.print()
-    console.print("[#60a5fa]strix.ai[/] [dim]·[/] [#60a5fa]discord.gg/strix-ai[/]")
+    console.print("[#60a5fa]models.strix.ai[/] [dim]·[/] [#60a5fa]discord.gg/strix-ai[/]")
     console.print()

View File

@@ -3,8 +3,11 @@ import re

 from dataclasses import dataclass
 from typing import Literal

+from strix.llm.utils import normalize_tool_format
+
 _FUNCTION_TAG_PREFIX = "<function="
+_INVOKE_TAG_PREFIX = "<invoke "
 _FUNC_PATTERN = re.compile(r"<function=([^>]+)>")
 _FUNC_END_PATTERN = re.compile(r"</function>")

@@ -21,9 +24,8 @@ def _get_safe_content(content: str) -> tuple[str, str]:
         return content, ""

     suffix = content[last_lt:]
-    target = _FUNCTION_TAG_PREFIX  # "<function="
-    if target.startswith(suffix):
+    if _FUNCTION_TAG_PREFIX.startswith(suffix) or _INVOKE_TAG_PREFIX.startswith(suffix):
         return content[:last_lt], suffix
     return content, ""

@@ -42,6 +44,8 @@ def parse_streaming_content(content: str) -> list[StreamSegment]:
     if not content:
         return []

+    content = normalize_tool_format(content)
+
     segments: list[StreamSegment] = []
     func_matches = list(_FUNC_PATTERN.finditer(content))

View File

@@ -687,7 +687,7 @@ class StrixTUIApp(App):  # type: ignore[misc]
     CSS_PATH = "assets/tui_styles.tcss"
     ALLOW_SELECT = True
-    SIDEBAR_MIN_WIDTH = 140
+    SIDEBAR_MIN_WIDTH = 120

     selected_agent_id: reactive[str | None] = reactive(default=None)
     show_splash: reactive[bool] = reactive(default=True)

@@ -829,11 +829,11 @@ class StrixTUIApp(App):  # type: ignore[misc]
         agents_tree.guide_style = "dashed"

         stats_display = Static("", id="stats_display")
-        stats_display.ALLOW_SELECT = False
+        stats_scroll = VerticalScroll(stats_display, id="stats_scroll")
         vulnerabilities_panel = VulnerabilitiesPanel(id="vulnerabilities_panel")
-        sidebar = Vertical(agents_tree, vulnerabilities_panel, stats_display, id="sidebar")
+        sidebar = Vertical(agents_tree, vulnerabilities_panel, stats_scroll, id="sidebar")

         content_container.mount(chat_area_container)
         content_container.mount(sidebar)

@@ -1272,6 +1272,9 @@ class StrixTUIApp(App):  # type: ignore[misc]
         if not self._is_widget_safe(stats_display):
             return

+        if self.screen.selections:
+            return
+
         stats_content = Text()
         stats_text = build_tui_stats_text(self.tracer, self.agent_config)

@@ -1281,15 +1284,7 @@ class StrixTUIApp(App):  # type: ignore[misc]
         version = get_package_version()
         stats_content.append(f"\nv{version}", style="white")

-        from rich.panel import Panel
-
-        stats_panel = Panel(
-            stats_content,
-            border_style="#333333",
-            padding=(0, 1),
-        )
-        self._safe_widget_operation(stats_display.update, stats_panel)
+        self._safe_widget_operation(stats_display.update, stats_content)

     def _update_vulnerabilities_panel(self) -> None:
         """Update the vulnerabilities panel with current vulnerability data."""

View File

@@ -390,6 +390,12 @@ def build_tui_stats_text(tracer: Any, agent_config: dict[str, Any] | None = None) -> Text:
         stats_text.append(" · ", style="white")
         stats_text.append(f"${total_stats['cost']:.2f}", style="white")

+    caido_url = getattr(tracer, "caido_url", None)
+    if caido_url:
+        stats_text.append("\n")
+        stats_text.append("Caido: ", style="bold white")
+        stats_text.append(caido_url, style="white")
+
     return stats_text

View File

@@ -1,5 +1,6 @@
 from strix.config import Config
 from strix.config.config import resolve_llm_config
+from strix.llm.utils import resolve_strix_model


 class LLMConfig:

@@ -17,6 +18,10 @@ class LLMConfig:
         if not self.model_name:
             raise ValueError("STRIX_LLM environment variable must be set and not empty")

+        api_model, canonical = resolve_strix_model(self.model_name)
+        self.litellm_model: str = api_model or self.model_name
+        self.canonical_model: str = canonical or self.model_name
+
         self.enable_prompt_caching = enable_prompt_caching
         self.skills = skills or []

View File

@@ -6,6 +6,7 @@ from typing import Any
 import litellm

 from strix.config.config import resolve_llm_config
+from strix.llm.utils import resolve_strix_model

 logger = logging.getLogger(__name__)

@@ -156,6 +157,8 @@ def check_duplicate(
     comparison_data = {"candidate": candidate_cleaned, "existing_reports": existing_cleaned}

     model_name, api_key, api_base = resolve_llm_config()
+    litellm_model, _ = resolve_strix_model(model_name)
+    litellm_model = litellm_model or model_name

     messages = [
         {"role": "system", "content": DEDUPE_SYSTEM_PROMPT},

@@ -170,7 +173,7 @@ def check_duplicate(
     ]

     completion_kwargs: dict[str, Any] = {
-        "model": model_name,
+        "model": litellm_model,
         "messages": messages,
         "timeout": 120,
     }

View File

@@ -14,6 +14,7 @@ from strix.llm.memory_compressor import MemoryCompressor
 from strix.llm.utils import (
     _truncate_to_first_function,
     fix_incomplete_tool_call,
+    normalize_tool_format,
     parse_tool_invocations,
 )
 from strix.skills import load_skills

@@ -63,7 +64,7 @@ class LLM:
         self.agent_name = agent_name
         self.agent_id: str | None = None
         self._total_stats = RequestStats()
-        self.memory_compressor = MemoryCompressor(model_name=config.model_name)
+        self.memory_compressor = MemoryCompressor(model_name=config.litellm_model)
         self.system_prompt = self._load_system_prompt(agent_name)

         reasoning = Config.get("strix_reasoning_effort")

@@ -143,10 +144,10 @@ class LLM:
             delta = self._get_chunk_content(chunk)
             if delta:
                 accumulated += delta
-                if "</function>" in accumulated:
-                    accumulated = accumulated[
-                        : accumulated.find("</function>") + len("</function>")
-                    ]
+                if "</function>" in accumulated or "</invoke>" in accumulated:
+                    end_tag = "</function>" if "</function>" in accumulated else "</invoke>"
+                    pos = accumulated.find(end_tag)
+                    accumulated = accumulated[: pos + len(end_tag)]
                     yield LLMResponse(content=accumulated)
                     done_streaming = 1
                     continue
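The widened end-tag check can be exercised in isolation. A minimal sketch of the truncation logic; the helper name is illustrative, not from the codebase:

```python
# Illustrative standalone version of the stream-truncation logic:
# cut the accumulated stream at the first supported closing tool tag.
def truncate_at_tool_end(accumulated: str) -> str:
    if "</function>" in accumulated or "</invoke>" in accumulated:
        end_tag = "</function>" if "</function>" in accumulated else "</invoke>"
        pos = accumulated.find(end_tag)
        accumulated = accumulated[: pos + len(end_tag)]
    return accumulated


print(truncate_at_tool_end("<invoke name='scan'>x</invoke> trailing tokens"))
# <invoke name='scan'>x</invoke>
```

Note the check prefers `</function>` when both tags are present, exactly as in the diff.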
@@ -155,6 +156,7 @@ class LLM:
         if chunks:
             self._update_usage_stats(stream_chunk_builder(chunks))

+        accumulated = normalize_tool_format(accumulated)
         accumulated = fix_incomplete_tool_call(_truncate_to_first_function(accumulated))
         yield LLMResponse(
             content=accumulated,

@@ -184,6 +186,9 @@ class LLM:
                 conversation_history.extend(compressed)
                 messages.extend(compressed)

+        if messages[-1].get("role") == "assistant":
+            messages.append({"role": "user", "content": "<meta>Continue the task.</meta>"})
+
         if self._is_anthropic() and self.config.enable_prompt_caching:
             messages = self._add_cache_control(messages)

@@ -194,7 +199,7 @@ class LLM:
             messages = self._strip_images(messages)

         args: dict[str, Any] = {
-            "model": self.config.model_name,
+            "model": self.config.litellm_model,
             "messages": messages,
             "timeout": self.config.timeout,
             "stream_options": {"include_usage": True},

@@ -229,8 +234,8 @@ class LLM:
     def _update_usage_stats(self, response: Any) -> None:
         try:
             if hasattr(response, "usage") and response.usage:
-                input_tokens = getattr(response.usage, "prompt_tokens", 0)
-                output_tokens = getattr(response.usage, "completion_tokens", 0)
+                input_tokens = getattr(response.usage, "prompt_tokens", 0) or 0
+                output_tokens = getattr(response.usage, "completion_tokens", 0) or 0
                 cached_tokens = 0
                 if hasattr(response.usage, "prompt_tokens_details"):

@@ -238,14 +243,11 @@ class LLM:
                     if hasattr(prompt_details, "cached_tokens"):
                         cached_tokens = prompt_details.cached_tokens or 0
+                cost = self._extract_cost(response)
             else:
                 input_tokens = 0
                 output_tokens = 0
                 cached_tokens = 0
-
-            try:
-                cost = completion_cost(response) or 0.0
-            except Exception:  # noqa: BLE001
                 cost = 0.0

             self._total_stats.input_tokens += input_tokens

@@ -256,6 +258,18 @@ class LLM:
         except Exception:  # noqa: BLE001, S110  # nosec B110
             pass

+    def _extract_cost(self, response: Any) -> float:
+        if hasattr(response, "usage") and response.usage:
+            direct_cost = getattr(response.usage, "cost", None)
+            if direct_cost is not None:
+                return float(direct_cost)
+        try:
+            if hasattr(response, "_hidden_params"):
+                response._hidden_params.pop("custom_llm_provider", None)
+            return completion_cost(response, model=self.config.canonical_model) or 0.0
+        except Exception:  # noqa: BLE001
+            return 0.0
+
     def _should_retry(self, e: Exception) -> bool:
         code = getattr(e, "status_code", None) or getattr(
             getattr(e, "response", None), "status_code", None

@@ -275,13 +289,13 @@ class LLM:
     def _supports_vision(self) -> bool:
         try:
-            return bool(supports_vision(model=self.config.model_name))
+            return bool(supports_vision(model=self.config.canonical_model))
         except Exception:  # noqa: BLE001
             return False

     def _supports_reasoning(self) -> bool:
         try:
-            return bool(supports_reasoning(model=self.config.model_name))
+            return bool(supports_reasoning(model=self.config.canonical_model))
         except Exception:  # noqa: BLE001
             return False

@@ -302,7 +316,7 @@ class LLM:
         return result

     def _add_cache_control(self, messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
-        if not messages or not supports_prompt_caching(self.config.model_name):
+        if not messages or not supports_prompt_caching(self.config.canonical_model):
             return messages

         result = list(messages)
View File

@@ -91,7 +91,7 @@ def _summarize_messages(
     if not messages:
         empty_summary = "<context_summary message_count='0'>{text}</context_summary>"
         return {
-            "role": "assistant",
+            "role": "user",
             "content": empty_summary.format(text="No messages to summarize"),
         }

@@ -123,7 +123,7 @@ def _summarize_messages(
             return messages[0]

         summary_msg = "<context_summary message_count='{count}'>{text}</context_summary>"
         return {
-            "role": "assistant",
+            "role": "user",
             "content": summary_msg.format(count=len(messages), text=summary),
         }
     except Exception:

@@ -158,7 +158,7 @@ class MemoryCompressor:
     ):
         self.max_images = max_images
         self.model_name = model_name or Config.get("strix_llm")
-        self.timeout = timeout or int(Config.get("strix_memory_compressor_timeout") or "30")
+        self.timeout = timeout or int(Config.get("strix_memory_compressor_timeout") or "120")
         if not self.model_name:
             raise ValueError("STRIX_LLM environment variable must be set and not empty")

View File

@@ -3,11 +3,75 @@ import re

 from typing import Any

+_INVOKE_OPEN = re.compile(r'<invoke\s+name=["\']([^"\']+)["\']>')
+_PARAM_NAME_ATTR = re.compile(r'<parameter\s+name=["\']([^"\']+)["\']>')
+_FUNCTION_CALLS_TAG = re.compile(r"</?function_calls>")
+_STRIP_TAG_QUOTES = re.compile(r"<(function|parameter)\s*=\s*([^>]*?)>")
+
+
+def normalize_tool_format(content: str) -> str:
+    """Convert alternative tool-call XML formats to the expected one.
+
+    Handles:
+        <function_calls>...</function_calls> → stripped
+        <invoke name="X"> → <function=X>
+        <parameter name="X"> → <parameter=X>
+        </invoke> → </function>
+        <function="X"> → <function=X>
+        <parameter="X"> → <parameter=X>
+    """
+    if "<invoke" in content or "<function_calls" in content:
+        content = _FUNCTION_CALLS_TAG.sub("", content)
+        content = _INVOKE_OPEN.sub(r"<function=\1>", content)
+        content = _PARAM_NAME_ATTR.sub(r"<parameter=\1>", content)
+        content = content.replace("</invoke>", "</function>")
+    return _STRIP_TAG_QUOTES.sub(
+        lambda m: f"<{m.group(1)}={m.group(2).strip().strip(chr(34) + chr(39))}>", content
+    )
+
+
+STRIX_MODEL_MAP: dict[str, str] = {
+    "claude-sonnet-4.6": "anthropic/claude-sonnet-4-6",
+    "claude-opus-4.6": "anthropic/claude-opus-4-6",
+    "gpt-5.2": "openai/gpt-5.2",
+    "gpt-5.1": "openai/gpt-5.1",
+    "gpt-5": "openai/gpt-5",
+    "gpt-5.2-codex": "openai/gpt-5.2-codex",
+    "gpt-5.1-codex-max": "openai/gpt-5.1-codex-max",
+    "gpt-5.1-codex": "openai/gpt-5.1-codex",
+    "gemini-3-pro-preview": "gemini/gemini-3-pro-preview",
+    "gemini-3-flash-preview": "gemini/gemini-3-flash-preview",
+    "gpt-5-codex": "openai/gpt-5-codex",
+    "glm-5": "openrouter/z-ai/glm-5",
+    "glm-4.7": "openrouter/z-ai/glm-4.7",
+}
+
+
+def resolve_strix_model(model_name: str | None) -> tuple[str | None, str | None]:
+    """Resolve a strix/ model into names for API calls and capability lookups.
+
+    Returns (api_model, canonical_model):
+        - api_model: openai/<base> for API calls (Strix API is OpenAI-compatible)
+        - canonical_model: actual provider model name for litellm capability lookups
+
+    Non-strix models return the same name for both.
+    """
+    if not model_name or not model_name.startswith("strix/"):
+        return model_name, model_name
+
+    base_model = model_name[6:]
+    api_model = f"openai/{base_model}"
+    canonical_model = STRIX_MODEL_MAP.get(base_model, api_model)
+    return api_model, canonical_model
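The two-name contract can be seen with a standalone sketch of the mapping logic above, using a trimmed model map:

```python
# Standalone sketch of resolve_strix_model with a trimmed map.
STRIX_MODEL_MAP = {"claude-sonnet-4.6": "anthropic/claude-sonnet-4-6"}


def resolve_strix_model(model_name):
    if not model_name or not model_name.startswith("strix/"):
        return model_name, model_name
    base = model_name[6:]            # drop the "strix/" prefix
    api_model = f"openai/{base}"     # Strix API speaks the OpenAI protocol
    return api_model, STRIX_MODEL_MAP.get(base, api_model)


print(resolve_strix_model("strix/claude-sonnet-4.6"))
# ('openai/claude-sonnet-4.6', 'anthropic/claude-sonnet-4-6')
print(resolve_strix_model("openai/gpt-5"))
# ('openai/gpt-5', 'openai/gpt-5')
```

This is why `resolve_llm_config` can now return the original `strix/` name for display while callers use `litellm_model` for requests and `canonical_model` for cost and capability lookups.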
+
+
 def _truncate_to_first_function(content: str) -> str:
     if not content:
         return content

-    function_starts = [match.start() for match in re.finditer(r"<function=", content)]
+    function_starts = [
+        match.start() for match in re.finditer(r"<function=|<invoke\s+name=", content)
+    ]
     if len(function_starts) >= 2:
         second_function_start = function_starts[1]

@@ -18,6 +82,7 @@ def _truncate_to_first_function(content: str) -> str:


 def parse_tool_invocations(content: str) -> list[dict[str, Any]] | None:
+    content = normalize_tool_format(content)
     content = fix_incomplete_tool_call(content)

     tool_invocations: list[dict[str, Any]] = []
@@ -47,12 +112,14 @@ def parse_tool_invocations(content: str) -> list[dict[str, Any]] | None:


 def fix_incomplete_tool_call(content: str) -> str:
-    """Fix incomplete tool calls by adding missing </function> tag."""
-    if (
-        "<function=" in content
-        and content.count("<function=") == 1
-        and "</function>" not in content
-    ):
+    """Fix incomplete tool calls by adding missing closing tag.
+
+    Handles both ``<function=…>`` and ``<invoke name="">`` formats.
+    """
+    has_open = "<function=" in content or "<invoke " in content
+    count_open = content.count("<function=") + content.count("<invoke ")
+    has_close = "</function>" in content or "</invoke>" in content
+    if has_open and count_open == 1 and not has_close:
         content = content.rstrip()
         content = content + "function>" if content.endswith("</") else content + "\n</function>"
     return content
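The widened repair logic above can be copied out verbatim and exercised on its own:

```python
# Standalone copy of the widened repair logic shown in the diff:
# append a closing tag when exactly one tool call is opened but never closed.
def fix_incomplete_tool_call(content: str) -> str:
    has_open = "<function=" in content or "<invoke " in content
    count_open = content.count("<function=") + content.count("<invoke ")
    has_close = "</function>" in content or "</invoke>" in content
    if has_open and count_open == 1 and not has_close:
        content = content.rstrip()
        content = content + "function>" if content.endswith("</") else content + "\n</function>"
    return content


print(fix_incomplete_tool_call("<function=scan><parameter=target>app</parameter>"))
# <function=scan><parameter=target>app</parameter>
# </function>
```

An unclosed `<invoke …>` opener also gets a `</function>` close; in the pipeline that is harmless because `normalize_tool_format` runs first and rewrites invoke tags to function tags.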
@@ -73,6 +140,7 @@ def clean_content(content: str) -> str:
     if not content:
         return ""

+    content = normalize_tool_format(content)
     content = fix_incomplete_tool_call(content)

     tool_pattern = r"<function=[^>]+>.*?</function>"

View File

@@ -22,6 +22,7 @@ from .runtime import AbstractRuntime, SandboxInfo
 HOST_GATEWAY_HOSTNAME = "host.docker.internal"
 DOCKER_TIMEOUT = 60
 CONTAINER_TOOL_SERVER_PORT = 48081
+CONTAINER_CAIDO_PORT = 48080


 class DockerRuntime(AbstractRuntime):

@@ -37,6 +38,7 @@ class DockerRuntime(AbstractRuntime):
         self._scan_container: Container | None = None
         self._tool_server_port: int | None = None
         self._tool_server_token: str | None = None
+        self._caido_port: int | None = None

     def _find_available_port(self) -> int:
         with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:

@@ -78,6 +80,10 @@ class DockerRuntime(AbstractRuntime):
         if port_bindings.get(port_key):
             self._tool_server_port = int(port_bindings[port_key][0]["HostPort"])

+        caido_port_key = f"{CONTAINER_CAIDO_PORT}/tcp"
+        if port_bindings.get(caido_port_key):
+            self._caido_port = int(port_bindings[caido_port_key][0]["HostPort"])
+
     def _wait_for_tool_server(self, max_retries: int = 30, timeout: int = 5) -> None:
         host = self._resolve_docker_host()
         health_url = f"http://{host}:{self._tool_server_port}/health"
@@ -121,6 +127,7 @@ class DockerRuntime(AbstractRuntime):
                 time.sleep(1)

         self._tool_server_port = self._find_available_port()
+        self._caido_port = self._find_available_port()
         self._tool_server_token = secrets.token_urlsafe(32)

         execution_timeout = Config.get("strix_sandbox_execution_timeout") or "120"

@@ -130,7 +137,10 @@ class DockerRuntime(AbstractRuntime):
                     detach=True,
                     name=container_name,
                     hostname=container_name,
-                    ports={f"{CONTAINER_TOOL_SERVER_PORT}/tcp": self._tool_server_port},
+                    ports={
+                        f"{CONTAINER_TOOL_SERVER_PORT}/tcp": self._tool_server_port,
+                        f"{CONTAINER_CAIDO_PORT}/tcp": self._caido_port,
+                    },
                     cap_add=["NET_ADMIN", "NET_RAW"],
                     labels={"strix-scan-id": scan_id},
                     environment={

@@ -152,6 +162,7 @@ class DockerRuntime(AbstractRuntime):
                 if attempt < max_retries:
                     self._tool_server_port = None
                     self._tool_server_token = None
+                    self._caido_port = None
                     time.sleep(2**attempt)
                 else:

         return container

@@ -173,6 +184,7 @@ class DockerRuntime(AbstractRuntime):
             self._scan_container = None
             self._tool_server_port = None
             self._tool_server_token = None
+            self._caido_port = None

         try:
             container = self.client.containers.get(container_name)

@@ -260,7 +272,7 @@ class DockerRuntime(AbstractRuntime):
             raise RuntimeError("Docker container ID is unexpectedly None")

         token = existing_token or self._tool_server_token
-        if self._tool_server_port is None or token is None:
+        if self._tool_server_port is None or self._caido_port is None or token is None:
             raise RuntimeError("Tool server not initialized")

         host = self._resolve_docker_host()

@@ -273,6 +285,7 @@ class DockerRuntime(AbstractRuntime):
             "api_url": api_url,
             "auth_token": token,
             "tool_server_port": self._tool_server_port,
+            "caido_port": self._caido_port,
             "agent_id": agent_id,
         }

@@ -314,6 +327,7 @@ class DockerRuntime(AbstractRuntime):
             self._scan_container = None
             self._tool_server_port = None
             self._tool_server_token = None
+            self._caido_port = None
         except (NotFound, DockerException):
             pass

@@ -323,6 +337,7 @@ class DockerRuntime(AbstractRuntime):
         self._scan_container = None
         self._tool_server_port = None
         self._tool_server_token = None
+        self._caido_port = None

         if container_name is None:
             return

View File

@@ -7,6 +7,7 @@ class SandboxInfo(TypedDict):
     api_url: str
     auth_token: str | None
     tool_server_port: int
+    caido_port: int
     agent_id: str

View File

@@ -56,6 +56,7 @@ class Tracer:
         self._next_message_id = 1
         self._saved_vuln_ids: set[str] = set()
+        self.caido_url: str | None = None

         self.vulnerability_found_callback: Callable[[dict[str, Any]], None] | None = None

     def set_run_name(self, run_name: str) -> None: