From 154040f9fb76dcdfcb55d13e2405daa11bac9031 Mon Sep 17 00:00:00 2001 From: 0xallam Date: Tue, 17 Feb 2026 14:11:58 -0800 Subject: [PATCH 01/43] fix: Improve code_locations schema for accurate block-level fixes and multi-part suggestions Rewrote the code_locations parameter description to make fix_before/fix_after semantics explicit: they are literal block-level replacements mapped directly to GitHub/GitLab PR suggestion blocks. Added guidance for multi-part fixes (separate locations for non-contiguous changes like imports + code), common mistakes to avoid, and updated all examples to demonstrate multi-line ranges. --- .../reporting/reporting_actions_schema.xml | 85 ++++++++++++++----- 1 file changed, 66 insertions(+), 19 deletions(-) diff --git a/strix/tools/reporting/reporting_actions_schema.xml b/strix/tools/reporting/reporting_actions_schema.xml index 39e6a77..b0b9afb 100644 --- a/strix/tools/reporting/reporting_actions_schema.xml +++ b/strix/tools/reporting/reporting_actions_schema.xml @@ -113,30 +113,58 @@ Do NOT use broad/parent CWEs like CWE-74, CWE-20, CWE-200, CWE-284, or CWE-693.< Nested XML list of code locations where the vulnerability exists. MANDATORY for white-box testing. -Order: first location is where the issue manifests (typically the sink). Additional locations provide data flow context (source → propagation → sink). +CRITICAL — HOW fix_before/fix_after WORK: +fix_before and fix_after are LITERAL BLOCK-LEVEL REPLACEMENTS used directly for GitHub/GitLab PR suggestion blocks. When a reviewer clicks "Accept suggestion", the platform replaces the EXACT lines from start_line to end_line with the fix_after content. This means: + +1. fix_before MUST be an EXACT, VERBATIM copy of the source code at lines start_line through end_line. Same whitespace, same indentation, same line breaks. If fix_before does not match the actual file content character-for-character, the suggestion will be wrong or will corrupt the code when accepted. + +2. 
fix_after is the COMPLETE replacement for that entire block. It replaces ALL lines from start_line to end_line. It can be more lines, fewer lines, or the same number of lines as fix_before. + +3. start_line and end_line define the EXACT line range being replaced. They must precisely cover the lines in fix_before — no more, no less. If the vulnerable code spans lines 45-48, then start_line=45 and end_line=48, and fix_before must contain all 4 lines exactly as they appear in the file. + +MULTI-PART FIXES: +Many fixes require changes in multiple non-contiguous parts of a file (e.g., adding an import at the top AND changing code lower down), or across multiple files. Since each fix_before/fix_after pair covers ONE contiguous block, you MUST create SEPARATE location entries for each part of the fix: + +- Each location covers one contiguous block of lines to change +- Use the label field to describe how each part relates to the overall fix (e.g., "Add import for parameterized query library", "Replace string interpolation with parameterized query") +- Order fix locations logically: primary fix first (where the vulnerability manifests), then supporting changes (imports, config, etc.) + +COMMON MISTAKES TO AVOID: +- Do NOT guess line numbers. Read the file and verify the exact lines before reporting. +- Do NOT paraphrase or reformat code in fix_before. It must be a verbatim copy. +- Do NOT set start_line=end_line when the vulnerable code spans multiple lines. Cover the full range. +- Do NOT put an import addition and a code change in the same fix_before/fix_after if they are not on adjacent lines. Split them into separate locations. +- Do NOT include lines outside the vulnerable/fixed code in fix_before just to "pad" the range. Each location element fields: - file (REQUIRED): Path relative to repository root. No leading slash, no absolute paths, no ".." traversal. 
Correct: "src/db/queries.ts" or "app/routes/users.py" Wrong: "/workspace/repo/src/db/queries.ts", "./src/db/queries.ts", "../../etc/passwd" -- start_line (REQUIRED): Exact 1-based line number where the vulnerable code begins. Must be a positive integer. You must be certain of this number — do not guess or approximate. Go back and verify against the actual file content if needed. -- end_line (REQUIRED): Exact 1-based line number where the vulnerable code ends. Must be >= start_line. Set equal to start_line if the vulnerability is on a single line. -- snippet (optional): The actual source code at this location, copied verbatim from the file. Do not paraphrase or summarize code — paste it exactly as it appears. -- label (optional): Short role description for this location in the data flow, e.g. "User input from request parameter (source)", "Unsanitized input passed to SQL query (sink)". -- fix_before (optional): The vulnerable code to be replaced, copied verbatim. Must match the actual source exactly — do not paraphrase, summarize, or add/remove whitespace. Only include on locations where a fix is proposed. -- fix_after (optional): The corrected code that should replace fix_before. Must be syntactically valid and ready to apply as a direct replacement. Only include on locations where a fix is proposed. +- start_line (REQUIRED): Exact 1-based line number where the vulnerable/affected code begins. Must be a positive integer. You must be certain of this number — go back and verify against the actual file content if needed. +- end_line (REQUIRED): Exact 1-based line number where the vulnerable/affected code ends. Must be >= start_line. Set equal to start_line ONLY if the code is truly on a single line. +- snippet (optional): The actual source code at this location, copied verbatim from the file. +- label (optional): Short role description for this location. 
For multi-part fixes, use this to explain the purpose of each change (e.g., "Add import for escape utility", "Sanitize user input before SQL query"). +- fix_before (optional): The vulnerable code to be replaced — VERBATIM copy of lines start_line through end_line. Must match the actual source character-for-character including whitespace and indentation. +- fix_after (optional): The corrected code that replaces the entire fix_before block. Must be syntactically valid and ready to apply as a direct replacement. Locations without fix_before/fix_after are informational context (e.g. showing the source of tainted data). -Locations with fix_before/fix_after are actionable fixes (used for PR review suggestions). +Locations with fix_before/fix_after are actionable fixes (used directly for PR suggestion blocks). src/db/queries.ts 42 - 42 - db.query(`SELECT * FROM users WHERE id = ${id}`) + 45 + const query = ( + `SELECT * FROM users ` + + `WHERE id = ${id}` +); - db.query(`SELECT * FROM users WHERE id = ${id}`) - db.query('SELECT * FROM users WHERE id = $1', [id]) + const query = ( + `SELECT * FROM users ` + + `WHERE id = ${id}` +); + const query = 'SELECT * FROM users WHERE id = $1'; +const result = await db.query(query, [id]); src/routes/users.ts @@ -299,14 +327,33 @@ if __name__ == "__main__": src/services/link-preview.ts - 47 - 47 - const response = await fetch(userUrl) + 45 + 48 + const options = { timeout: 5000 }; + const response = await fetch(userUrl, options); + const html = await response.text(); + return extractMetadata(html); - const response = await fetch(userUrl) - const validated = await validateAndResolveUrl(userUrl) -if (!validated) throw new ForbiddenError('URL not allowed') -const response = await fetch(validated) + const options = { timeout: 5000 }; + const response = await fetch(userUrl, options); + const html = await response.text(); + return extractMetadata(html); + const validated = await validateAndResolveUrl(userUrl); + if (!validated) throw new 
ForbiddenError('URL not allowed'); + const options = { timeout: 5000 }; + const response = await fetch(validated, options); + const html = await response.text(); + return extractMetadata(html); + + + src/services/link-preview.ts + 2 + 2 + import { extractMetadata } from '../utils/html'; + + import { extractMetadata } from '../utils/html'; + import { extractMetadata } from '../utils/html'; +import { validateAndResolveUrl } from '../utils/url-validator'; src/routes/api/v1/links.ts From 30550dd189743aacb9c564b6ba861ad04a27d9e1 Mon Sep 17 00:00:00 2001 From: 0xallam Date: Tue, 17 Feb 2026 14:59:13 -0800 Subject: [PATCH 02/43] fix: Add rule against duplicating changes across code_locations --- strix/tools/reporting/reporting_actions_schema.xml | 1 + 1 file changed, 1 insertion(+) diff --git a/strix/tools/reporting/reporting_actions_schema.xml b/strix/tools/reporting/reporting_actions_schema.xml index b0b9afb..0f4780a 100644 --- a/strix/tools/reporting/reporting_actions_schema.xml +++ b/strix/tools/reporting/reporting_actions_schema.xml @@ -135,6 +135,7 @@ COMMON MISTAKES TO AVOID: - Do NOT set start_line=end_line when the vulnerable code spans multiple lines. Cover the full range. - Do NOT put an import addition and a code change in the same fix_before/fix_after if they are not on adjacent lines. Split them into separate locations. - Do NOT include lines outside the vulnerable/fixed code in fix_before just to "pad" the range. +- Do NOT duplicate changes across locations. Each location's fix_after must ONLY contain changes for its own line range. Never repeat a change that is already covered by another location. Each location element fields: - file (REQUIRED): Path relative to repository root. No leading slash, no absolute paths, no ".." traversal. 
From e38f523a458274e5731521c9bc5ea2f141c00872 Mon Sep 17 00:00:00 2001 From: octovimmer Date: Thu, 19 Feb 2026 13:43:18 -0800 Subject: [PATCH 03/43] Strix LLM Documentation and Config Changes (#315) * feat: add to readme new keys * feat: shoutout strix models, docs * fix: mypy error * fix: base api * docs: update quickstart and models * fixes: changes to docs uniform api_key variable naming * test: git commit hook * nevermind it was nothing * docs: Update default model to claude-sonnet-4.6 and improve Strix Router docs - Replace gpt-5 and opus-4.6 defaults with claude-sonnet-4.6 across all docs and code - Rewrite Strix Router (models.mdx) page with clearer structure and messaging - Add Strix Router as recommended option in overview.mdx and quickstart prerequisites - Update stale Claude 4.5 references to 4.6 in anthropic.mdx, openrouter.mdx, bug_report.md - Fix install.sh links to point to models.strix.ai and correct docs URLs - Update error message examples in main.py to use claude-sonnet-4-6 --------- Co-authored-by: 0xallam --- .github/ISSUE_TEMPLATE/bug_report.md | 2 +- CONTRIBUTING.md | 2 +- README.md | 10 ++-- docs/advanced/configuration.mdx | 6 +-- docs/contributing.mdx | 2 +- docs/index.mdx | 2 +- docs/integrations/github-actions.mdx | 2 +- docs/llm-providers/anthropic.mdx | 6 +-- docs/llm-providers/models.mdx | 80 ++++++++++++++++++++++++++++ docs/llm-providers/openrouter.mdx | 2 +- docs/llm-providers/overview.mdx | 41 ++++++++++---- docs/quickstart.mdx | 22 +++++--- scripts/install.sh | 12 +++-- strix/config/config.py | 30 +++++++++++ strix/interface/main.py | 25 ++++----- strix/llm/config.py | 4 +- strix/llm/dedupe.py | 11 +--- strix/llm/llm.py | 13 ++--- strix/llm/memory_compressor.py | 10 +--- 19 files changed, 208 insertions(+), 74 deletions(-) create mode 100644 docs/llm-providers/models.mdx diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index f9b9106..85919ce 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md 
+++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -27,7 +27,7 @@ If applicable, add screenshots to help explain your problem. - OS: [e.g. Ubuntu 22.04] - Strix Version or Commit: [e.g. 0.1.18] - Python Version: [e.g. 3.12] -- LLM Used: [e.g. GPT-5, Claude Sonnet 4] +- LLM Used: [e.g. GPT-5, Claude Sonnet 4.6] **Additional context** Add any other context about the problem here. diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index d6b9a64..e272e0b 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -30,7 +30,7 @@ Thank you for your interest in contributing to Strix! This guide will help you g 3. **Configure your LLM provider** ```bash - export STRIX_LLM="openai/gpt-5" + export STRIX_LLM="anthropic/claude-sonnet-4-6" export LLM_API_KEY="your-api-key" ``` diff --git a/README.md b/README.md index 5e2a0be..fd79f86 100644 --- a/README.md +++ b/README.md @@ -72,7 +72,9 @@ Strix are autonomous AI agents that act just like real hackers - they run your c **Prerequisites:** - Docker (running) -- An LLM provider key (e.g. [get OpenAI API key](https://platform.openai.com/api-keys) or use a local LLM) +- An LLM API key: + - Any [supported provider](https://docs.strix.ai/llm-providers/overview) (OpenAI, Anthropic, Google, etc.) 
+ - Or [Strix Router](https://models.strix.ai) — single API key for multiple providers with $10 free credit on signup ### Installation & First Scan @@ -84,7 +86,7 @@ curl -sSL https://strix.ai/install | bash pipx install strix-agent # Configure your AI provider -export STRIX_LLM="openai/gpt-5" +export STRIX_LLM="anthropic/claude-sonnet-4-6" # or "strix/claude-sonnet-4.6" via Strix Router (https://models.strix.ai) export LLM_API_KEY="your-api-key" # Run your first security assessment @@ -201,7 +203,7 @@ jobs: ### Configuration ```bash -export STRIX_LLM="openai/gpt-5" +export STRIX_LLM="anthropic/claude-sonnet-4-6" export LLM_API_KEY="your-api-key" # Optional @@ -215,8 +217,8 @@ export STRIX_REASONING_EFFORT="high" # control thinking effort (default: high, **Recommended models for best results:** +- [Anthropic Claude Sonnet 4.6](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-6` - [OpenAI GPT-5](https://openai.com/api/) — `openai/gpt-5` -- [Anthropic Claude Sonnet 4.5](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-5` - [Google Gemini 3 Pro Preview](https://cloud.google.com/vertex-ai) — `vertex_ai/gemini-3-pro-preview` See the [LLM Providers documentation](https://docs.strix.ai/llm-providers/overview) for all supported providers including Vertex AI, Bedrock, Azure, and local models. diff --git a/docs/advanced/configuration.mdx b/docs/advanced/configuration.mdx index 9a6d9e4..bd71c68 100644 --- a/docs/advanced/configuration.mdx +++ b/docs/advanced/configuration.mdx @@ -8,7 +8,7 @@ Configure Strix using environment variables or a config file. ## LLM Configuration - Model name in LiteLLM format (e.g., `openai/gpt-5`, `anthropic/claude-sonnet-4-5`). + Model name in LiteLLM format (e.g., `anthropic/claude-sonnet-4-6`, `openai/gpt-5`). 
@@ -86,7 +86,7 @@ strix --target ./app --config /path/to/config.json ```json { "env": { - "STRIX_LLM": "openai/gpt-5", + "STRIX_LLM": "anthropic/claude-sonnet-4-6", "LLM_API_KEY": "sk-...", "STRIX_REASONING_EFFORT": "high" } @@ -97,7 +97,7 @@ strix --target ./app --config /path/to/config.json ```bash # Required -export STRIX_LLM="openai/gpt-5" +export STRIX_LLM="anthropic/claude-sonnet-4-6" export LLM_API_KEY="sk-..." # Optional: Enable web search diff --git a/docs/contributing.mdx b/docs/contributing.mdx index b2e50a0..ffa3192 100644 --- a/docs/contributing.mdx +++ b/docs/contributing.mdx @@ -32,7 +32,7 @@ description: "Contribute to Strix development" ```bash - export STRIX_LLM="openai/gpt-5" + export STRIX_LLM="anthropic/claude-sonnet-4-6" export LLM_API_KEY="your-api-key" ``` diff --git a/docs/index.mdx b/docs/index.mdx index ef5ab9a..14de192 100644 --- a/docs/index.mdx +++ b/docs/index.mdx @@ -78,7 +78,7 @@ Strix uses a graph of specialized agents for comprehensive security testing: curl -sSL https://strix.ai/install | bash # Configure -export STRIX_LLM="openai/gpt-5" +export STRIX_LLM="anthropic/claude-sonnet-4-6" export LLM_API_KEY="your-api-key" # Scan diff --git a/docs/integrations/github-actions.mdx b/docs/integrations/github-actions.mdx index 827dce0..fcc5eb9 100644 --- a/docs/integrations/github-actions.mdx +++ b/docs/integrations/github-actions.mdx @@ -35,7 +35,7 @@ Add these secrets to your repository: | Secret | Description | |--------|-------------| -| `STRIX_LLM` | Model name (e.g., `openai/gpt-5`) | +| `STRIX_LLM` | Model name (e.g., `anthropic/claude-sonnet-4-6`) | | `LLM_API_KEY` | API key for your LLM provider | ## Exit Codes diff --git a/docs/llm-providers/anthropic.mdx b/docs/llm-providers/anthropic.mdx index 81680a1..b7b3085 100644 --- a/docs/llm-providers/anthropic.mdx +++ b/docs/llm-providers/anthropic.mdx @@ -6,7 +6,7 @@ description: "Configure Strix with Claude models" ## Setup ```bash -export STRIX_LLM="anthropic/claude-sonnet-4-5" 
+export STRIX_LLM="anthropic/claude-sonnet-4-6" export LLM_API_KEY="sk-ant-..." ``` @@ -14,8 +14,8 @@ export LLM_API_KEY="sk-ant-..." | Model | Description | |-------|-------------| -| `anthropic/claude-sonnet-4-5` | Best balance of intelligence and speed (recommended) | -| `anthropic/claude-opus-4-5` | Maximum capability for deep analysis | +| `anthropic/claude-sonnet-4-6` | Best balance of intelligence and speed (recommended) | +| `anthropic/claude-opus-4-6` | Maximum capability for deep analysis | ## Get API Key diff --git a/docs/llm-providers/models.mdx b/docs/llm-providers/models.mdx new file mode 100644 index 0000000..54007a9 --- /dev/null +++ b/docs/llm-providers/models.mdx @@ -0,0 +1,80 @@ +--- +title: "Strix Router" +description: "Access top LLMs through a single API with high rate limits and zero data retention" +--- + +Strix Router gives you access to the best LLMs through a single API key. + + +Strix Router is currently in **beta**. It's completely optional — Strix works with any [LiteLLM-compatible provider](/llm-providers/overview) using your own API keys, or with [local models](/llm-providers/local). Strix Router is just the setup we test and optimize for. + + +## Why Use Strix Router? + +- **High rate limits** — No throttling during long-running scans +- **Zero data retention** — Routes to providers with zero data retention policies enabled +- **Failover & load balancing** — Automatic fallback across providers for reliability +- **Simple setup** — One API key, one environment variable, no provider accounts needed +- **No markup** — Same token pricing as the underlying providers, no extra fees +- **$10 free credit** — Try it free on signup, no credit card required + +## Quick Start + +1. Get your API key at [models.strix.ai](https://models.strix.ai) +2. Set your environment: + +```bash +export LLM_API_KEY='your-strix-api-key' +export STRIX_LLM='strix/claude-sonnet-4.6' +``` + +3. 
Run a scan: + +```bash +strix --target ./your-app +``` + +## Available Models + +### Anthropic + +| Model | ID | +|-------|-----| +| Claude Sonnet 4.6 | `strix/claude-sonnet-4.6` | +| Claude Opus 4.6 | `strix/claude-opus-4.6` | + +### OpenAI + +| Model | ID | +|-------|-----| +| GPT-5.2 | `strix/gpt-5.2` | +| GPT-5.1 | `strix/gpt-5.1` | +| GPT-5 | `strix/gpt-5` | +| GPT-5.2 Codex | `strix/gpt-5.2-codex` | +| GPT-5.1 Codex Max | `strix/gpt-5.1-codex-max` | +| GPT-5.1 Codex | `strix/gpt-5.1-codex` | +| GPT-5 Codex | `strix/gpt-5-codex` | + +### Google + +| Model | ID | +|-------|-----| +| Gemini 3 Pro | `strix/gemini-3-pro-preview` | +| Gemini 3 Flash | `strix/gemini-3-flash-preview` | + +### Other + +| Model | ID | +|-------|-----| +| GLM-5 | `strix/glm-5` | +| GLM-4.7 | `strix/glm-4.7` | + +## Configuration Reference + + + Your Strix API key from [models.strix.ai](https://models.strix.ai). + + + + Model ID from the tables above. Must be prefixed with `strix/`. + diff --git a/docs/llm-providers/openrouter.mdx b/docs/llm-providers/openrouter.mdx index 31919c1..d4d36bf 100644 --- a/docs/llm-providers/openrouter.mdx +++ b/docs/llm-providers/openrouter.mdx @@ -19,7 +19,7 @@ Access any model on OpenRouter using the format `openrouter//`: | Model | Configuration | |-------|---------------| | GPT-5 | `openrouter/openai/gpt-5` | -| Claude 4.5 Sonnet | `openrouter/anthropic/claude-sonnet-4.5` | +| Claude Sonnet 4.6 | `openrouter/anthropic/claude-sonnet-4.6` | | Gemini 3 Pro | `openrouter/google/gemini-3-pro-preview` | | GLM-4.7 | `openrouter/z-ai/glm-4.7` | diff --git a/docs/llm-providers/overview.mdx b/docs/llm-providers/overview.mdx index 9027aac..567af50 100644 --- a/docs/llm-providers/overview.mdx +++ b/docs/llm-providers/overview.mdx @@ -5,31 +5,54 @@ description: "Configure your AI model for Strix" Strix uses [LiteLLM](https://docs.litellm.ai/docs/providers) for model compatibility, supporting 100+ LLM providers. 
-## Recommended Models +## Strix Router (Recommended) -For best results, use one of these models: +The fastest way to get started. [Strix Router](/llm-providers/models) gives you access to tested models with the highest rate limits and zero data retention. + +```bash +export STRIX_LLM="strix/claude-sonnet-4.6" +export LLM_API_KEY="your-strix-api-key" +``` + +Get your API key at [models.strix.ai](https://models.strix.ai). + +## Bring Your Own Key + +You can also use any LiteLLM-compatible provider with your own API keys: | Model | Provider | Configuration | | ----------------- | ------------- | -------------------------------- | +| Claude Sonnet 4.6 | Anthropic | `anthropic/claude-sonnet-4-6` | | GPT-5 | OpenAI | `openai/gpt-5` | -| Claude 4.5 Sonnet | Anthropic | `anthropic/claude-sonnet-4-5` | | Gemini 3 Pro | Google Vertex | `vertex_ai/gemini-3-pro-preview` | -## Quick Setup - ```bash -export STRIX_LLM="openai/gpt-5" +export STRIX_LLM="anthropic/claude-sonnet-4-6" export LLM_API_KEY="your-api-key" ``` +## Local Models + +Run models locally with [Ollama](https://ollama.com), [LM Studio](https://lmstudio.ai), or any OpenAI-compatible server: + +```bash +export STRIX_LLM="ollama/llama4" +export LLM_API_BASE="http://localhost:11434" +``` + +See the [Local Models guide](/llm-providers/local) for setup instructions and recommended models. + ## Provider Guides + + Recommended models router with high rate limits. + GPT-5 and Codex models. - Claude 4.5 Sonnet, Opus, and Haiku. + Claude Sonnet 4.6, Opus, and Haiku. Access 100+ models through a single API. @@ -38,7 +61,7 @@ export LLM_API_KEY="your-api-key" Gemini 3 models via Google Cloud. - Claude 4.5 and Titan models via AWS. + Claude and Titan models via AWS. GPT-5 via Azure. 
@@ -53,8 +76,8 @@ export LLM_API_KEY="your-api-key" Use LiteLLM's `provider/model-name` format: ``` +anthropic/claude-sonnet-4-6 openai/gpt-5 -anthropic/claude-sonnet-4-5 vertex_ai/gemini-3-pro-preview bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0 ollama/llama4 diff --git a/docs/quickstart.mdx b/docs/quickstart.mdx index 487caae..32eac3d 100644 --- a/docs/quickstart.mdx +++ b/docs/quickstart.mdx @@ -6,7 +6,7 @@ description: "Install Strix and run your first security scan" ## Prerequisites - Docker (running) -- An LLM provider API key (OpenAI, Anthropic, or local model) +- An LLM API key — use [Strix Router](/llm-providers/models) for the easiest setup, or bring your own key from any [supported provider](/llm-providers/overview) ## Installation @@ -27,13 +27,23 @@ description: "Install Strix and run your first security scan" Set your LLM provider: -```bash -export STRIX_LLM="openai/gpt-5" -export LLM_API_KEY="your-api-key" -``` + + + ```bash + export STRIX_LLM="strix/claude-sonnet-4.6" + export LLM_API_KEY="your-strix-api-key" + ``` + + + ```bash + export STRIX_LLM="anthropic/claude-sonnet-4-6" + export LLM_API_KEY="your-api-key" + ``` + + -For best results, use `openai/gpt-5`, `anthropic/claude-sonnet-4-5`, or `vertex_ai/gemini-3-pro-preview`. +For best results, use `strix/claude-sonnet-4.6`, `strix/claude-opus-4.6`, or `strix/gpt-5.2`. 
## Run Your First Scan diff --git a/scripts/install.sh b/scripts/install.sh index ae5b11d..7fb158b 100755 --- a/scripts/install.sh +++ b/scripts/install.sh @@ -335,14 +335,18 @@ echo -e "${MUTED} AI Penetration Testing Agent${NC}" echo "" echo -e "${MUTED}To get started:${NC}" echo "" -echo -e " ${CYAN}1.${NC} Set your LLM provider:" -echo -e " ${MUTED}export STRIX_LLM='openai/gpt-5'${NC}" -echo -e " ${MUTED}export LLM_API_KEY='your-api-key'${NC}" +echo -e " ${CYAN}1.${NC} Get your Strix API key:" +echo -e " ${MUTED}https://models.strix.ai${NC}" echo "" -echo -e " ${CYAN}2.${NC} Run a penetration test:" +echo -e " ${CYAN}2.${NC} Set your environment:" +echo -e " ${MUTED}export LLM_API_KEY='your-api-key'${NC}" +echo -e " ${MUTED}export STRIX_LLM='strix/claude-sonnet-4.6'${NC}" +echo "" +echo -e " ${CYAN}3.${NC} Run a penetration test:" echo -e " ${MUTED}strix --target https://example.com${NC}" echo "" echo -e "${MUTED}For more information visit ${NC}https://strix.ai" +echo -e "${MUTED}Supported models ${NC}https://docs.strix.ai/llm-providers/overview" echo -e "${MUTED}Join our community ${NC}https://discord.gg/strix-ai" echo "" diff --git a/strix/config/config.py b/strix/config/config.py index 387834b..53a3726 100644 --- a/strix/config/config.py +++ b/strix/config/config.py @@ -5,6 +5,9 @@ from pathlib import Path from typing import Any +STRIX_API_BASE = "https://models.strix.ai/api/v1" + + class Config: """Configuration Manager for Strix.""" @@ -177,3 +180,30 @@ def apply_saved_config(force: bool = False) -> dict[str, str]: def save_current_config() -> bool: return Config.save_current() + + +def resolve_llm_config() -> tuple[str | None, str | None, str | None]: + """Resolve LLM model, api_key, and api_base based on STRIX_LLM prefix. 
+ + Returns: + tuple: (model_name, api_key, api_base) + """ + model = Config.get("strix_llm") + if not model: + return None, None, None + + api_key = Config.get("llm_api_key") + + if model.startswith("strix/"): + model_name = "openai/" + model[6:] + api_base: str | None = STRIX_API_BASE + else: + model_name = model + api_base = ( + Config.get("llm_api_base") + or Config.get("openai_api_base") + or Config.get("litellm_base_url") + or Config.get("ollama_api_base") + ) + + return model_name, api_key, api_base diff --git a/strix/interface/main.py b/strix/interface/main.py index edd7dd5..e7ab6c0 100644 --- a/strix/interface/main.py +++ b/strix/interface/main.py @@ -51,10 +51,13 @@ def validate_environment() -> None: # noqa: PLR0912, PLR0915 missing_required_vars = [] missing_optional_vars = [] - if not Config.get("strix_llm"): + strix_llm = Config.get("strix_llm") + uses_strix_models = strix_llm and strix_llm.startswith("strix/") + + if not strix_llm: missing_required_vars.append("STRIX_LLM") - has_base_url = any( + has_base_url = uses_strix_models or any( [ Config.get("llm_api_base"), Config.get("openai_api_base"), @@ -96,7 +99,7 @@ def validate_environment() -> None: # noqa: PLR0912, PLR0915 error_text.append("• ", style="white") error_text.append("STRIX_LLM", style="bold cyan") error_text.append( - " - Model name to use with litellm (e.g., 'openai/gpt-5')\n", + " - Model name to use with litellm (e.g., 'anthropic/claude-sonnet-4-6')\n", style="white", ) @@ -135,7 +138,10 @@ def validate_environment() -> None: # noqa: PLR0912, PLR0915 ) error_text.append("\nExample setup:\n", style="white") - error_text.append("export STRIX_LLM='openai/gpt-5'\n", style="dim white") + if uses_strix_models: + error_text.append("export STRIX_LLM='strix/claude-sonnet-4.6'\n", style="dim white") + else: + error_text.append("export STRIX_LLM='anthropic/claude-sonnet-4-6'\n", style="dim white") if missing_optional_vars: for var in missing_optional_vars: @@ -198,17 +204,12 @@ def 
check_docker_installed() -> None: async def warm_up_llm() -> None: + from strix.config.config import resolve_llm_config + console = Console() try: - model_name = Config.get("strix_llm") - api_key = Config.get("llm_api_key") - api_base = ( - Config.get("llm_api_base") - or Config.get("openai_api_base") - or Config.get("litellm_base_url") - or Config.get("ollama_api_base") - ) + model_name, api_key, api_base = resolve_llm_config() test_messages = [ {"role": "system", "content": "You are a helpful assistant."}, diff --git a/strix/llm/config.py b/strix/llm/config.py index 3426327..1ee2ddd 100644 --- a/strix/llm/config.py +++ b/strix/llm/config.py @@ -1,4 +1,5 @@ from strix.config import Config +from strix.config.config import resolve_llm_config class LLMConfig: @@ -10,7 +11,8 @@ class LLMConfig: timeout: int | None = None, scan_mode: str = "deep", ): - self.model_name = model_name or Config.get("strix_llm") + resolved_model, self.api_key, self.api_base = resolve_llm_config() + self.model_name = model_name or resolved_model if not self.model_name: raise ValueError("STRIX_LLM environment variable must be set and not empty") diff --git a/strix/llm/dedupe.py b/strix/llm/dedupe.py index 9edd6b7..ec15192 100644 --- a/strix/llm/dedupe.py +++ b/strix/llm/dedupe.py @@ -5,7 +5,7 @@ from typing import Any import litellm -from strix.config import Config +from strix.config.config import resolve_llm_config logger = logging.getLogger(__name__) @@ -155,14 +155,7 @@ def check_duplicate( comparison_data = {"candidate": candidate_cleaned, "existing_reports": existing_cleaned} - model_name = Config.get("strix_llm") - api_key = Config.get("llm_api_key") - api_base = ( - Config.get("llm_api_base") - or Config.get("openai_api_base") - or Config.get("litellm_base_url") - or Config.get("ollama_api_base") - ) + model_name, api_key, api_base = resolve_llm_config() messages = [ {"role": "system", "content": DEDUPE_SYSTEM_PROMPT}, diff --git a/strix/llm/llm.py b/strix/llm/llm.py index 
311de35..8133fe3 100644
--- a/strix/llm/llm.py
+++ b/strix/llm/llm.py
@@ -200,15 +200,10 @@ class LLM:
             "stream_options": {"include_usage": True},
         }
 
-        if api_key := Config.get("llm_api_key"):
-            args["api_key"] = api_key
-        if api_base := (
-            Config.get("llm_api_base")
-            or Config.get("openai_api_base")
-            or Config.get("litellm_base_url")
-            or Config.get("ollama_api_base")
-        ):
-            args["api_base"] = api_base
+        if self.config.api_key:
+            args["api_key"] = self.config.api_key
+        if self.config.api_base:
+            args["api_base"] = self.config.api_base
 
         if self._supports_reasoning():
             args["reasoning_effort"] = self._reasoning_effort
diff --git a/strix/llm/memory_compressor.py b/strix/llm/memory_compressor.py
index ef0b9ab..f5981f6 100644
--- a/strix/llm/memory_compressor.py
+++ b/strix/llm/memory_compressor.py
@@ -3,7 +3,7 @@
 from typing import Any
 
 import litellm
 
-from strix.config import Config
+from strix.config.config import Config, resolve_llm_config
 
 logger = logging.getLogger(__name__)
@@ -104,13 +104,7 @@ def _summarize_messages(
     conversation = "\n".join(formatted)
     prompt = SUMMARY_PROMPT_TEMPLATE.format(conversation=conversation)
 
-    api_key = Config.get("llm_api_key")
-    api_base = (
-        Config.get("llm_api_base")
-        or Config.get("openai_api_base")
-        or Config.get("litellm_base_url")
-        or Config.get("ollama_api_base")
-    )
+    _, api_key, api_base = resolve_llm_config()
 
     try:
         completion_args: dict[str, Any] = {

From 62bb47a88170f94712e2d275b45df80efb915af8 Mon Sep 17 00:00:00 2001
From: 0xallam
Date: Thu, 19 Feb 2026 13:46:44 -0800
Subject: [PATCH 04/43] docs: Add Strix Router page to navigation sidebar

---
 docs/docs.json | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/docs.json b/docs/docs.json
index e15b496..27ee5dc 100644
--- a/docs/docs.json
+++ b/docs/docs.json
@@ -32,6 +32,7 @@
         "group": "LLM Providers",
         "pages": [
           "llm-providers/overview",
+          "llm-providers/models",
           "llm-providers/openai",
           "llm-providers/anthropic",
           "llm-providers/openrouter",

From cec741758250ebe8ac72c16138fbf7de2e777f4c Mon Sep 17 00:00:00 2001
From: 0xallam
Date: Thu, 19 Feb 2026 13:52:13 -0800
Subject: [PATCH 05/43] docs: Cache bust discord badge

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index fd79f86..3cb4c9f 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@
   Docs Website
 
-[![](https://dcbadge.limes.pink/api/server/8Suzzd9z)](https://discord.gg/strix-ai)
+[![](https://dcbadge.limes.pink/api/server/8Suzzd9z?v=2)](https://discord.gg/strix-ai)
   Ask DeepWiki
   GitHub Stars

From 8cb026b1beffafe0c99b898c25211c774ec9f43e Mon Sep 17 00:00:00 2001
From: 0xallam
Date: Thu, 19 Feb 2026 13:53:27 -0800
Subject: [PATCH 06/43] docs: Revert discord badge cache bust

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 3cb4c9f..fd79f86 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@
   Docs Website
 
-[![](https://dcbadge.limes.pink/api/server/8Suzzd9z?v=2)](https://discord.gg/strix-ai)
+[![](https://dcbadge.limes.pink/api/server/8Suzzd9z)](https://discord.gg/strix-ai)
   Ask DeepWiki
   GitHub Stars

From cc6d46a838b749c51c0a0e7889bf881e616c9b9f Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Thu, 19 Feb 2026 19:42:25 +0000
Subject: [PATCH 07/43] chore(deps): bump pypdf from 6.6.2 to 6.7.1

Bumps [pypdf](https://github.com/py-pdf/pypdf) from 6.6.2 to 6.7.1.
- [Release notes](https://github.com/py-pdf/pypdf/releases)
- [Changelog](https://github.com/py-pdf/pypdf/blob/main/CHANGELOG.md)
- [Commits](https://github.com/py-pdf/pypdf/compare/6.6.2...6.7.1)

---
updated-dependencies:
- dependency-name: pypdf
  dependency-version: 6.7.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot]
---
 poetry.lock | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/poetry.lock b/poetry.lock
index 37a393c..9367ef4 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -4373,9 +4373,10 @@ ptyprocess = ">=0.5"
 name = "pillow"
 version = "12.1.1"
 description = "Python Imaging Library (fork)"
-optional = false
+optional = true
 python-versions = ">=3.10"
 groups = ["main"]
+markers = "extra == \"sandbox\""
 files = [
     {file = "pillow-12.1.1-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1f1625b72740fdda5d77b4def688eb8fd6490975d06b909fd19f13f391e077e0"},
     {file = "pillow-12.1.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:178aa072084bd88ec759052feca8e56cbb14a60b39322b99a049e58090479713"},
@@ -4744,9 +4745,10 @@ testing = ["google-api-core (>=1.31.5)"]
 name = "protobuf"
 version = "6.33.5"
 description = ""
-optional = false
+optional = true
 python-versions = ">=3.9"
 groups = ["main"]
+markers = "extra == \"vertex\""
 files = [
     {file = "protobuf-6.33.5-cp310-abi3-win32.whl", hash = "sha256:d71b040839446bac0f4d162e758bea99c8251161dae9d0983a3b88dee345153b"},
     {file = "protobuf-6.33.5-cp310-abi3-win_amd64.whl", hash = "sha256:3093804752167bcab3998bec9f1048baae6e29505adaf1afd14a37bddede533c"},
@@ -5235,21 +5237,20 @@ diagrams = ["jinja2", "railroad-diagrams"]
 
 [[package]]
 name = "pypdf"
-version = "6.6.2"
+version = "6.7.1"
 description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files"
-optional = true
+optional = false
 python-versions = ">=3.9"
 groups = ["main"]
-markers = "extra == \"sandbox\""
 files = [
-    {file = "pypdf-6.6.2-py3-none-any.whl", hash = "sha256:44c0c9811cfb3b83b28f1c3d054531d5b8b81abaedee0d8cb403650d023832ba"},
-    {file = "pypdf-6.6.2.tar.gz", hash = "sha256:0a3ea3b3303982333404e22d8f75d7b3144f9cf4b2970b96856391a516f9f016"},
+    {file = "pypdf-6.7.1-py3-none-any.whl", hash = "sha256:a02ccbb06463f7c334ce1612e91b3e68a8e827f3cee100b9941771e6066b094e"},
+    {file = "pypdf-6.7.1.tar.gz", hash = "sha256:6b7a63be5563a0a35d54c6d6b550d75c00b8ccf36384be96365355e296e6b3b0"},
 ]
 
 [package.extras]
 crypto = ["cryptography"]
 cryptodome = ["PyCryptodome"]
-dev = ["black", "flit", "pip-tools", "pre-commit", "pytest-cov", "pytest-socket", "pytest-timeout", "pytest-xdist", "wheel"]
+dev = ["flit", "pip-tools", "pre-commit", "pytest-cov", "pytest-socket", "pytest-timeout", "pytest-xdist", "wheel"]
 docs = ["myst_parser", "sphinx", "sphinx_rtd_theme"]
 full = ["Pillow (>=8.0.0)", "cryptography"]
 image = ["Pillow (>=8.0.0)"]

From 1833f1a021eca77e1a83cc3c8e4424a06da5666c Mon Sep 17 00:00:00 2001
From: 0xallam
Date: Thu, 19 Feb 2026 14:12:59 -0800
Subject: [PATCH 08/43] chore: Bump version to 0.8.0

---
 pyproject.toml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pyproject.toml b/pyproject.toml
index 6c32647..ab8ca11 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "strix-agent"
-version = "0.7.0"
+version = "0.8.0"
 description = "Open-source AI Hackers for your apps"
 authors = ["Strix "]
 readme = "README.md"

From 06ae3d3860c16ada8167ea42b7a8a07f9c1542d4 Mon Sep 17 00:00:00 2001
From: octovimmer
Date: Thu, 19 Feb 2026 17:25:10 -0800
Subject: [PATCH 09/43] fix: linting errors

---
 strix/interface/main.py        |  5 +++-
 strix/llm/dedupe.py            |  5 +++-
 strix/llm/llm.py               | 40 +++++++++++++++++++++----------
 strix/llm/memory_compressor.py |  8 +++++--
 strix/llm/utils.py             | 44 ++++++++++++++++++++++++++++++++++
 5 files changed, 86 insertions(+), 16 deletions(-)

diff --git a/strix/interface/main.py b/strix/interface/main.py
index edd7dd5..58d52a0 100644
--- a/strix/interface/main.py
+++ b/strix/interface/main.py
@@ -18,6 +18,7 @@ from rich.panel import Panel
 from rich.text import Text
 
 from strix.config import Config, apply_saved_config, save_current_config
+from strix.llm.utils import get_litellm_model_name, get_strix_api_base
 
 apply_saved_config()
@@ -208,6 +209,7 @@ async def warm_up_llm() -> None:
             or
Config.get("openai_api_base")
             or Config.get("litellm_base_url")
             or Config.get("ollama_api_base")
+            or get_strix_api_base(model_name)
         )
 
         test_messages = [
@@ -217,8 +219,9 @@ async def warm_up_llm() -> None:
 
         llm_timeout = int(Config.get("llm_timeout") or "300")
 
+        litellm_model = get_litellm_model_name(model_name) or model_name
         completion_kwargs: dict[str, Any] = {
-            "model": model_name,
+            "model": litellm_model,
             "messages": test_messages,
             "timeout": llm_timeout,
         }
diff --git a/strix/llm/dedupe.py b/strix/llm/dedupe.py
index 9edd6b7..f8cdb08 100644
--- a/strix/llm/dedupe.py
+++ b/strix/llm/dedupe.py
@@ -6,6 +6,7 @@ from typing import Any
 import litellm
 
 from strix.config import Config
+from strix.llm.utils import get_litellm_model_name, get_strix_api_base
 
 logger = logging.getLogger(__name__)
@@ -162,6 +163,7 @@ def check_duplicate(
         or Config.get("openai_api_base")
         or Config.get("litellm_base_url")
         or Config.get("ollama_api_base")
+        or get_strix_api_base(model_name)
     )
 
     messages = [
@@ -176,8 +178,9 @@ def check_duplicate(
         },
     ]
 
+    litellm_model = get_litellm_model_name(model_name) or model_name
     completion_kwargs: dict[str, Any] = {
-        "model": model_name,
+        "model": litellm_model,
         "messages": messages,
         "timeout": 120,
     }
diff --git a/strix/llm/llm.py b/strix/llm/llm.py
index 311de35..d1b6370 100644
--- a/strix/llm/llm.py
+++ b/strix/llm/llm.py
@@ -14,6 +14,8 @@ from strix.llm.memory_compressor import MemoryCompressor
 from strix.llm.utils import (
     _truncate_to_first_function,
     fix_incomplete_tool_call,
+    get_litellm_model_name,
+    get_strix_api_base,
     parse_tool_invocations,
 )
 from strix.skills import load_skills
@@ -189,12 +191,16 @@ class LLM:
 
         return messages
 
+    def _get_litellm_model_name(self) -> str:
+        model = self.config.model_name  # Validated non-empty in LLMConfig.__init__
+        return get_litellm_model_name(model) or model  # type: ignore[return-value]
+
     def _build_completion_args(self, messages: list[dict[str, Any]]) -> dict[str, Any]:
         if not self._supports_vision():
             messages = self._strip_images(messages)
 
         args: dict[str, Any] = {
-            "model": self.config.model_name,
+            "model": self._get_litellm_model_name(),
             "messages": messages,
             "timeout": self.config.timeout,
             "stream_options": {"include_usage": True},
@@ -202,12 +208,15 @@ class LLM:
 
         if api_key := Config.get("llm_api_key"):
             args["api_key"] = api_key
-        if api_base := (
+
+        api_base = (
             Config.get("llm_api_base")
             or Config.get("openai_api_base")
             or Config.get("litellm_base_url")
             or Config.get("ollama_api_base")
-        ):
+            or get_strix_api_base(self.config.model_name)
+        )
+        if api_base:
             args["api_base"] = api_base
 
         if self._supports_reasoning():
             args["reasoning_effort"] = self._reasoning_effort
@@ -234,8 +243,8 @@ class LLM:
     def _update_usage_stats(self, response: Any) -> None:
         try:
             if hasattr(response, "usage") and response.usage:
-                input_tokens = getattr(response.usage, "prompt_tokens", 0)
-                output_tokens = getattr(response.usage, "completion_tokens", 0)
+                input_tokens = getattr(response.usage, "prompt_tokens", 0) or 0
+                output_tokens = getattr(response.usage, "completion_tokens", 0) or 0
                 cached_tokens = 0
 
                 if hasattr(response.usage, "prompt_tokens_details"):
@@ -243,14 +252,11 @@ class LLM:
                     if hasattr(prompt_details, "cached_tokens"):
                         cached_tokens = prompt_details.cached_tokens or 0
 
+                cost = self._extract_cost(response)
             else:
                 input_tokens = 0
                 output_tokens = 0
                 cached_tokens = 0
-
-            try:
-                cost = completion_cost(response) or 0.0
-            except Exception:  # noqa: BLE001
                 cost = 0.0
 
             self._total_stats.input_tokens += input_tokens
@@ -261,6 +267,16 @@ class LLM:
         except Exception:  # noqa: BLE001, S110  # nosec B110
             pass
 
+    def _extract_cost(self, response: Any) -> float:
+        if hasattr(response, "usage") and response.usage:
+            direct_cost = getattr(response.usage, "cost", None)
+            if direct_cost is not None:
+                return float(direct_cost)
+        try:
+            return completion_cost(response, model=self._get_litellm_model_name()) or 0.0
+        except Exception:  # noqa: BLE001
+            return 0.0
+
     def _should_retry(self, e: Exception) -> bool:
         code = getattr(e, "status_code", None) or getattr(
             getattr(e, "response", None), "status_code", None
@@ -280,13 +296,13 @@ class LLM:
 
     def _supports_vision(self) -> bool:
         try:
-            return bool(supports_vision(model=self.config.model_name))
+            return bool(supports_vision(model=self._get_litellm_model_name()))
         except Exception:  # noqa: BLE001
             return False
 
     def _supports_reasoning(self) -> bool:
         try:
-            return bool(supports_reasoning(model=self.config.model_name))
+            return bool(supports_reasoning(model=self._get_litellm_model_name()))
         except Exception:  # noqa: BLE001
             return False
@@ -307,7 +323,7 @@ class LLM:
         return result
 
     def _add_cache_control(self, messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
-        if not messages or not supports_prompt_caching(self.config.model_name):
+        if not messages or not supports_prompt_caching(self._get_litellm_model_name()):
             return messages
 
         result = list(messages)
diff --git a/strix/llm/memory_compressor.py b/strix/llm/memory_compressor.py
index ef0b9ab..e46b331 100644
--- a/strix/llm/memory_compressor.py
+++ b/strix/llm/memory_compressor.py
@@ -4,6 +4,7 @@ from typing import Any
 import litellm
 
 from strix.config import Config
+from strix.llm.utils import get_litellm_model_name, get_strix_api_base
 
 logger = logging.getLogger(__name__)
@@ -45,7 +46,8 @@ keeping the summary concise and to the point."""
 
 def _count_tokens(text: str, model: str) -> int:
     try:
-        count = litellm.token_counter(model=model, text=text)
+        litellm_model = get_litellm_model_name(model) or model
+        count = litellm.token_counter(model=litellm_model, text=text)
         return int(count)
     except Exception:
         logger.exception("Failed to count tokens")
@@ -110,11 +112,13 @@ def _summarize_messages(
         or Config.get("openai_api_base")
         or Config.get("litellm_base_url")
         or Config.get("ollama_api_base")
+        or get_strix_api_base(model)
     )
 
     try:
+        litellm_model = get_litellm_model_name(model) or model
         completion_args: dict[str, Any] = {
-            "model": model,
+            "model": litellm_model,
             "messages": [{"role": "user", "content": prompt}],
             "timeout": timeout,
         }
diff --git a/strix/llm/utils.py b/strix/llm/utils.py
index 81431f0..8abe6ba 100644
--- a/strix/llm/utils.py
+++ b/strix/llm/utils.py
@@ -3,6 +3,50 @@ import re
 from typing import Any
 
 
+STRIX_API_BASE = "https://models.strix.ai/api/v1"
+
+STRIX_PROVIDER_PREFIXES: dict[str, str] = {
+    "claude-": "anthropic",
+    "gpt-": "openai",
+    "gemini-": "gemini",
+}
+
+
+def is_strix_model(model_name: str | None) -> bool:
+    """Check if model uses strix/ prefix."""
+    return bool(model_name and model_name.startswith("strix/"))
+
+
+def get_strix_api_base(model_name: str | None) -> str | None:
+    """Return Strix API base URL if using strix/ model, None otherwise."""
+    if is_strix_model(model_name):
+        return STRIX_API_BASE
+    return None
+
+
+def get_litellm_model_name(model_name: str | None) -> str | None:
+    """Convert strix/ prefixed model to litellm-compatible provider/model format.
+
+    Maps strix/ models to their corresponding litellm provider:
+    - strix/claude-* -> anthropic/claude-*
+    - strix/gpt-* -> openai/gpt-*
+    - strix/gemini-* -> gemini/gemini-*
+    - Other models -> openai/ (routed via Strix API)
+    """
+    if not model_name:
+        return model_name
+    if not model_name.startswith("strix/"):
+        return model_name
+
+    base_model = model_name[6:]
+
+    for prefix, provider in STRIX_PROVIDER_PREFIXES.items():
+        if base_model.startswith(prefix):
+            return f"{provider}/{base_model}"
+
+    return f"openai/{base_model}"
+
+
 def _truncate_to_first_function(content: str) -> str:
     if not content:
         return content

From 3b3576b024a8bd4f88ce36877a517cb4c3b6c944 Mon Sep 17 00:00:00 2001
From: 0xallam
Date: Fri, 20 Feb 2026 04:40:04 -0800
Subject: [PATCH 10/43] refactor: Centralize strix model resolution with
 separate API and capability names

- Replace fragile prefix matching with explicit STRIX_MODEL_MAP
- Add resolve_strix_model() returning (api_model, canonical_model)
- api_model (openai/ prefix) for API calls to OpenAI-compatible
Strix API
- canonical_model (actual provider name) for litellm capability lookups
- Centralize resolution in LLMConfig instead of scattered call sites

---
 strix/interface/main.py        |  5 ++--
 strix/llm/config.py            |  5 ++++
 strix/llm/dedupe.py            |  5 ++--
 strix/llm/llm.py               | 17 +++++--------
 strix/llm/memory_compressor.py |  9 +++----
 strix/llm/utils.py             | 46 ++++++++++++++++++----------------
 6 files changed, 45 insertions(+), 42 deletions(-)

diff --git a/strix/interface/main.py b/strix/interface/main.py
index 5df4ac5..f049bbf 100644
--- a/strix/interface/main.py
+++ b/strix/interface/main.py
@@ -19,7 +19,7 @@ from rich.text import Text
 
 from strix.config import Config, apply_saved_config, save_current_config
 from strix.config.config import resolve_llm_config
-from strix.llm.utils import get_litellm_model_name
+from strix.llm.utils import resolve_strix_model
 
 apply_saved_config()
@@ -210,6 +210,8 @@ async def warm_up_llm() -> None:
     try:
         model_name, api_key, api_base = resolve_llm_config()
+        litellm_model, _ = resolve_strix_model(model_name)
+        litellm_model = litellm_model or model_name
 
         test_messages = [
             {"role": "system", "content": "You are a helpful assistant."},
@@ -218,7 +220,6 @@ async def warm_up_llm() -> None:
 
         llm_timeout = int(Config.get("llm_timeout") or "300")
 
-        litellm_model = get_litellm_model_name(model_name) or model_name
         completion_kwargs: dict[str, Any] = {
             "model": litellm_model,
             "messages": test_messages,
diff --git a/strix/llm/config.py b/strix/llm/config.py
index 1ee2ddd..a2217bb 100644
--- a/strix/llm/config.py
+++ b/strix/llm/config.py
@@ -1,5 +1,6 @@
 from strix.config import Config
 from strix.config.config import resolve_llm_config
+from strix.llm.utils import resolve_strix_model
 
 
 class LLMConfig:
@@ -17,6 +18,10 @@
         if not self.model_name:
             raise ValueError("STRIX_LLM environment variable must be set and not empty")
 
+        api_model, canonical = resolve_strix_model(self.model_name)
+        self.litellm_model: str = api_model or self.model_name
+        self.canonical_model: str = canonical or self.model_name
+
         self.enable_prompt_caching = enable_prompt_caching
         self.skills = skills or []
diff --git a/strix/llm/dedupe.py b/strix/llm/dedupe.py
index 33b3bc9..0ea6088 100644
--- a/strix/llm/dedupe.py
+++ b/strix/llm/dedupe.py
@@ -6,7 +6,7 @@ from typing import Any
 import litellm
 
 from strix.config.config import resolve_llm_config
-from strix.llm.utils import get_litellm_model_name
+from strix.llm.utils import resolve_strix_model
 
 logger = logging.getLogger(__name__)
@@ -157,6 +157,8 @@ def check_duplicate(
     comparison_data = {"candidate": candidate_cleaned, "existing_reports": existing_cleaned}
 
     model_name, api_key, api_base = resolve_llm_config()
+    litellm_model, _ = resolve_strix_model(model_name)
+    litellm_model = litellm_model or model_name
 
     messages = [
         {"role": "system", "content": DEDUPE_SYSTEM_PROMPT},
@@ -170,7 +172,6 @@ def check_duplicate(
         },
     ]
 
-    litellm_model = get_litellm_model_name(model_name) or model_name
     completion_kwargs: dict[str, Any] = {
         "model": litellm_model,
         "messages": messages,
diff --git a/strix/llm/llm.py b/strix/llm/llm.py
index d6373ec..c38bbe1 100644
--- a/strix/llm/llm.py
+++ b/strix/llm/llm.py
@@ -14,7 +14,6 @@ from strix.llm.memory_compressor import MemoryCompressor
 from strix.llm.utils import (
     _truncate_to_first_function,
     fix_incomplete_tool_call,
-    get_litellm_model_name,
     parse_tool_invocations,
 )
 from strix.skills import load_skills
@@ -64,7 +63,7 @@
         self.agent_name = agent_name
         self.agent_id: str | None = None
         self._total_stats = RequestStats()
-        self.memory_compressor = MemoryCompressor(model_name=config.model_name)
+        self.memory_compressor = MemoryCompressor(model_name=config.litellm_model)
         self.system_prompt = self._load_system_prompt(agent_name)
 
         reasoning = Config.get("strix_reasoning_effort")
@@ -190,16 +189,12 @@
 
         return messages
 
-    def _get_litellm_model_name(self) -> str:
-        model = self.config.model_name  # Validated non-empty in LLMConfig.__init__
-        return get_litellm_model_name(model) or model  # type: ignore[return-value]
-
     def _build_completion_args(self, messages: list[dict[str, Any]]) -> dict[str, Any]:
         if not self._supports_vision():
             messages = self._strip_images(messages)
 
         args: dict[str, Any] = {
-            "model": self._get_litellm_model_name(),
+            "model": self.config.litellm_model,
             "messages": messages,
             "timeout": self.config.timeout,
             "stream_options": {"include_usage": True},
@@ -264,7 +259,7 @@
             if direct_cost is not None:
                 return float(direct_cost)
         try:
-            return completion_cost(response, model=self._get_litellm_model_name()) or 0.0
+            return completion_cost(response, model=self.config.canonical_model) or 0.0
         except Exception:  # noqa: BLE001
             return 0.0
@@ -287,13 +282,13 @@
 
     def _supports_vision(self) -> bool:
         try:
-            return bool(supports_vision(model=self._get_litellm_model_name()))
+            return bool(supports_vision(model=self.config.canonical_model))
         except Exception:  # noqa: BLE001
             return False
 
     def _supports_reasoning(self) -> bool:
         try:
-            return bool(supports_reasoning(model=self._get_litellm_model_name()))
+            return bool(supports_reasoning(model=self.config.canonical_model))
         except Exception:  # noqa: BLE001
             return False
@@ -314,7 +309,7 @@
         return result
 
     def _add_cache_control(self, messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
-        if not messages or not supports_prompt_caching(self._get_litellm_model_name()):
+        if not messages or not supports_prompt_caching(self.config.canonical_model):
             return messages
 
         result = list(messages)
diff --git a/strix/llm/memory_compressor.py b/strix/llm/memory_compressor.py
index 4590972..28730e8 100644
--- a/strix/llm/memory_compressor.py
+++ b/strix/llm/memory_compressor.py
@@ -4,7 +4,6 @@ from typing import Any
 import litellm
 
 from strix.config.config import Config, resolve_llm_config
-from strix.llm.utils import get_litellm_model_name
 
 logger = logging.getLogger(__name__)
@@ -46,8 +45,7 @@ keeping the summary concise and to the point."""
 
 def _count_tokens(text: str, model: str) -> int:
     try:
-        litellm_model = get_litellm_model_name(model) or model
-        count = litellm.token_counter(model=litellm_model, text=text)
+        count = litellm.token_counter(model=model, text=text)
         return int(count)
     except Exception:
         logger.exception("Failed to count tokens")
@@ -109,9 +107,8 @@ def _summarize_messages(
     _, api_key, api_base = resolve_llm_config()
 
     try:
-        litellm_model = get_litellm_model_name(model) or model
         completion_args: dict[str, Any] = {
-            "model": litellm_model,
+            "model": model,
             "messages": [{"role": "user", "content": prompt}],
             "timeout": timeout,
         }
@@ -161,7 +158,7 @@
     ):
         self.max_images = max_images
         self.model_name = model_name or Config.get("strix_llm")
-        self.timeout = timeout or int(Config.get("strix_memory_compressor_timeout") or "30")
+        self.timeout = timeout or int(Config.get("strix_memory_compressor_timeout") or "120")
 
         if not self.model_name:
             raise ValueError("STRIX_LLM environment variable must be set and not empty")
diff --git a/strix/llm/utils.py b/strix/llm/utils.py
index c7d83a9..bef04ce 100644
--- a/strix/llm/utils.py
+++ b/strix/llm/utils.py
@@ -3,34 +3,38 @@ import re
 from typing import Any
 
 
-STRIX_PROVIDER_PREFIXES: dict[str, str] = {
-    "claude-": "anthropic",
-    "gpt-": "openai",
-    "gemini-": "gemini",
+STRIX_MODEL_MAP: dict[str, str] = {
+    "claude-sonnet-4.6": "anthropic/claude-sonnet-4-6",
+    "claude-opus-4.6": "anthropic/claude-opus-4-6",
+    "gpt-5.2": "openai/gpt-5.2",
+    "gpt-5.1": "openai/gpt-5.1",
+    "gpt-5": "openai/gpt-5",
+    "gpt-5.2-codex": "openai/gpt-5.2-codex",
+    "gpt-5.1-codex-max": "openai/gpt-5.1-codex-max",
+    "gpt-5.1-codex": "openai/gpt-5.1-codex",
+    "gpt-5-codex": "openai/gpt-5-codex",
+    "gemini-3-pro-preview": "gemini/gemini-3-pro-preview",
+    "gemini-3-flash-preview": "gemini/gemini-3-flash-preview",
+    "glm-5": "openrouter/z-ai/glm-5",
+    "glm-4.7": "openrouter/z-ai/glm-4.7",
 }
 
 
-def get_litellm_model_name(model_name: str | None) -> str | None:
-    """Convert strix/ prefixed model to litellm-compatible provider/model format.
+def resolve_strix_model(model_name: str | None) -> tuple[str | None, str | None]:
+    """Resolve a strix/ model into names for API calls and capability lookups.
 
-    Maps strix/ models to their corresponding litellm provider:
-    - strix/claude-* -> anthropic/claude-*
-    - strix/gpt-* -> openai/gpt-*
-    - strix/gemini-* -> gemini/gemini-*
-    - Other models -> openai/ (routed via Strix API)
+    Returns (api_model, canonical_model):
+    - api_model: openai/ for API calls (Strix API is OpenAI-compatible)
+    - canonical_model: actual provider model name for litellm capability lookups
+    Non-strix models return the same name for both.
     """
-    if not model_name:
-        return model_name
-    if not model_name.startswith("strix/"):
-        return model_name
+    if not model_name or not model_name.startswith("strix/"):
+        return model_name, model_name
 
     base_model = model_name[6:]
-
-    for prefix, provider in STRIX_PROVIDER_PREFIXES.items():
-        if base_model.startswith(prefix):
-            return f"{provider}/{base_model}"
-
-    return f"openai/{base_model}"
+    api_model = f"openai/{base_model}"
+    canonical_model = STRIX_MODEL_MAP.get(base_model, api_model)
+    return api_model, canonical_model
 
 
 def _truncate_to_first_function(content: str) -> str:

From bf8020fafb9d530794b5934375024d3b384884be Mon Sep 17 00:00:00 2001
From: 0xallam
Date: Fri, 20 Feb 2026 06:52:27 -0800
Subject: [PATCH 11/43] fix: Strip custom_llm_provider before cost lookup for proxied models

---
 strix/llm/llm.py | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/strix/llm/llm.py b/strix/llm/llm.py
index c38bbe1..50501aa 100644
--- a/strix/llm/llm.py
+++ b/strix/llm/llm.py
@@ -259,6 +259,8 @@ class LLM:
             if direct_cost is not None:
                 return float(direct_cost)
         try:
+            if hasattr(response, "_hidden_params"):
+                response._hidden_params.pop("custom_llm_provider", None)
             return completion_cost(response, model=self.config.canonical_model) or 0.0
         except Exception:  # noqa: BLE001
             return 0.0

From
f4d522164d36bdfc8a8e8348a185bd55852d1bcc Mon Sep 17 00:00:00 2001
From: 0xallam
Date: Fri, 20 Feb 2026 07:31:45 -0800
Subject: [PATCH 12/43] feat: Normalize alternative tool call formats (invoke/function_calls)

---
 strix/agents/StrixAgent/system_prompt.jinja | 24 ++++++++++--
 strix/interface/streaming_parser.py         |  8 +++-
 strix/llm/llm.py                            | 12 ++++--
 strix/llm/utils.py                          | 43 +++++++++++++++++----
 4 files changed, 70 insertions(+), 17 deletions(-)

diff --git a/strix/agents/StrixAgent/system_prompt.jinja b/strix/agents/StrixAgent/system_prompt.jinja
index bde3157..bcc1359 100644
--- a/strix/agents/StrixAgent/system_prompt.jinja
+++ b/strix/agents/StrixAgent/system_prompt.jinja
@@ -314,13 +314,29 @@ CRITICAL RULES:
 4. Use ONLY the exact format shown above. NEVER use JSON/YAML/INI or any other syntax for tools or parameters.
 5. When sending ANY multi-line content in tool parameters, use real newlines (actual line breaks). Do NOT emit literal "\n" sequences. Literal "\n" instead of real line breaks will cause tools to fail.
 6. Tool names must match exactly the tool "name" defined (no module prefixes, dots, or variants).
-   - Correct: ...
-   - Incorrect: ...
-   - Incorrect: ...
-   - Incorrect: {"think": {...}}
 7. Parameters must use value exactly. Do NOT pass parameters as JSON or key:value lines. Do NOT add quotes/braces around values.
 8. Do NOT wrap tool calls in markdown/code fences or add any text before or after the tool block.
+CORRECT format — use this EXACTLY:
+
+value
+
+
+WRONG formats — NEVER use these:
+- value
+- ...
+- ...
+- {"tool_name": {"param_name": "value"}}
+- ```...```
+
+Do NOT emit any extra XML tags in your output. In particular:
+- NO ... or ... blocks
+- NO ... or ... blocks
+- NO ... or ... wrappers
+If you need to reason, use the think tool. Your raw output must contain ONLY the tool call — no surrounding XML tags.
+
+Notice: use NOT , use NOT , use NOT .
+
 Example (agent creation tool):
 Perform targeted XSS testing on the search endpoint
diff --git a/strix/interface/streaming_parser.py b/strix/interface/streaming_parser.py
index 95e9523..2ea69fa 100644
--- a/strix/interface/streaming_parser.py
+++ b/strix/interface/streaming_parser.py
@@ -3,8 +3,11 @@
 import re
 from dataclasses import dataclass
 from typing import Literal
 
+from strix.llm.utils import normalize_tool_format
+
 _FUNCTION_TAG_PREFIX = "]+)>")
 _FUNC_END_PATTERN = re.compile(r"")
@@ -21,9 +24,8 @@ def _get_safe_content(content: str) -> tuple[str, str]:
         return content, ""
 
     suffix = content[last_lt:]
-    target = _FUNCTION_TAG_PREFIX  # " list[StreamSegment]:
     if not content:
         return []
 
+    content = normalize_tool_format(content)
+
     segments: list[StreamSegment] = []
 
     func_matches = list(_FUNC_PATTERN.finditer(content))
diff --git a/strix/llm/llm.py b/strix/llm/llm.py
index 50501aa..0cace0e 100644
--- a/strix/llm/llm.py
+++ b/strix/llm/llm.py
@@ -14,6 +14,7 @@ from strix.llm.memory_compressor import MemoryCompressor
 from strix.llm.utils import (
     _truncate_to_first_function,
     fix_incomplete_tool_call,
+    normalize_tool_format,
     parse_tool_invocations,
 )
 from strix.skills import load_skills
@@ -143,10 +144,12 @@ class LLM:
                 delta = self._get_chunk_content(chunk)
                 if delta:
                     accumulated += delta
-                    if "" in accumulated:
-                        accumulated = accumulated[
-                            : accumulated.find("") + len("")
-                        ]
+                    if "" in accumulated or "" in accumulated:
+                        for end_tag in ("", ""):
+                            pos = accumulated.find(end_tag)
+                            if pos != -1:
+                                accumulated = accumulated[: pos + len(end_tag)]
+                                break
                         yield LLMResponse(content=accumulated)
                         done_streaming = 1
                         continue
@@ -155,6 +158,7 @@ class LLM:
         if chunks:
             self._update_usage_stats(stream_chunk_builder(chunks))
 
+        accumulated = normalize_tool_format(accumulated)
         accumulated = fix_incomplete_tool_call(_truncate_to_first_function(accumulated))
         yield LLMResponse(
             content=accumulated,
diff --git a/strix/llm/utils.py b/strix/llm/utils.py
index bef04ce..56caa34 100644
--- a/strix/llm/utils.py
+++ b/strix/llm/utils.py
@@ -3,6 +3,29 @@
 import re
 from typing import Any
 
 
+_INVOKE_OPEN = re.compile(r'')
+_PARAM_NAME_ATTR = re.compile(r'')
+_FUNCTION_CALLS_TAG = re.compile(r"")
+
+
+def normalize_tool_format(content: str) -> str:
+    """Convert alternative tool-call XML format to the expected one.
+
+    Handles:
+    ... → stripped
+
+
+    →
+    """
+    if "", content)
+    content = _PARAM_NAME_ATTR.sub(r"", content)
+    return content.replace("", "")
+
+
 STRIX_MODEL_MAP: dict[str, str] = {
     "claude-sonnet-4.6": "anthropic/claude-sonnet-4-6",
     "claude-opus-4.6": "anthropic/claude-opus-4-6",
@@ -41,7 +64,9 @@ def _truncate_to_first_function(content: str) -> str:
     if not content:
         return content
 
-    function_starts = [match.start() for match in re.finditer(r"= 2:
         second_function_start = function_starts[1]
@@ -52,6 +77,7 @@
 
 
 def parse_tool_invocations(content: str) -> list[dict[str, Any]] | None:
+    content = normalize_tool_format(content)
     content = fix_incomplete_tool_call(content)
 
     tool_invocations: list[dict[str, Any]] = []
@@ -81,12 +107,14 @@
 
 
 def fix_incomplete_tool_call(content: str) -> str:
-    """Fix incomplete tool calls by adding missing tag."""
-    if (
-        "" not in content
-    ):
+    """Fix incomplete tool calls by adding missing closing tag.
+
+    Handles both ```` and ```` formats.
+    """
+    has_open = "" in content
+    if has_open and count_open == 1 and not has_close:
         content = content.rstrip()
         content = content + "function>" if content.endswith(""
     return content
@@ -107,6 +135,7 @@ def clean_content(content: str) -> str:
     if not content:
         return ""
 
+    content = normalize_tool_format(content)
     content = fix_incomplete_tool_call(content)
 
     tool_pattern = r"]+>.*?"

From 7614fcc512d96bcc0a41c660a2fe7affe36e8242 Mon Sep 17 00:00:00 2001
From: 0xallam
Date: Fri, 20 Feb 2026 07:47:08 -0800
Subject: [PATCH 13/43] fix: Strip quotes from parameter/function names in tool calls

---
 strix/llm/utils.py | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/strix/llm/utils.py b/strix/llm/utils.py
index 56caa34..b1c244f 100644
--- a/strix/llm/utils.py
+++ b/strix/llm/utils.py
@@ -6,24 +6,29 @@
 _INVOKE_OPEN = re.compile(r'')
 _PARAM_NAME_ATTR = re.compile(r'')
 _FUNCTION_CALLS_TAG = re.compile(r"")
+_QUOTED_FUNCTION = re.compile(r'')
+_QUOTED_PARAMETER = re.compile(r'')
 
 
 def normalize_tool_format(content: str) -> str:
-    """Convert alternative tool-call XML format to the expected one.
+    """Convert alternative tool-call XML formats to the expected one.
 
     Handles:
     ... → stripped
 
 
     →
+
+
     """
-    if "", content)
+    content = _PARAM_NAME_ATTR.sub(r"", content)
+    content = content.replace("", "")
 
-    content = _FUNCTION_CALLS_TAG.sub("", content)
-    content = _INVOKE_OPEN.sub(r"", content)
-    content = _PARAM_NAME_ATTR.sub(r"", content)
-    return content.replace("", "")
+    content = _QUOTED_FUNCTION.sub(r"", content)
+    return _QUOTED_PARAMETER.sub(r"", content)
 
 
 STRIX_MODEL_MAP: dict[str, str] = {

From e7970de6d2263288ef70daba9873cf77f65cbfdc Mon Sep 17 00:00:00 2001
From: 0xallam
Date: Fri, 20 Feb 2026 07:48:54 -0800
Subject: [PATCH 14/43] fix: Handle single-quoted and whitespace-padded tool call tags

---
 strix/llm/utils.py | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/strix/llm/utils.py b/strix/llm/utils.py
index b1c244f..b866661 100644
--- a/strix/llm/utils.py
+++ b/strix/llm/utils.py
@@ -3,11 +3,11 @@
 from typing import Any
 
 
-_INVOKE_OPEN = re.compile(r'')
-_PARAM_NAME_ATTR =
re.compile(r'') +_QUOTED_FUNCTION = re.compile(r"""""") +_QUOTED_PARAMETER = re.compile(r"""""") def normalize_tool_format(content: str) -> str: @@ -28,7 +28,10 @@ def normalize_tool_format(content: str) -> str: content = content.replace("", "") content = _QUOTED_FUNCTION.sub(r"", content) - return _QUOTED_PARAMETER.sub(r"", content) + content = _QUOTED_PARAMETER.sub(r"", content) + + content = re.sub(r"Continue the task."}) + if self._is_anthropic() and self.config.enable_prompt_caching: messages = self._add_cache_control(messages) diff --git a/strix/llm/memory_compressor.py b/strix/llm/memory_compressor.py index 28730e8..8cad510 100644 --- a/strix/llm/memory_compressor.py +++ b/strix/llm/memory_compressor.py @@ -91,7 +91,7 @@ def _summarize_messages( if not messages: empty_summary = "{text}" return { - "role": "assistant", + "role": "user", "content": empty_summary.format(text="No messages to summarize"), } @@ -123,7 +123,7 @@ def _summarize_messages( return messages[0] summary_msg = "{text}" return { - "role": "assistant", + "role": "user", "content": summary_msg.format(count=len(messages), text=summary), } except Exception: From b9dcf7f63d683a379812ab5911316f36764f0df0 Mon Sep 17 00:00:00 2001 From: 0xallam Date: Fri, 20 Feb 2026 08:08:59 -0800 Subject: [PATCH 16/43] fix: Address code review feedback on tool format normalization --- strix/llm/llm.py | 8 +++----- strix/llm/utils.py | 4 ++-- 2 files changed, 5 insertions(+), 7 deletions(-) diff --git a/strix/llm/llm.py b/strix/llm/llm.py index 507e8c8..d941361 100644 --- a/strix/llm/llm.py +++ b/strix/llm/llm.py @@ -145,11 +145,9 @@ class LLM: if delta: accumulated += delta if "" in accumulated or "" in accumulated: - for end_tag in ("", ""): - pos = accumulated.find(end_tag) - if pos != -1: - accumulated = accumulated[: pos + len(end_tag)] - break + end_tag = "" if "" in accumulated else "" + pos = accumulated.find(end_tag) + accumulated = accumulated[: pos + len(end_tag)] yield LLMResponse(content=accumulated) 
done_streaming = 1 continue diff --git a/strix/llm/utils.py b/strix/llm/utils.py index b866661..1ba6a5f 100644 --- a/strix/llm/utils.py +++ b/strix/llm/utils.py @@ -3,8 +3,8 @@ import re from typing import Any -_INVOKE_OPEN = re.compile(r'') -_PARAM_NAME_ATTR = re.compile(r'') +_INVOKE_OPEN = re.compile(r'') +_PARAM_NAME_ATTR = re.compile(r'') _FUNCTION_CALLS_TAG = re.compile(r"") _QUOTED_FUNCTION = re.compile(r"""""") _QUOTED_PARAMETER = re.compile(r"""""") From 027cea2f2565e91ef64a2a001e7cbe487a9a34ad Mon Sep 17 00:00:00 2001 From: 0xallam Date: Fri, 20 Feb 2026 08:26:44 -0800 Subject: [PATCH 17/43] fix: Handle stray quotes in tag names and enforce parameter tags in prompt --- strix/agents/StrixAgent/system_prompt.jinja | 8 ++++++++ strix/llm/utils.py | 11 ++++------- 2 files changed, 12 insertions(+), 7 deletions(-) diff --git a/strix/agents/StrixAgent/system_prompt.jinja b/strix/agents/StrixAgent/system_prompt.jinja index bcc1359..36c8850 100644 --- a/strix/agents/StrixAgent/system_prompt.jinja +++ b/strix/agents/StrixAgent/system_prompt.jinja @@ -328,6 +328,9 @@ WRONG formats — NEVER use these: - ... - {"tool_name": {"param_name": "value"}} - ```...``` +- value_without_parameter_tags + +EVERY argument MUST be wrapped in ... tags. NEVER put values directly in the function body without parameter tags. This WILL cause the tool call to fail. Do NOT emit any extra XML tags in your output. In particular: - NO ... or ... blocks @@ -337,6 +340,11 @@ If you need to reason, use the think tool. Your raw output must contain ONLY the Notice: use NOT , use NOT , use NOT . 
+Example (terminal tool): + +nmap -sV -p 1-1000 target.com + + Example (agent creation tool): Perform targeted XSS testing on the search endpoint diff --git a/strix/llm/utils.py b/strix/llm/utils.py index 1ba6a5f..8ab1693 100644 --- a/strix/llm/utils.py +++ b/strix/llm/utils.py @@ -6,8 +6,7 @@ from typing import Any _INVOKE_OPEN = re.compile(r'') _PARAM_NAME_ATTR = re.compile(r'') _FUNCTION_CALLS_TAG = re.compile(r"") -_QUOTED_FUNCTION = re.compile(r"""""") -_QUOTED_PARAMETER = re.compile(r"""""") +_STRIP_TAG_QUOTES = re.compile(r"<(function|parameter)\s*=\s*([^>]*?)>") def normalize_tool_format(content: str) -> str: @@ -27,11 +26,9 @@ def normalize_tool_format(content: str) -> str: content = _PARAM_NAME_ATTR.sub(r"", content) content = content.replace("", "") - content = _QUOTED_FUNCTION.sub(r"", content) - content = _QUOTED_PARAMETER.sub(r"", content) - - content = re.sub(r"", content + ) STRIX_MODEL_MAP: dict[str, str] = { From 7fb4b63b96fda17ff8d60b7bd9e93c8c50e29537 Mon Sep 17 00:00:00 2001 From: 0xallam Date: Fri, 20 Feb 2026 10:35:58 -0800 Subject: [PATCH 18/43] fix: Change default model from claude-sonnet-4-6 to gpt-5 across docs and code --- CONTRIBUTING.md | 2 +- README.md | 6 +++--- docs/advanced/configuration.mdx | 6 +++--- docs/contributing.mdx | 2 +- docs/index.mdx | 2 +- docs/integrations/github-actions.mdx | 2 +- docs/llm-providers/anthropic.mdx | 4 ++-- docs/llm-providers/models.mdx | 2 +- docs/llm-providers/overview.mdx | 10 +++++----- docs/quickstart.mdx | 6 +++--- scripts/install.sh | 2 +- strix/interface/main.py | 6 +++--- 12 files changed, 25 insertions(+), 25 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index e272e0b..d6b9a64 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -30,7 +30,7 @@ Thank you for your interest in contributing to Strix! This guide will help you g 3. 
**Configure your LLM provider** ```bash - export STRIX_LLM="anthropic/claude-sonnet-4-6" + export STRIX_LLM="openai/gpt-5" export LLM_API_KEY="your-api-key" ``` diff --git a/README.md b/README.md index fd79f86..e13edc0 100644 --- a/README.md +++ b/README.md @@ -86,7 +86,7 @@ curl -sSL https://strix.ai/install | bash pipx install strix-agent # Configure your AI provider -export STRIX_LLM="anthropic/claude-sonnet-4-6" # or "strix/claude-sonnet-4.6" via Strix Router (https://models.strix.ai) +export STRIX_LLM="openai/gpt-5" # or "strix/gpt-5" via Strix Router (https://models.strix.ai) export LLM_API_KEY="your-api-key" # Run your first security assessment @@ -203,7 +203,7 @@ jobs: ### Configuration ```bash -export STRIX_LLM="anthropic/claude-sonnet-4-6" +export STRIX_LLM="openai/gpt-5" export LLM_API_KEY="your-api-key" # Optional @@ -217,8 +217,8 @@ export STRIX_REASONING_EFFORT="high" # control thinking effort (default: high, **Recommended models for best results:** -- [Anthropic Claude Sonnet 4.6](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-6` - [OpenAI GPT-5](https://openai.com/api/) — `openai/gpt-5` +- [Anthropic Claude Sonnet 4.6](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-6` - [Google Gemini 3 Pro Preview](https://cloud.google.com/vertex-ai) — `vertex_ai/gemini-3-pro-preview` See the [LLM Providers documentation](https://docs.strix.ai/llm-providers/overview) for all supported providers including Vertex AI, Bedrock, Azure, and local models. diff --git a/docs/advanced/configuration.mdx b/docs/advanced/configuration.mdx index bd71c68..d82f822 100644 --- a/docs/advanced/configuration.mdx +++ b/docs/advanced/configuration.mdx @@ -8,7 +8,7 @@ Configure Strix using environment variables or a config file. ## LLM Configuration - Model name in LiteLLM format (e.g., `anthropic/claude-sonnet-4-6`, `openai/gpt-5`). + Model name in LiteLLM format (e.g., `openai/gpt-5`, `anthropic/claude-sonnet-4-6`). 
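The `provider/model-name` convention used by `STRIX_LLM` splits on the first slash. This is a minimal sketch of that format (an illustration, not Strix's or LiteLLM's actual parsing code):

```python
def split_model_string(model: str) -> tuple[str, str]:
    """Split a LiteLLM-style model string into (provider, model name).

    Splits on the FIRST slash only, so model names containing dots or
    colons (e.g. Bedrock model IDs) pass through intact.
    """
    provider, sep, name = model.partition("/")
    if not sep or not name:
        raise ValueError(f"expected 'provider/model-name', got {model!r}")
    return provider, name


print(split_model_string("openai/gpt-5"))  # → ('openai', 'gpt-5')
print(split_model_string("vertex_ai/gemini-3-pro-preview"))
```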
@@ -86,7 +86,7 @@ strix --target ./app --config /path/to/config.json ```json { "env": { - "STRIX_LLM": "anthropic/claude-sonnet-4-6", + "STRIX_LLM": "openai/gpt-5", "LLM_API_KEY": "sk-...", "STRIX_REASONING_EFFORT": "high" } @@ -97,7 +97,7 @@ strix --target ./app --config /path/to/config.json ```bash # Required -export STRIX_LLM="anthropic/claude-sonnet-4-6" +export STRIX_LLM="openai/gpt-5" export LLM_API_KEY="sk-..." # Optional: Enable web search diff --git a/docs/contributing.mdx b/docs/contributing.mdx index ffa3192..b2e50a0 100644 --- a/docs/contributing.mdx +++ b/docs/contributing.mdx @@ -32,7 +32,7 @@ description: "Contribute to Strix development" ```bash - export STRIX_LLM="anthropic/claude-sonnet-4-6" + export STRIX_LLM="openai/gpt-5" export LLM_API_KEY="your-api-key" ``` diff --git a/docs/index.mdx b/docs/index.mdx index 14de192..ef5ab9a 100644 --- a/docs/index.mdx +++ b/docs/index.mdx @@ -78,7 +78,7 @@ Strix uses a graph of specialized agents for comprehensive security testing: curl -sSL https://strix.ai/install | bash # Configure -export STRIX_LLM="anthropic/claude-sonnet-4-6" +export STRIX_LLM="openai/gpt-5" export LLM_API_KEY="your-api-key" # Scan diff --git a/docs/integrations/github-actions.mdx b/docs/integrations/github-actions.mdx index fcc5eb9..827dce0 100644 --- a/docs/integrations/github-actions.mdx +++ b/docs/integrations/github-actions.mdx @@ -35,7 +35,7 @@ Add these secrets to your repository: | Secret | Description | |--------|-------------| -| `STRIX_LLM` | Model name (e.g., `anthropic/claude-sonnet-4-6`) | +| `STRIX_LLM` | Model name (e.g., `openai/gpt-5`) | | `LLM_API_KEY` | API key for your LLM provider | ## Exit Codes diff --git a/docs/llm-providers/anthropic.mdx b/docs/llm-providers/anthropic.mdx index b7b3085..47a94be 100644 --- a/docs/llm-providers/anthropic.mdx +++ b/docs/llm-providers/anthropic.mdx @@ -6,7 +6,7 @@ description: "Configure Strix with Claude models" ## Setup ```bash -export STRIX_LLM="anthropic/claude-sonnet-4-6" 
+export STRIX_LLM="openai/gpt-5" export LLM_API_KEY="sk-ant-..." ``` @@ -14,7 +14,7 @@ export LLM_API_KEY="sk-ant-..." | Model | Description | |-------|-------------| -| `anthropic/claude-sonnet-4-6` | Best balance of intelligence and speed (recommended) | +| `anthropic/claude-sonnet-4-6` | Best balance of intelligence and speed | | `anthropic/claude-opus-4-6` | Maximum capability for deep analysis | ## Get API Key diff --git a/docs/llm-providers/models.mdx b/docs/llm-providers/models.mdx index 54007a9..6c26da1 100644 --- a/docs/llm-providers/models.mdx +++ b/docs/llm-providers/models.mdx @@ -25,7 +25,7 @@ Strix Router is currently in **beta**. It's completely optional — Strix works ```bash export LLM_API_KEY='your-strix-api-key' -export STRIX_LLM='strix/claude-sonnet-4.6' +export STRIX_LLM='strix/gpt-5' ``` 3. Run a scan: diff --git a/docs/llm-providers/overview.mdx b/docs/llm-providers/overview.mdx index 567af50..b3df76d 100644 --- a/docs/llm-providers/overview.mdx +++ b/docs/llm-providers/overview.mdx @@ -10,7 +10,7 @@ Strix uses [LiteLLM](https://docs.litellm.ai/docs/providers) for model compatibi The fastest way to get started. [Strix Router](/llm-providers/models) gives you access to tested models with the highest rate limits and zero data retention. 
```bash -export STRIX_LLM="strix/claude-sonnet-4.6" +export STRIX_LLM="strix/gpt-5" export LLM_API_KEY="your-strix-api-key" ``` @@ -22,12 +22,12 @@ You can also use any LiteLLM-compatible provider with your own API keys: | Model | Provider | Configuration | | ----------------- | ------------- | -------------------------------- | -| Claude Sonnet 4.6 | Anthropic | `anthropic/claude-sonnet-4-6` | | GPT-5 | OpenAI | `openai/gpt-5` | +| Claude Sonnet 4.6 | Anthropic | `anthropic/claude-sonnet-4-6` | | Gemini 3 Pro | Google Vertex | `vertex_ai/gemini-3-pro-preview` | ```bash -export STRIX_LLM="anthropic/claude-sonnet-4-6" +export STRIX_LLM="openai/gpt-5" export LLM_API_KEY="your-api-key" ``` @@ -52,7 +52,7 @@ See the [Local Models guide](/llm-providers/local) for setup instructions and re GPT-5 and Codex models. - Claude Sonnet 4.6, Opus, and Haiku. + Claude Opus, Sonnet, and Haiku. Access 100+ models through a single API. @@ -76,8 +76,8 @@ See the [Local Models guide](/llm-providers/local) for setup instructions and re Use LiteLLM's `provider/model-name` format: ``` -anthropic/claude-sonnet-4-6 openai/gpt-5 +anthropic/claude-sonnet-4-6 vertex_ai/gemini-3-pro-preview bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0 ollama/llama4 diff --git a/docs/quickstart.mdx b/docs/quickstart.mdx index 32eac3d..bd7a8d9 100644 --- a/docs/quickstart.mdx +++ b/docs/quickstart.mdx @@ -30,20 +30,20 @@ Set your LLM provider: ```bash - export STRIX_LLM="strix/claude-sonnet-4.6" + export STRIX_LLM="strix/gpt-5" export LLM_API_KEY="your-strix-api-key" ``` ```bash - export STRIX_LLM="anthropic/claude-sonnet-4-6" + export STRIX_LLM="openai/gpt-5" export LLM_API_KEY="your-api-key" ``` -For best results, use `strix/claude-sonnet-4.6`, `strix/claude-opus-4.6`, or `strix/gpt-5.2`. +For best results, use `strix/gpt-5`, `strix/claude-opus-4.6`, or `strix/gpt-5.2`. 
## Run Your First Scan diff --git a/scripts/install.sh b/scripts/install.sh index 7fb158b..67a0e19 100755 --- a/scripts/install.sh +++ b/scripts/install.sh @@ -340,7 +340,7 @@ echo -e " ${MUTED}https://models.strix.ai${NC}" echo "" echo -e " ${CYAN}2.${NC} Set your environment:" echo -e " ${MUTED}export LLM_API_KEY='your-api-key'${NC}" -echo -e " ${MUTED}export STRIX_LLM='strix/claude-sonnet-4.6'${NC}" +echo -e " ${MUTED}export STRIX_LLM='strix/gpt-5'${NC}" echo "" echo -e " ${CYAN}3.${NC} Run a penetration test:" echo -e " ${MUTED}strix --target https://example.com${NC}" diff --git a/strix/interface/main.py b/strix/interface/main.py index f049bbf..2ccf5d8 100644 --- a/strix/interface/main.py +++ b/strix/interface/main.py @@ -101,7 +101,7 @@ def validate_environment() -> None: # noqa: PLR0912, PLR0915 error_text.append("• ", style="white") error_text.append("STRIX_LLM", style="bold cyan") error_text.append( - " - Model name to use with litellm (e.g., 'anthropic/claude-sonnet-4-6')\n", + " - Model name to use with litellm (e.g., 'openai/gpt-5')\n", style="white", ) @@ -141,9 +141,9 @@ def validate_environment() -> None: # noqa: PLR0912, PLR0915 error_text.append("\nExample setup:\n", style="white") if uses_strix_models: - error_text.append("export STRIX_LLM='strix/claude-sonnet-4.6'\n", style="dim white") + error_text.append("export STRIX_LLM='strix/gpt-5'\n", style="dim white") else: - error_text.append("export STRIX_LLM='anthropic/claude-sonnet-4-6'\n", style="dim white") + error_text.append("export STRIX_LLM='openai/gpt-5'\n", style="dim white") if missing_optional_vars: for var in missing_optional_vars: From 643f6ba54aef3f124786283aad96e8b05a1c122f Mon Sep 17 00:00:00 2001 From: 0xallam Date: Fri, 20 Feb 2026 10:36:48 -0800 Subject: [PATCH 19/43] chore: Bump version to 0.8.1 --- pyproject.toml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pyproject.toml b/pyproject.toml index ab8ca11..f19f1d8 100644 --- a/pyproject.toml +++ b/pyproject.toml 
@@ -1,6 +1,6 @@ [tool.poetry] name = "strix-agent" -version = "0.8.0" +version = "0.8.1" description = "Open-source AI Hackers for your apps" authors = ["Strix "] readme = "README.md" From 551b780f5241e96bad4871a2bc62fea952d2d330 Mon Sep 17 00:00:00 2001 From: Ahmed Allam <49919286+0xallam@users.noreply.github.com> Date: Sun, 22 Feb 2026 00:10:06 +0400 Subject: [PATCH 20/43] Update installation instructions Removed pipx installation instructions for strix-agent. --- README.md | 3 --- 1 file changed, 3 deletions(-) diff --git a/README.md b/README.md index e13edc0..46a3b28 100644 --- a/README.md +++ b/README.md @@ -82,9 +82,6 @@ Strix are autonomous AI agents that act just like real hackers - they run your c # Install Strix curl -sSL https://strix.ai/install | bash -# Or via pipx -pipx install strix-agent - # Configure your AI provider export STRIX_LLM="openai/gpt-5" # or "strix/gpt-5" via Strix Router (https://models.strix.ai) export LLM_API_KEY="your-api-key" From 522c010f6fcb0878dc1d24cc56a8d11402af1937 Mon Sep 17 00:00:00 2001 From: 0xallam Date: Sun, 22 Feb 2026 09:03:05 -0800 Subject: [PATCH 21/43] fix: Update end screen to display models.strix.ai instead of strix.ai and discord --- strix/interface/main.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/strix/interface/main.py b/strix/interface/main.py index 2ccf5d8..85c0427 100644 --- a/strix/interface/main.py +++ b/strix/interface/main.py @@ -462,7 +462,7 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) -> console.print("\n") console.print(panel) console.print() - console.print("[#60a5fa]strix.ai[/] [dim]·[/] [#60a5fa]discord.gg/strix-ai[/]") + console.print("[#60a5fa]models.strix.ai[/]") console.print() From 00c571b2cad5372c3b2f911eaab79cf5b9e7d1cc Mon Sep 17 00:00:00 2001 From: 0xallam Date: Sun, 22 Feb 2026 09:28:52 -0800 Subject: [PATCH 22/43] fix: Lower sidebar min width from 140 to 120 for smaller terminals --- strix/interface/tui.py | 2 +- 1 file 
changed, 1 insertion(+), 1 deletion(-) diff --git a/strix/interface/tui.py b/strix/interface/tui.py index cb1adff..eeaa2ea 100644 --- a/strix/interface/tui.py +++ b/strix/interface/tui.py @@ -687,7 +687,7 @@ class StrixTUIApp(App): # type: ignore[misc] CSS_PATH = "assets/tui_styles.tcss" ALLOW_SELECT = True - SIDEBAR_MIN_WIDTH = 140 + SIDEBAR_MIN_WIDTH = 120 selected_agent_id: reactive[str | None] = reactive(default=None) show_splash: reactive[bool] = reactive(default=True) From 939bc2a09015a70442009a03fe6880c982d4db65 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Sat, 21 Feb 2026 22:08:06 +0000 Subject: [PATCH 23/43] chore(deps): bump google-cloud-aiplatform from 1.129.0 to 1.133.0 Bumps [google-cloud-aiplatform](https://github.com/googleapis/python-aiplatform) from 1.129.0 to 1.133.0. - [Release notes](https://github.com/googleapis/python-aiplatform/releases) - [Changelog](https://github.com/googleapis/python-aiplatform/blob/main/CHANGELOG.md) - [Commits](https://github.com/googleapis/python-aiplatform/compare/v1.129.0...v1.133.0) --- updated-dependencies: - dependency-name: google-cloud-aiplatform dependency-version: 1.133.0 dependency-type: direct:production ... 
Signed-off-by: dependabot[bot] --- poetry.lock | 142 +++++++++++++--------------------------------------- 1 file changed, 34 insertions(+), 108 deletions(-) diff --git a/poetry.lock b/poetry.lock index 9367ef4..8b742c7 100644 --- a/poetry.lock +++ b/poetry.lock @@ -190,7 +190,7 @@ description = "Python graph (network) package" optional = false python-versions = "*" groups = ["dev"] -markers = "python_version <= \"3.14\"" +markers = "python_version < \"3.15\"" files = [ {file = "altgraph-0.17.5-py2.py3-none-any.whl", hash = "sha256:f3a22400bce1b0c701683820ac4f3b159cd301acab067c51c653e06961600597"}, {file = "altgraph-0.17.5.tar.gz", hash = "sha256:c87b395dd12fabde9c99573a9749d67da8d29ef9de0125c7f536699b4a9bc9e7"}, @@ -324,7 +324,7 @@ description = "LTS Port of Python audioop" optional = true python-versions = ">=3.13" groups = ["main"] -markers = "extra == \"sandbox\" and python_version >= \"3.13\"" +markers = "python_version >= \"3.13\" and extra == \"sandbox\"" files = [ {file = "audioop_lts-0.2.2-cp313-abi3-macosx_10_13_universal2.whl", hash = "sha256:fd3d4602dc64914d462924a08c1a9816435a2155d74f325853c1f1ac3b2d9800"}, {file = "audioop_lts-0.2.2-cp313-abi3-macosx_10_13_x86_64.whl", hash = "sha256:550c114a8df0aafe9a05442a1162dfc8fec37e9af1d625ae6060fed6e756f303"}, @@ -622,7 +622,7 @@ description = "Extensible memoizing collections and decorators" optional = true python-versions = ">=3.7" groups = ["main"] -markers = "extra == \"vertex\" or extra == \"sandbox\"" +markers = "extra == \"sandbox\"" files = [ {file = "cachetools-5.5.2-py3-none-any.whl", hash = "sha256:d26a22bcc62eb95c3beabd9f1ee5e820d3d2704fe2967cbe350e20c8ffcd3f0a"}, {file = "cachetools-5.5.2.tar.gz", hash = "sha256:1a661caa9175d26759571b2e19580f9d6393969e5dfca11fdb1f947a23e640d4"}, @@ -890,7 +890,7 @@ files = [ {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"}, {file = "colorama-0.4.6.tar.gz", hash = 
"sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"}, ] -markers = {main = "sys_platform == \"win32\" and extra == \"sandbox\" or platform_system == \"Windows\"", dev = "platform_system == \"Windows\" or sys_platform == \"win32\""} +markers = {main = "extra == \"sandbox\" and sys_platform == \"win32\" or platform_system == \"Windows\"", dev = "platform_system == \"Windows\" or sys_platform == \"win32\""} [[package]] name = "contourpy" @@ -1850,50 +1850,51 @@ grpcio-gcp = ["grpcio-gcp (>=0.2.2,<1.0.0)"] [[package]] name = "google-auth" -version = "2.43.0" +version = "2.48.0" description = "Google Authentication Library" optional = true -python-versions = ">=3.7" +python-versions = ">=3.8" groups = ["main"] markers = "extra == \"vertex\"" files = [ - {file = "google_auth-2.43.0-py2.py3-none-any.whl", hash = "sha256:af628ba6fa493f75c7e9dbe9373d148ca9f4399b5ea29976519e0a3848eddd16"}, - {file = "google_auth-2.43.0.tar.gz", hash = "sha256:88228eee5fc21b62a1b5fe773ca15e67778cb07dc8363adcb4a8827b52d81483"}, + {file = "google_auth-2.48.0-py3-none-any.whl", hash = "sha256:2e2a537873d449434252a9632c28bfc268b0adb1e53f9fb62afc5333a975903f"}, + {file = "google_auth-2.48.0.tar.gz", hash = "sha256:4f7e706b0cd3208a3d940a19a822c37a476ddba5450156c3e6624a71f7c841ce"}, ] [package.dependencies] -cachetools = ">=2.0.0,<7.0" +cryptography = ">=38.0.3" pyasn1-modules = ">=0.2.1" requests = {version = ">=2.20.0,<3.0.0", optional = true, markers = "extra == \"requests\""} rsa = ">=3.1.4,<5" [package.extras] aiohttp = ["aiohttp (>=3.6.2,<4.0.0)", "requests (>=2.20.0,<3.0.0)"] -enterprise-cert = ["cryptography", "pyopenssl"] -pyjwt = ["cryptography (<39.0.0) ; python_version < \"3.8\"", "cryptography (>=38.0.3)", "pyjwt (>=2.0)"] -pyopenssl = ["cryptography (<39.0.0) ; python_version < \"3.8\"", "cryptography (>=38.0.3)", "pyopenssl (>=20.0.0)"] +cryptography = ["cryptography (>=38.0.3)"] +enterprise-cert = ["pyopenssl"] +pyjwt = ["pyjwt (>=2.0)"] +pyopenssl = ["pyopenssl 
(>=20.0.0)"] reauth = ["pyu2f (>=0.1.5)"] requests = ["requests (>=2.20.0,<3.0.0)"] -testing = ["aiohttp (<3.10.0)", "aiohttp (>=3.6.2,<4.0.0)", "aioresponses", "cryptography (<39.0.0) ; python_version < \"3.8\"", "cryptography (<39.0.0) ; python_version < \"3.8\"", "cryptography (>=38.0.3)", "cryptography (>=38.0.3)", "flask", "freezegun", "grpcio", "mock", "oauth2client", "packaging", "pyjwt (>=2.0)", "pyopenssl (<24.3.0)", "pyopenssl (>=20.0.0)", "pytest", "pytest-asyncio", "pytest-cov", "pytest-localserver", "pyu2f (>=0.1.5)", "requests (>=2.20.0,<3.0.0)", "responses", "urllib3"] +testing = ["aiohttp (<3.10.0)", "aiohttp (>=3.6.2,<4.0.0)", "aioresponses", "flask", "freezegun", "grpcio", "oauth2client", "packaging", "pyjwt (>=2.0)", "pyopenssl (<24.3.0)", "pyopenssl (>=20.0.0)", "pytest", "pytest-asyncio", "pytest-cov", "pytest-localserver", "pyu2f (>=0.1.5)", "requests (>=2.20.0,<3.0.0)", "responses", "urllib3"] urllib3 = ["packaging", "urllib3"] [[package]] name = "google-cloud-aiplatform" -version = "1.129.0" +version = "1.133.0" description = "Vertex AI API client library" optional = true python-versions = ">=3.9" groups = ["main"] markers = "extra == \"vertex\"" files = [ - {file = "google_cloud_aiplatform-1.129.0-py2.py3-none-any.whl", hash = "sha256:b0052143a1bc05894e59fc6f910e84c504e194fadf877f84fc790b38a2267739"}, - {file = "google_cloud_aiplatform-1.129.0.tar.gz", hash = "sha256:c53b9d6c529b4de2962b34425b0116f7a382a926b26e02c2196e372f9a31d196"}, + {file = "google_cloud_aiplatform-1.133.0-py2.py3-none-any.whl", hash = "sha256:dfc81228e987ca10d1c32c7204e2131b3c8d6b7c8e0b4e23bf7c56816bc4c566"}, + {file = "google_cloud_aiplatform-1.133.0.tar.gz", hash = "sha256:3a6540711956dd178daaab3c2c05db476e46d94ac25912b8cf4f59b00b058ae0"}, ] [package.dependencies] docstring_parser = "<1" google-api-core = {version = ">=1.34.1,<2.0.dev0 || >=2.8.dev0,<3.0.0", extras = ["grpc"]} -google-auth = ">=2.14.1,<3.0.0" +google-auth = ">=2.47.0,<3.0.0" google-cloud-bigquery = 
">=1.15.0,<3.20.0 || >3.20.0,<4.0.0" google-cloud-resource-manager = ">=1.3.3,<3.0.0" google-cloud-storage = [ @@ -1905,7 +1906,6 @@ packaging = ">=14.3" proto-plus = ">=1.22.3,<2.0.0" protobuf = ">=3.20.2,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<7.0.0" pydantic = "<3" -shapely = "<3.0.0" typing_extensions = "*" [package.extras] @@ -1918,21 +1918,21 @@ cloud-profiler = ["tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "werkzeug (>=2 datasets = ["pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\""] endpoint = ["requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)"] evaluation = ["jsonschema", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "pandas (>=1.0.0)", "pyyaml", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "tqdm (>=4.23.0)"] -full = ["docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "fastapi (>=0.71.0,<=0.114.0)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-vizier (>=0.1.6)", "httpx (>=0.23.0,<=0.28.1)", "immutabledict", "jsonschema", "lit-nlp (==0.4.0) ; python_version < \"3.14\"", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "mlflow (>=1.27.0) ; python_version >= \"3.13\"", "mlflow (>=1.27.0,<=2.16.0) ; python_version < \"3.13\"", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\"", "pyarrow (>=6.0.1)", "pyyaml", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)", "ruamel.yaml", "scikit-learn (<1.6.0) ; 
python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "starlette (>=0.17.1)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<4.0.0)"] +full = ["docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "fastapi (>=0.71.0,<=0.124.4)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-vizier (>=0.1.6)", "httpx (>=0.23.0,<=0.28.1)", "immutabledict", "jsonschema", "lit-nlp (==0.4.0) ; python_version < \"3.13\"", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "mlflow (>=1.27.0) ; python_version >= \"3.13\"", "mlflow (>=1.27.0,<=2.16.0) ; python_version < \"3.13\"", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\"", "pyarrow (>=6.0.1)", "pyyaml", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "starlette (>=0.17.1)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<4.0.0)"] langchain = ["langchain (>=0.3,<0.4)", "langchain-core (>=0.3,<0.4)", "langchain-google-vertexai (>=2.0.22,<3)", "langgraph (>=0.2.45,<0.4)", "openinference-instrumentation-langchain (>=0.1.19,<0.2)"] langchain-testing = ["absl-py", 
"cloudpickle (>=3.0,<4.0)", "google-cloud-trace (<2)", "langchain (>=0.3,<0.4)", "langchain-core (>=0.3,<0.4)", "langchain-google-vertexai (>=2.0.22,<3)", "langgraph (>=0.2.45,<0.4)", "openinference-instrumentation-langchain (>=0.1.19,<0.2)", "opentelemetry-exporter-gcp-logging (>=1.11.0a0,<2.0.0)", "opentelemetry-exporter-gcp-trace (<2)", "opentelemetry-exporter-otlp-proto-http (<2)", "opentelemetry-sdk (<2)", "pydantic (>=2.11.1,<3)", "pytest-xdist", "typing_extensions"] -lit = ["explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "lit-nlp (==0.4.0) ; python_version < \"3.14\"", "pandas (>=1.0.0)", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\""] +lit = ["explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "lit-nlp (==0.4.0) ; python_version < \"3.13\"", "pandas (>=1.0.0)", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\""] llama-index = ["llama-index", "llama-index-llms-google-genai", "openinference-instrumentation-llama-index (>=3.0,<4.0)"] llama-index-testing = ["absl-py", "cloudpickle (>=3.0,<4.0)", "google-cloud-trace (<2)", "llama-index", "llama-index-llms-google-genai", "openinference-instrumentation-llama-index (>=3.0,<4.0)", "opentelemetry-exporter-gcp-logging (>=1.11.0a0,<2.0.0)", "opentelemetry-exporter-gcp-trace (<2)", "opentelemetry-exporter-otlp-proto-http (<2)", "opentelemetry-sdk (<2)", "pydantic (>=2.11.1,<3)", "pytest-xdist", "typing_extensions"] metadata = ["numpy (>=1.15.0)", "pandas (>=1.0.0)"] pipelines = ["pyyaml (>=5.3.1,<7)"] -prediction = ["docker (>=5.0.3)", "fastapi (>=0.71.0,<=0.114.0)", "httpx (>=0.23.0,<=0.28.1)", "starlette (>=0.17.1)", "uvicorn[standard] (>=0.16.0)"] +prediction = ["docker (>=5.0.3)", "fastapi (>=0.71.0,<=0.124.4)", "httpx (>=0.23.0,<=0.28.1)", "starlette (>=0.17.1)", "uvicorn[standard] (>=0.16.0)"] private-endpoints = ["requests (>=2.28.1)", "urllib3 (>=1.21.1,<1.27)"] ray = ["google-cloud-bigquery", "google-cloud-bigquery-storage", "immutabledict", "pandas (>=1.0.0)", "pyarrow 
(>=6.0.1)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\""] ray-testing = ["google-cloud-bigquery", "google-cloud-bigquery-storage", "immutabledict", "pandas (>=1.0.0)", "pyarrow (>=6.0.1)", "pytest-xdist", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "ray[train]", "scikit-learn (<1.6.0)", "tensorflow ; python_version < \"3.13\"", "torch (>=2.0.0,<2.1.0)", "xgboost", "xgboost_ray"] reasoningengine = ["cloudpickle (>=3.0,<4.0)", "google-cloud-trace (<2)", "opentelemetry-exporter-gcp-logging (>=1.11.0a0,<2.0.0)", "opentelemetry-exporter-gcp-trace (<2)", "opentelemetry-exporter-otlp-proto-http (<2)", "opentelemetry-sdk (<2)", "pydantic (>=2.11.1,<3)", "typing_extensions"] tensorboard = ["tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "werkzeug (>=2.0.0,<4.0.0)"] -testing = ["Pillow", "aiohttp", "bigframes ; python_version >= \"3.10\" and python_version < \"3.14\"", "docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "fastapi (>=0.71.0,<=0.114.0)", "google-api-core (>=2.11,<3.0.0)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-vizier (>=0.1.6)", "google-vizier (>=0.1.6)", "grpcio-testing", "grpcio-tools (>=1.63.0) ; python_version >= \"3.13\"", "httpx (>=0.23.0,<=0.28.1)", "immutabledict", "immutabledict", "ipython", "jsonschema", "kfp (>=2.6.0,<3.0.0) ; python_version < \"3.13\"", "lit-nlp (==0.4.0) ; python_version < \"3.14\"", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "mlflow (>=1.27.0) ; python_version >= \"3.13\"", "mlflow (>=1.27.0,<=2.16.0) ; python_version < \"3.13\"", "mock", "nltk", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "protobuf (<=5.29.4)", "pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow 
(>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\"", "pyarrow (>=6.0.1)", "pytest-asyncio", "pytest-cov", "pytest-xdist", "pyyaml", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)", "requests-toolbelt (<=1.0.0)", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "sentencepiece (>=0.2.0)", "starlette (>=0.17.1)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorflow (==2.14.1) ; python_version <= \"3.11\"", "tensorflow (==2.19.0) ; python_version > \"3.11\" and python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "torch (>=2.0.0,<2.1.0) ; python_version <= \"3.11\"", "torch (>=2.2.0) ; python_version > \"3.11\" and python_version < \"3.13\"", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<4.0.0)", "werkzeug (>=2.0.0,<4.0.0)", "xgboost"] +testing = ["Pillow", "aiohttp", "bigframes ; python_version >= \"3.10\" and python_version < \"3.14\"", "docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "fastapi (>=0.71.0,<=0.124.4)", "google-api-core (>=2.11,<3.0.0)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-vizier (>=0.1.6)", "google-vizier (>=0.1.6)", "grpcio-testing", "grpcio-tools (>=1.63.0) ; python_version >= \"3.13\"", "httpx (>=0.23.0,<=0.28.1)", "immutabledict", "immutabledict", "ipython", "jsonschema", "kfp (>=2.6.0,<3.0.0) ; python_version < \"3.13\"", "lit-nlp (==0.4.0) ; python_version < 
\"3.13\"", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "mlflow (>=1.27.0) ; python_version >= \"3.13\"", "mlflow (>=1.27.0,<=2.16.0) ; python_version < \"3.13\"", "mock", "nltk", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "protobuf (<=5.29.4)", "pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\"", "pyarrow (>=6.0.1)", "pytest-asyncio", "pytest-cov", "pytest-xdist", "pyyaml", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)", "requests-toolbelt (<=1.0.0)", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "sentencepiece (>=0.2.0)", "starlette (>=0.17.1)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorflow (==2.14.1) ; python_version <= \"3.11\"", "tensorflow (==2.19.0) ; python_version > \"3.11\" and python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "torch (>=2.0.0,<2.1.0) ; python_version <= \"3.11\"", "torch (>=2.2.0) ; python_version > \"3.11\" and python_version < \"3.13\"", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<4.0.0)", "werkzeug (>=2.0.0,<4.0.0)", "xgboost"] tokenization = ["sentencepiece (>=0.2.0)"] vizier = ["google-vizier (>=0.1.6)"] xai = ["tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\""] @@ -3298,7 +3298,7 @@ description = "Mach-O header analysis and editing" optional = false python-versions = "*" groups = ["dev"] -markers = "sys_platform == \"darwin\" and 
python_version <= \"3.14\"" +markers = "python_version < \"3.15\" and sys_platform == \"darwin\"" files = [ {file = "macholib-1.16.4-py2.py3-none-any.whl", hash = "sha256:da1a3fa8266e30f0ce7e97c6a54eefaae8edd1e5f86f3eb8b95457cae90265ea"}, {file = "macholib-1.16.4.tar.gz", hash = "sha256:f408c93ab2e995cd2c46e34fe328b130404be143469e41bc366c807448979362"}, @@ -3882,7 +3882,7 @@ description = "Fundamental package for array computing in Python" optional = true python-versions = ">=3.11" groups = ["main"] -markers = "extra == \"sandbox\" or extra == \"vertex\"" +markers = "extra == \"sandbox\"" files = [ {file = "numpy-2.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:852ae5bed3478b92f093e30f785c98e0cb62fa0a939ed057c31716e18a7a22b9"}, {file = "numpy-2.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7a0e27186e781a69959d0230dd9909b5e26024f8da10683bd6344baea1885168"}, @@ -4347,7 +4347,7 @@ description = "Python PE parsing module" optional = false python-versions = ">=3.6.0" groups = ["dev"] -markers = "sys_platform == \"win32\" and python_version <= \"3.14\"" +markers = "python_version < \"3.15\" and sys_platform == \"win32\"" files = [ {file = "pefile-2024.8.26-py3-none-any.whl", hash = "sha256:76f8b485dcd3b1bb8166f1128d395fa3d87af26360c2358fb75b80019b957c6f"}, {file = "pefile-2024.8.26.tar.gz", hash = "sha256:3ff6c5d8b43e8c37bb6e6dd5085658d658a7a0bdcd20b6a07b1fcfc1c4e9d632"}, @@ -4360,7 +4360,7 @@ description = "Pexpect allows easy control of interactive console applications." 
optional = true python-versions = "*" groups = ["main"] -markers = "sys_platform != \"win32\" and sys_platform != \"emscripten\" and extra == \"sandbox\"" +markers = "extra == \"sandbox\" and sys_platform != \"win32\" and sys_platform != \"emscripten\"" files = [ {file = "pexpect-4.9.0-py2.py3-none-any.whl", hash = "sha256:7236d1e080e4936be2dc3e326cec0af72acf9212a7e1d060210e70a47e253523"}, {file = "pexpect-4.9.0.tar.gz", hash = "sha256:ee7d41123f3c9911050ea2c2dac107568dc43b2d3b0c7557a33212c398ead30f"}, @@ -4769,7 +4769,7 @@ description = "Run a subprocess in a pseudo terminal" optional = true python-versions = "*" groups = ["main"] -markers = "sys_platform != \"win32\" and sys_platform != \"emscripten\" and extra == \"sandbox\"" +markers = "extra == \"sandbox\" and sys_platform != \"win32\" and sys_platform != \"emscripten\"" files = [ {file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"}, {file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"}, @@ -5085,7 +5085,7 @@ description = "PyInstaller bundles a Python application and all its dependencies optional = false python-versions = "<3.15,>=3.8" groups = ["dev"] -markers = "python_version <= \"3.14\"" +markers = "python_version < \"3.15\"" files = [ {file = "pyinstaller-6.17.0-py3-none-macosx_10_13_universal2.whl", hash = "sha256:4e446b8030c6e5a2f712e3f82011ecf6c7ead86008357b0d23a0ec4bcde31dac"}, {file = "pyinstaller-6.17.0-py3-none-manylinux2014_aarch64.whl", hash = "sha256:aa9fd87aaa28239c6f0d0210114029bd03f8cac316a90bab071a5092d7c85ad7"}, @@ -5121,7 +5121,7 @@ description = "Community maintained hooks for PyInstaller" optional = false python-versions = ">=3.8" groups = ["dev"] -markers = "python_version <= \"3.14\"" +markers = "python_version < \"3.15\"" files = [ {file = "pyinstaller_hooks_contrib-2025.10-py3-none-any.whl", hash = 
"sha256:aa7a378518772846221f63a84d6306d9827299323243db890851474dfd1231a9"}, {file = "pyinstaller_hooks_contrib-2025.10.tar.gz", hash = "sha256:a1a737e5c0dccf1cf6f19a25e2efd109b9fec9ddd625f97f553dac16ee884881"}, @@ -5239,9 +5239,10 @@ diagrams = ["jinja2", "railroad-diagrams"] name = "pypdf" version = "6.7.1" description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files" -optional = false +optional = true python-versions = ">=3.9" groups = ["main"] +markers = "extra == \"sandbox\"" files = [ {file = "pypdf-6.7.1-py3-none-any.whl", hash = "sha256:a02ccbb06463f7c334ce1612e91b3e68a8e827f3cee100b9941771e6066b094e"}, {file = "pypdf-6.7.1.tar.gz", hash = "sha256:6b7a63be5563a0a35d54c6d6b550d75c00b8ccf36384be96365355e296e6b3b0"}, @@ -5502,7 +5503,7 @@ description = "A (partial) reimplementation of pywin32 using ctypes/cffi" optional = false python-versions = ">=3.6" groups = ["dev"] -markers = "sys_platform == \"win32\" and python_version <= \"3.14\"" +markers = "python_version < \"3.15\" and sys_platform == \"win32\"" files = [ {file = "pywin32-ctypes-0.2.3.tar.gz", hash = "sha256:d162dc04946d704503b2edc4d55f3dba5c1d539ead017afa00142c38b9885755"}, {file = "pywin32_ctypes-0.2.3-py3-none-any.whl", hash = "sha256:8a1513379d709975552d202d942d9837758905c8d01eb82b8bcc30918929e7b8"}, @@ -6149,81 +6150,6 @@ enabler = ["pytest-enabler (>=2.2)"] test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "ini2toml[lite] (>=0.14)", "jaraco.develop (>=7.21) ; python_version >= \"3.9\" and sys_platform != \"cygwin\"", "jaraco.envs (>=2.2)", "jaraco.path (>=3.7.2)", "jaraco.test (>=5.5)", "packaging (>=24.2)", "pip (>=19.1)", "pyproject-hooks (!=1.1)", "pytest (>=6,!=8.1.*)", "pytest-home (>=0.5)", "pytest-perf ; sys_platform != \"cygwin\"", "pytest-subprocess", "pytest-timeout", "pytest-xdist (>=3)", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel (>=0.44.0)"] type = ["importlib_metadata (>=7.0.2) ; python_version < \"3.10\"", 
"jaraco.develop (>=7.21) ; sys_platform != \"cygwin\"", "mypy (==1.14.*)", "pytest-mypy"] -[[package]] -name = "shapely" -version = "2.1.2" -description = "Manipulation and analysis of geometric objects" -optional = true -python-versions = ">=3.10" -groups = ["main"] -markers = "extra == \"vertex\"" -files = [ - {file = "shapely-2.1.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7ae48c236c0324b4e139bea88a306a04ca630f49be66741b340729d380d8f52f"}, - {file = "shapely-2.1.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:eba6710407f1daa8e7602c347dfc94adc02205ec27ed956346190d66579eb9ea"}, - {file = "shapely-2.1.2-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ef4a456cc8b7b3d50ccec29642aa4aeda959e9da2fe9540a92754770d5f0cf1f"}, - {file = "shapely-2.1.2-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e38a190442aacc67ff9f75ce60aec04893041f16f97d242209106d502486a142"}, - {file = "shapely-2.1.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:40d784101f5d06a1fd30b55fc11ea58a61be23f930d934d86f19a180909908a4"}, - {file = "shapely-2.1.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:f6f6cd5819c50d9bcf921882784586aab34a4bd53e7553e175dece6db513a6f0"}, - {file = "shapely-2.1.2-cp310-cp310-win32.whl", hash = "sha256:fe9627c39c59e553c90f5bc3128252cb85dc3b3be8189710666d2f8bc3a5503e"}, - {file = "shapely-2.1.2-cp310-cp310-win_amd64.whl", hash = "sha256:1d0bfb4b8f661b3b4ec3565fa36c340bfb1cda82087199711f86a88647d26b2f"}, - {file = "shapely-2.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:91121757b0a36c9aac3427a651a7e6567110a4a67c97edf04f8d55d4765f6618"}, - {file = "shapely-2.1.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:16a9c722ba774cf50b5d4541242b4cce05aafd44a015290c82ba8a16931ff63d"}, - {file = "shapely-2.1.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cc4f7397459b12c0b196c9efe1f9d7e92463cbba142632b4cc6d8bbbbd3e2b09"}, - {file = 
"shapely-2.1.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:136ab87b17e733e22f0961504d05e77e7be8c9b5a8184f685b4a91a84efe3c26"}, - {file = "shapely-2.1.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:16c5d0fc45d3aa0a69074979f4f1928ca2734fb2e0dde8af9611e134e46774e7"}, - {file = "shapely-2.1.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6ddc759f72b5b2b0f54a7e7cde44acef680a55019eb52ac63a7af2cf17cb9cd2"}, - {file = "shapely-2.1.2-cp311-cp311-win32.whl", hash = "sha256:2fa78b49485391224755a856ed3b3bd91c8455f6121fee0db0e71cefb07d0ef6"}, - {file = "shapely-2.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:c64d5c97b2f47e3cd9b712eaced3b061f2b71234b3fc263e0fcf7d889c6559dc"}, - {file = "shapely-2.1.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:fe2533caae6a91a543dec62e8360fe86ffcdc42a7c55f9dfd0128a977a896b94"}, - {file = "shapely-2.1.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ba4d1333cc0bc94381d6d4308d2e4e008e0bd128bdcff5573199742ee3634359"}, - {file = "shapely-2.1.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0bd308103340030feef6c111d3eb98d50dc13feea33affc8a6f9fa549e9458a3"}, - {file = "shapely-2.1.2-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:1e7d4d7ad262a48bb44277ca12c7c78cb1b0f56b32c10734ec9a1d30c0b0c54b"}, - {file = "shapely-2.1.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e9eddfe513096a71896441a7c37db72da0687b34752c4e193577a145c71736fc"}, - {file = "shapely-2.1.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:980c777c612514c0cf99bc8a9de6d286f5e186dcaf9091252fcd444e5638193d"}, - {file = "shapely-2.1.2-cp312-cp312-win32.whl", hash = "sha256:9111274b88e4d7b54a95218e243282709b330ef52b7b86bc6aaf4f805306f454"}, - {file = "shapely-2.1.2-cp312-cp312-win_amd64.whl", hash = "sha256:743044b4cfb34f9a67205cee9279feaf60ba7d02e69febc2afc609047cb49179"}, - {file = "shapely-2.1.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = 
"sha256:b510dda1a3672d6879beb319bc7c5fd302c6c354584690973c838f46ec3e0fa8"}, - {file = "shapely-2.1.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:8cff473e81017594d20ec55d86b54bc635544897e13a7cfc12e36909c5309a2a"}, - {file = "shapely-2.1.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fe7b77dc63d707c09726b7908f575fc04ff1d1ad0f3fb92aec212396bc6cfe5e"}, - {file = "shapely-2.1.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:7ed1a5bbfb386ee8332713bf7508bc24e32d24b74fc9a7b9f8529a55db9f4ee6"}, - {file = "shapely-2.1.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a84e0582858d841d54355246ddfcbd1fce3179f185da7470f41ce39d001ee1af"}, - {file = "shapely-2.1.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:dc3487447a43d42adcdf52d7ac73804f2312cbfa5d433a7d2c506dcab0033dfd"}, - {file = "shapely-2.1.2-cp313-cp313-win32.whl", hash = "sha256:9c3a3c648aedc9f99c09263b39f2d8252f199cb3ac154fadc173283d7d111350"}, - {file = "shapely-2.1.2-cp313-cp313-win_amd64.whl", hash = "sha256:ca2591bff6645c216695bdf1614fca9c82ea1144d4a7591a466fef64f28f0715"}, - {file = "shapely-2.1.2-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:2d93d23bdd2ed9dc157b46bc2f19b7da143ca8714464249bef6771c679d5ff40"}, - {file = "shapely-2.1.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:01d0d304b25634d60bd7cf291828119ab55a3bab87dc4af1e44b07fb225f188b"}, - {file = "shapely-2.1.2-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:8d8382dd120d64b03698b7298b89611a6ea6f55ada9d39942838b79c9bc89801"}, - {file = "shapely-2.1.2-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:19efa3611eef966e776183e338b2d7ea43569ae99ab34f8d17c2c054d3205cc0"}, - {file = "shapely-2.1.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:346ec0c1a0fcd32f57f00e4134d1200e14bf3f5ae12af87ba83ca275c502498c"}, - {file = "shapely-2.1.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = 
"sha256:6305993a35989391bd3476ee538a5c9a845861462327efe00dd11a5c8c709a99"}, - {file = "shapely-2.1.2-cp313-cp313t-win32.whl", hash = "sha256:c8876673449f3401f278c86eb33224c5764582f72b653a415d0e6672fde887bf"}, - {file = "shapely-2.1.2-cp313-cp313t-win_amd64.whl", hash = "sha256:4a44bc62a10d84c11a7a3d7c1c4fe857f7477c3506e24c9062da0db0ae0c449c"}, - {file = "shapely-2.1.2-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:9a522f460d28e2bf4e12396240a5fc1518788b2fcd73535166d748399ef0c223"}, - {file = "shapely-2.1.2-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1ff629e00818033b8d71139565527ced7d776c269a49bd78c9df84e8f852190c"}, - {file = "shapely-2.1.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:f67b34271dedc3c653eba4e3d7111aa421d5be9b4c4c7d38d30907f796cb30df"}, - {file = "shapely-2.1.2-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:21952dc00df38a2c28375659b07a3979d22641aeb104751e769c3ee825aadecf"}, - {file = "shapely-2.1.2-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:1f2f33f486777456586948e333a56ae21f35ae273be99255a191f5c1fa302eb4"}, - {file = "shapely-2.1.2-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:cf831a13e0d5a7eb519e96f58ec26e049b1fad411fc6fc23b162a7ce04d9cffc"}, - {file = "shapely-2.1.2-cp314-cp314-win32.whl", hash = "sha256:61edcd8d0d17dd99075d320a1dd39c0cb9616f7572f10ef91b4b5b00c4aeb566"}, - {file = "shapely-2.1.2-cp314-cp314-win_amd64.whl", hash = "sha256:a444e7afccdb0999e203b976adb37ea633725333e5b119ad40b1ca291ecf311c"}, - {file = "shapely-2.1.2-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:5ebe3f84c6112ad3d4632b1fd2290665aa75d4cef5f6c5d77c4c95b324527c6a"}, - {file = "shapely-2.1.2-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:5860eb9f00a1d49ebb14e881f5caf6c2cf472c7fd38bd7f253bbd34f934eb076"}, - {file = "shapely-2.1.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = 
"sha256:b705c99c76695702656327b819c9660768ec33f5ce01fa32b2af62b56ba400a1"}, - {file = "shapely-2.1.2-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:a1fd0ea855b2cf7c9cddaf25543e914dd75af9de08785f20ca3085f2c9ca60b0"}, - {file = "shapely-2.1.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:df90e2db118c3671a0754f38e36802db75fe0920d211a27481daf50a711fdf26"}, - {file = "shapely-2.1.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:361b6d45030b4ac64ddd0a26046906c8202eb60d0f9f53085f5179f1d23021a0"}, - {file = "shapely-2.1.2-cp314-cp314t-win32.whl", hash = "sha256:b54df60f1fbdecc8ebc2c5b11870461a6417b3d617f555e5033f1505d36e5735"}, - {file = "shapely-2.1.2-cp314-cp314t-win_amd64.whl", hash = "sha256:0036ac886e0923417932c2e6369b6c52e38e0ff5d9120b90eef5cd9a5fc5cae9"}, - {file = "shapely-2.1.2.tar.gz", hash = "sha256:2ed4ecb28320a433db18a5bf029986aa8afcfd740745e78847e330d5d94922a9"}, -] - -[package.dependencies] -numpy = ">=1.21" - -[package.extras] -docs = ["matplotlib", "numpydoc (==1.1.*)", "sphinx", "sphinx-book-theme", "sphinx-remove-toctrees"] -test = ["pytest", "pytest-cov", "scipy-doctest"] - [[package]] name = "six" version = "1.17.0" @@ -6532,7 +6458,7 @@ description = "Standard library aifc redistribution. \"dead battery\"." optional = true python-versions = "*" groups = ["main"] -markers = "extra == \"sandbox\" and python_version >= \"3.13\"" +markers = "python_version >= \"3.13\" and extra == \"sandbox\"" files = [ {file = "standard_aifc-3.13.0-py3-none-any.whl", hash = "sha256:f7ae09cc57de1224a0dd8e3eb8f73830be7c3d0bc485de4c1f82b4a7f645ac66"}, {file = "standard_aifc-3.13.0.tar.gz", hash = "sha256:64e249c7cb4b3daf2fdba4e95721f811bde8bdfc43ad9f936589b7bb2fae2e43"}, @@ -6549,7 +6475,7 @@ description = "Standard library chunk redistribution. \"dead battery\"." 
optional = true python-versions = "*" groups = ["main"] -markers = "extra == \"sandbox\" and python_version >= \"3.13\"" +markers = "python_version >= \"3.13\" and extra == \"sandbox\"" files = [ {file = "standard_chunk-3.13.0-py3-none-any.whl", hash = "sha256:17880a26c285189c644bd5bd8f8ed2bdb795d216e3293e6dbe55bbd848e2982c"}, {file = "standard_chunk-3.13.0.tar.gz", hash = "sha256:4ac345d37d7e686d2755e01836b8d98eda0d1a3ee90375e597ae43aaf064d654"}, From 0ca9af3b3e2c0dcb25dc3df4f42e096d258687d9 Mon Sep 17 00:00:00 2001 From: mason5052 Date: Sun, 22 Feb 2026 21:47:24 -0500 Subject: [PATCH 24/43] docs: fix Discord badge expired invite code The badge image URL used an invite code that had expired, causing the badge to render 'Invalid invite' instead of the server info. Updated to use the vanity URL, which resolves correctly. Fixes #313 --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 46a3b28..60dece0 100644 --- a/README.md +++ b/README.md @@ -15,7 +15,7 @@ Docs Website -[![](https://dcbadge.limes.pink/api/server/8Suzzd9z)](https://discord.gg/strix-ai) +[![](https://dcbadge.limes.pink/api/server/strix-ai)](https://discord.gg/strix-ai) Ask DeepWiki GitHub Stars From d84d72d986a30371a6961f3867db781bfb4947b8 Mon Sep 17 00:00:00 2001 From: 0xallam Date: Mon, 23 Feb 2026 18:24:29 -0800 Subject: [PATCH 25/43] feat: Expose Caido proxy port to host for human-in-the-loop interaction Users can now access the Caido web UI from their browser to inspect traffic, replay requests, and perform manual testing alongside the automated scan.
- Map Caido port (48080) to a random host port in DockerRuntime - Add caido_port to SandboxInfo and track across container lifecycle - Display Caido URL in TUI sidebar stats panel with selectable text - Bind Caido to 0.0.0.0 in entrypoint (requires image rebuild) - Bump sandbox image to 0.1.12 - Restore discord link in exit screen --- containers/docker-entrypoint.sh | 2 +- docs/advanced/configuration.mdx | 2 +- scripts/install.sh | 2 +- strix/agents/base_agent.py | 8 ++++++++ strix/config/config.py | 2 +- strix/interface/assets/tui_styles.tcss | 11 ++++++++++- strix/interface/main.py | 2 +- strix/interface/tui.py | 17 ++++++----------- strix/interface/utils.py | 6 ++++++ strix/runtime/docker_runtime.py | 19 +++++++++++++++++-- strix/runtime/runtime.py | 1 + strix/telemetry/tracer.py | 1 + 12 files changed, 54 insertions(+), 19 deletions(-) diff --git a/containers/docker-entrypoint.sh b/containers/docker-entrypoint.sh index 8d6fc58..cbef471 100644 --- a/containers/docker-entrypoint.sh +++ b/containers/docker-entrypoint.sh @@ -9,7 +9,7 @@ if [ ! -f /app/certs/ca.p12 ]; then exit 1 fi -caido-cli --listen 127.0.0.1:${CAIDO_PORT} \ +caido-cli --listen 0.0.0.0:${CAIDO_PORT} \ --allow-guests \ --no-logging \ --no-open \ diff --git a/docs/advanced/configuration.mdx b/docs/advanced/configuration.mdx index d82f822..9c94630 100644 --- a/docs/advanced/configuration.mdx +++ b/docs/advanced/configuration.mdx @@ -51,7 +51,7 @@ Configure Strix using environment variables or a config file. ## Docker Configuration - + Docker image to use for the sandbox container. 
diff --git a/scripts/install.sh b/scripts/install.sh index 67a0e19..868a95e 100755 --- a/scripts/install.sh +++ b/scripts/install.sh @@ -4,7 +4,7 @@ set -euo pipefail APP=strix REPO="usestrix/strix" -STRIX_IMAGE="ghcr.io/usestrix/strix-sandbox:0.1.11" +STRIX_IMAGE="ghcr.io/usestrix/strix-sandbox:0.1.12" MUTED='\033[0;2m' RED='\033[0;31m' diff --git a/strix/agents/base_agent.py b/strix/agents/base_agent.py index f955892..99d0332 100644 --- a/strix/agents/base_agent.py +++ b/strix/agents/base_agent.py @@ -333,6 +333,14 @@ class BaseAgent(metaclass=AgentMeta): if "agent_id" in sandbox_info: self.state.sandbox_info["agent_id"] = sandbox_info["agent_id"] + + caido_port = sandbox_info.get("caido_port") + if caido_port: + from strix.telemetry.tracer import get_global_tracer + + tracer = get_global_tracer() + if tracer: + tracer.caido_url = f"localhost:{caido_port}" except Exception as e: from strix.telemetry import posthog diff --git a/strix/config/config.py b/strix/config/config.py index f8836b2..7578b61 100644 --- a/strix/config/config.py +++ b/strix/config/config.py @@ -40,7 +40,7 @@ class Config: strix_disable_browser = "false" # Runtime Configuration - strix_image = "ghcr.io/usestrix/strix-sandbox:0.1.11" + strix_image = "ghcr.io/usestrix/strix-sandbox:0.1.12" strix_runtime_backend = "docker" strix_sandbox_execution_timeout = "120" strix_sandbox_connect_timeout = "10" diff --git a/strix/interface/assets/tui_styles.tcss b/strix/interface/assets/tui_styles.tcss index 7ebefd2..d1097de 100644 --- a/strix/interface/assets/tui_styles.tcss +++ b/strix/interface/assets/tui_styles.tcss @@ -77,12 +77,21 @@ Toast.-information .toast--title { margin-bottom: 0; } -#stats_display { +#stats_scroll { height: auto; max-height: 15; background: transparent; padding: 0; margin: 0; + border: round #333333; + scrollbar-size: 0 0; +} + +#stats_display { + height: auto; + background: transparent; + padding: 0 1; + margin: 0; } #vulnerabilities_panel { diff --git a/strix/interface/main.py 
b/strix/interface/main.py index 85c0427..33785e6 100644 --- a/strix/interface/main.py +++ b/strix/interface/main.py @@ -462,7 +462,7 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) -> console.print("\n") console.print(panel) console.print() - console.print("[#60a5fa]models.strix.ai[/]") + console.print("[#60a5fa]models.strix.ai[/] [dim]·[/] [#60a5fa]discord.gg/strix-ai[/]") console.print() diff --git a/strix/interface/tui.py b/strix/interface/tui.py index eeaa2ea..1a62255 100644 --- a/strix/interface/tui.py +++ b/strix/interface/tui.py @@ -829,11 +829,11 @@ class StrixTUIApp(App): # type: ignore[misc] agents_tree.guide_style = "dashed" stats_display = Static("", id="stats_display") - stats_display.ALLOW_SELECT = False + stats_scroll = VerticalScroll(stats_display, id="stats_scroll") vulnerabilities_panel = VulnerabilitiesPanel(id="vulnerabilities_panel") - sidebar = Vertical(agents_tree, vulnerabilities_panel, stats_display, id="sidebar") + sidebar = Vertical(agents_tree, vulnerabilities_panel, stats_scroll, id="sidebar") content_container.mount(chat_area_container) content_container.mount(sidebar) @@ -1272,6 +1272,9 @@ class StrixTUIApp(App): # type: ignore[misc] if not self._is_widget_safe(stats_display): return + if self.screen.selections: + return + stats_content = Text() stats_text = build_tui_stats_text(self.tracer, self.agent_config) @@ -1281,15 +1284,7 @@ class StrixTUIApp(App): # type: ignore[misc] version = get_package_version() stats_content.append(f"\nv{version}", style="white") - from rich.panel import Panel - - stats_panel = Panel( - stats_content, - border_style="#333333", - padding=(0, 1), - ) - - self._safe_widget_operation(stats_display.update, stats_panel) + self._safe_widget_operation(stats_display.update, stats_content) def _update_vulnerabilities_panel(self) -> None: """Update the vulnerabilities panel with current vulnerability data.""" diff --git a/strix/interface/utils.py b/strix/interface/utils.py index 
fe5bdfc..5b9e52b 100644 --- a/strix/interface/utils.py +++ b/strix/interface/utils.py @@ -390,6 +390,12 @@ def build_tui_stats_text(tracer: Any, agent_config: dict[str, Any] | None = None stats_text.append(" · ", style="white") stats_text.append(f"${total_stats['cost']:.2f}", style="white") + caido_url = getattr(tracer, "caido_url", None) + if caido_url: + stats_text.append("\n") + stats_text.append("Caido: ", style="bold white") + stats_text.append(caido_url, style="white") + return stats_text diff --git a/strix/runtime/docker_runtime.py b/strix/runtime/docker_runtime.py index b783dcc..d57d358 100644 --- a/strix/runtime/docker_runtime.py +++ b/strix/runtime/docker_runtime.py @@ -22,6 +22,7 @@ from .runtime import AbstractRuntime, SandboxInfo HOST_GATEWAY_HOSTNAME = "host.docker.internal" DOCKER_TIMEOUT = 60 CONTAINER_TOOL_SERVER_PORT = 48081 +CONTAINER_CAIDO_PORT = 48080 class DockerRuntime(AbstractRuntime): @@ -37,6 +38,7 @@ class DockerRuntime(AbstractRuntime): self._scan_container: Container | None = None self._tool_server_port: int | None = None self._tool_server_token: str | None = None + self._caido_port: int | None = None def _find_available_port(self) -> int: with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: @@ -78,6 +80,10 @@ class DockerRuntime(AbstractRuntime): if port_bindings.get(port_key): self._tool_server_port = int(port_bindings[port_key][0]["HostPort"]) + caido_port_key = f"{CONTAINER_CAIDO_PORT}/tcp" + if port_bindings.get(caido_port_key): + self._caido_port = int(port_bindings[caido_port_key][0]["HostPort"]) + def _wait_for_tool_server(self, max_retries: int = 30, timeout: int = 5) -> None: host = self._resolve_docker_host() health_url = f"http://{host}:{self._tool_server_port}/health" @@ -121,6 +127,7 @@ class DockerRuntime(AbstractRuntime): time.sleep(1) self._tool_server_port = self._find_available_port() + self._caido_port = self._find_available_port() self._tool_server_token = secrets.token_urlsafe(32) execution_timeout = 
Config.get("strix_sandbox_execution_timeout") or "120" @@ -130,7 +137,10 @@ class DockerRuntime(AbstractRuntime): detach=True, name=container_name, hostname=container_name, - ports={f"{CONTAINER_TOOL_SERVER_PORT}/tcp": self._tool_server_port}, + ports={ + f"{CONTAINER_TOOL_SERVER_PORT}/tcp": self._tool_server_port, + f"{CONTAINER_CAIDO_PORT}/tcp": self._caido_port, + }, cap_add=["NET_ADMIN", "NET_RAW"], labels={"strix-scan-id": scan_id}, environment={ @@ -152,6 +162,7 @@ class DockerRuntime(AbstractRuntime): if attempt < max_retries: self._tool_server_port = None self._tool_server_token = None + self._caido_port = None time.sleep(2**attempt) else: return container @@ -173,6 +184,7 @@ class DockerRuntime(AbstractRuntime): self._scan_container = None self._tool_server_port = None self._tool_server_token = None + self._caido_port = None try: container = self.client.containers.get(container_name) @@ -260,7 +272,7 @@ class DockerRuntime(AbstractRuntime): raise RuntimeError("Docker container ID is unexpectedly None") token = existing_token or self._tool_server_token - if self._tool_server_port is None or token is None: + if self._tool_server_port is None or self._caido_port is None or token is None: raise RuntimeError("Tool server not initialized") host = self._resolve_docker_host() @@ -273,6 +285,7 @@ class DockerRuntime(AbstractRuntime): "api_url": api_url, "auth_token": token, "tool_server_port": self._tool_server_port, + "caido_port": self._caido_port, "agent_id": agent_id, } @@ -314,6 +327,7 @@ class DockerRuntime(AbstractRuntime): self._scan_container = None self._tool_server_port = None self._tool_server_token = None + self._caido_port = None except (NotFound, DockerException): pass @@ -323,6 +337,7 @@ class DockerRuntime(AbstractRuntime): self._scan_container = None self._tool_server_port = None self._tool_server_token = None + self._caido_port = None if container_name is None: return diff --git a/strix/runtime/runtime.py b/strix/runtime/runtime.py index 
e33c08d..e523d51 100644 --- a/strix/runtime/runtime.py +++ b/strix/runtime/runtime.py @@ -7,6 +7,7 @@ class SandboxInfo(TypedDict): api_url: str auth_token: str | None tool_server_port: int + caido_port: int agent_id: str diff --git a/strix/telemetry/tracer.py b/strix/telemetry/tracer.py index 8bbb412..ef97ab6 100644 --- a/strix/telemetry/tracer.py +++ b/strix/telemetry/tracer.py @@ -56,6 +56,7 @@ class Tracer: self._next_message_id = 1 self._saved_vuln_ids: set[str] = set() + self.caido_url: str | None = None self.vulnerability_found_callback: Callable[[dict[str, Any]], None] | None = None def set_run_name(self, run_name: str) -> None: From 4384f5bff82e816b8e089d69f2fb3bea6f19414d Mon Sep 17 00:00:00 2001 From: 0xallam Date: Mon, 23 Feb 2026 18:41:06 -0800 Subject: [PATCH 26/43] chore: Bump version to 0.8.2 --- pyproject.toml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pyproject.toml b/pyproject.toml index f19f1d8..ab08983 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "strix-agent" -version = "0.8.1" +version = "0.8.2" description = "Open-source AI Hackers for your apps" authors = ["Strix "] readme = "README.md" From 5d91500564e4a3436df1be54a85eb250cc331df8 Mon Sep 17 00:00:00 2001 From: 0xallam Date: Mon, 23 Feb 2026 19:54:54 -0800 Subject: [PATCH 27/43] docs: Add human-in-the-loop section to proxy documentation --- docs/tools/proxy.mdx | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/docs/tools/proxy.mdx b/docs/tools/proxy.mdx index f870bad..39b7be6 100644 --- a/docs/tools/proxy.mdx +++ b/docs/tools/proxy.mdx @@ -80,6 +80,27 @@ for req in user_requests.get('requests', []): print(f"Potential IDOR: {test_id} returned 200") ``` +## Human-in-the-Loop + +Strix exposes the Caido proxy to your host machine, so you can interact with it alongside the automated scan. When the sandbox starts, the Caido URL is displayed in the TUI sidebar — click it to copy, then open it in Caido Desktop. 
+ +### Accessing Caido + +1. Start a scan as usual +2. Look for the **Caido** URL in the sidebar stats panel (e.g. `localhost:52341`) +3. Open the URL in Caido Desktop +4. Click **Continue as guest** to access the instance + +### What You Can Do + +- **Inspect traffic** — Browse all HTTP/HTTPS requests the agent is making in real time +- **Replay requests** — Take any captured request and resend it with your own modifications +- **Intercept and modify** — Pause requests mid-flight, edit them, then forward +- **Explore the sitemap** — See the full attack surface the agent has discovered +- **Manual testing** — Use Caido's tools to test findings the agent reports, or explore areas it hasn't reached + +This turns Strix from a fully automated scanner into a collaborative tool — the agent handles the heavy lifting while you focus on the interesting parts. + ## Scope Create scopes to filter traffic to relevant domains: From 30e3f13494d0e9f789c814fd92b0cc15cbcfb585 Mon Sep 17 00:00:00 2001 From: 0xallam Date: Thu, 26 Feb 2026 14:58:28 -0800 Subject: [PATCH 28/43] docs: Add Strix Platform and Enterprise sections to README --- README.md | 21 ++++++++++++++++++--- 1 file changed, 18 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 60dece0..9b8f96e 100644 --- a/README.md +++ b/README.md @@ -32,9 +32,6 @@ -> [!TIP] -> **New!** Strix integrates seamlessly with GitHub Actions and CI/CD pipelines. Automatically scan for vulnerabilities on every pull request and block insecure code before it reaches production! - --- @@ -95,6 +92,20 @@ strix --target ./app-directory --- +## ☁️ Strix Platform + +Try the Strix full-stack security platform at **[app.strix.ai](https://app.strix.ai)** — sign up for free, connect your repos and domains, and launch a pentest in minutes. 
+ +- **Validated findings with PoCs** and reproduction steps +- **One-click autofix** as ready-to-merge pull requests +- **Continuous monitoring** across code, cloud, and infrastructure +- **Integrations** with GitHub, Slack, Jira, Linear, and CI/CD pipelines +- **Continuous learning** that builds on past findings and remediations + +[**Start your first pentest →**](https://app.strix.ai) + +--- + ## ✨ Features ### Agentic Security Tools @@ -220,6 +231,10 @@ export STRIX_REASONING_EFFORT="high" # control thinking effort (default: high, See the [LLM Providers documentation](https://docs.strix.ai/llm-providers/overview) for all supported providers including Vertex AI, Bedrock, Azure, and local models. +## Enterprise + +Get the same Strix experience with [enterprise-grade](https://strix.ai/demo) controls: SSO (SAML/OIDC), custom compliance reports, dedicated support & SLA, custom deployment options (VPC/self-hosted), BYOK model support, and tailored agents optimized for your environment. [Learn more](https://strix.ai/demo). + ## Documentation Full documentation is available at **[docs.strix.ai](https://docs.strix.ai)** — including detailed guides for usage, CI/CD integrations, skills, and advanced configuration. From 5102b641c551e424a00bf93d186a345d8992c47e Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Wed, 25 Feb 2026 16:42:23 +0000 Subject: [PATCH 29/43] chore(deps): bump pypdf from 6.7.1 to 6.7.2 Bumps [pypdf](https://github.com/py-pdf/pypdf) from 6.7.1 to 6.7.2. - [Release notes](https://github.com/py-pdf/pypdf/releases) - [Changelog](https://github.com/py-pdf/pypdf/blob/main/CHANGELOG.md) - [Commits](https://github.com/py-pdf/pypdf/compare/6.7.1...6.7.2) --- updated-dependencies: - dependency-name: pypdf dependency-version: 6.7.2 dependency-type: indirect ... 
Signed-off-by: dependabot[bot] --- poetry.lock | 33 ++++++++++++++++----------------- 1 file changed, 16 insertions(+), 17 deletions(-) diff --git a/poetry.lock b/poetry.lock index 8b742c7..ab48d6b 100644 --- a/poetry.lock +++ b/poetry.lock @@ -190,7 +190,7 @@ description = "Python graph (network) package" optional = false python-versions = "*" groups = ["dev"] -markers = "python_version < \"3.15\"" +markers = "python_version <= \"3.14\"" files = [ {file = "altgraph-0.17.5-py2.py3-none-any.whl", hash = "sha256:f3a22400bce1b0c701683820ac4f3b159cd301acab067c51c653e06961600597"}, {file = "altgraph-0.17.5.tar.gz", hash = "sha256:c87b395dd12fabde9c99573a9749d67da8d29ef9de0125c7f536699b4a9bc9e7"}, @@ -324,7 +324,7 @@ description = "LTS Port of Python audioop" optional = true python-versions = ">=3.13" groups = ["main"] -markers = "python_version >= \"3.13\" and extra == \"sandbox\"" +markers = "extra == \"sandbox\" and python_version >= \"3.13\"" files = [ {file = "audioop_lts-0.2.2-cp313-abi3-macosx_10_13_universal2.whl", hash = "sha256:fd3d4602dc64914d462924a08c1a9816435a2155d74f325853c1f1ac3b2d9800"}, {file = "audioop_lts-0.2.2-cp313-abi3-macosx_10_13_x86_64.whl", hash = "sha256:550c114a8df0aafe9a05442a1162dfc8fec37e9af1d625ae6060fed6e756f303"}, @@ -890,7 +890,7 @@ files = [ {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"}, {file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"}, ] -markers = {main = "extra == \"sandbox\" and sys_platform == \"win32\" or platform_system == \"Windows\"", dev = "platform_system == \"Windows\" or sys_platform == \"win32\""} +markers = {main = "sys_platform == \"win32\" and extra == \"sandbox\" or platform_system == \"Windows\"", dev = "platform_system == \"Windows\" or sys_platform == \"win32\""} [[package]] name = "contourpy" @@ -3298,7 +3298,7 @@ description = "Mach-O header analysis and editing" 
optional = false python-versions = "*" groups = ["dev"] -markers = "python_version < \"3.15\" and sys_platform == \"darwin\"" +markers = "sys_platform == \"darwin\" and python_version <= \"3.14\"" files = [ {file = "macholib-1.16.4-py2.py3-none-any.whl", hash = "sha256:da1a3fa8266e30f0ce7e97c6a54eefaae8edd1e5f86f3eb8b95457cae90265ea"}, {file = "macholib-1.16.4.tar.gz", hash = "sha256:f408c93ab2e995cd2c46e34fe328b130404be143469e41bc366c807448979362"}, @@ -4347,7 +4347,7 @@ description = "Python PE parsing module" optional = false python-versions = ">=3.6.0" groups = ["dev"] -markers = "python_version < \"3.15\" and sys_platform == \"win32\"" +markers = "sys_platform == \"win32\" and python_version <= \"3.14\"" files = [ {file = "pefile-2024.8.26-py3-none-any.whl", hash = "sha256:76f8b485dcd3b1bb8166f1128d395fa3d87af26360c2358fb75b80019b957c6f"}, {file = "pefile-2024.8.26.tar.gz", hash = "sha256:3ff6c5d8b43e8c37bb6e6dd5085658d658a7a0bdcd20b6a07b1fcfc1c4e9d632"}, @@ -4360,7 +4360,7 @@ description = "Pexpect allows easy control of interactive console applications." 
optional = true python-versions = "*" groups = ["main"] -markers = "extra == \"sandbox\" and sys_platform != \"win32\" and sys_platform != \"emscripten\"" +markers = "sys_platform != \"win32\" and sys_platform != \"emscripten\" and extra == \"sandbox\"" files = [ {file = "pexpect-4.9.0-py2.py3-none-any.whl", hash = "sha256:7236d1e080e4936be2dc3e326cec0af72acf9212a7e1d060210e70a47e253523"}, {file = "pexpect-4.9.0.tar.gz", hash = "sha256:ee7d41123f3c9911050ea2c2dac107568dc43b2d3b0c7557a33212c398ead30f"}, @@ -4769,7 +4769,7 @@ description = "Run a subprocess in a pseudo terminal" optional = true python-versions = "*" groups = ["main"] -markers = "extra == \"sandbox\" and sys_platform != \"win32\" and sys_platform != \"emscripten\"" +markers = "sys_platform != \"win32\" and sys_platform != \"emscripten\" and extra == \"sandbox\"" files = [ {file = "ptyprocess-0.7.0-py2.py3-none-any.whl", hash = "sha256:4b41f3967fce3af57cc7e94b888626c18bf37a083e3651ca8feeb66d492fef35"}, {file = "ptyprocess-0.7.0.tar.gz", hash = "sha256:5c5d0a3b48ceee0b48485e0c26037c0acd7d29765ca3fbb5cb3831d347423220"}, @@ -5085,7 +5085,7 @@ description = "PyInstaller bundles a Python application and all its dependencies optional = false python-versions = "<3.15,>=3.8" groups = ["dev"] -markers = "python_version < \"3.15\"" +markers = "python_version <= \"3.14\"" files = [ {file = "pyinstaller-6.17.0-py3-none-macosx_10_13_universal2.whl", hash = "sha256:4e446b8030c6e5a2f712e3f82011ecf6c7ead86008357b0d23a0ec4bcde31dac"}, {file = "pyinstaller-6.17.0-py3-none-manylinux2014_aarch64.whl", hash = "sha256:aa9fd87aaa28239c6f0d0210114029bd03f8cac316a90bab071a5092d7c85ad7"}, @@ -5121,7 +5121,7 @@ description = "Community maintained hooks for PyInstaller" optional = false python-versions = ">=3.8" groups = ["dev"] -markers = "python_version < \"3.15\"" +markers = "python_version <= \"3.14\"" files = [ {file = "pyinstaller_hooks_contrib-2025.10-py3-none-any.whl", hash = 
"sha256:aa7a378518772846221f63a84d6306d9827299323243db890851474dfd1231a9"}, {file = "pyinstaller_hooks_contrib-2025.10.tar.gz", hash = "sha256:a1a737e5c0dccf1cf6f19a25e2efd109b9fec9ddd625f97f553dac16ee884881"}, @@ -5237,15 +5237,14 @@ diagrams = ["jinja2", "railroad-diagrams"] [[package]] name = "pypdf" -version = "6.7.1" +version = "6.7.2" description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files" -optional = true +optional = false python-versions = ">=3.9" groups = ["main"] -markers = "extra == \"sandbox\"" files = [ - {file = "pypdf-6.7.1-py3-none-any.whl", hash = "sha256:a02ccbb06463f7c334ce1612e91b3e68a8e827f3cee100b9941771e6066b094e"}, - {file = "pypdf-6.7.1.tar.gz", hash = "sha256:6b7a63be5563a0a35d54c6d6b550d75c00b8ccf36384be96365355e296e6b3b0"}, + {file = "pypdf-6.7.2-py3-none-any.whl", hash = "sha256:331b63cd66f63138f152a700565b3e0cebdf4ec8bec3b7594b2522418782f1f3"}, + {file = "pypdf-6.7.2.tar.gz", hash = "sha256:82a1a48de500ceea59a52a7d979f5095927ef802e4e4fac25ab862a73468acbb"}, ] [package.extras] @@ -5503,7 +5502,7 @@ description = "A (partial) reimplementation of pywin32 using ctypes/cffi" optional = false python-versions = ">=3.6" groups = ["dev"] -markers = "python_version < \"3.15\" and sys_platform == \"win32\"" +markers = "sys_platform == \"win32\" and python_version <= \"3.14\"" files = [ {file = "pywin32-ctypes-0.2.3.tar.gz", hash = "sha256:d162dc04946d704503b2edc4d55f3dba5c1d539ead017afa00142c38b9885755"}, {file = "pywin32_ctypes-0.2.3-py3-none-any.whl", hash = "sha256:8a1513379d709975552d202d942d9837758905c8d01eb82b8bcc30918929e7b8"}, @@ -6458,7 +6457,7 @@ description = "Standard library aifc redistribution. \"dead battery\"." 
optional = true python-versions = "*" groups = ["main"] -markers = "python_version >= \"3.13\" and extra == \"sandbox\"" +markers = "extra == \"sandbox\" and python_version >= \"3.13\"" files = [ {file = "standard_aifc-3.13.0-py3-none-any.whl", hash = "sha256:f7ae09cc57de1224a0dd8e3eb8f73830be7c3d0bc485de4c1f82b4a7f645ac66"}, {file = "standard_aifc-3.13.0.tar.gz", hash = "sha256:64e249c7cb4b3daf2fdba4e95721f811bde8bdfc43ad9f936589b7bb2fae2e43"}, @@ -6475,7 +6474,7 @@ description = "Standard library chunk redistribution. \"dead battery\"." optional = true python-versions = "*" groups = ["main"] -markers = "python_version >= \"3.13\" and extra == \"sandbox\"" +markers = "extra == \"sandbox\" and python_version >= \"3.13\"" files = [ {file = "standard_chunk-3.13.0-py3-none-any.whl", hash = "sha256:17880a26c285189c644bd5bd8f8ed2bdb795d216e3293e6dbe55bbd848e2982c"}, {file = "standard_chunk-3.13.0.tar.gz", hash = "sha256:4ac345d37d7e686d2755e01836b8d98eda0d1a3ee90375e597ae43aaf064d654"}, From 968cb25cbf684b2c7ecb7e3f8eaab2d9cb94fbb0 Mon Sep 17 00:00:00 2001 From: octovimmer Date: Mon, 2 Mar 2026 14:54:46 -0500 Subject: [PATCH 30/43] chore: remove codex models from supported models --- docs/llm-providers/models.mdx | 4 ---- 1 file changed, 4 deletions(-) diff --git a/docs/llm-providers/models.mdx b/docs/llm-providers/models.mdx index 6c26da1..45bf8bd 100644 --- a/docs/llm-providers/models.mdx +++ b/docs/llm-providers/models.mdx @@ -50,10 +50,6 @@ strix --target ./your-app | GPT-5.2 | `strix/gpt-5.2` | | GPT-5.1 | `strix/gpt-5.1` | | GPT-5 | `strix/gpt-5` | -| GPT-5.2 Codex | `strix/gpt-5.2-codex` | -| GPT-5.1 Codex Max | `strix/gpt-5.1-codex-max` | -| GPT-5.1 Codex | `strix/gpt-5.1-codex` | -| GPT-5 Codex | `strix/gpt-5-codex` | ### Google From 3e8a5c64bb4af544e5667d74e165b5f6e4aa6051 Mon Sep 17 00:00:00 2001 From: octovimmer Date: Mon, 2 Mar 2026 16:29:16 -0500 Subject: [PATCH 31/43] chore: remove references of codex models --- docs/llm-providers/overview.mdx | 2 +- 
strix/llm/utils.py | 4 ---- 2 files changed, 1 insertion(+), 5 deletions(-) diff --git a/docs/llm-providers/overview.mdx b/docs/llm-providers/overview.mdx index b3df76d..153ad0c 100644 --- a/docs/llm-providers/overview.mdx +++ b/docs/llm-providers/overview.mdx @@ -49,7 +49,7 @@ See the [Local Models guide](/llm-providers/local) for setup instructions and re Recommended models router with high rate limits. - GPT-5 and Codex models. + GPT-5 models. Claude Opus, Sonnet, and Haiku. diff --git a/strix/llm/utils.py b/strix/llm/utils.py index 8ab1693..cb61a81 100644 --- a/strix/llm/utils.py +++ b/strix/llm/utils.py @@ -37,10 +37,6 @@ STRIX_MODEL_MAP: dict[str, str] = { "gpt-5.2": "openai/gpt-5.2", "gpt-5.1": "openai/gpt-5.1", "gpt-5": "openai/gpt-5", - "gpt-5.2-codex": "openai/gpt-5.2-codex", - "gpt-5.1-codex-max": "openai/gpt-5.1-codex-max", - "gpt-5.1-codex": "openai/gpt-5.1-codex", - "gpt-5-codex": "openai/gpt-5-codex", "gemini-3-pro-preview": "gemini/gemini-3-pro-preview", "gemini-3-flash-preview": "gemini/gemini-3-flash-preview", "glm-5": "openrouter/z-ai/glm-5", From d30e1d2f66ff794172be2e2ce5159d7d2514fd51 Mon Sep 17 00:00:00 2001 From: Ahmed Allam <49919286+0xallam@users.noreply.github.com> Date: Tue, 3 Mar 2026 03:33:14 +0400 Subject: [PATCH 32/43] Update models.mdx --- docs/llm-providers/models.mdx | 1 - 1 file changed, 1 deletion(-) diff --git a/docs/llm-providers/models.mdx b/docs/llm-providers/models.mdx index 45bf8bd..758679b 100644 --- a/docs/llm-providers/models.mdx +++ b/docs/llm-providers/models.mdx @@ -16,7 +16,6 @@ Strix Router is currently in **beta**. 
It's completely optional — Strix works - **Failover & load balancing** — Automatic fallback across providers for reliability - **Simple setup** — One API key, one environment variable, no provider accounts needed - **No markup** — Same token pricing as the underlying providers, no extra fees -- **$10 free credit** — Try it free on signup, no credit card required ## Quick Start From 72c3e0dd9007c91d221f0483bcf3a84d4eb148b9 Mon Sep 17 00:00:00 2001 From: Ahmed Allam <49919286+0xallam@users.noreply.github.com> Date: Tue, 3 Mar 2026 03:33:46 +0400 Subject: [PATCH 33/43] Update README --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 9b8f96e..e1e8818 100644 --- a/README.md +++ b/README.md @@ -71,7 +71,7 @@ Strix are autonomous AI agents that act just like real hackers - they run your c - Docker (running) - An LLM API key: - Any [supported provider](https://docs.strix.ai/llm-providers/overview) (OpenAI, Anthropic, Google, etc.) - - Or [Strix Router](https://models.strix.ai) — single API key for multiple providers with $10 free credit on signup + - Or [Strix Router](https://models.strix.ai) — single API key for multiple providers ### Installation & First Scan From 3c6fccca74b1b402bf201ff29ecf77ff698f3762 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 2 Mar 2026 01:56:51 +0000 Subject: [PATCH 34/43] chore(deps): bump pypdf from 6.7.2 to 6.7.4 Bumps [pypdf](https://github.com/py-pdf/pypdf) from 6.7.2 to 6.7.4. - [Release notes](https://github.com/py-pdf/pypdf/releases) - [Changelog](https://github.com/py-pdf/pypdf/blob/main/CHANGELOG.md) - [Commits](https://github.com/py-pdf/pypdf/compare/6.7.2...6.7.4) --- updated-dependencies: - dependency-name: pypdf dependency-version: 6.7.4 dependency-type: indirect ... 
Signed-off-by: dependabot[bot] --- poetry.lock | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/poetry.lock b/poetry.lock index ab48d6b..dbcbd3f 100644 --- a/poetry.lock +++ b/poetry.lock @@ -5237,14 +5237,14 @@ diagrams = ["jinja2", "railroad-diagrams"] [[package]] name = "pypdf" -version = "6.7.2" +version = "6.7.4" description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files" optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "pypdf-6.7.2-py3-none-any.whl", hash = "sha256:331b63cd66f63138f152a700565b3e0cebdf4ec8bec3b7594b2522418782f1f3"}, - {file = "pypdf-6.7.2.tar.gz", hash = "sha256:82a1a48de500ceea59a52a7d979f5095927ef802e4e4fac25ab862a73468acbb"}, + {file = "pypdf-6.7.4-py3-none-any.whl", hash = "sha256:527d6da23274a6c70a9cb59d1986d93946ba8e36a6bc17f3f7cce86331492dda"}, + {file = "pypdf-6.7.4.tar.gz", hash = "sha256:9edd1cd47938bb35ec87795f61225fd58a07cfaf0c5699018ae1a47d6f8ab0e3"}, ] [package.extras] From 672a668ecf1b5df644b5eb1297516dbc15e64adc Mon Sep 17 00:00:00 2001 From: Ms6RB <33758164+ms6rb@users.noreply.github.com> Date: Sun, 8 Mar 2026 18:45:08 +0200 Subject: [PATCH 35/43] feat(skills): add NestJS security testing module (#348) --- strix/skills/frameworks/nestjs.md | 225 ++++++++++++++++++++++++++++++ 1 file changed, 225 insertions(+) create mode 100644 strix/skills/frameworks/nestjs.md diff --git a/strix/skills/frameworks/nestjs.md b/strix/skills/frameworks/nestjs.md new file mode 100644 index 0000000..51cf924 --- /dev/null +++ b/strix/skills/frameworks/nestjs.md @@ -0,0 +1,225 @@ +--- +name: nestjs +description: Security testing playbook for NestJS applications covering guards, pipes, decorators, module boundaries, and multi-transport auth +--- + +# NestJS + +Security testing for NestJS applications. 
Focus on guard gaps across decorator stacks, validation pipe bypasses, module boundary leaks, and inconsistent auth enforcement across HTTP, WebSocket, and microservice transports. + +## Attack Surface + +**Decorator Pipeline** +- Guards: `@UseGuards`, `CanActivate`, execution context (HTTP/WS/RPC), `Reflector` metadata +- Pipes: `ValidationPipe` (whitelist, transform, forbidNonWhitelisted), `ParseIntPipe`, custom pipes +- Interceptors: response mapping, caching, logging, timeout — can modify request/response flow +- Filters: exception filters that may leak information +- Metadata: `@SetMetadata`, `@Public()`, `@Roles()`, `@Permissions()` + +**Module System** +- `@Module` boundaries, provider scoping (DEFAULT/REQUEST/TRANSIENT) +- Dynamic modules: `forRoot`/`forRootAsync`, global modules +- DI container: provider overrides, custom providers + +**Controllers & Transports** +- REST: `@Controller`, versioning (URI/Header/MediaType) +- GraphQL: `@Resolver`, playground/sandbox exposure +- WebSocket: `@WebSocketGateway`, gateway guards, room authorization +- Microservices: TCP, Redis, NATS, MQTT, gRPC, Kafka — often lack HTTP-level auth + +**Data Layer** +- TypeORM: repositories, QueryBuilder, raw queries, relations +- Prisma: `$queryRaw`, `$queryRawUnsafe` +- Mongoose: operator injection, `$where`, `$regex` + +**Auth & Config** +- `@nestjs/passport` strategies, `@nestjs/jwt`, session-based auth +- `@nestjs/config`, ConfigService, `.env` files +- `@nestjs/throttler`, rate limiting with `@SkipThrottle` + +**API Documentation** +- `@nestjs/swagger`: OpenAPI exposure, DTO schemas, auth schemes + +## High-Value Targets + +- Swagger/OpenAPI endpoints in production (`/api`, `/api-docs`, `/api-json`, `/swagger`) +- Auth endpoints: login, register, token refresh, password reset, OAuth callbacks +- Admin controllers decorated with `@Roles('admin')` — test with user-level tokens +- File upload endpoints using `FileInterceptor`/`FilesInterceptor` +- WebSocket gateways sharing 
business logic with HTTP controllers +- Microservice handlers (`@MessagePattern`, `@EventPattern`) — often unguarded +- CRUD generators (`@nestjsx/crud`) with auto-generated endpoints +- Background jobs and scheduled tasks (`@nestjs/schedule`) +- Health/metrics endpoints (`@nestjs/terminus`, `/health`, `/metrics`) +- GraphQL playground/sandbox in production (`/graphql`) + +## Reconnaissance + +**Swagger Discovery** +``` +GET /api +GET /api-docs +GET /api-json +GET /swagger +GET /docs +GET /v1/api-docs +GET /api/v2/docs +``` + +Extract: paths, parameter schemas, DTOs, auth schemes, example values. Swagger may reveal internal endpoints, deprecated routes, and admin-only paths not visible in the UI. + +**Guard Mapping** + +For each controller and method, identify: +- Global guards (applied in `main.ts` or app module) +- Controller-level guards (`@UseGuards` on the class) +- Method-level guards (`@UseGuards` on individual handlers) +- `@Public()` or `@SkipThrottle()` decorators that bypass protection + +## Key Vulnerabilities + +### Guard Bypass + +**Decorator Stack Gaps** +- Guards execute: global → controller → method. A method missing `@UseGuards` when siblings have it is the #1 finding. +- `@Public()` metadata causing global `AuthGuard` to skip enforcement — check if applied too broadly. +- New methods added to existing controllers without inheriting the expected guard. + +**ExecutionContext Switching** +- Guards handling only HTTP context (`getRequest()`) may fail silently on WebSocket or RPC, returning `true` by default. +- Test same business logic through alternate transports to find context-specific bypasses. + +**Reflector Mismatches** +- Guard reads `SetMetadata('roles', [...])` but decorator sets `'role'` (singular) — guard sees no metadata, defaults to allow. +- `applyDecorators()` compositions accidentally overriding stricter guards with permissive ones. 
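The Reflector key-mismatch failure mode can be sketched without any NestJS dependency. This is a hypothetical illustration (plain TypeScript, invented names): the decorator side writes metadata under the singular key `role` while the guard reads the plural `roles`, sees nothing, and falls through to allow.

```typescript
// Hypothetical sketch of a Reflector key mismatch — no NestJS dependency.
type Metadata = Record<string, unknown>;

// Decorator side: developer sets the singular key by mistake.
const handlerMetadata: Metadata = { role: ["admin"] };

// Guard side: reads the plural key the shared RolesGuard expects.
function canActivate(meta: Metadata, userRoles: string[]): boolean {
  const required = meta["roles"] as string[] | undefined;
  if (!required) {
    // No metadata found -> route treated as unrestricted.
    return true;
  }
  return userRoles.some((r) => required.includes(r));
}

// A plain user passes because the guard never saw the role requirement.
console.log(canActivate(handlerMetadata, ["user"])); // true — bypass
```

The same default-allow shape appears whenever a guard treats "missing metadata" as "public route", which is why composed decorators that write to the wrong key fail silently.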
+ +### Validation Pipe Exploits + +**Whitelist Bypass** +- `whitelist: true` without `forbidNonWhitelisted: true`: extra properties silently stripped but may have been processed by earlier middleware/interceptors. +- Missing `@Type(() => ChildDto)` on nested objects: `@ValidateNested()` without `@Type` means nested payload is never validated. +- Array elements: `@IsArray()` doesn't validate elements without `@ValidateNested({ each: true })` and `@Type`. + +**Type Coercion** +- `transform: true` enables implicit coercion: strings → numbers, `"true"` → `true`, `"null"` → `null`. +- Exploit truthiness assumptions in business logic downstream. + +**Conditional Validation** +- `@ValidateIf()` and validation groups creating paths where fields skip validation entirely. + +**Missing Parse Pipes** +- `@Param('id')` without `ParseIntPipe`/`ParseUUIDPipe` — string values reach ORM queries directly. + +### Auth & Passport + +**JWT Strategy** +- Check `ignoreExpiration` is false, `algorithms` is pinned (no `none` or HS/RS confusion) +- Weak `secretOrKey` values +- Cross-service token reuse when audience/issuer not enforced + +**Passport Strategy Issues** +- `validate()` return value becomes `req.user` — if it returns full DB record, sensitive fields leak downstream +- Multiple strategies (JWT + session): one may bypass restrictions of the other +- Custom guards returning `true` for unauthenticated as "optional auth" + +**Timing Attacks** +- Plain string comparison instead of bcrypt/argon2 in local strategy + +### Serialization Leaks + +**Missing ClassSerializerInterceptor** +- If not applied globally, `@Exclude()` fields (passwords, internal IDs) returned in responses. +- `@Expose()` with groups: admin-only fields exposed when groups not enforced per-request. + +**Circular Relations** +- Eager-loaded TypeORM/Prisma relations exposing entire object graph without careful serialization. 
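The serialization-leak pattern above can be demonstrated with a minimal sketch (hypothetical entity and field names, no NestJS or class-transformer dependency): when a handler returns the raw entity and no serializer enforces exclusions, whatever the entity holds is what gets JSON-stringified into the response.

```typescript
// Hypothetical sketch of a serialization leak — raw entity vs explicit DTO.
interface UserEntity {
  id: number;
  email: string;
  passwordHash: string; // intended to be excluded from responses
}

const user: UserEntity = { id: 1, email: "a@example.com", passwordHash: "hash" };

// Handler returns the entity directly; it is serialized as-is.
const responseBody = JSON.stringify(user);
console.log(responseBody.includes("passwordHash")); // true — the field leaks

// Minimal mitigation: map to an explicit response shape before returning.
function toPublic(u: UserEntity): { id: number; email: string } {
  return { id: u.id, email: u.email };
}
console.log(JSON.stringify(toPublic(user)).includes("passwordHash")); // false
```

During testing, diffing the raw entity schema against the actual response body is the fastest way to spot fields that should have been excluded.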
+ +### Interceptor Abuse + +**Cache Poisoning** +- `CacheInterceptor` without user/tenant identity in cache key — responses from one user served to another. +- Test: authenticated request, then unauthenticated request returning cached data. + +**Response Mapping** +- Transformation interceptors may leak internal entity fields if mapping is incomplete. + +### Module Boundary Leaks + +**Global Module Exposure** +- `@Global()` modules expose all providers to every module without explicit imports. +- Sensitive services (admin operations, internal APIs) accessible from untrusted modules. + +**Config Leaks** +- `forRoot`/`forRootAsync` configuration secrets accessible via `ConfigService` injection in any module. + +**Scope Issues** +- Request-scoped providers (`Scope.REQUEST`) incorrectly scoped as DEFAULT (singleton) — request context leaks across concurrent requests. + +### WebSocket Gateway + +- HTTP guards don't automatically apply to WebSocket gateways — `@UseGuards` must be explicit. +- Authentication deferred from `handleConnection` to message handlers allows unauthenticated message sending. +- Room/namespace authorization: users joining rooms they shouldn't access. +- `@SubscribeMessage()` handlers relying on connection-level auth instead of per-message validation. + +### Microservice Transport + +- `@MessagePattern`/`@EventPattern` handlers often lack guards (considered "internal"). +- If transport (Redis, NATS, Kafka) is network-accessible, messages can be injected bypassing all HTTP security. +- `ValidationPipe` may only be configured for HTTP — microservice payloads skip validation. + +### ORM Injection + +**TypeORM** +- `QueryBuilder` and `.query()` with template literal interpolation → SQL injection. +- Relations: API allowing specification of which relations to load via query params. + +**Mongoose** +- Query operator injection: `{ password: { $gt: "" } }` via unsanitized request body. +- `$where` and `$regex` operators from user input. 
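The Mongoose operator-injection bypass can be sketched as follows — a hypothetical, dependency-free matcher that mimics only the `$gt` operator to show why passing a request body straight into a query is dangerous:

```typescript
// Hypothetical sketch of NoSQL operator injection (simulated $gt matcher,
// not real Mongoose): { password: { $gt: "" } } matches every document.
type Query = Record<string, unknown>;

function matches(doc: Record<string, string>, query: Query): boolean {
  return Object.entries(query).every(([field, cond]) => {
    if (cond !== null && typeof cond === "object" && "$gt" in (cond as object)) {
      return doc[field] > (cond as { $gt: string }).$gt;
    }
    return doc[field] === cond; // plain equality for scalar values
  });
}

const doc = { username: "alice", password: "s3cret" };

// Intended use: exact credential match.
console.log(matches(doc, { username: "alice", password: "wrong" })); // false

// Attacker body: { "username": "alice", "password": { "$gt": "" } }
console.log(matches(doc, { username: "alice", password: { $gt: "" } })); // true — auth bypass
```

The fix is to reject non-scalar values (or validate with a DTO) before the body ever reaches the query layer.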
+ +**Prisma** +- `$queryRaw`/`$executeRaw` with string interpolation (but not tagged template). +- `$queryRawUnsafe` usage. + +### Rate Limiting + +- `@SkipThrottle()` on sensitive endpoints (login, password reset, OTP). +- In-memory throttler storage: resets on restart, doesn't work across instances. +- Behind proxy without `trust proxy`: all requests share same IP, or header spoofable. + +### CRUD Generators + +- Auto-generated CRUD endpoints may not inherit manual guard configurations. +- Bulk operations (`createMany`, `updateMany`) bypassing per-entity authorization. +- Query parameter injection in CRUD libraries: `filter`, `sort`, `join`, `select` exposing unauthorized data. + +## Bypass Techniques + +- `@Public()` / skip-metadata applied via composed decorators at method level causing global guards to skip via `Reflector` metadata checks +- Route param pollution: `/users/123?id=456` — which `id` wins in guards vs handlers? +- Version routing: v1 of endpoint may still be registered without the guard added to v2 +- `X-HTTP-Method-Override` or `_method` processed by Express before guards +- Content-type switching: `application/x-www-form-urlencoded` instead of JSON to bypass JSON-specific validation +- Exception filter differences: guard throwing results in generic error that leaks route existence info + +## Testing Methodology + +1. **Enumerate** — Fetch Swagger/OpenAPI, map all controllers, resolvers, and gateways +2. **Guard audit** — Map decorator stack per method: which guards, pipes, interceptors are applied at each level +3. **Matrix testing** — Test each endpoint across: unauth/user/admin × HTTP/WS/microservice +4. **Validation probing** — Send extra fields, wrong types, nested objects, arrays to find pipe gaps +5. **Transport parity** — Same operation via HTTP, WebSocket, and microservice transport +6. **Module boundaries** — Check if providers from one module are accessible without proper imports +7. 
**Serialization check** — Compare raw entity fields with API response fields + +## Validation Requirements + +- Guard bypass: request to guarded endpoint succeeding without auth, showing guard chain break point +- Validation bypass: payload with extra/malformed fields affecting business logic +- Cross-transport inconsistency: same action authorized via HTTP but exploitable via WebSocket/microservice +- Module boundary leak: accessing provider or data across unauthorized module boundaries +- Serialization leak: response containing excluded fields (passwords, internal metadata) +- IDOR: side-by-side requests from different users showing unauthorized data access +- ORM injection: raw query with user-controlled input returning unauthorized data, or error-based evidence of query structure +- Cache poisoning: response from unauthenticated or different-user request matching a prior authenticated user's cached response From 048be1fe598385a06c656a9dc15ecc09301b6caa Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Sun, 8 Mar 2026 09:46:32 -0700 Subject: [PATCH 36/43] chore(deps): bump pypdf from 6.7.4 to 6.7.5 (#343) --- poetry.lock | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/poetry.lock b/poetry.lock index dbcbd3f..ffae0a0 100644 --- a/poetry.lock +++ b/poetry.lock @@ -5237,14 +5237,14 @@ diagrams = ["jinja2", "railroad-diagrams"] [[package]] name = "pypdf" -version = "6.7.4" +version = "6.7.5" description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files" optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "pypdf-6.7.4-py3-none-any.whl", hash = "sha256:527d6da23274a6c70a9cb59d1986d93946ba8e36a6bc17f3f7cce86331492dda"}, - {file = "pypdf-6.7.4.tar.gz", hash = "sha256:9edd1cd47938bb35ec87795f61225fd58a07cfaf0c5699018ae1a47d6f8ab0e3"}, + {file = "pypdf-6.7.5-py3-none-any.whl", hash = 
"sha256:07ba7f1d6e6d9aa2a17f5452e320a84718d4ce863367f7ede2fd72280349ab13"}, + {file = "pypdf-6.7.5.tar.gz", hash = "sha256:40bb2e2e872078655f12b9b89e2f900888bb505e88a82150b64f9f34fa25651d"}, ] [package.extras] From a60cb4b66c4133af865b57741f72d725389981a1 Mon Sep 17 00:00:00 2001 From: alex s <46074070+bearsyankees@users.noreply.github.com> Date: Mon, 9 Mar 2026 05:11:24 -0300 Subject: [PATCH 37/43] Add OpenTelemetry observability with local JSONL traces (#347) Co-authored-by: 0xallam --- docs/advanced/configuration.mdx | 31 +- poetry.lock | 1512 ++++++++++++++++++++++++- pyproject.toml | 7 + strix/config/config.py | 5 + strix/interface/main.py | 15 +- strix/telemetry/flags.py | 23 + strix/telemetry/posthog.py | 4 +- strix/telemetry/tracer.py | 401 ++++++- strix/telemetry/utils.py | 413 +++++++ tests/config/__init__.py | 1 + tests/config/test_config_telemetry.py | 55 + tests/llm/test_llm_otel.py | 15 + tests/telemetry/test_flags.py | 28 + tests/telemetry/test_tracer.py | 379 +++++++ tests/telemetry/test_utils.py | 39 + 15 files changed, 2880 insertions(+), 48 deletions(-) create mode 100644 strix/telemetry/flags.py create mode 100644 strix/telemetry/utils.py create mode 100644 tests/config/__init__.py create mode 100644 tests/config/test_config_telemetry.py create mode 100644 tests/llm/test_llm_otel.py create mode 100644 tests/telemetry/test_flags.py create mode 100644 tests/telemetry/test_tracer.py create mode 100644 tests/telemetry/test_utils.py diff --git a/docs/advanced/configuration.mdx b/docs/advanced/configuration.mdx index 9c94630..cf8eb93 100644 --- a/docs/advanced/configuration.mdx +++ b/docs/advanced/configuration.mdx @@ -46,9 +46,37 @@ Configure Strix using environment variables or a config file. - Enable/disable anonymous telemetry. Set to `0`, `false`, `no`, or `off` to disable. + Global telemetry default toggle. Set to `0`, `false`, `no`, or `off` to disable both PostHog and OTEL unless overridden by per-channel flags below. 
+ + Enable/disable OpenTelemetry run observability independently. When unset, falls back to `STRIX_TELEMETRY`. + + + + Enable/disable PostHog product telemetry independently. When unset, falls back to `STRIX_TELEMETRY`. + + + + OTLP/Traceloop base URL for remote OpenTelemetry export. If unset, Strix keeps traces local only. + + + + API key used for remote trace export. Remote export is enabled only when both `TRACELOOP_BASE_URL` and `TRACELOOP_API_KEY` are set. + + + + Optional custom OTEL headers (JSON object or `key=value,key2=value2`). Useful for Langfuse or custom/self-hosted OTLP gateways. + + +When remote OTEL vars are not set, Strix still writes complete run telemetry locally to: + +```bash +strix_runs//events.jsonl +``` + +When remote vars are set, Strix dual-writes telemetry to both local JSONL and the remote OTEL endpoint. + ## Docker Configuration @@ -106,4 +134,5 @@ export PERPLEXITY_API_KEY="pplx-..." # Optional: Custom timeouts export LLM_TIMEOUT="600" export STRIX_SANDBOX_EXECUTION_TIMEOUT="300" + ``` diff --git a/poetry.lock b/poetry.lock index ffae0a0..5930b42 100644 --- a/poetry.lock +++ b/poetry.lock @@ -1,4 +1,4 @@ -# This file is automatically @generated by Poetry 2.2.1 and should not be changed by hand. +# This file is automatically @generated by Poetry 2.3.2 and should not be changed by hand. 
[[package]] name = "aiohappyeyeballs" @@ -220,6 +220,34 @@ files = [ {file = "annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89"}, ] +[[package]] +name = "anthropic" +version = "0.84.0" +description = "The official Python library for the anthropic API" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "anthropic-0.84.0-py3-none-any.whl", hash = "sha256:861c4c50f91ca45f942e091d83b60530ad6d4f98733bfe648065364da05d29e7"}, + {file = "anthropic-0.84.0.tar.gz", hash = "sha256:72f5f90e5aebe62dca316cb013629cfa24996b0f5a4593b8c3d712bc03c43c37"}, +] + +[package.dependencies] +anyio = ">=3.5.0,<5" +distro = ">=1.7.0,<2" +docstring-parser = ">=0.15,<1" +httpx = ">=0.25.0,<1" +jiter = ">=0.4.0,<1" +pydantic = ">=1.9.0,<3" +sniffio = "*" +typing-extensions = ">=4.10,<5" + +[package.extras] +aiohttp = ["aiohttp", "httpx-aiohttp (>=0.1.9)"] +bedrock = ["boto3 (>=1.28.57)", "botocore (>=1.31.57)"] +mcp = ["mcp (>=1.0) ; python_version >= \"3.10\""] +vertex = ["google-auth[requests] (>=2,<3)"] + [[package]] name = "anyio" version = "4.10.0" @@ -628,6 +656,18 @@ files = [ {file = "cachetools-5.5.2.tar.gz", hash = "sha256:1a661caa9175d26759571b2e19580f9d6393969e5dfca11fdb1f947a23e640d4"}, ] +[[package]] +name = "catalogue" +version = "2.0.10" +description = "Super lightweight function registries for your library" +optional = false +python-versions = ">=3.6" +groups = ["main"] +files = [ + {file = "catalogue-2.0.10-py3-none-any.whl", hash = "sha256:58c2de0020aa90f4a2da7dfad161bf7b3b054c86a5f09fcedc0b2b740c109a9f"}, + {file = "catalogue-2.0.10.tar.gz", hash = "sha256:4f56daa940913d3f09d589c191c74e5a6d51762b3a9e37dd53b7437afd6cda15"}, +] + [[package]] name = "certifi" version = "2025.8.3" @@ -890,7 +930,7 @@ files = [ {file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"}, {file = "colorama-0.4.6.tar.gz", hash = 
"sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"}, ] -markers = {main = "sys_platform == \"win32\" and extra == \"sandbox\" or platform_system == \"Windows\"", dev = "platform_system == \"Windows\" or sys_platform == \"win32\""} +markers = {dev = "platform_system == \"Windows\" or sys_platform == \"win32\""} [[package]] name = "contourpy" @@ -1174,6 +1214,17 @@ ssh = ["bcrypt (>=3.1.5)"] test = ["certifi (>=2024)", "cryptography-vectors (==46.0.5)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"] test-randomorder = ["pytest-randomly"] +[[package]] +name = "cuid" +version = "0.4" +description = "Fast, scalable unique ID generation" +optional = false +python-versions = "*" +groups = ["main"] +files = [ + {file = "cuid-0.4.tar.gz", hash = "sha256:74eaba154916a2240405c3631acee708c263ef8fa05a86820b87d0f59f84e978"}, +] + [[package]] name = "cvss" version = "3.6" @@ -1203,6 +1254,29 @@ files = [ docs = ["ipython", "matplotlib", "numpydoc", "sphinx"] tests = ["pytest", "pytest-cov", "pytest-xdist"] +[[package]] +name = "dateparser" +version = "1.3.0" +description = "Date parsing library designed to parse dates from HTML pages" +optional = false +python-versions = ">=3.10" +groups = ["main"] +files = [ + {file = "dateparser-1.3.0-py3-none-any.whl", hash = "sha256:8dc678b0a526e103379f02ae44337d424bd366aac727d3c6cf52ce1b01efbb5a"}, + {file = "dateparser-1.3.0.tar.gz", hash = "sha256:5bccf5d1ec6785e5be71cc7ec80f014575a09b4923e762f850e57443bddbf1a5"}, +] + +[package.dependencies] +python-dateutil = ">=2.7.0" +pytz = ">=2024.2" +regex = ">=2024.9.11" +tzlocal = ">=0.2" + +[package.extras] +calendars = ["convertdate (>=2.2.1)", "hijridate"] +fasttext = ["fasttext (>=0.9.1)", "numpy (>=1.22.0,<2)"] +langdetect = ["langdetect (>=1.0.0)"] + [[package]] name = "decorator" version = "5.2.1" @@ -1228,6 +1302,24 @@ files = [ {file = "defusedxml-0.7.1.tar.gz", hash = 
"sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"}, ] +[[package]] +name = "deprecated" +version = "1.3.1" +description = "Python @deprecated decorator to deprecate old python classes, functions or methods." +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7" +groups = ["main"] +files = [ + {file = "deprecated-1.3.1-py2.py3-none-any.whl", hash = "sha256:597bfef186b6f60181535a29fbe44865ce137a5079f295b479886c82729d5f3f"}, + {file = "deprecated-1.3.1.tar.gz", hash = "sha256:b1b50e0ff0c1fddaa5708a2c6b0a6588bb09b892825ab2b214ac9ea9d92a5223"}, +] + +[package.dependencies] +wrapt = ">=1.10,<3" + +[package.extras] +dev = ["PyTest", "PyTest-Cov", "bump2version (<1)", "setuptools ; python_version >= \"3.12\"", "tox"] + [[package]] name = "dill" version = "0.4.0" @@ -1316,10 +1408,9 @@ websockets = ["websocket-client (>=1.3.0)"] name = "docstring-parser" version = "0.17.0" description = "Parse Python docstrings in reST, Google and Numpydoc format" -optional = true +optional = false python-versions = ">=3.8" groups = ["main"] -markers = "extra == \"vertex\"" files = [ {file = "docstring_parser-0.17.0-py3-none-any.whl", hash = "sha256:cf2569abd23dce8099b300f9b4fa8191e9582dda731fd533daf54c4551658708"}, {file = "docstring_parser-0.17.0.tar.gz", hash = "sha256:583de4a309722b3315439bb31d64ba3eebada841f2e2cee23b99df001434c912"}, @@ -1388,6 +1479,24 @@ files = [ [package.extras] tests = ["asttokens (>=2.1.0)", "coverage", "coverage-enable-subprocess", "ipython", "littleutils", "pytest", "rich ; python_version >= \"3.11\""] +[[package]] +name = "faker" +version = "40.8.0" +description = "Faker is a Python package that generates fake data for you." 
+optional = false +python-versions = ">=3.10" +groups = ["main"] +files = [ + {file = "faker-40.8.0-py3-none-any.whl", hash = "sha256:eb21bdba18f7a8375382eb94fb436fce07046893dc94cb20817d28deb0c3d579"}, + {file = "faker-40.8.0.tar.gz", hash = "sha256:936a3c9be6c004433f20aa4d99095df5dec82b8c7ad07459756041f8c1728875"}, +] + +[package.dependencies] +tzdata = {version = "*", markers = "platform_system == \"Windows\""} + +[package.extras] +tzdata = ["tzdata"] + [[package]] name = "fastapi" version = "0.121.0" @@ -2142,10 +2251,9 @@ requests = ["requests (>=2.18.0,<3.0.0)"] name = "googleapis-common-protos" version = "1.72.0" description = "Common protobufs used in Google APIs" -optional = true +optional = false python-versions = ">=3.7" groups = ["main"] -markers = "extra == \"vertex\"" files = [ {file = "googleapis_common_protos-1.72.0-py3-none-any.whl", hash = "sha256:4299c5a82d5ae1a9702ada957347726b167f9f8d1fc352477702a1e851ff4038"}, {file = "googleapis_common_protos-1.72.0.tar.gz", hash = "sha256:e55a601c1b32b52d7a3e65f43563e2aa61bcd737998ee672ac9b951cd49319f5"}, @@ -2635,6 +2743,18 @@ perf = ["ipython"] test = ["flufl.flake8", "importlib_resources (>=1.3) ; python_version < \"3.9\"", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6,!=8.1.*)", "pytest-perf (>=0.9.2)"] type = ["pytest-mypy"] +[[package]] +name = "inflection" +version = "0.5.1" +description = "A port of Ruby on Rails inflector to Python" +optional = false +python-versions = ">=3.5" +groups = ["main"] +files = [ + {file = "inflection-0.5.1-py2.py3-none-any.whl", hash = "sha256:f38b2b640938a4f35ade69ac3d053042959b62a0f1076a5bbaa1b9526605a8a2"}, + {file = "inflection-0.5.1.tar.gz", hash = "sha256:1a29730d366e996aaacffb2f1f1cb9593dc38e2ddd30c91250c6dde09ea9b417"}, +] + [[package]] name = "iniconfig" version = "2.1.0" @@ -2862,6 +2982,18 @@ files = [ {file = "jmespath-1.0.1.tar.gz", hash = "sha256:90261b206d6defd58fdd5e85f478bf633a2901798906be2ad389150c5c60edbe"}, ] +[[package]] +name = "joblib" 
+version = "1.5.3" +description = "Lightweight pipelining with Python functions" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "joblib-1.5.3-py3-none-any.whl", hash = "sha256:5fc3c5039fc5ca8c0276333a188bbd59d6b7ab37fe6632daa76bc7f9ec18e713"}, + {file = "joblib-1.5.3.tar.gz", hash = "sha256:8561a3269e6801106863fd0d6d84bb737be9e7631e33aaed3fb9ce5953688da3"}, +] + [[package]] name = "jsonschema" version = "4.25.1" @@ -2876,7 +3008,7 @@ files = [ [package.dependencies] attrs = ">=22.2.0" -jsonschema-specifications = ">=2023.03.6" +jsonschema-specifications = ">=2023.3.6" referencing = ">=0.28.4" rpds-py = ">=0.7.1" @@ -3863,6 +3995,32 @@ extra = ["lxml (>=4.6)", "pydot (>=3.0.1)", "pygraphviz (>=1.14)", "sympy (>=1.1 test = ["pytest (>=7.2)", "pytest-cov (>=4.0)", "pytest-xdist (>=3.0)"] test-extras = ["pytest-mpl", "pytest-randomly"] +[[package]] +name = "nltk" +version = "3.9.3" +description = "Natural Language Toolkit" +optional = false +python-versions = ">=3.10" +groups = ["main"] +files = [ + {file = "nltk-3.9.3-py3-none-any.whl", hash = "sha256:60b3db6e9995b3dd976b1f0fa7dec22069b2677e759c28eb69b62ddd44870522"}, + {file = "nltk-3.9.3.tar.gz", hash = "sha256:cb5945d6424a98d694c2b9a0264519fab4363711065a46aa0ae7a2195b92e71f"}, +] + +[package.dependencies] +click = "*" +joblib = "*" +regex = ">=2021.8.3" +tqdm = "*" + +[package.extras] +all = ["matplotlib", "numpy", "pyparsing", "python-crfsuite", "requests", "scikit-learn", "scipy", "twython"] +corenlp = ["requests"] +machine-learning = ["numpy", "python-crfsuite", "scikit-learn", "scipy"] +plot = ["matplotlib"] +tgrep = ["pyparsing"] +twitter = ["twython"] + [[package]] name = "nodeenv" version = "1.9.1" @@ -3879,10 +4037,9 @@ files = [ name = "numpy" version = "2.3.2" description = "Fundamental package for array computing in Python" -optional = true +optional = false python-versions = ">=3.11" groups = ["main"] -markers = "extra == \"sandbox\"" files = [ {file = 
"numpy-2.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:852ae5bed3478b92f093e30f785c98e0cb62fa0a939ed057c31716e18a7a22b9"}, {file = "numpy-2.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7a0e27186e781a69959d0230dd9909b5e26024f8da10683bd6344baea1885168"}, @@ -4084,6 +4241,957 @@ files = [ [package.dependencies] et-xmlfile = "*" +[[package]] +name = "opentelemetry-api" +version = "1.40.0" +description = "OpenTelemetry Python API" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_api-1.40.0-py3-none-any.whl", hash = "sha256:82dd69331ae74b06f6a874704be0cfaa49a1650e1537d4a813b86ecef7d0ecf9"}, + {file = "opentelemetry_api-1.40.0.tar.gz", hash = "sha256:159be641c0b04d11e9ecd576906462773eb97ae1b657730f0ecf64d32071569f"}, +] + +[package.dependencies] +importlib-metadata = ">=6.0,<8.8.0" +typing-extensions = ">=4.5.0" + +[[package]] +name = "opentelemetry-exporter-otlp-proto-common" +version = "1.40.0" +description = "OpenTelemetry Protobuf encoding" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_exporter_otlp_proto_common-1.40.0-py3-none-any.whl", hash = "sha256:7081ff453835a82417bf38dccf122c827c3cbc94f2079b03bba02a3165f25149"}, + {file = "opentelemetry_exporter_otlp_proto_common-1.40.0.tar.gz", hash = "sha256:1cbee86a4064790b362a86601ee7934f368b81cd4cc2f2e163902a6e7818a0fa"}, +] + +[package.dependencies] +opentelemetry-proto = "1.40.0" + +[[package]] +name = "opentelemetry-exporter-otlp-proto-grpc" +version = "1.40.0" +description = "OpenTelemetry Collector Protobuf over gRPC Exporter" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_exporter_otlp_proto_grpc-1.40.0-py3-none-any.whl", hash = "sha256:2aa0ca53483fe0cf6405087a7491472b70335bc5c7944378a0a8e72e86995c52"}, + {file = "opentelemetry_exporter_otlp_proto_grpc-1.40.0.tar.gz", hash = 
"sha256:bd4015183e40b635b3dab8da528b27161ba83bf4ef545776b196f0fb4ec47740"}, +] + +[package.dependencies] +googleapis-common-protos = ">=1.57,<2.0" +grpcio = [ + {version = ">=1.63.2,<2.0.0", markers = "python_version < \"3.13\""}, + {version = ">=1.66.2,<2.0.0", markers = "python_version == \"3.13\""}, + {version = ">=1.75.1,<2.0.0", markers = "python_version >= \"3.14\""}, +] +opentelemetry-api = ">=1.15,<2.0" +opentelemetry-exporter-otlp-proto-common = "1.40.0" +opentelemetry-proto = "1.40.0" +opentelemetry-sdk = ">=1.40.0,<1.41.0" +typing-extensions = ">=4.6.0" + +[package.extras] +gcp-auth = ["opentelemetry-exporter-credential-provider-gcp (>=0.59b0)"] + +[[package]] +name = "opentelemetry-exporter-otlp-proto-http" +version = "1.40.0" +description = "OpenTelemetry Collector Protobuf over HTTP Exporter" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_exporter_otlp_proto_http-1.40.0-py3-none-any.whl", hash = "sha256:a8d1dab28f504c5d96577d6509f80a8150e44e8f45f82cdbe0e34c99ab040069"}, + {file = "opentelemetry_exporter_otlp_proto_http-1.40.0.tar.gz", hash = "sha256:db48f5e0f33217588bbc00274a31517ba830da576e59503507c839b38fa0869c"}, +] + +[package.dependencies] +googleapis-common-protos = ">=1.52,<2.0" +opentelemetry-api = ">=1.15,<2.0" +opentelemetry-exporter-otlp-proto-common = "1.40.0" +opentelemetry-proto = "1.40.0" +opentelemetry-sdk = ">=1.40.0,<1.41.0" +requests = ">=2.7,<3.0" +typing-extensions = ">=4.5.0" + +[package.extras] +gcp-auth = ["opentelemetry-exporter-credential-provider-gcp (>=0.59b0)"] + +[[package]] +name = "opentelemetry-instrumentation" +version = "0.61b0" +description = "Instrumentation Tools & Auto Instrumentation for OpenTelemetry Python" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation-0.61b0-py3-none-any.whl", hash = "sha256:92a93a280e69788e8f88391247cc530fd81f16f2b011979d4d6398f805cfbc63"}, + {file = 
"opentelemetry_instrumentation-0.61b0.tar.gz", hash = "sha256:cb21b48db738c9de196eba6b805b4ff9de3b7f187e4bbf9a466fa170514f1fc7"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.4,<2.0" +opentelemetry-semantic-conventions = "0.61b0" +packaging = ">=18.0" +wrapt = ">=1.0.0,<2.0.0" + +[[package]] +name = "opentelemetry-instrumentation-agno" +version = "0.53.0" +description = "OpenTelemetry Agno instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_agno-0.53.0-py3-none-any.whl", hash = "sha256:bab72e73e12dfcfae6440d6d47f124d6cdd9d6a5ef391ef896b79742696595d1"}, + {file = "opentelemetry_instrumentation_agno-0.53.0.tar.gz", hash = "sha256:67ff165475ca1c48ea41fe9db2d9f89d72430b8e995ea1aa8b329f04473b7a0c"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.28.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["agno"] + +[[package]] +name = "opentelemetry-instrumentation-alephalpha" +version = "0.53.0" +description = "OpenTelemetry Aleph Alpha instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_alephalpha-0.53.0-py3-none-any.whl", hash = "sha256:905d97267097c4d35426fda6893590908a4f15c58f50fdfbe9b59f8cfef266ea"}, + {file = "opentelemetry_instrumentation_alephalpha-0.53.0.tar.gz", hash = "sha256:e558d0c5aa17c4278619242d06792f272a32297ab1bb6dce61498863f40ee270"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["aleph-alpha-client"] + +[[package]] +name = "opentelemetry-instrumentation-anthropic" +version = "0.53.0" +description = "OpenTelemetry Anthropic 
instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_anthropic-0.53.0-py3-none-any.whl", hash = "sha256:e89f19457cb697fd94d63f29883f38d640603a7a0351c25052f3674f41af1c99"}, + {file = "opentelemetry_instrumentation_anthropic-0.53.0.tar.gz", hash = "sha256:de8d405f5ed2f6af5f368e028e6ad07504acecd20b133b84a9fa45827deaba15"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.14,<0.5.0" + +[package.extras] +instruments = ["anthropic"] + +[[package]] +name = "opentelemetry-instrumentation-bedrock" +version = "0.53.0" +description = "OpenTelemetry Bedrock instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_bedrock-0.53.0-py3-none-any.whl", hash = "sha256:1e13877d1bcf31e4617b0801f0369f2c2aa42fca17e9174d3cbf23b0c1a63315"}, + {file = "opentelemetry_instrumentation_bedrock-0.53.0.tar.gz", hash = "sha256:0bf17a81fdeddeeee2baf567b30ea42853c9dfd2ba8dca55fcbdb7c306aa0825"}, +] + +[package.dependencies] +anthropic = ">=0.17.0" +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" +tokenizers = ">=0.13.0" + +[package.extras] +instruments = ["boto3"] + +[[package]] +name = "opentelemetry-instrumentation-chromadb" +version = "0.53.0" +description = "OpenTelemetry Chroma DB instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_chromadb-0.53.0-py3-none-any.whl", hash = "sha256:5c1c17dc07ae94b4dec01022e2c5f9c51d31c8912d9ddde7ac392dd97094d317"}, + {file = "opentelemetry_instrumentation_chromadb-0.53.0.tar.gz", hash = 
"sha256:131495c56fdc6131abb8d8a31addcf86e9ab10e63e86927bb74380da351f1b5a"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["chromadb"] + +[[package]] +name = "opentelemetry-instrumentation-cohere" +version = "0.53.0" +description = "OpenTelemetry Cohere instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_cohere-0.53.0-py3-none-any.whl", hash = "sha256:7a1483c99db7f30c4dde1763834ee6844f0d2ba1a986b52eb740c5c4e68ed926"}, + {file = "opentelemetry_instrumentation_cohere-0.53.0.tar.gz", hash = "sha256:51a128e317d0ec09c1b42fb1b955258c2bb337150e55c23a70dbad627dac5097"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["cohere"] + +[[package]] +name = "opentelemetry-instrumentation-crewai" +version = "0.53.0" +description = "OpenTelemetry crewAI instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_crewai-0.53.0-py3-none-any.whl", hash = "sha256:348b9214f2557f33057a49fb648402cb46a231a063a9ffa7469047c1b2383afe"}, + {file = "opentelemetry_instrumentation_crewai-0.53.0.tar.gz", hash = "sha256:9b50cd375ca0b366f1f23e8f7e8d8a8baac61792fe1d3f515e41ef45a7dc360f"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["crewai"] + +[[package]] +name = "opentelemetry-instrumentation-google-generativeai" +version = "0.53.0" 
+description = "OpenTelemetry Google Generative AI instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_google_generativeai-0.53.0-py3-none-any.whl", hash = "sha256:8f3b14ac2bcf348502f039f9b0a1440b9e8a041280c4ee8c6e7ffb79e35f7bd8"}, + {file = "opentelemetry_instrumentation_google_generativeai-0.53.0.tar.gz", hash = "sha256:c30ed87c3ebb9b52558c97e465a36451e5dc6f40e18d1dbfef482ecbdadcf42f"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["google-genai"] + +[[package]] +name = "opentelemetry-instrumentation-groq" +version = "0.53.0" +description = "OpenTelemetry Groq instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_groq-0.53.0-py3-none-any.whl", hash = "sha256:40efe9df236e785ae31a498f3fe5b2287afa7465b4b7786f2ca36cfa70943aa3"}, + {file = "opentelemetry_instrumentation_groq-0.53.0.tar.gz", hash = "sha256:19065150a7236a2c99f1bcea6056456922a6997102198642285d3c7e80b011e4"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["groq"] + +[[package]] +name = "opentelemetry-instrumentation-haystack" +version = "0.53.0" +description = "OpenTelemetry Haystack instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_haystack-0.53.0-py3-none-any.whl", hash = "sha256:782daac342840f3c63194c6655258fc2c80b03b399458a30b6b332727e5a9d57"}, + {file = "opentelemetry_instrumentation_haystack-0.53.0.tar.gz", hash = 
"sha256:62307cf41d613b69fe1495e233ff4ec0f86e83fd9b5c8fe208eefc229ebde010"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["haystack-ai"] + +[[package]] +name = "opentelemetry-instrumentation-lancedb" +version = "0.53.0" +description = "OpenTelemetry Lancedb instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_lancedb-0.53.0-py3-none-any.whl", hash = "sha256:30e6b1b4b83c3513101931531919b650ea61ab65b8594f9966159f4eeaf436a8"}, + {file = "opentelemetry_instrumentation_lancedb-0.53.0.tar.gz", hash = "sha256:e646e8e850e4f646199dbf2c62d3bb3e495c00ab093303e5b4dbbd4c76f0738f"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["lancedb"] + +[[package]] +name = "opentelemetry-instrumentation-langchain" +version = "0.53.0" +description = "OpenTelemetry Langchain instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_langchain-0.53.0-py3-none-any.whl", hash = "sha256:5426917b76ffc5e9765c0b2eaac516ac7b30f70bd53bbbee51d65364ae668276"}, + {file = "opentelemetry_instrumentation_langchain-0.53.0.tar.gz", hash = "sha256:47d9ad0baa6b3f2e44b9b31bd655b87eac2d86794dc38079d61a2eb24b747f51"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["langchain"] + +[[package]] +name = "opentelemetry-instrumentation-llamaindex" +version 
= "0.53.0" +description = "OpenTelemetry LlamaIndex instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_llamaindex-0.53.0-py3-none-any.whl", hash = "sha256:c4a0043bc0305b860b0da4840466ffb5fae83595a52a49212a85fb46ddbb6617"}, + {file = "opentelemetry_instrumentation_llamaindex-0.53.0.tar.gz", hash = "sha256:c7b0bd1fe818002286d0122f6a57c516c6a4b248813ca3a4adff61a547f83050"}, +] + +[package.dependencies] +inflection = ">=0.5.1,<0.6.0" +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["llama-index"] +llamaparse = ["llama-parse"] + +[[package]] +name = "opentelemetry-instrumentation-logging" +version = "0.61b0" +description = "OpenTelemetry Logging instrumentation" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_logging-0.61b0-py3-none-any.whl", hash = "sha256:6d87e5ded6a0128d775d41511f8380910a1b610671081d16efb05ac3711c0074"}, + {file = "opentelemetry_instrumentation_logging-0.61b0.tar.gz", hash = "sha256:feaa30b700acd2a37cc81db5f562ab0c3a5b6cc2453595e98b72c01dcf649584"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.12,<2.0" +opentelemetry-instrumentation = "0.61b0" + +[[package]] +name = "opentelemetry-instrumentation-marqo" +version = "0.53.0" +description = "OpenTelemetry Marqo instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_marqo-0.53.0-py3-none-any.whl", hash = "sha256:7e3ffb849d45ffade704a24118d4f05df13217a13bb421489a2765dd8996df9a"}, + {file = "opentelemetry_instrumentation_marqo-0.53.0.tar.gz", hash = "sha256:c2756ca5f2dbdbb48140174119e7e6637d7b6af84ae8125aba4fbf58915cd08b"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" 
+opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["marqo"] + +[[package]] +name = "opentelemetry-instrumentation-mcp" +version = "0.53.0" +description = "OpenTelemetry mcp instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_mcp-0.53.0-py3-none-any.whl", hash = "sha256:39172f541a9f74035a1e3108fd1760921962a2e8627f01ba3b9e4822e4d25f37"}, + {file = "opentelemetry_instrumentation_mcp-0.53.0.tar.gz", hash = "sha256:95bb08cd628ea8d347fb243a831a1ddc104cd4b5d88401885da327345b8e890f"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["mcp"] + +[[package]] +name = "opentelemetry-instrumentation-milvus" +version = "0.53.0" +description = "OpenTelemetry Milvus instrumentation" +optional = false +python-versions = "<4,>=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_milvus-0.53.0-py3-none-any.whl", hash = "sha256:26e74998bd735cea4d31d02137a65b8dbc15dd857acdeea2a23af020f2e4cbe6"}, + {file = "opentelemetry_instrumentation_milvus-0.53.0.tar.gz", hash = "sha256:613b32bee958dacb05ff3325050b87eedb4697eda9c75c304d1438bbb47f929c"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["pymilvus"] + +[[package]] +name = "opentelemetry-instrumentation-mistralai" +version = "0.53.0" +description = "OpenTelemetry Mistral AI instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = 
"opentelemetry_instrumentation_mistralai-0.53.0-py3-none-any.whl", hash = "sha256:f23c892366262be6c0011105167e7db455a73a72675ce4529258f66aa24f7fb3"}, + {file = "opentelemetry_instrumentation_mistralai-0.53.0.tar.gz", hash = "sha256:1d05ab9b303efe32dc3e6fb7c7cc844b32b33355535b3a5f03d0d5100b0db36e"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["mistralai"] + +[[package]] +name = "opentelemetry-instrumentation-ollama" +version = "0.53.0" +description = "OpenTelemetry Ollama instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_ollama-0.53.0-py3-none-any.whl", hash = "sha256:44aa9e53b9359b9571e2f84ee5313ea39cb49626db42fa0a27c77441b6f7fe1b"}, + {file = "opentelemetry_instrumentation_ollama-0.53.0.tar.gz", hash = "sha256:2039ac601ff68f2a1fa97e8af5de94f00ccae67797d07c04a3cc706979bcb4cb"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["ollama"] + +[[package]] +name = "opentelemetry-instrumentation-openai" +version = "0.53.0" +description = "OpenTelemetry OpenAI instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_openai-0.53.0-py3-none-any.whl", hash = "sha256:91d9f69673636f5f7d50e5a4782e4526d6df3a1ddfd6ac2d9e15a957f8fd9ad8"}, + {file = "opentelemetry_instrumentation_openai-0.53.0.tar.gz", hash = "sha256:c0cd83d223d138309af3cc5f53c9c6d22136374bfa00e8f66dff31cd322ef547"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" 
+opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["openai"] + +[[package]] +name = "opentelemetry-instrumentation-openai-agents" +version = "0.53.0" +description = "OpenTelemetry OpenAI Agents instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_openai_agents-0.53.0-py3-none-any.whl", hash = "sha256:2f19e3348359de73cef8a97865cad82f6ba3820ab52bba671e83e091b1dca6d4"}, + {file = "opentelemetry_instrumentation_openai_agents-0.53.0.tar.gz", hash = "sha256:f8877927da7de87bafc9757173ff3ce63b487f952260017299678d290c1c432f"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["openai-agents"] + +[[package]] +name = "opentelemetry-instrumentation-pinecone" +version = "0.53.0" +description = "OpenTelemetry Pinecone instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_pinecone-0.53.0-py3-none-any.whl", hash = "sha256:b972992b8dae9af5fb811c52333c54d4ac5d0eff0a71e6a9220b4905aa94eee3"}, + {file = "opentelemetry_instrumentation_pinecone-0.53.0.tar.gz", hash = "sha256:c7918da22d719d15ad6c0148d79f2d25bfeef3ddb3a10800222d8d8491575fd4"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["pinecone (>=5.1.0,<9)"] + +[[package]] +name = "opentelemetry-instrumentation-qdrant" +version = "0.53.0" +description = "OpenTelemetry Qdrant instrumentation" +optional = false +python-versions = "<4,>=3.9" +groups = ["main"] +files = [ + {file = 
"opentelemetry_instrumentation_qdrant-0.53.0-py3-none-any.whl", hash = "sha256:448bca5e4ce4061fbb760a51a9732dbb91c07193bb1774a3eb6579d79007e2b3"}, + {file = "opentelemetry_instrumentation_qdrant-0.53.0.tar.gz", hash = "sha256:4a739516f3864963cab42f8c67c632cb276861b590b852df91124585031e07dc"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["qdrant-client"] + +[[package]] +name = "opentelemetry-instrumentation-redis" +version = "0.61b0" +description = "OpenTelemetry Redis instrumentation" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_redis-0.61b0-py3-none-any.whl", hash = "sha256:8d4e850bbb5f8eeafa44c0eac3a007990c7125de187bc9c3659e29ff7e091172"}, + {file = "opentelemetry_instrumentation_redis-0.61b0.tar.gz", hash = "sha256:ae0fbb56be9a641e621d55b02a7d62977a2c77c5ee760addd79b9b266e46e523"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.12,<2.0" +opentelemetry-instrumentation = "0.61b0" +opentelemetry-semantic-conventions = "0.61b0" +wrapt = ">=1.12.1" + +[package.extras] +instruments = ["redis (>=2.6)"] + +[[package]] +name = "opentelemetry-instrumentation-replicate" +version = "0.53.0" +description = "OpenTelemetry Replicate instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_replicate-0.53.0-py3-none-any.whl", hash = "sha256:318b9f59acb6b83b51075d1fbdc5fee1a79867fb24268a030c4e27953ed283b2"}, + {file = "opentelemetry_instrumentation_replicate-0.53.0.tar.gz", hash = "sha256:ca348b6dd57267d15e715d27eaf33c52113bbb9c27875c479fd868228a812941"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" 
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["replicate"] + +[[package]] +name = "opentelemetry-instrumentation-requests" +version = "0.61b0" +description = "OpenTelemetry requests instrumentation" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_requests-0.61b0-py3-none-any.whl", hash = "sha256:cce19b379949fe637eb73ba39b02c57d2d0805447ca6d86534aa33fcb141f683"}, + {file = "opentelemetry_instrumentation_requests-0.61b0.tar.gz", hash = "sha256:15f879ce8fb206bd7e6fdc61663ea63481040a845218c0cf42902ce70bd7e9d9"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.12,<2.0" +opentelemetry-instrumentation = "0.61b0" +opentelemetry-semantic-conventions = "0.61b0" +opentelemetry-util-http = "0.61b0" + +[package.extras] +instruments = ["requests (>=2.0,<3.0)"] + +[[package]] +name = "opentelemetry-instrumentation-sagemaker" +version = "0.53.0" +description = "OpenTelemetry SageMaker instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_sagemaker-0.53.0-py3-none-any.whl", hash = "sha256:d20e07fe7765908bbd58a6e00ac970a38482bf05ac7bd737027abd92507fc367"}, + {file = "opentelemetry_instrumentation_sagemaker-0.53.0.tar.gz", hash = "sha256:08d34be9f9cf6a12457b90713c8589ec5cbc3c87ddff862543f5590549fd202a"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["boto3"] + +[[package]] +name = "opentelemetry-instrumentation-sqlalchemy" +version = "0.61b0" +description = "OpenTelemetry SQLAlchemy instrumentation" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_sqlalchemy-0.61b0-py3-none-any.whl", hash = 
"sha256:f115e0be54116ba4c327b8d7b68db4045ee18d44439d888ab8130a549c50d1c1"}, + {file = "opentelemetry_instrumentation_sqlalchemy-0.61b0.tar.gz", hash = "sha256:13a3a159a2043a52f0180b3757fbaa26741b0e08abb50deddce4394c118956e6"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.12,<2.0" +opentelemetry-instrumentation = "0.61b0" +opentelemetry-semantic-conventions = "0.61b0" +packaging = ">=21.0" +wrapt = ">=1.11.2" + +[package.extras] +instruments = ["sqlalchemy (>=1.0.0,<2.1.0)"] + +[[package]] +name = "opentelemetry-instrumentation-threading" +version = "0.61b0" +description = "Thread context propagation support for OpenTelemetry" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_threading-0.61b0-py3-none-any.whl", hash = "sha256:735f4a1dc964202fc8aff475efc12bb64e6566f22dff52d5cb5de864b3fe1a70"}, + {file = "opentelemetry_instrumentation_threading-0.61b0.tar.gz", hash = "sha256:38e0263c692d15a7a458b3fa0286d29290448fa4ac4c63045edac438c6113433"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.12,<2.0" +opentelemetry-instrumentation = "0.61b0" +wrapt = ">=1.0.0,<2.0.0" + +[[package]] +name = "opentelemetry-instrumentation-together" +version = "0.53.0" +description = "OpenTelemetry Together AI instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_together-0.53.0-py3-none-any.whl", hash = "sha256:686ebf9b181aa942355f44fed2fbb2c7e04174f0622127f7a80c41730fe1bc8c"}, + {file = "opentelemetry_instrumentation_together-0.53.0.tar.gz", hash = "sha256:f34c411bdc0ed1f72d33ca05ef4d16fcd8935b2ce18b6d9f625cec91a290b3b9"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["together"] + +[[package]] +name = 
"opentelemetry-instrumentation-transformers" +version = "0.53.0" +description = "OpenTelemetry transformers instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_transformers-0.53.0-py3-none-any.whl", hash = "sha256:c2dff5f32579f702842d98dd53b626f25e859a6d9cb9e46f4807a46647f8d6a5"}, + {file = "opentelemetry_instrumentation_transformers-0.53.0.tar.gz", hash = "sha256:c29c2fd97b01e0ca111996e22a4d4fa5da023b61c643e385e6ce62f2a46b18a1"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["transformers"] + +[[package]] +name = "opentelemetry-instrumentation-urllib3" +version = "0.61b0" +description = "OpenTelemetry urllib3 instrumentation" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_urllib3-0.61b0-py3-none-any.whl", hash = "sha256:9644f8c07870266e52f129e6226859ff3a35192555abe46fa0ef9bbbf5b6b46d"}, + {file = "opentelemetry_instrumentation_urllib3-0.61b0.tar.gz", hash = "sha256:f00037bc8ff813153c4b79306f55a14618c40469a69c6c03a3add29dc7e8b928"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.12,<2.0" +opentelemetry-instrumentation = "0.61b0" +opentelemetry-semantic-conventions = "0.61b0" +opentelemetry-util-http = "0.61b0" +wrapt = ">=1.0.0,<2.0.0" + +[package.extras] +instruments = ["urllib3 (>=1.0.0,<3.0.0)"] + +[[package]] +name = "opentelemetry-instrumentation-vertexai" +version = "0.53.0" +description = "OpenTelemetry Vertex AI instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_vertexai-0.53.0-py3-none-any.whl", hash = "sha256:8f2d610e3da3e717069a439d61a3adfa2b375d4658de03f2e05131a3cbbd4681"}, + {file = 
"opentelemetry_instrumentation_vertexai-0.53.0.tar.gz", hash = "sha256:436ebbb284af8c067d5ea98e349c53692d801989f61769481b45b75774756fc8"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["google-cloud-aiplatform"] + +[[package]] +name = "opentelemetry-instrumentation-voyageai" +version = "0.53.0" +description = "OpenTelemetry Voyage AI instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_voyageai-0.53.0-py3-none-any.whl", hash = "sha256:43342c73dc6cafe4e7d7c6ce66fc5964481d43d1dd71de55ef1fcd5d6c72c6e3"}, + {file = "opentelemetry_instrumentation_voyageai-0.53.0.tar.gz", hash = "sha256:8382bbbf00d32dcf38d6b0faabff6bd933163d46a5a4de3e86c49114bb00c9b5"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["voyageai"] + +[[package]] +name = "opentelemetry-instrumentation-watsonx" +version = "0.53.0" +description = "OpenTelemetry IBM Watsonx Instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_watsonx-0.53.0-py3-none-any.whl", hash = "sha256:d7567f1f58fb78e37aee04a154f5aedd116628930835d10e78267e122f7f5589"}, + {file = "opentelemetry_instrumentation_watsonx-0.53.0.tar.gz", hash = "sha256:e0064eb9f173cd06e685c2a55f8afc12a603306ca22d946864ba7db34920edd3"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = 
["ibm-watson-machine-learning"] + +[[package]] +name = "opentelemetry-instrumentation-weaviate" +version = "0.53.0" +description = "OpenTelemetry Weaviate instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_weaviate-0.53.0-py3-none-any.whl", hash = "sha256:2d825fe52e83db0c3db8cc5536ea8cede80844e51d2c64a88eb4b3531c55731a"}, + {file = "opentelemetry_instrumentation_weaviate-0.53.0.tar.gz", hash = "sha256:f843fdac67d07ac99039d889f4f20e36e69358df26de943f490cccaa47da79bd"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" + +[package.extras] +instruments = ["weaviate-client"] + +[[package]] +name = "opentelemetry-instrumentation-writer" +version = "0.53.0" +description = "OpenTelemetry Writer instrumentation" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "opentelemetry_instrumentation_writer-0.53.0-py3-none-any.whl", hash = "sha256:04a1c1840ba170fae53b48d80462cb572166ad1e3434969a1293a1dfc68f9dfe"}, + {file = "opentelemetry_instrumentation_writer-0.53.0.tar.gz", hash = "sha256:802598df8ba6a131fdd2912aa0b7fc4082f541e2d79a57a0ef7fbec78691158d"}, +] + +[package.dependencies] +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-instrumentation = ">=0.59b0" +opentelemetry-semantic-conventions = ">=0.59b0" +opentelemetry-semantic-conventions-ai = ">=0.4.11" + +[package.extras] +instruments = ["writer"] + +[[package]] +name = "opentelemetry-proto" +version = "1.40.0" +description = "OpenTelemetry Python Proto" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_proto-1.40.0-py3-none-any.whl", hash = "sha256:266c4385d88923a23d63e353e9761af0f47a6ed0d486979777fe4de59dc9b25f"}, + {file = "opentelemetry_proto-1.40.0.tar.gz", hash = 
"sha256:03f639ca129ba513f5819810f5b1f42bcb371391405d99c168fe6937c62febcd"}, +] + +[package.dependencies] +protobuf = ">=5.0,<7.0" + +[[package]] +name = "opentelemetry-sdk" +version = "1.40.0" +description = "OpenTelemetry Python SDK" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_sdk-1.40.0-py3-none-any.whl", hash = "sha256:787d2154a71f4b3d81f20524a8ce061b7db667d24e46753f32a7bc48f1c1f3f1"}, + {file = "opentelemetry_sdk-1.40.0.tar.gz", hash = "sha256:18e9f5ec20d859d268c7cb3c5198c8d105d073714db3de50b593b8c1345a48f2"}, +] + +[package.dependencies] +opentelemetry-api = "1.40.0" +opentelemetry-semantic-conventions = "0.61b0" +typing-extensions = ">=4.5.0" + +[[package]] +name = "opentelemetry-semantic-conventions" +version = "0.61b0" +description = "OpenTelemetry Semantic Conventions" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_semantic_conventions-0.61b0-py3-none-any.whl", hash = "sha256:fa530a96be229795f8cef353739b618148b0fe2b4b3f005e60e262926c4d38e2"}, + {file = "opentelemetry_semantic_conventions-0.61b0.tar.gz", hash = "sha256:072f65473c5d7c6dc0355b27d6c9d1a679d63b6d4b4b16a9773062cb7e31192a"}, +] + +[package.dependencies] +opentelemetry-api = "1.40.0" +typing-extensions = ">=4.5.0" + +[[package]] +name = "opentelemetry-semantic-conventions-ai" +version = "0.4.15" +description = "OpenTelemetry Semantic Conventions Extension for Large Language Models" +optional = false +python-versions = "<4,>=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_semantic_conventions_ai-0.4.15-py3-none-any.whl", hash = "sha256:011461f1fba30f27035c49ab3b8344367adc72da0a6c8d3c7428303c6779edc9"}, + {file = "opentelemetry_semantic_conventions_ai-0.4.15.tar.gz", hash = "sha256:12de172d1e11d21c6e82bbf578c7e8a713589a7fda76af9ed785632564a28b81"}, +] + +[package.dependencies] +opentelemetry-sdk = ">=1.38.0,<2" +opentelemetry-semantic-conventions = ">=0.59b0" + +[[package]] +name 
= "opentelemetry-util-http" +version = "0.61b0" +description = "Web util for OpenTelemetry" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "opentelemetry_util_http-0.61b0-py3-none-any.whl", hash = "sha256:8e715e848233e9527ea47e275659ea60a57a75edf5206a3b937e236a6da5fc33"}, + {file = "opentelemetry_util_http-0.61b0.tar.gz", hash = "sha256:1039cb891334ad2731affdf034d8fb8b48c239af9b6dd295e5fabd07f1c95572"}, +] + [[package]] name = "orjson" version = "3.11.2" @@ -4369,6 +5477,18 @@ files = [ [package.dependencies] ptyprocess = ">=0.5" +[[package]] +name = "phonenumbers" +version = "9.0.25" +description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers." +optional = false +python-versions = ">=2.5" +groups = ["main"] +files = [ + {file = "phonenumbers-9.0.25-py2.py3-none-any.whl", hash = "sha256:b1fd6c20d588f5bcd40af3899d727a9f536364211ec6eac554fcd75ca58992a3"}, + {file = "phonenumbers-9.0.25.tar.gz", hash = "sha256:a5f236fa384c6a77378d7836c8e486ade5f984ad2e8e6cc0dbe5124315cdc81b"}, +] + [[package]] name = "pillow" version = "12.1.1" @@ -4745,10 +5865,9 @@ testing = ["google-api-core (>=1.31.5)"] name = "protobuf" version = "6.33.5" description = "" -optional = true +optional = false python-versions = ">=3.9" groups = ["main"] -markers = "extra == \"vertex\"" files = [ {file = "protobuf-6.33.5-cp310-abi3-win32.whl", hash = "sha256:d71b040839446bac0f4d162e758bea99c8251161dae9d0983a3b88dee345153b"}, {file = "protobuf-6.33.5-cp310-abi3-win_amd64.whl", hash = "sha256:3093804752167bcab3998bec9f1048baae6e29505adaf1afd14a37bddede533c"}, @@ -5239,9 +6358,10 @@ diagrams = ["jinja2", "railroad-diagrams"] name = "pypdf" version = "6.7.5" description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files" -optional = false +optional = true python-versions = ">=3.9" groups = ["main"] +markers = "extra == \"sandbox\"" files = [ {file = 
"pypdf-6.7.5-py3-none-any.whl", hash = "sha256:07ba7f1d6e6d9aa2a17f5452e320a84718d4ce863367f7ede2fd72280349ab13"}, {file = "pypdf-6.7.5.tar.gz", hash = "sha256:40bb2e2e872078655f12b9b89e2f900888bb505e88a82150b64f9f34fa25651d"}, @@ -5452,6 +6572,21 @@ Pillow = ">=3.3.2" typing-extensions = ">=4.9.0" XlsxWriter = ">=0.5.7" +[[package]] +name = "python-stdnum" +version = "2.2" +description = "Python module to handle standardized numbers and codes" +optional = false +python-versions = ">=3.8" +groups = ["main"] +files = [ + {file = "python_stdnum-2.2-py3-none-any.whl", hash = "sha256:bdf98fd117a0ca152e4047aa8ad254bae63853d4e915ddd4e0effb33ba0e9260"}, + {file = "python_stdnum-2.2.tar.gz", hash = "sha256:e95fcfa858a703d4a40130cb3eaac133c60d8808a7f3c98efeedac968c2479b9"}, +] + +[package.extras] +soap = ["zeep"] + [[package]] name = "pytz" version = "2025.2" @@ -6123,10 +7258,173 @@ files = [ ] [package.dependencies] -botocore = ">=1.37.4,<2.0a.0" +botocore = ">=1.37.4,<2.0a0" [package.extras] -crt = ["botocore[crt] (>=1.37.4,<2.0a.0)"] +crt = ["botocore[crt] (>=1.37.4,<2.0a0)"] + +[[package]] +name = "scikit-learn" +version = "1.8.0" +description = "A set of python modules for machine learning and data mining" +optional = false +python-versions = ">=3.11" +groups = ["main"] +files = [ + {file = "scikit_learn-1.8.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:146b4d36f800c013d267b29168813f7a03a43ecd2895d04861f1240b564421da"}, + {file = "scikit_learn-1.8.0-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:f984ca4b14914e6b4094c5d52a32ea16b49832c03bd17a110f004db3c223e8e1"}, + {file = "scikit_learn-1.8.0-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5e30adb87f0cc81c7690a84f7932dd66be5bac57cfe16b91cb9151683a4a2d3b"}, + {file = "scikit_learn-1.8.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ada8121bcb4dac28d930febc791a69f7cb1673c8495e5eee274190b73a4559c1"}, + {file = 
"scikit_learn-1.8.0-cp311-cp311-win_amd64.whl", hash = "sha256:c57b1b610bd1f40ba43970e11ce62821c2e6569e4d74023db19c6b26f246cb3b"}, + {file = "scikit_learn-1.8.0-cp311-cp311-win_arm64.whl", hash = "sha256:2838551e011a64e3053ad7618dda9310175f7515f1742fa2d756f7c874c05961"}, + {file = "scikit_learn-1.8.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:5fb63362b5a7ddab88e52b6dbb47dac3fd7dafeee740dc6c8d8a446ddedade8e"}, + {file = "scikit_learn-1.8.0-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:5025ce924beccb28298246e589c691fe1b8c1c96507e6d27d12c5fadd85bfd76"}, + {file = "scikit_learn-1.8.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4496bb2cf7a43ce1a2d7524a79e40bc5da45cf598dbf9545b7e8316ccba47bb4"}, + {file = "scikit_learn-1.8.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a0bcfe4d0d14aec44921545fd2af2338c7471de9cb701f1da4c9d85906ab847a"}, + {file = "scikit_learn-1.8.0-cp312-cp312-win_amd64.whl", hash = "sha256:35c007dedb2ffe38fe3ee7d201ebac4a2deccd2408e8621d53067733e3c74809"}, + {file = "scikit_learn-1.8.0-cp312-cp312-win_arm64.whl", hash = "sha256:8c497fff237d7b4e07e9ef1a640887fa4fb765647f86fbe00f969ff6280ce2bb"}, + {file = "scikit_learn-1.8.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:0d6ae97234d5d7079dc0040990a6f7aeb97cb7fa7e8945f1999a429b23569e0a"}, + {file = "scikit_learn-1.8.0-cp313-cp313-macosx_12_0_arm64.whl", hash = "sha256:edec98c5e7c128328124a029bceb09eda2d526997780fef8d65e9a69eead963e"}, + {file = "scikit_learn-1.8.0-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:74b66d8689d52ed04c271e1329f0c61635bcaf5b926db9b12d58914cdc01fe57"}, + {file = "scikit_learn-1.8.0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8fdf95767f989b0cfedb85f7ed8ca215d4be728031f56ff5a519ee1e3276dc2e"}, + {file = "scikit_learn-1.8.0-cp313-cp313-win_amd64.whl", hash = "sha256:2de443b9373b3b615aec1bb57f9baa6bb3a9bd093f1269ba95c17d870422b271"}, 
+ {file = "scikit_learn-1.8.0-cp313-cp313-win_arm64.whl", hash = "sha256:eddde82a035681427cbedded4e6eff5e57fa59216c2e3e90b10b19ab1d0a65c3"}, + {file = "scikit_learn-1.8.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:7cc267b6108f0a1499a734167282c00c4ebf61328566b55ef262d48e9849c735"}, + {file = "scikit_learn-1.8.0-cp313-cp313t-macosx_12_0_arm64.whl", hash = "sha256:fe1c011a640a9f0791146011dfd3c7d9669785f9fed2b2a5f9e207536cf5c2fd"}, + {file = "scikit_learn-1.8.0-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:72358cce49465d140cc4e7792015bb1f0296a9742d5622c67e31399b75468b9e"}, + {file = "scikit_learn-1.8.0-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:80832434a6cc114f5219211eec13dcbc16c2bac0e31ef64c6d346cde3cf054cb"}, + {file = "scikit_learn-1.8.0-cp313-cp313t-win_amd64.whl", hash = "sha256:ee787491dbfe082d9c3013f01f5991658b0f38aa8177e4cd4bf434c58f551702"}, + {file = "scikit_learn-1.8.0-cp313-cp313t-win_arm64.whl", hash = "sha256:bf97c10a3f5a7543f9b88cbf488d33d175e9146115a451ae34568597ba33dcde"}, + {file = "scikit_learn-1.8.0-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:c22a2da7a198c28dd1a6e1136f19c830beab7fdca5b3e5c8bba8394f8a5c45b3"}, + {file = "scikit_learn-1.8.0-cp314-cp314-macosx_12_0_arm64.whl", hash = "sha256:6b595b07a03069a2b1740dc08c2299993850ea81cce4fe19b2421e0c970de6b7"}, + {file = "scikit_learn-1.8.0-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:29ffc74089f3d5e87dfca4c2c8450f88bdc61b0fc6ed5d267f3988f19a1309f6"}, + {file = "scikit_learn-1.8.0-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fb65db5d7531bccf3a4f6bec3462223bea71384e2cda41da0f10b7c292b9e7c4"}, + {file = "scikit_learn-1.8.0-cp314-cp314-win_amd64.whl", hash = "sha256:56079a99c20d230e873ea40753102102734c5953366972a71d5cb39a32bc40c6"}, + {file = "scikit_learn-1.8.0-cp314-cp314-win_arm64.whl", hash = 
"sha256:3bad7565bc9cf37ce19a7c0d107742b320c1285df7aab1a6e2d28780df167242"}, + {file = "scikit_learn-1.8.0-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:4511be56637e46c25721e83d1a9cea9614e7badc7040c4d573d75fbe257d6fd7"}, + {file = "scikit_learn-1.8.0-cp314-cp314t-macosx_12_0_arm64.whl", hash = "sha256:a69525355a641bf8ef136a7fa447672fb54fe8d60cab5538d9eb7c6438543fb9"}, + {file = "scikit_learn-1.8.0-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c2656924ec73e5939c76ac4c8b026fc203b83d8900362eb2599d8aee80e4880f"}, + {file = "scikit_learn-1.8.0-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:15fc3b5d19cc2be65404786857f2e13c70c83dd4782676dd6814e3b89dc8f5b9"}, + {file = "scikit_learn-1.8.0-cp314-cp314t-win_amd64.whl", hash = "sha256:00d6f1d66fbcf4eba6e356e1420d33cc06c70a45bb1363cd6f6a8e4ebbbdece2"}, + {file = "scikit_learn-1.8.0-cp314-cp314t-win_arm64.whl", hash = "sha256:f28dd15c6bb0b66ba09728cf09fd8736c304be29409bd8445a080c1280619e8c"}, + {file = "scikit_learn-1.8.0.tar.gz", hash = "sha256:9bccbb3b40e3de10351f8f5068e105d0f4083b1a65fa07b6634fbc401a6287fd"}, +] + +[package.dependencies] +joblib = ">=1.3.0" +numpy = ">=1.24.1" +scipy = ">=1.10.0" +threadpoolctl = ">=3.2.0" + +[package.extras] +benchmark = ["matplotlib (>=3.6.1)", "memory_profiler (>=0.57.0)", "pandas (>=1.5.0)"] +build = ["cython (>=3.1.2)", "meson-python (>=0.17.1)", "numpy (>=1.24.1)", "scipy (>=1.10.0)"] +docs = ["Pillow (>=10.1.0)", "matplotlib (>=3.6.1)", "memory_profiler (>=0.57.0)", "numpydoc (>=1.2.0)", "pandas (>=1.5.0)", "plotly (>=5.18.0)", "polars (>=0.20.30)", "pooch (>=1.8.0)", "pydata-sphinx-theme (>=0.15.3)", "scikit-image (>=0.22.0)", "seaborn (>=0.13.0)", "sphinx (>=7.3.7)", "sphinx-copybutton (>=0.5.2)", "sphinx-design (>=0.6.0)", "sphinx-gallery (>=0.17.1)", "sphinx-prompt (>=1.4.0)", "sphinx-remove-toctrees (>=1.0.0.post1)", "sphinxcontrib-sass (>=0.3.4)", "sphinxext-opengraph (>=0.9.1)", "towncrier (>=24.8.0)"] 
+examples = ["matplotlib (>=3.6.1)", "pandas (>=1.5.0)", "plotly (>=5.18.0)", "pooch (>=1.8.0)", "scikit-image (>=0.22.0)", "seaborn (>=0.13.0)"] +install = ["joblib (>=1.3.0)", "numpy (>=1.24.1)", "scipy (>=1.10.0)", "threadpoolctl (>=3.2.0)"] +maintenance = ["conda-lock (==3.0.1)"] +tests = ["matplotlib (>=3.6.1)", "mypy (>=1.15)", "numpydoc (>=1.2.0)", "pandas (>=1.5.0)", "polars (>=0.20.30)", "pooch (>=1.8.0)", "pyamg (>=5.0.0)", "pyarrow (>=12.0.0)", "pytest (>=7.1.2)", "pytest-cov (>=2.9.0)", "ruff (>=0.11.7)"] + +[[package]] +name = "scipy" +version = "1.17.1" +description = "Fundamental algorithms for scientific computing in Python" +optional = false +python-versions = ">=3.11" +groups = ["main"] +files = [ + {file = "scipy-1.17.1-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:1f95b894f13729334fb990162e911c9e5dc1ab390c58aa6cbecb389c5b5e28ec"}, + {file = "scipy-1.17.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:e18f12c6b0bc5a592ed23d3f7b891f68fd7f8241d69b7883769eb5d5dfb52696"}, + {file = "scipy-1.17.1-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:a3472cfbca0a54177d0faa68f697d8ba4c80bbdc19908c3465556d9f7efce9ee"}, + {file = "scipy-1.17.1-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:766e0dc5a616d026a3a1cffa379af959671729083882f50307e18175797b3dfd"}, + {file = "scipy-1.17.1-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:744b2bf3640d907b79f3fd7874efe432d1cf171ee721243e350f55234b4cec4c"}, + {file = "scipy-1.17.1-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:43af8d1f3bea642559019edfe64e9b11192a8978efbd1539d7bc2aaa23d92de4"}, + {file = "scipy-1.17.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:cd96a1898c0a47be4520327e01f874acfd61fb48a9420f8aa9f6483412ffa444"}, + {file = "scipy-1.17.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:4eb6c25dd62ee8d5edf68a8e1c171dd71c292fdae95d8aeb3dd7d7de4c364082"}, + {file = "scipy-1.17.1-cp311-cp311-win_amd64.whl", hash = 
"sha256:d30e57c72013c2a4fe441c2fcb8e77b14e152ad48b5464858e07e2ad9fbfceff"}, + {file = "scipy-1.17.1-cp311-cp311-win_arm64.whl", hash = "sha256:9ecb4efb1cd6e8c4afea0daa91a87fbddbce1b99d2895d151596716c0b2e859d"}, + {file = "scipy-1.17.1-cp312-cp312-macosx_10_14_x86_64.whl", hash = "sha256:35c3a56d2ef83efc372eaec584314bd0ef2e2f0d2adb21c55e6ad5b344c0dcb8"}, + {file = "scipy-1.17.1-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:fcb310ddb270a06114bb64bbe53c94926b943f5b7f0842194d585c65eb4edd76"}, + {file = "scipy-1.17.1-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:cc90d2e9c7e5c7f1a482c9875007c095c3194b1cfedca3c2f3291cdc2bc7c086"}, + {file = "scipy-1.17.1-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:c80be5ede8f3f8eded4eff73cc99a25c388ce98e555b17d31da05287015ffa5b"}, + {file = "scipy-1.17.1-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e19ebea31758fac5893a2ac360fedd00116cbb7628e650842a6691ba7ca28a21"}, + {file = "scipy-1.17.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:02ae3b274fde71c5e92ac4d54bc06c42d80e399fec704383dcd99b301df37458"}, + {file = "scipy-1.17.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:8a604bae87c6195d8b1045eddece0514d041604b14f2727bbc2b3020172045eb"}, + {file = "scipy-1.17.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:f590cd684941912d10becc07325a3eeb77886fe981415660d9265c4c418d0bea"}, + {file = "scipy-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:41b71f4a3a4cab9d366cd9065b288efc4d4f3c0b37a91a8e0947fb5bd7f31d87"}, + {file = "scipy-1.17.1-cp312-cp312-win_arm64.whl", hash = "sha256:f4115102802df98b2b0db3cce5cb9b92572633a1197c77b7553e5203f284a5b3"}, + {file = "scipy-1.17.1-cp313-cp313-macosx_10_14_x86_64.whl", hash = "sha256:5e3c5c011904115f88a39308379c17f91546f77c1667cea98739fe0fccea804c"}, + {file = "scipy-1.17.1-cp313-cp313-macosx_12_0_arm64.whl", hash = "sha256:6fac755ca3d2c3edcb22f479fceaa241704111414831ddd3bc6056e18516892f"}, + {file = 
"scipy-1.17.1-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:7ff200bf9d24f2e4d5dc6ee8c3ac64d739d3a89e2326ba68aaf6c4a2b838fd7d"}, + {file = "scipy-1.17.1-cp313-cp313-macosx_14_0_x86_64.whl", hash = "sha256:4b400bdc6f79fa02a4d86640310dde87a21fba0c979efff5248908c6f15fad1b"}, + {file = "scipy-1.17.1-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2b64ca7d4aee0102a97f3ba22124052b4bd2152522355073580bf4845e2550b6"}, + {file = "scipy-1.17.1-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:581b2264fc0aa555f3f435a5944da7504ea3a065d7029ad60e7c3d1ae09c5464"}, + {file = "scipy-1.17.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:beeda3d4ae615106d7094f7e7cef6218392e4465cc95d25f900bebabfded0950"}, + {file = "scipy-1.17.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6609bc224e9568f65064cfa72edc0f24ee6655b47575954ec6339534b2798369"}, + {file = "scipy-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:37425bc9175607b0268f493d79a292c39f9d001a357bebb6b88fdfaff13f6448"}, + {file = "scipy-1.17.1-cp313-cp313-win_arm64.whl", hash = "sha256:5cf36e801231b6a2059bf354720274b7558746f3b1a4efb43fcf557ccd484a87"}, + {file = "scipy-1.17.1-cp313-cp313t-macosx_10_14_x86_64.whl", hash = "sha256:d59c30000a16d8edc7e64152e30220bfbd724c9bbb08368c054e24c651314f0a"}, + {file = "scipy-1.17.1-cp313-cp313t-macosx_12_0_arm64.whl", hash = "sha256:010f4333c96c9bb1a4516269e33cb5917b08ef2166d5556ca2fd9f082a9e6ea0"}, + {file = "scipy-1.17.1-cp313-cp313t-macosx_14_0_arm64.whl", hash = "sha256:2ceb2d3e01c5f1d83c4189737a42d9cb2fc38a6eeed225e7515eef71ad301dce"}, + {file = "scipy-1.17.1-cp313-cp313t-macosx_14_0_x86_64.whl", hash = "sha256:844e165636711ef41f80b4103ed234181646b98a53c8f05da12ca5ca289134f6"}, + {file = "scipy-1.17.1-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:158dd96d2207e21c966063e1635b1063cd7787b627b6f07305315dd73d9c679e"}, + {file = 
"scipy-1.17.1-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:74cbb80d93260fe2ffa334efa24cb8f2f0f622a9b9febf8b483c0b865bfb3475"}, + {file = "scipy-1.17.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:dbc12c9f3d185f5c737d801da555fb74b3dcfa1a50b66a1a93e09190f41fab50"}, + {file = "scipy-1.17.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:94055a11dfebe37c656e70317e1996dc197e1a15bbcc351bcdd4610e128fe1ca"}, + {file = "scipy-1.17.1-cp313-cp313t-win_amd64.whl", hash = "sha256:e30bdeaa5deed6bc27b4cc490823cd0347d7dae09119b8803ae576ea0ce52e4c"}, + {file = "scipy-1.17.1-cp313-cp313t-win_arm64.whl", hash = "sha256:a720477885a9d2411f94a93d16f9d89bad0f28ca23c3f8daa521e2dcc3f44d49"}, + {file = "scipy-1.17.1-cp314-cp314-macosx_10_14_x86_64.whl", hash = "sha256:a48a72c77a310327f6a3a920092fa2b8fd03d7deaa60f093038f22d98e096717"}, + {file = "scipy-1.17.1-cp314-cp314-macosx_12_0_arm64.whl", hash = "sha256:45abad819184f07240d8a696117a7aacd39787af9e0b719d00285549ed19a1e9"}, + {file = "scipy-1.17.1-cp314-cp314-macosx_14_0_arm64.whl", hash = "sha256:3fd1fcdab3ea951b610dc4cef356d416d5802991e7e32b5254828d342f7b7e0b"}, + {file = "scipy-1.17.1-cp314-cp314-macosx_14_0_x86_64.whl", hash = "sha256:7bdf2da170b67fdf10bca777614b1c7d96ae3ca5794fd9587dce41eb2966e866"}, + {file = "scipy-1.17.1-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:adb2642e060a6549c343603a3851ba76ef0b74cc8c079a9a58121c7ec9fe2350"}, + {file = "scipy-1.17.1-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:eee2cfda04c00a857206a4330f0c5e3e56535494e30ca445eb19ec624ae75118"}, + {file = "scipy-1.17.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:d2650c1fb97e184d12d8ba010493ee7b322864f7d3d00d3f9bb97d9c21de4068"}, + {file = "scipy-1.17.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:08b900519463543aa604a06bec02461558a6e1cef8fdbb8098f77a48a83c8118"}, + {file = "scipy-1.17.1-cp314-cp314-win_amd64.whl", hash = 
"sha256:3877ac408e14da24a6196de0ddcace62092bfc12a83823e92e49e40747e52c19"}, + {file = "scipy-1.17.1-cp314-cp314-win_arm64.whl", hash = "sha256:f8885db0bc2bffa59d5c1b72fad7a6a92d3e80e7257f967dd81abb553a90d293"}, + {file = "scipy-1.17.1-cp314-cp314t-macosx_10_14_x86_64.whl", hash = "sha256:1cc682cea2ae55524432f3cdff9e9a3be743d52a7443d0cba9017c23c87ae2f6"}, + {file = "scipy-1.17.1-cp314-cp314t-macosx_12_0_arm64.whl", hash = "sha256:2040ad4d1795a0ae89bfc7e8429677f365d45aa9fd5e4587cf1ea737f927b4a1"}, + {file = "scipy-1.17.1-cp314-cp314t-macosx_14_0_arm64.whl", hash = "sha256:131f5aaea57602008f9822e2115029b55d4b5f7c070287699fe45c661d051e39"}, + {file = "scipy-1.17.1-cp314-cp314t-macosx_14_0_x86_64.whl", hash = "sha256:9cdc1a2fcfd5c52cfb3045feb399f7b3ce822abdde3a193a6b9a60b3cb5854ca"}, + {file = "scipy-1.17.1-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6e3dcd57ab780c741fde8dc68619de988b966db759a3c3152e8e9142c26295ad"}, + {file = "scipy-1.17.1-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a9956e4d4f4a301ebf6cde39850333a6b6110799d470dbbb1e25326ac447f52a"}, + {file = "scipy-1.17.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:a4328d245944d09fd639771de275701ccadf5f781ba0ff092ad141e017eccda4"}, + {file = "scipy-1.17.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:a77cbd07b940d326d39a1d1b37817e2ee4d79cb30e7338f3d0cddffae70fcaa2"}, + {file = "scipy-1.17.1-cp314-cp314t-win_amd64.whl", hash = "sha256:eb092099205ef62cd1782b006658db09e2fed75bffcae7cc0d44052d8aa0f484"}, + {file = "scipy-1.17.1-cp314-cp314t-win_arm64.whl", hash = "sha256:200e1050faffacc162be6a486a984a0497866ec54149a01270adc8a59b7c7d21"}, + {file = "scipy-1.17.1.tar.gz", hash = "sha256:95d8e012d8cb8816c226aef832200b1d45109ed4464303e997c5b13122b297c0"}, +] + +[package.dependencies] +numpy = ">=1.26.4,<2.7" + +[package.extras] +dev = ["click (<8.3.0)", "cython-lint (>=0.12.2)", "mypy (==1.10.0)", "pycodestyle", "ruff (>=0.12.0)", 
"spin", "types-psutil", "typing_extensions"] +doc = ["intersphinx_registry", "jupyterlite-pyodide-kernel", "jupyterlite-sphinx (>=0.19.1)", "jupytext", "linkify-it-py", "matplotlib (>=3.5)", "myst-nb (>=1.2.0)", "numpydoc", "pooch", "pydata-sphinx-theme (>=0.15.2)", "sphinx (>=5.0.0,<8.2.0)", "sphinx-copybutton", "sphinx-design (>=0.4.0)", "tabulate"] +test = ["Cython", "array-api-strict (>=2.3.1)", "asv", "gmpy2", "hypothesis (>=6.30)", "meson", "mpmath", "ninja ; sys_platform != \"emscripten\"", "pooch", "pytest (>=8.0.0)", "pytest-cov", "pytest-timeout", "pytest-xdist", "scikit-umfpack", "threadpoolctl"] + +[[package]] +name = "scrubadub" +version = "2.0.1" +description = "Clean personally identifiable information from dirty dirty text." +optional = false +python-versions = "*" +groups = ["main"] +files = [ + {file = "scrubadub-2.0.1-py3-none-any.whl", hash = "sha256:44b9004998a03aff4c6b5d9073a52895081742f994470083a7be610b373e62b7"}, + {file = "scrubadub-2.0.1.tar.gz", hash = "sha256:52a1fb8aa9bc0226043e02c3ec22d450bd4ebeede9e7e8db2def7c89b37c5aad"}, +] + +[package.dependencies] +catalogue = "*" +dateparser = "*" +faker = "*" +phonenumbers = "*" +python-stdnum = "*" +scikit-learn = "*" +textblob = "0.15.3" +typing-extensions = "*" [[package]] name = "setuptools" @@ -6530,6 +7828,21 @@ files = [ doc = ["reno", "sphinx"] test = ["pytest", "tornado (>=4.5)", "typeguard"] +[[package]] +name = "textblob" +version = "0.15.3" +description = "Simple, Pythonic text processing. Sentiment analysis, part-of-speech tagging, noun phrase parsing, and more." 
+optional = false +python-versions = "*" +groups = ["main"] +files = [ + {file = "textblob-0.15.3-py2.py3-none-any.whl", hash = "sha256:b0eafd8b129c9b196c8128056caed891d64b7fa20ba570e1fcde438f4f7dd312"}, + {file = "textblob-0.15.3.tar.gz", hash = "sha256:7ff3c00cb5a85a30132ee6768b8c68cb2b9d76432fec18cd1b3ffe2f8594ec8c"}, +] + +[package.dependencies] +nltk = ">=3.1" + [[package]] name = "textual" version = "4.0.0" @@ -6551,6 +7864,18 @@ typing-extensions = ">=4.4.0,<5.0.0" [package.extras] syntax = ["tree-sitter (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-bash (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-css (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-go (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-html (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-java (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-javascript (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-json (>=0.24.0) ; python_version >= \"3.9\"", "tree-sitter-markdown (>=0.3.0) ; python_version >= \"3.9\"", "tree-sitter-python (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-regex (>=0.24.0) ; python_version >= \"3.9\"", "tree-sitter-rust (>=0.23.0,<=0.23.2) ; python_version >= \"3.9\"", "tree-sitter-sql (>=0.3.0,<0.3.8) ; python_version >= \"3.9\"", "tree-sitter-toml (>=0.6.0) ; python_version >= \"3.9\"", "tree-sitter-xml (>=0.7.0) ; python_version >= \"3.9\"", "tree-sitter-yaml (>=0.6.0) ; python_version >= \"3.9\""] +[[package]] +name = "threadpoolctl" +version = "3.6.0" +description = "threadpoolctl" +optional = false +python-versions = ">=3.9" +groups = ["main"] +files = [ + {file = "threadpoolctl-3.6.0-py3-none-any.whl", hash = "sha256:43a0b8fd5a2928500110039e43a5eed8480b918967083ea48dc3ab9f13c4a7fb"}, + {file = "threadpoolctl-3.6.0.tar.gz", hash = "sha256:8ab8b4aa3491d812b623328249fab5302a68d2d71745c8a4c719a2fcaba9f44e"}, +] + [[package]] name = "tiktoken" version = "0.11.0" @@ -6666,6 +7991,72 @@ notebook = ["ipywidgets (>=6)"] slack = 
["slack-sdk"] telegram = ["requests"] +[[package]] +name = "traceloop-sdk" +version = "0.53.0" +description = "Traceloop Software Development Kit (SDK) for Python" +optional = false +python-versions = "<4,>=3.10" +groups = ["main"] +files = [ + {file = "traceloop_sdk-0.53.0-py3-none-any.whl", hash = "sha256:29cee493dda92c872b4578a7f570794669a64f51ab09d61a0893749d616bfcfd"}, + {file = "traceloop_sdk-0.53.0.tar.gz", hash = "sha256:3cd761733eea055d0dc87b5a22c8cc8a6350eca896a80acb5a7e11d089aee3fb"}, +] + +[package.dependencies] +aiohttp = ">=3.11.11,<4" +colorama = ">=0.4.6,<0.5.0" +cuid = ">=0.4,<0.5" +deprecated = ">=1.2.14,<2" +jinja2 = ">=3.1.5,<4" +opentelemetry-api = ">=1.38.0,<2" +opentelemetry-exporter-otlp-proto-grpc = ">=1.38.0,<2" +opentelemetry-exporter-otlp-proto-http = ">=1.38.0,<2" +opentelemetry-instrumentation-agno = "*" +opentelemetry-instrumentation-alephalpha = "*" +opentelemetry-instrumentation-anthropic = "*" +opentelemetry-instrumentation-bedrock = "*" +opentelemetry-instrumentation-chromadb = "*" +opentelemetry-instrumentation-cohere = "*" +opentelemetry-instrumentation-crewai = "*" +opentelemetry-instrumentation-google-generativeai = "*" +opentelemetry-instrumentation-groq = "*" +opentelemetry-instrumentation-haystack = "*" +opentelemetry-instrumentation-lancedb = "*" +opentelemetry-instrumentation-langchain = "*" +opentelemetry-instrumentation-llamaindex = "*" +opentelemetry-instrumentation-logging = ">=0.59b0" +opentelemetry-instrumentation-marqo = "*" +opentelemetry-instrumentation-mcp = "*" +opentelemetry-instrumentation-milvus = "*" +opentelemetry-instrumentation-mistralai = "*" +opentelemetry-instrumentation-ollama = "*" +opentelemetry-instrumentation-openai = "*" +opentelemetry-instrumentation-openai-agents = "*" +opentelemetry-instrumentation-pinecone = "*" +opentelemetry-instrumentation-qdrant = "*" +opentelemetry-instrumentation-redis = ">=0.59b0" +opentelemetry-instrumentation-replicate = "*" +opentelemetry-instrumentation-requests = 
">=0.59b0" +opentelemetry-instrumentation-sagemaker = "*" +opentelemetry-instrumentation-sqlalchemy = ">=0.59b0" +opentelemetry-instrumentation-threading = ">=0.59b0" +opentelemetry-instrumentation-together = "*" +opentelemetry-instrumentation-transformers = "*" +opentelemetry-instrumentation-urllib3 = ">=0.59b0" +opentelemetry-instrumentation-vertexai = "*" +opentelemetry-instrumentation-voyageai = "*" +opentelemetry-instrumentation-watsonx = "*" +opentelemetry-instrumentation-weaviate = "*" +opentelemetry-instrumentation-writer = "*" +opentelemetry-sdk = ">=1.38.0,<2" +opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0" +pydantic = ">=1" +tenacity = ">=8.2.3,<10.0" + +[package.extras] +datasets = ["pandas"] + [[package]] name = "traitlets" version = "5.14.3" @@ -7104,6 +8495,97 @@ files = [ {file = "whatthepatch-1.0.7.tar.gz", hash = "sha256:9eefb4ebea5200408e02d413d2b4bc28daea6b78bb4b4d53431af7245f7d7edf"}, ] +[[package]] +name = "wrapt" +version = "1.17.3" +description = "Module for decorators, wrappers and monkey patching." 
+optional = false +python-versions = ">=3.8" +groups = ["main"] +files = [ + {file = "wrapt-1.17.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:88bbae4d40d5a46142e70d58bf664a89b6b4befaea7b2ecc14e03cedb8e06c04"}, + {file = "wrapt-1.17.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e6b13af258d6a9ad602d57d889f83b9d5543acd471eee12eb51f5b01f8eb1bc2"}, + {file = "wrapt-1.17.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fd341868a4b6714a5962c1af0bd44f7c404ef78720c7de4892901e540417111c"}, + {file = "wrapt-1.17.3-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:f9b2601381be482f70e5d1051a5965c25fb3625455a2bf520b5a077b22afb775"}, + {file = "wrapt-1.17.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:343e44b2a8e60e06a7e0d29c1671a0d9951f59174f3709962b5143f60a2a98bd"}, + {file = "wrapt-1.17.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:33486899acd2d7d3066156b03465b949da3fd41a5da6e394ec49d271baefcf05"}, + {file = "wrapt-1.17.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:e6f40a8aa5a92f150bdb3e1c44b7e98fb7113955b2e5394122fa5532fec4b418"}, + {file = "wrapt-1.17.3-cp310-cp310-win32.whl", hash = "sha256:a36692b8491d30a8c75f1dfee65bef119d6f39ea84ee04d9f9311f83c5ad9390"}, + {file = "wrapt-1.17.3-cp310-cp310-win_amd64.whl", hash = "sha256:afd964fd43b10c12213574db492cb8f73b2f0826c8df07a68288f8f19af2ebe6"}, + {file = "wrapt-1.17.3-cp310-cp310-win_arm64.whl", hash = "sha256:af338aa93554be859173c39c85243970dc6a289fa907402289eeae7543e1ae18"}, + {file = "wrapt-1.17.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:273a736c4645e63ac582c60a56b0acb529ef07f78e08dc6bfadf6a46b19c0da7"}, + {file = "wrapt-1.17.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5531d911795e3f935a9c23eb1c8c03c211661a5060aab167065896bbf62a5f85"}, + {file = "wrapt-1.17.3-cp311-cp311-macosx_11_0_arm64.whl", hash = 
"sha256:0610b46293c59a3adbae3dee552b648b984176f8562ee0dba099a56cfbe4df1f"}, + {file = "wrapt-1.17.3-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:b32888aad8b6e68f83a8fdccbf3165f5469702a7544472bdf41f582970ed3311"}, + {file = "wrapt-1.17.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8cccf4f81371f257440c88faed6b74f1053eef90807b77e31ca057b2db74edb1"}, + {file = "wrapt-1.17.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d8a210b158a34164de8bb68b0e7780041a903d7b00c87e906fb69928bf7890d5"}, + {file = "wrapt-1.17.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:79573c24a46ce11aab457b472efd8d125e5a51da2d1d24387666cd85f54c05b2"}, + {file = "wrapt-1.17.3-cp311-cp311-win32.whl", hash = "sha256:c31eebe420a9a5d2887b13000b043ff6ca27c452a9a22fa71f35f118e8d4bf89"}, + {file = "wrapt-1.17.3-cp311-cp311-win_amd64.whl", hash = "sha256:0b1831115c97f0663cb77aa27d381237e73ad4f721391a9bfb2fe8bc25fa6e77"}, + {file = "wrapt-1.17.3-cp311-cp311-win_arm64.whl", hash = "sha256:5a7b3c1ee8265eb4c8f1b7d29943f195c00673f5ab60c192eba2d4a7eae5f46a"}, + {file = "wrapt-1.17.3-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:ab232e7fdb44cdfbf55fc3afa31bcdb0d8980b9b95c38b6405df2acb672af0e0"}, + {file = "wrapt-1.17.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:9baa544e6acc91130e926e8c802a17f3b16fbea0fd441b5a60f5cf2cc5c3deba"}, + {file = "wrapt-1.17.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6b538e31eca1a7ea4605e44f81a48aa24c4632a277431a6ed3f328835901f4fd"}, + {file = "wrapt-1.17.3-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:042ec3bb8f319c147b1301f2393bc19dba6e176b7da446853406d041c36c7828"}, + {file = "wrapt-1.17.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3af60380ba0b7b5aeb329bc4e402acd25bd877e98b3727b0135cb5c2efdaefe9"}, + {file = 
"wrapt-1.17.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:0b02e424deef65c9f7326d8c19220a2c9040c51dc165cddb732f16198c168396"}, + {file = "wrapt-1.17.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:74afa28374a3c3a11b3b5e5fca0ae03bef8450d6aa3ab3a1e2c30e3a75d023dc"}, + {file = "wrapt-1.17.3-cp312-cp312-win32.whl", hash = "sha256:4da9f45279fff3543c371d5ababc57a0384f70be244de7759c85a7f989cb4ebe"}, + {file = "wrapt-1.17.3-cp312-cp312-win_amd64.whl", hash = "sha256:e71d5c6ebac14875668a1e90baf2ea0ef5b7ac7918355850c0908ae82bcb297c"}, + {file = "wrapt-1.17.3-cp312-cp312-win_arm64.whl", hash = "sha256:604d076c55e2fdd4c1c03d06dc1a31b95130010517b5019db15365ec4a405fc6"}, + {file = "wrapt-1.17.3-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:a47681378a0439215912ef542c45a783484d4dd82bac412b71e59cf9c0e1cea0"}, + {file = "wrapt-1.17.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:54a30837587c6ee3cd1a4d1c2ec5d24e77984d44e2f34547e2323ddb4e22eb77"}, + {file = "wrapt-1.17.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:16ecf15d6af39246fe33e507105d67e4b81d8f8d2c6598ff7e3ca1b8a37213f7"}, + {file = "wrapt-1.17.3-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:6fd1ad24dc235e4ab88cda009e19bf347aabb975e44fd5c2fb22a3f6e4141277"}, + {file = "wrapt-1.17.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0ed61b7c2d49cee3c027372df5809a59d60cf1b6c2f81ee980a091f3afed6a2d"}, + {file = "wrapt-1.17.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:423ed5420ad5f5529db9ce89eac09c8a2f97da18eb1c870237e84c5a5c2d60aa"}, + {file = "wrapt-1.17.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:e01375f275f010fcbf7f643b4279896d04e571889b8a5b3f848423d91bf07050"}, + {file = "wrapt-1.17.3-cp313-cp313-win32.whl", hash = "sha256:53e5e39ff71b3fc484df8a522c933ea2b7cdd0d5d15ae82e5b23fde87d44cbd8"}, + {file = "wrapt-1.17.3-cp313-cp313-win_amd64.whl", hash = 
"sha256:1f0b2f40cf341ee8cc1a97d51ff50dddb9fcc73241b9143ec74b30fc4f44f6cb"}, + {file = "wrapt-1.17.3-cp313-cp313-win_arm64.whl", hash = "sha256:7425ac3c54430f5fc5e7b6f41d41e704db073309acfc09305816bc6a0b26bb16"}, + {file = "wrapt-1.17.3-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:cf30f6e3c077c8e6a9a7809c94551203c8843e74ba0c960f4a98cd80d4665d39"}, + {file = "wrapt-1.17.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:e228514a06843cae89621384cfe3a80418f3c04aadf8a3b14e46a7be704e4235"}, + {file = "wrapt-1.17.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:5ea5eb3c0c071862997d6f3e02af1d055f381b1d25b286b9d6644b79db77657c"}, + {file = "wrapt-1.17.3-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:281262213373b6d5e4bb4353bc36d1ba4084e6d6b5d242863721ef2bf2c2930b"}, + {file = "wrapt-1.17.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:dc4a8d2b25efb6681ecacad42fca8859f88092d8732b170de6a5dddd80a1c8fa"}, + {file = "wrapt-1.17.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:373342dd05b1d07d752cecbec0c41817231f29f3a89aa8b8843f7b95992ed0c7"}, + {file = "wrapt-1.17.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d40770d7c0fd5cbed9d84b2c3f2e156431a12c9a37dc6284060fb4bec0b7ffd4"}, + {file = "wrapt-1.17.3-cp314-cp314-win32.whl", hash = "sha256:fbd3c8319de8e1dc79d346929cd71d523622da527cca14e0c1d257e31c2b8b10"}, + {file = "wrapt-1.17.3-cp314-cp314-win_amd64.whl", hash = "sha256:e1a4120ae5705f673727d3253de3ed0e016f7cd78dc463db1b31e2463e1f3cf6"}, + {file = "wrapt-1.17.3-cp314-cp314-win_arm64.whl", hash = "sha256:507553480670cab08a800b9463bdb881b2edeed77dc677b0a5915e6106e91a58"}, + {file = "wrapt-1.17.3-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:ed7c635ae45cfbc1a7371f708727bf74690daedc49b4dba310590ca0bd28aa8a"}, + {file = "wrapt-1.17.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = 
"sha256:249f88ed15503f6492a71f01442abddd73856a0032ae860de6d75ca62eed8067"}, + {file = "wrapt-1.17.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:5a03a38adec8066d5a37bea22f2ba6bbf39fcdefbe2d91419ab864c3fb515454"}, + {file = "wrapt-1.17.3-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:5d4478d72eb61c36e5b446e375bbc49ed002430d17cdec3cecb36993398e1a9e"}, + {file = "wrapt-1.17.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:223db574bb38637e8230eb14b185565023ab624474df94d2af18f1cdb625216f"}, + {file = "wrapt-1.17.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:e405adefb53a435f01efa7ccdec012c016b5a1d3f35459990afc39b6be4d5056"}, + {file = "wrapt-1.17.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:88547535b787a6c9ce4086917b6e1d291aa8ed914fdd3a838b3539dc95c12804"}, + {file = "wrapt-1.17.3-cp314-cp314t-win32.whl", hash = "sha256:41b1d2bc74c2cac6f9074df52b2efbef2b30bdfe5f40cb78f8ca22963bc62977"}, + {file = "wrapt-1.17.3-cp314-cp314t-win_amd64.whl", hash = "sha256:73d496de46cd2cdbdbcce4ae4bcdb4afb6a11234a1df9c085249d55166b95116"}, + {file = "wrapt-1.17.3-cp314-cp314t-win_arm64.whl", hash = "sha256:f38e60678850c42461d4202739f9bf1e3a737c7ad283638251e79cc49effb6b6"}, + {file = "wrapt-1.17.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:70d86fa5197b8947a2fa70260b48e400bf2ccacdcab97bb7de47e3d1e6312225"}, + {file = "wrapt-1.17.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:df7d30371a2accfe4013e90445f6388c570f103d61019b6b7c57e0265250072a"}, + {file = "wrapt-1.17.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:caea3e9c79d5f0d2c6d9ab96111601797ea5da8e6d0723f77eabb0d4068d2b2f"}, + {file = "wrapt-1.17.3-cp38-cp38-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:758895b01d546812d1f42204bd443b8c433c44d090248bf22689df673ccafe00"}, + {file = 
"wrapt-1.17.3-cp38-cp38-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:02b551d101f31694fc785e58e0720ef7d9a10c4e62c1c9358ce6f63f23e30a56"}, + {file = "wrapt-1.17.3-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:656873859b3b50eeebe6db8b1455e99d90c26ab058db8e427046dbc35c3140a5"}, + {file = "wrapt-1.17.3-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:a9a2203361a6e6404f80b99234fe7fb37d1fc73487b5a78dc1aa5b97201e0f22"}, + {file = "wrapt-1.17.3-cp38-cp38-win32.whl", hash = "sha256:55cbbc356c2842f39bcc553cf695932e8b30e30e797f961860afb308e6b1bb7c"}, + {file = "wrapt-1.17.3-cp38-cp38-win_amd64.whl", hash = "sha256:ad85e269fe54d506b240d2d7b9f5f2057c2aa9a2ea5b32c66f8902f768117ed2"}, + {file = "wrapt-1.17.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:30ce38e66630599e1193798285706903110d4f057aab3168a34b7fdc85569afc"}, + {file = "wrapt-1.17.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:65d1d00fbfb3ea5f20add88bbc0f815150dbbde3b026e6c24759466c8b5a9ef9"}, + {file = "wrapt-1.17.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a7c06742645f914f26c7f1fa47b8bc4c91d222f76ee20116c43d5ef0912bba2d"}, + {file = "wrapt-1.17.3-cp39-cp39-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:7e18f01b0c3e4a07fe6dfdb00e29049ba17eadbc5e7609a2a3a4af83ab7d710a"}, + {file = "wrapt-1.17.3-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0f5f51a6466667a5a356e6381d362d259125b57f059103dd9fdc8c0cf1d14139"}, + {file = "wrapt-1.17.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:59923aa12d0157f6b82d686c3fd8e1166fa8cdfb3e17b42ce3b6147ff81528df"}, + {file = "wrapt-1.17.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:46acc57b331e0b3bcb3e1ca3b421d65637915cfcd65eb783cb2f78a511193f9b"}, + {file = "wrapt-1.17.3-cp39-cp39-win32.whl", hash = "sha256:3e62d15d3cfa26e3d0788094de7b64efa75f3a53875cdbccdf78547aed547a81"}, + {file = 
"wrapt-1.17.3-cp39-cp39-win_amd64.whl", hash = "sha256:1f23fa283f51c890eda8e34e4937079114c74b4c81d2b2f1f1d94948f5cc3d7f"}, + {file = "wrapt-1.17.3-cp39-cp39-win_arm64.whl", hash = "sha256:24c2ed34dc222ed754247a2702b1e1e89fdbaa4016f324b4b8f1a802d4ffe87f"}, + {file = "wrapt-1.17.3-py3-none-any.whl", hash = "sha256:7171ae35d2c33d326ac19dd8facb1e82e5fd04ef8c6c0e394d7af55a55051c22"}, + {file = "wrapt-1.17.3.tar.gz", hash = "sha256:f66eb08feaa410fe4eebd17f2a2c8e2e46d3476e9f8c783daa8e09e0faa666d0"}, +] + [[package]] name = "xlrd" version = "2.0.2" @@ -7309,4 +8791,4 @@ vertex = ["google-cloud-aiplatform"] [metadata] lock-version = "2.1" python-versions = "^3.12" -content-hash = "4a67311f830ccf488e636a127723741d5de84d7368131ccb99afb065ca4a12b1" +content-hash = "d6a1cc4aac053c720cd224f72c4bac24371559ab0725a1fb9eb6ab4ed8d64b06" diff --git a/pyproject.toml b/pyproject.toml index ab08983..48f9196 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -56,6 +56,9 @@ textual = "^4.0.0" xmltodict = "^0.13.0" requests = "^2.32.0" cvss = "^3.2" +traceloop-sdk = "^0.53.0" +opentelemetry-exporter-otlp-proto-http = "^1.40.0" +scrubadub = "^2.0.1" # Optional LLM provider dependencies google-cloud-aiplatform = { version = ">=1.38", optional = true } @@ -148,6 +151,9 @@ module = [ "libtmux.*", "pytest.*", "cvss.*", + "opentelemetry.*", + "scrubadub.*", + "traceloop.*", ] ignore_missing_imports = true @@ -155,6 +161,7 @@ ignore_missing_imports = true [[tool.mypy.overrides]] module = ["tests.*"] disallow_untyped_decorators = false +disallow_untyped_defs = false # ============================================================================ # Ruff Configuration (Fast Python Linter & Formatter) diff --git a/strix/config/config.py b/strix/config/config.py index 7578b61..bad994a 100644 --- a/strix/config/config.py +++ b/strix/config/config.py @@ -47,6 +47,11 @@ class Config: # Telemetry strix_telemetry = "1" + strix_otel_telemetry = None + strix_posthog_telemetry = None + traceloop_base_url = None 
+ traceloop_api_key = None + traceloop_headers = None # Config file override (set via --config CLI arg) _config_file_override: Path | None = None diff --git a/strix/interface/main.py b/strix/interface/main.py index 33785e6..7d340df 100644 --- a/strix/interface/main.py +++ b/strix/interface/main.py @@ -413,8 +413,6 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) -> if tracer and tracer.scan_results: scan_completed = tracer.scan_results.get("scan_completed", False) - has_vulnerabilities = tracer and len(tracer.vulnerability_reports) > 0 - completion_text = Text() if scan_completed: completion_text.append("Penetration test completed", style="bold #22c55e") @@ -439,13 +437,12 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) -> if stats_text.plain: panel_parts.extend(["\n", stats_text]) - if scan_completed or has_vulnerabilities: - results_text = Text() - results_text.append("\n") - results_text.append("Output", style="dim") - results_text.append(" ") - results_text.append(str(results_path), style="#60a5fa") - panel_parts.extend(["\n", results_text]) + results_text = Text() + results_text.append("\n") + results_text.append("Output", style="dim") + results_text.append(" ") + results_text.append(str(results_path), style="#60a5fa") + panel_parts.extend(["\n", results_text]) panel_content = Text.assemble(*panel_parts) diff --git a/strix/telemetry/flags.py b/strix/telemetry/flags.py new file mode 100644 index 0000000..bae9427 --- /dev/null +++ b/strix/telemetry/flags.py @@ -0,0 +1,23 @@ +from strix.config import Config + + +_DISABLED_VALUES = {"0", "false", "no", "off"} + + +def _is_enabled(raw_value: str | None, default: str = "1") -> bool: + value = (raw_value if raw_value is not None else default).strip().lower() + return value not in _DISABLED_VALUES + + +def is_otel_enabled() -> bool: + explicit = Config.get("strix_otel_telemetry") + if explicit is not None: + return _is_enabled(explicit) + return 
_is_enabled(Config.get("strix_telemetry"), default="1") + + +def is_posthog_enabled() -> bool: + explicit = Config.get("strix_posthog_telemetry") + if explicit is not None: + return _is_enabled(explicit) + return _is_enabled(Config.get("strix_telemetry"), default="1") diff --git a/strix/telemetry/posthog.py b/strix/telemetry/posthog.py index fd66bcc..aa534d2 100644 --- a/strix/telemetry/posthog.py +++ b/strix/telemetry/posthog.py @@ -6,7 +6,7 @@ from pathlib import Path from typing import TYPE_CHECKING, Any from uuid import uuid4 -from strix.config import Config +from strix.telemetry.flags import is_posthog_enabled if TYPE_CHECKING: @@ -19,7 +19,7 @@ _SESSION_ID = uuid4().hex[:16] def _is_enabled() -> bool: - return (Config.get("strix_telemetry") or "1").lower() not in ("0", "false", "no", "off") + return is_posthog_enabled() def _is_first_run() -> bool: diff --git a/strix/telemetry/tracer.py b/strix/telemetry/tracer.py index ef97ab6..bde9750 100644 --- a/strix/telemetry/tracer.py +++ b/strix/telemetry/tracer.py @@ -1,20 +1,40 @@ +import json import logging +import threading from datetime import UTC, datetime from pathlib import Path -from typing import TYPE_CHECKING, Any, Optional +from typing import Any, Callable, Optional from uuid import uuid4 +from opentelemetry import trace +from opentelemetry.trace import SpanContext, SpanKind + +from strix.config import Config from strix.telemetry import posthog +from strix.telemetry.flags import is_otel_enabled +from strix.telemetry.utils import ( + TelemetrySanitizer, + append_jsonl_record, + bootstrap_otel, + format_span_id, + format_trace_id, + get_events_write_lock, +) -if TYPE_CHECKING: - from collections.abc import Callable +try: + from traceloop.sdk import Traceloop +except ImportError: # pragma: no cover - exercised when dependency is absent + Traceloop = None # type: ignore[assignment,unused-ignore] logger = logging.getLogger(__name__) _global_tracer: Optional["Tracer"] = None +_OTEL_BOOTSTRAP_LOCK = 
threading.Lock() +_OTEL_BOOTSTRAPPED = False +_OTEL_REMOTE_ENABLED = False def get_global_tracer() -> Optional["Tracer"]: return _global_tracer @@ -52,16 +72,225 @@ class Tracer: "status": "running", } self._run_dir: Path | None = None + self._events_file_path: Path | None = None self._next_execution_id = 1 self._next_message_id = 1 self._saved_vuln_ids: set[str] = set() + self._run_completed_emitted = False + self._telemetry_enabled = is_otel_enabled() + self._sanitizer = TelemetrySanitizer() + + self._otel_tracer: Any = None + self._remote_export_enabled = False self.caido_url: str | None = None self.vulnerability_found_callback: Callable[[dict[str, Any]], None] | None = None + self._setup_telemetry() + self._emit_run_started_event() + + @property + def events_file_path(self) -> Path: + if self._events_file_path is None: + self._events_file_path = self.get_run_dir() / "events.jsonl" + return self._events_file_path + + def _active_events_file_path(self) -> Path: + active = get_global_tracer() + if active and active._events_file_path is not None: + return active._events_file_path + return self.events_file_path + + def _get_events_write_lock(self, output_path: Path | None = None) -> threading.Lock: + path = output_path or self.events_file_path + return get_events_write_lock(path) + + def _active_run_metadata(self) -> dict[str, Any]: + active = get_global_tracer() + if active: + return active.run_metadata + return self.run_metadata + + def _setup_telemetry(self) -> None: + global _OTEL_BOOTSTRAPPED, _OTEL_REMOTE_ENABLED + + if not self._telemetry_enabled: + self._otel_tracer = None + self._remote_export_enabled = False + return + + run_dir = self.get_run_dir() + self._events_file_path = run_dir / "events.jsonl" + base_url = (Config.get("traceloop_base_url") or "").strip() + api_key = (Config.get("traceloop_api_key") or "").strip() + headers_raw = Config.get("traceloop_headers") or "" + + ( + self._otel_tracer, + self._remote_export_enabled, + _OTEL_BOOTSTRAPPED, + 
_OTEL_REMOTE_ENABLED, + ) = bootstrap_otel( + bootstrapped=_OTEL_BOOTSTRAPPED, + remote_enabled_state=_OTEL_REMOTE_ENABLED, + bootstrap_lock=_OTEL_BOOTSTRAP_LOCK, + traceloop=Traceloop, + base_url=base_url, + api_key=api_key, + headers_raw=headers_raw, + output_path_getter=self._active_events_file_path, + run_metadata_getter=self._active_run_metadata, + sanitizer=self._sanitize_data, + write_lock_getter=self._get_events_write_lock, + tracer_name="strix.telemetry.tracer", + ) + + def _set_association_properties(self, properties: dict[str, Any]) -> None: + if Traceloop is None: + return + sanitized = self._sanitize_data(properties) + try: + Traceloop.set_association_properties(sanitized) + except Exception: # noqa: BLE001 + logger.debug("Failed to set Traceloop association properties") + + def _sanitize_data(self, data: Any, key_hint: str | None = None) -> Any: + return self._sanitizer.sanitize(data, key_hint=key_hint) + + def _append_event_record(self, record: dict[str, Any]) -> None: + try: + append_jsonl_record(self.events_file_path, record) + except OSError: + logger.exception("Failed to append JSONL event record") + + def _enrich_actor(self, actor: dict[str, Any] | None) -> dict[str, Any] | None: + if not actor: + return None + + enriched = dict(actor) + if "agent_name" in enriched: + return enriched + + agent_id = enriched.get("agent_id") + if not isinstance(agent_id, str): + return enriched + + agent_data = self.agents.get(agent_id, {}) + agent_name = agent_data.get("name") + if isinstance(agent_name, str) and agent_name: + enriched["agent_name"] = agent_name + + return enriched + + def _emit_event( + self, + event_type: str, + actor: dict[str, Any] | None = None, + payload: Any | None = None, + status: str | None = None, + error: Any | None = None, + source: str = "strix.tracer", + include_run_metadata: bool = False, + ) -> None: + if not self._telemetry_enabled: + return + + enriched_actor = self._enrich_actor(actor) + sanitized_actor = 
self._sanitize_data(enriched_actor) if enriched_actor else None + sanitized_payload = self._sanitize_data(payload) if payload is not None else None + sanitized_error = self._sanitize_data(error) if error is not None else None + + trace_id: str | None = None + span_id: str | None = None + parent_span_id: str | None = None + + current_context = trace.get_current_span().get_span_context() + if isinstance(current_context, SpanContext) and current_context.is_valid: + parent_span_id = format_span_id(current_context.span_id) + + if self._otel_tracer is not None: + try: + with self._otel_tracer.start_as_current_span( + f"strix.{event_type}", + kind=SpanKind.INTERNAL, + ) as span: + span_context = span.get_span_context() + trace_id = format_trace_id(span_context.trace_id) + span_id = format_span_id(span_context.span_id) + + span.set_attribute("strix.event_type", event_type) + span.set_attribute("strix.source", source) + span.set_attribute("strix.run_id", self.run_id) + span.set_attribute("strix.run_name", self.run_name or "") + + if status: + span.set_attribute("strix.status", status) + if sanitized_actor is not None: + span.set_attribute( + "strix.actor", + json.dumps(sanitized_actor, ensure_ascii=False), + ) + if sanitized_payload is not None: + span.set_attribute( + "strix.payload", + json.dumps(sanitized_payload, ensure_ascii=False), + ) + if sanitized_error is not None: + span.set_attribute( + "strix.error", + json.dumps(sanitized_error, ensure_ascii=False), + ) + except Exception: # noqa: BLE001 + logger.debug("Failed to create OTEL span for event type '%s'", event_type) + + if trace_id is None: + trace_id = format_trace_id(uuid4().int & ((1 << 128) - 1)) or uuid4().hex + if span_id is None: + span_id = format_span_id(uuid4().int & ((1 << 64) - 1)) or uuid4().hex[:16] + + record = { + "timestamp": datetime.now(UTC).isoformat(), + "event_type": event_type, + "run_id": self.run_id, + "trace_id": trace_id, + "span_id": span_id, + "parent_span_id": parent_span_id, + 
"actor": sanitized_actor, + "payload": sanitized_payload, + "status": status, + "error": sanitized_error, + "source": source, + } + if include_run_metadata: + record["run_metadata"] = self._sanitize_data(self.run_metadata) + self._append_event_record(record) + def set_run_name(self, run_name: str) -> None: self.run_name = run_name self.run_id = run_name + self.run_metadata["run_name"] = run_name + self.run_metadata["run_id"] = run_name + self._run_dir = None + self._events_file_path = None + self._run_completed_emitted = False + self._set_association_properties({"run_id": self.run_id, "run_name": self.run_name or ""}) + self._emit_run_started_event() + + def _emit_run_started_event(self) -> None: + if not self._telemetry_enabled: + return + + self._emit_event( + "run.started", + payload={ + "run_name": self.run_name, + "start_time": self.start_time, + "local_jsonl_path": str(self.events_file_path), + "remote_export_enabled": self._remote_export_enabled, + }, + status="running", + include_run_metadata=True, + ) def get_run_dir(self) -> Path: if self._run_dir is None: @@ -134,6 +363,12 @@ class Tracer: self.vulnerability_reports.append(report) logger.info(f"Added vulnerability report: {report_id} - {title}") posthog.finding(severity) + self._emit_event( + "finding.created", + payload={"report": report}, + status=report["severity"], + source="strix.findings", + ) if self.vulnerability_found_callback: self.vulnerability_found_callback(report) @@ -178,11 +413,24 @@ class Tracer: """ logger.info("Updated scan final fields") + self._emit_event( + "finding.reviewed", + payload={ + "scan_completed": True, + "vulnerability_count": len(self.vulnerability_reports), + }, + status="completed", + source="strix.findings", + ) self.save_run_data(mark_complete=True) posthog.end(self, exit_reason="finished_by_tool") def log_agent_creation( - self, agent_id: str, name: str, task: str, parent_id: str | None = None + self, + agent_id: str, + name: str, + task: str, + parent_id: str | 
None = None, ) -> None: agent_data: dict[str, Any] = { "id": agent_id, @@ -196,6 +444,13 @@ class Tracer: } self.agents[agent_id] = agent_data + self._emit_event( + "agent.created", + actor={"agent_id": agent_id, "agent_name": name}, + payload={"task": task, "parent_id": parent_id}, + status="running", + source="strix.agents", + ) def log_chat_message( self, @@ -217,9 +472,21 @@ class Tracer: } self.chat_messages.append(message_data) + self._emit_event( + "chat.message", + actor={"agent_id": agent_id, "role": role}, + payload={"message_id": message_id, "content": content, "metadata": metadata or {}}, + status="logged", + source="strix.chat", + ) return message_id - def log_tool_execution_start(self, agent_id: str, tool_name: str, args: dict[str, Any]) -> int: + def log_tool_execution_start( + self, + agent_id: str, + tool_name: str, + args: dict[str, Any], + ) -> int: execution_id = self._next_execution_id self._next_execution_id += 1 @@ -241,18 +508,67 @@ class Tracer: if agent_id in self.agents: self.agents[agent_id]["tool_executions"].append(execution_id) + self._emit_event( + "tool.execution.started", + actor={ + "agent_id": agent_id, + "tool_name": tool_name, + "execution_id": execution_id, + }, + payload={"args": args}, + status="running", + source="strix.tools", + ) + return execution_id def update_tool_execution( - self, execution_id: int, status: str, result: Any | None = None + self, + execution_id: int, + status: str, + result: Any | None = None, ) -> None: - if execution_id in self.tool_executions: - self.tool_executions[execution_id]["status"] = status - self.tool_executions[execution_id]["result"] = result - self.tool_executions[execution_id]["completed_at"] = datetime.now(UTC).isoformat() + if execution_id not in self.tool_executions: + return + + tool_data = self.tool_executions[execution_id] + tool_data["status"] = status + tool_data["result"] = result + tool_data["completed_at"] = datetime.now(UTC).isoformat() + + tool_name = 
str(tool_data.get("tool_name", "unknown")) + agent_id = str(tool_data.get("agent_id", "unknown")) + error_payload = result if status in {"error", "failed"} else None + + self._emit_event( + "tool.execution.updated", + actor={ + "agent_id": agent_id, + "tool_name": tool_name, + "execution_id": execution_id, + }, + payload={"result": result}, + status=status, + error=error_payload, + source="strix.tools", + ) + + if tool_name == "create_vulnerability_report": + finding_status = "reviewed" if status == "completed" else "rejected" + self._emit_event( + "finding.reviewed", + actor={"agent_id": agent_id, "tool_name": tool_name}, + payload={"execution_id": execution_id, "result": result}, + status=finding_status, + error=error_payload, + source="strix.findings", + ) def update_agent_status( - self, agent_id: str, status: str, error_message: str | None = None + self, + agent_id: str, + status: str, + error_message: str | None = None, ) -> None: if agent_id in self.agents: self.agents[agent_id]["status"] = status @@ -260,6 +576,15 @@ class Tracer: if error_message: self.agents[agent_id]["error_message"] = error_message + self._emit_event( + "agent.status.updated", + actor={"agent_id": agent_id}, + payload={"error_message": error_message}, + status=status, + error=error_message, + source="strix.agents", + ) + def set_scan_config(self, config: dict[str, Any]) -> None: self.scan_config = config self.run_metadata.update( @@ -269,13 +594,29 @@ class Tracer: "max_iterations": config.get("max_iterations", 200), } ) - self.get_run_dir() + self._set_association_properties( + { + "run_id": self.run_id, + "run_name": self.run_name or "", + "targets": config.get("targets", []), + "max_iterations": config.get("max_iterations", 200), + } + ) + self._emit_event( + "run.configured", + payload={"scan_config": config}, + status="configured", + source="strix.run", + ) - def save_run_data(self, mark_complete: bool = False) -> None: # noqa: PLR0912, PLR0915 + def save_run_data(self, 
mark_complete: bool = False) -> None: try: run_dir = self.get_run_dir() if mark_complete: - self.end_time = datetime.now(UTC).isoformat() + if self.end_time is None: + self.end_time = datetime.now(UTC).isoformat() + self.run_metadata["end_time"] = self.end_time + self.run_metadata["status"] = "completed" if self.final_scan_result: penetration_test_report_file = run_dir / "penetration_test_report.md" @@ -286,7 +627,8 @@ class Tracer: ) f.write(f"{self.final_scan_result}\n") logger.info( - f"Saved final penetration test report to: {penetration_test_report_file}" + "Saved final penetration test report to: %s", + penetration_test_report_file, ) if self.vulnerability_reports: @@ -302,7 +644,10 @@ class Tracer: severity_order = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4} sorted_reports = sorted( self.vulnerability_reports, - key=lambda x: (severity_order.get(x["severity"], 5), x["timestamp"]), + key=lambda report: ( + severity_order.get(report["severity"], 5), + report["timestamp"], + ), ) for report in new_reports: @@ -329,8 +674,8 @@ class Tracer: f.write(f"**{label}:** {value}\n") f.write("\n## Description\n\n") - desc = report.get("description") or "No description provided." - f.write(f"{desc}\n\n") + description = report.get("description") or "No description provided." 
+ f.write(f"{description}\n\n") if report.get("impact"): f.write("## Impact\n\n") @@ -404,11 +749,25 @@ class Tracer: if new_reports: logger.info( - f"Saved {len(new_reports)} new vulnerability report(s) to: {vuln_dir}" + "Saved %d new vulnerability report(s) to: %s", + len(new_reports), + vuln_dir, ) - logger.info(f"Updated vulnerability index: {vuln_csv_file}") + logger.info("Updated vulnerability index: %s", vuln_csv_file) - logger.info(f"📊 Essential scan data saved to: {run_dir}") + logger.info("📊 Essential scan data saved to: %s", run_dir) + if mark_complete and not self._run_completed_emitted: + self._emit_event( + "run.completed", + payload={ + "duration_seconds": self._calculate_duration(), + "vulnerability_count": len(self.vulnerability_reports), + }, + status="completed", + source="strix.run", + include_run_metadata=True, + ) + self._run_completed_emitted = True except (OSError, RuntimeError): logger.exception("Failed to save scan data") diff --git a/strix/telemetry/utils.py b/strix/telemetry/utils.py new file mode 100644 index 0000000..85e49f3 --- /dev/null +++ b/strix/telemetry/utils.py @@ -0,0 +1,413 @@ +import json +import logging +import re +import threading +from collections.abc import Callable, Sequence +from datetime import UTC, datetime +from pathlib import Path +from typing import Any + +from opentelemetry import trace +from opentelemetry.sdk.trace import ReadableSpan, TracerProvider +from opentelemetry.sdk.trace.export import ( + BatchSpanProcessor, + SimpleSpanProcessor, + SpanExporter, + SpanExportResult, +) +from scrubadub import Scrubber +from scrubadub.detectors import RegexDetector +from scrubadub.filth import Filth + + +logger = logging.getLogger(__name__) + +_REDACTED = "[REDACTED]" +_SCREENSHOT_OMITTED = "[SCREENSHOT_OMITTED]" +_SCREENSHOT_KEY_PATTERN = re.compile(r"screenshot", re.IGNORECASE) +_SENSITIVE_KEY_PATTERN = re.compile( + r"(api[_-]?key|token|secret|password|authorization|cookie|session|credential|private[_-]?key)", + 
re.IGNORECASE, +) +_SENSITIVE_TOKEN_PATTERN = re.compile( + r"(?i)\b(" + r"bearer\s+[a-z0-9._-]+|" + r"sk-[a-z0-9_-]{8,}|" + r"gh[pousr]_[a-z0-9_-]{12,}|" + r"xox[baprs]-[a-z0-9-]{12,}" + r")\b" +) +_SCRUBADUB_PLACEHOLDER_PATTERN = re.compile(r"\{\{[^}]+\}\}") +_EVENTS_FILE_LOCKS_LOCK = threading.Lock() +_EVENTS_FILE_LOCKS: dict[str, threading.Lock] = {} +_NOISY_OTEL_CONTENT_PREFIXES = ( + "gen_ai.prompt.", + "gen_ai.completion.", + "llm.input_messages.", + "llm.output_messages.", +) +_NOISY_OTEL_EXACT_KEYS = { + "llm.input", + "llm.output", + "llm.prompt", + "llm.completion", +} + + +class _SecretFilth(Filth): # type: ignore[misc] + type = "secret" + + +class _SecretTokenDetector(RegexDetector): # type: ignore[misc] + name = "strix_secret_token_detector" + filth_cls = _SecretFilth + regex = _SENSITIVE_TOKEN_PATTERN + + +class TelemetrySanitizer: + def __init__(self) -> None: + self._scrubber = Scrubber(detector_list=[_SecretTokenDetector]) + + def sanitize(self, data: Any, key_hint: str | None = None) -> Any: # noqa: PLR0911 + if data is None: + return None + + if isinstance(data, dict): + sanitized: dict[str, Any] = {} + for key, value in data.items(): + key_str = str(key) + if _SCREENSHOT_KEY_PATTERN.search(key_str): + sanitized[key_str] = _SCREENSHOT_OMITTED + elif _SENSITIVE_KEY_PATTERN.search(key_str): + sanitized[key_str] = _REDACTED + else: + sanitized[key_str] = self.sanitize(value, key_hint=key_str) + return sanitized + + if isinstance(data, list): + return [self.sanitize(item, key_hint=key_hint) for item in data] + + if isinstance(data, tuple): + return [self.sanitize(item, key_hint=key_hint) for item in data] + + if isinstance(data, str): + if key_hint and _SENSITIVE_KEY_PATTERN.search(key_hint): + return _REDACTED + + cleaned = self._scrubber.clean(data) + return _SCRUBADUB_PLACEHOLDER_PATTERN.sub(_REDACTED, cleaned) + + if isinstance(data, int | float | bool): + return data + + return str(data) + + +def format_trace_id(trace_id: int | None) -> str | 
None: + if trace_id is None or trace_id == 0: + return None + return f"{trace_id:032x}" + + +def format_span_id(span_id: int | None) -> str | None: + if span_id is None or span_id == 0: + return None + return f"{span_id:016x}" + + +def iso_from_unix_ns(unix_ns: int | None) -> str | None: + if unix_ns is None: + return None + try: + return datetime.fromtimestamp(unix_ns / 1_000_000_000, tz=UTC).isoformat() + except (OSError, OverflowError, ValueError): + return None + + + +def get_events_write_lock(output_path: Path) -> threading.Lock: + path_key = str(output_path.resolve(strict=False)) + with _EVENTS_FILE_LOCKS_LOCK: + lock = _EVENTS_FILE_LOCKS.get(path_key) + if lock is None: + lock = threading.Lock() + _EVENTS_FILE_LOCKS[path_key] = lock + return lock + + +def reset_events_write_locks() -> None: + with _EVENTS_FILE_LOCKS_LOCK: + _EVENTS_FILE_LOCKS.clear() + + +def append_jsonl_record(output_path: Path, record: dict[str, Any]) -> None: + output_path.parent.mkdir(parents=True, exist_ok=True) + with get_events_write_lock(output_path), output_path.open("a", encoding="utf-8") as f: + f.write(json.dumps(record, ensure_ascii=False) + "\n") + + +def default_resource_attributes() -> dict[str, str]: + return { + "service.name": "strix-agent", + "service.namespace": "strix", + } + + +def parse_traceloop_headers(raw_headers: str) -> dict[str, str]: + headers = raw_headers.strip() + if not headers: + return {} + + if headers.startswith("{"): + try: + parsed = json.loads(headers) + except json.JSONDecodeError: + logger.warning("Invalid TRACELOOP_HEADERS JSON, ignoring custom headers") + return {} + if isinstance(parsed, dict): + return {str(key): str(value) for key, value in parsed.items() if value is not None} + logger.warning("TRACELOOP_HEADERS JSON must be an object, ignoring custom headers") + return {} + + result: dict[str, str] = {} + for part in headers.split(","): + key, sep, value = part.partition("=") + if not sep: + continue + key = key.strip() + value = 
value.strip() + if key and value: + result[key] = value + return result + + +def prune_otel_span_attributes(attributes: dict[str, Any]) -> dict[str, Any]: + """Drop high-volume LLM payload attributes to keep JSONL event files compact.""" + filtered: dict[str, Any] = {} + filtered_count = 0 + + for key, value in attributes.items(): + key_str = str(key) + if key_str in _NOISY_OTEL_EXACT_KEYS: + filtered_count += 1 + continue + + if key_str.endswith(".content") and key_str.startswith(_NOISY_OTEL_CONTENT_PREFIXES): + filtered_count += 1 + continue + + filtered[key_str] = value + + if filtered_count: + filtered["strix.filtered_attributes_count"] = filtered_count + + return filtered + + +class JsonlSpanExporter(SpanExporter): # type: ignore[misc] + """Append OTEL spans to JSONL for local run artifacts.""" + + def __init__( + self, + output_path_getter: Callable[[], Path], + run_metadata_getter: Callable[[], dict[str, Any]], + sanitizer: Callable[[Any], Any], + write_lock_getter: Callable[[Path], threading.Lock], + ): + self._output_path_getter = output_path_getter + self._run_metadata_getter = run_metadata_getter + self._sanitize = sanitizer + self._write_lock_getter = write_lock_getter + + def export(self, spans: Sequence[ReadableSpan]) -> SpanExportResult: + records: list[dict[str, Any]] = [] + for span in spans: + attributes = prune_otel_span_attributes(dict(span.attributes or {})) + if "strix.event_type" in attributes: + # Tracer events are written directly in Tracer._emit_event. 
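To make the attribute-pruning contract above concrete, here is a standalone sketch of the same filtering rules. It does not import `strix`; the helper name `prune` and the trimmed constant lists are illustrative stand-ins for `prune_otel_span_attributes` and the `_NOISY_OTEL_*` constants in the patch.

```python
_NOISY_EXACT = {"llm.input", "llm.output", "llm.prompt", "llm.completion"}
_NOISY_PREFIXES = (
    "gen_ai.prompt.",
    "gen_ai.completion.",
    "llm.input_messages.",
    "llm.output_messages.",
)


def prune(attributes: dict) -> dict:
    """Drop high-volume LLM payload attributes, counting how many were removed."""
    filtered: dict = {}
    dropped = 0
    for key, value in attributes.items():
        k = str(key)
        # Exact noisy keys, or per-message content attributes, are filtered out.
        if k in _NOISY_EXACT or (k.endswith(".content") and k.startswith(_NOISY_PREFIXES)):
            dropped += 1
            continue
        filtered[k] = value
    if dropped:
        # Record how many attributes were stripped so the loss is visible downstream.
        filtered["strix.filtered_attributes_count"] = dropped
    return filtered
```

Role and model metadata (e.g. `gen_ai.prompt.0.role`) survive the pruning; only the bulky `.content` payloads and the exact-match keys are removed.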
+ continue + records.append(self._span_to_record(span, attributes)) + + if not records: + return SpanExportResult.SUCCESS + + try: + output_path = self._output_path_getter() + output_path.parent.mkdir(parents=True, exist_ok=True) + with self._write_lock_getter(output_path), output_path.open("a", encoding="utf-8") as f: + for record in records: + f.write(json.dumps(record, ensure_ascii=False) + "\n") + except OSError: + logger.exception("Failed to write OTEL span records to JSONL") + return SpanExportResult.FAILURE + + return SpanExportResult.SUCCESS + + def shutdown(self) -> None: + return None + + def force_flush(self, timeout_millis: int = 30_000) -> bool: # noqa: ARG002 + return True + + def _span_to_record( + self, + span: ReadableSpan, + attributes: dict[str, Any], + ) -> dict[str, Any]: + span_context = span.get_span_context() + parent_context = span.parent + + status = None + if span.status and span.status.status_code: + status = span.status.status_code.name.lower() + + event_type = str(attributes.get("gen_ai.operation.name", span.name)) + run_metadata = self._run_metadata_getter() + run_id_attr = ( + attributes.get("strix.run_id") + or attributes.get("strix_run_id") + or run_metadata.get("run_id") + or span.resource.attributes.get("strix.run_id") + ) + + record: dict[str, Any] = { + "timestamp": iso_from_unix_ns(span.end_time) or datetime.now(UTC).isoformat(), + "event_type": event_type, + "run_id": str(run_id_attr or run_metadata.get("run_id") or ""), + "trace_id": format_trace_id(span_context.trace_id), + "span_id": format_span_id(span_context.span_id), + "parent_span_id": format_span_id(parent_context.span_id if parent_context else None), + "actor": None, + "payload": None, + "status": status, + "error": None, + "source": "otel.span", + "span_name": span.name, + "span_kind": span.kind.name.lower(), + "attributes": self._sanitize(attributes), + } + + if span.events: + record["otel_events"] = self._sanitize( + [ + { + "name": event.name, + "timestamp": 
iso_from_unix_ns(event.timestamp), + "attributes": dict(event.attributes or {}), + } + for event in span.events + ] + ) + + return record + + +def bootstrap_otel( + *, + bootstrapped: bool, + remote_enabled_state: bool, + bootstrap_lock: threading.Lock, + traceloop: Any, + base_url: str, + api_key: str, + headers_raw: str, + output_path_getter: Callable[[], Path], + run_metadata_getter: Callable[[], dict[str, Any]], + sanitizer: Callable[[Any], Any], + write_lock_getter: Callable[[Path], threading.Lock], + tracer_name: str = "strix.telemetry.tracer", +) -> tuple[Any, bool, bool, bool]: + with bootstrap_lock: + if bootstrapped: + return ( + trace.get_tracer(tracer_name), + remote_enabled_state, + bootstrapped, + remote_enabled_state, + ) + + local_exporter = JsonlSpanExporter( + output_path_getter=output_path_getter, + run_metadata_getter=run_metadata_getter, + sanitizer=sanitizer, + write_lock_getter=write_lock_getter, + ) + local_processor = SimpleSpanProcessor(local_exporter) + + headers = parse_traceloop_headers(headers_raw) + remote_enabled = bool(base_url and api_key) + otlp_headers = headers + if remote_enabled: + otlp_headers = {"Authorization": f"Bearer {api_key}"} + otlp_headers.update(headers) + + otel_init_ok = False + if traceloop: + try: + from traceloop.sdk.instruments import Instruments + + init_kwargs: dict[str, Any] = { + "app_name": "strix-agent", + "processor": local_processor, + "telemetry_enabled": False, + "resource_attributes": default_resource_attributes(), + "block_instruments": { + Instruments.URLLIB3, + Instruments.REQUESTS, + }, + } + if remote_enabled: + init_kwargs.update( + { + "api_endpoint": base_url, + "api_key": api_key, + "headers": headers, + } + ) + import io + import sys + + _stdout = sys.stdout + sys.stdout = io.StringIO() + try: + traceloop.init(**init_kwargs) + finally: + sys.stdout = _stdout + otel_init_ok = True + except Exception: + logger.exception("Failed to initialize Traceloop/OpenLLMetry") + remote_enabled = False + 
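The header handling in `bootstrap_otel` above composes two behaviors worth calling out: `TRACELOOP_HEADERS` accepts either a JSON object or comma-separated `key=value` pairs, and custom headers are applied after the generated `Authorization` header, so they can override it. A minimal sketch of both rules (simplified from `parse_traceloop_headers`; the warning logging is omitted here):

```python
import json


def parse_headers(raw: str) -> dict[str, str]:
    """Parse TRACELOOP_HEADERS: a JSON object, or 'k=v,k=v' pairs."""
    raw = raw.strip()
    if not raw:
        return {}
    if raw.startswith("{"):
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            return {}  # invalid JSON is ignored (the patch also logs a warning)
        if not isinstance(parsed, dict):
            return {}
        return {str(k): str(v) for k, v in parsed.items() if v is not None}
    result: dict[str, str] = {}
    for part in raw.split(","):
        key, sep, value = part.partition("=")
        if sep and key.strip() and value.strip():
            result[key.strip()] = value.strip()
    return result


def merge_otlp_headers(api_key: str, custom: dict[str, str]) -> dict[str, str]:
    """Custom headers are merged last, so they win over the generated bearer token."""
    headers = {"Authorization": f"Bearer {api_key}"}
    headers.update(custom)
    return headers
```

This matches the test expectation later in the patch that the OTLP fallback exporter receives both the `Authorization: Bearer …` header and any custom headers.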
+ if not otel_init_ok: + from opentelemetry.sdk.resources import Resource + + provider = TracerProvider(resource=Resource.create(default_resource_attributes())) + provider.add_span_processor(local_processor) + if remote_enabled: + try: + from opentelemetry.exporter.otlp.proto.http.trace_exporter import ( + OTLPSpanExporter, + ) + + endpoint = base_url.rstrip("/") + "/v1/traces" + provider.add_span_processor( + BatchSpanProcessor( + OTLPSpanExporter(endpoint=endpoint, headers=otlp_headers) + ) + ) + except Exception: + logger.exception("Failed to configure OTLP HTTP exporter") + remote_enabled = False + + try: + trace.set_tracer_provider(provider) + otel_init_ok = True + except Exception: + logger.exception("Failed to set OpenTelemetry tracer provider") + remote_enabled = False + + otel_tracer = trace.get_tracer(tracer_name) + if otel_init_ok: + return otel_tracer, remote_enabled, True, remote_enabled + + return otel_tracer, remote_enabled, bootstrapped, remote_enabled_state diff --git a/tests/config/__init__.py b/tests/config/__init__.py new file mode 100644 index 0000000..2edfe31 --- /dev/null +++ b/tests/config/__init__.py @@ -0,0 +1 @@ +"""Tests for strix.config module.""" diff --git a/tests/config/test_config_telemetry.py b/tests/config/test_config_telemetry.py new file mode 100644 index 0000000..89af42f --- /dev/null +++ b/tests/config/test_config_telemetry.py @@ -0,0 +1,55 @@ +import json + +from strix.config.config import Config + + +def test_traceloop_vars_are_tracked() -> None: + tracked = Config.tracked_vars() + + assert "STRIX_OTEL_TELEMETRY" in tracked + assert "STRIX_POSTHOG_TELEMETRY" in tracked + assert "TRACELOOP_BASE_URL" in tracked + assert "TRACELOOP_API_KEY" in tracked + assert "TRACELOOP_HEADERS" in tracked + + +def test_apply_saved_uses_saved_traceloop_vars(monkeypatch, tmp_path) -> None: + config_path = tmp_path / "cli-config.json" + config_path.write_text( + json.dumps( + { + "env": { + "TRACELOOP_BASE_URL": "https://otel.example.com", + 
"TRACELOOP_API_KEY": "api-key", + "TRACELOOP_HEADERS": "x-test=value", + } + } + ), + encoding="utf-8", + ) + + monkeypatch.setattr(Config, "_config_file_override", config_path) + monkeypatch.delenv("TRACELOOP_BASE_URL", raising=False) + monkeypatch.delenv("TRACELOOP_API_KEY", raising=False) + monkeypatch.delenv("TRACELOOP_HEADERS", raising=False) + + applied = Config.apply_saved() + + assert applied["TRACELOOP_BASE_URL"] == "https://otel.example.com" + assert applied["TRACELOOP_API_KEY"] == "api-key" + assert applied["TRACELOOP_HEADERS"] == "x-test=value" + + +def test_apply_saved_respects_existing_env_traceloop_vars(monkeypatch, tmp_path) -> None: + config_path = tmp_path / "cli-config.json" + config_path.write_text( + json.dumps({"env": {"TRACELOOP_BASE_URL": "https://otel.example.com"}}), + encoding="utf-8", + ) + + monkeypatch.setattr(Config, "_config_file_override", config_path) + monkeypatch.setenv("TRACELOOP_BASE_URL", "https://env.example.com") + + applied = Config.apply_saved(force=False) + + assert "TRACELOOP_BASE_URL" not in applied diff --git a/tests/llm/test_llm_otel.py b/tests/llm/test_llm_otel.py new file mode 100644 index 0000000..58ee89e --- /dev/null +++ b/tests/llm/test_llm_otel.py @@ -0,0 +1,15 @@ +import litellm + +from strix.llm.config import LLMConfig +from strix.llm.llm import LLM + + +def test_llm_does_not_modify_litellm_callbacks(monkeypatch) -> None: + monkeypatch.setenv("STRIX_TELEMETRY", "1") + monkeypatch.setenv("STRIX_OTEL_TELEMETRY", "1") + monkeypatch.setattr(litellm, "callbacks", ["custom-callback"]) + + llm = LLM(LLMConfig(model_name="openai/gpt-5"), agent_name=None) + + assert llm is not None + assert litellm.callbacks == ["custom-callback"] diff --git a/tests/telemetry/test_flags.py b/tests/telemetry/test_flags.py new file mode 100644 index 0000000..a7f8e43 --- /dev/null +++ b/tests/telemetry/test_flags.py @@ -0,0 +1,28 @@ +from strix.telemetry.flags import is_otel_enabled, is_posthog_enabled + + +def 
test_flags_fallback_to_strix_telemetry(monkeypatch) -> None: + monkeypatch.delenv("STRIX_OTEL_TELEMETRY", raising=False) + monkeypatch.delenv("STRIX_POSTHOG_TELEMETRY", raising=False) + monkeypatch.setenv("STRIX_TELEMETRY", "0") + + assert is_otel_enabled() is False + assert is_posthog_enabled() is False + + +def test_otel_flag_overrides_global_telemetry(monkeypatch) -> None: + monkeypatch.setenv("STRIX_TELEMETRY", "0") + monkeypatch.setenv("STRIX_OTEL_TELEMETRY", "1") + monkeypatch.delenv("STRIX_POSTHOG_TELEMETRY", raising=False) + + assert is_otel_enabled() is True + assert is_posthog_enabled() is False + + +def test_posthog_flag_overrides_global_telemetry(monkeypatch) -> None: + monkeypatch.setenv("STRIX_TELEMETRY", "0") + monkeypatch.setenv("STRIX_POSTHOG_TELEMETRY", "1") + monkeypatch.delenv("STRIX_OTEL_TELEMETRY", raising=False) + + assert is_otel_enabled() is False + assert is_posthog_enabled() is True diff --git a/tests/telemetry/test_tracer.py b/tests/telemetry/test_tracer.py new file mode 100644 index 0000000..10f887e --- /dev/null +++ b/tests/telemetry/test_tracer.py @@ -0,0 +1,379 @@ +import json +import sys +import types +from pathlib import Path +from typing import Any, ClassVar + +import pytest +from opentelemetry.sdk.trace.export import SimpleSpanProcessor, SpanExportResult + +from strix.telemetry import tracer as tracer_module +from strix.telemetry import utils as telemetry_utils +from strix.telemetry.tracer import Tracer, set_global_tracer + + +def _load_events(events_path: Path) -> list[dict[str, Any]]: + lines = events_path.read_text(encoding="utf-8").splitlines() + return [json.loads(line) for line in lines if line] + + +@pytest.fixture(autouse=True) +def _reset_tracer_globals(monkeypatch) -> None: + monkeypatch.setattr(tracer_module, "_global_tracer", None) + monkeypatch.setattr(tracer_module, "_OTEL_BOOTSTRAPPED", False) + monkeypatch.setattr(tracer_module, "_OTEL_REMOTE_ENABLED", False) + telemetry_utils.reset_events_write_locks() + 
monkeypatch.delenv("STRIX_TELEMETRY", raising=False) + monkeypatch.delenv("STRIX_OTEL_TELEMETRY", raising=False) + monkeypatch.delenv("STRIX_POSTHOG_TELEMETRY", raising=False) + monkeypatch.delenv("TRACELOOP_BASE_URL", raising=False) + monkeypatch.delenv("TRACELOOP_API_KEY", raising=False) + monkeypatch.delenv("TRACELOOP_HEADERS", raising=False) + + +def test_tracer_local_mode_writes_jsonl_with_correlation(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + + tracer = Tracer("local-observability") + set_global_tracer(tracer) + tracer.set_scan_config({"targets": ["https://example.com"], "user_instructions": "focus auth"}) + tracer.log_agent_creation("agent-1", "Root Agent", "scan auth") + tracer.log_chat_message("starting scan", "user", "agent-1") + execution_id = tracer.log_tool_execution_start( + "agent-1", + "send_request", + {"url": "https://example.com/login"}, + ) + tracer.update_tool_execution(execution_id, "completed", {"status_code": 200, "body": "ok"}) + + events_path = tmp_path / "strix_runs" / "local-observability" / "events.jsonl" + assert events_path.exists() + + events = _load_events(events_path) + assert any(event["event_type"] == "tool.execution.updated" for event in events) + assert not any(event["event_type"] == "traffic.intercepted" for event in events) + + for event in events: + assert event["run_id"] == "local-observability" + assert event["trace_id"] + assert event["span_id"] + + +def test_tracer_redacts_sensitive_payloads(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + + tracer = Tracer("redaction-run") + set_global_tracer(tracer) + execution_id = tracer.log_tool_execution_start( + "agent-1", + "send_request", + { + "url": "https://example.com", + "api_key": "sk-secret-token-value", + "authorization": "Bearer super-secret-token", + }, + ) + tracer.update_tool_execution( + execution_id, + "error", + {"error": "request failed with token sk-secret-token-value"}, + ) + + events_path = tmp_path / "strix_runs" / 
"redaction-run" / "events.jsonl" + events = _load_events(events_path) + serialized = json.dumps(events) + + assert "sk-secret-token-value" not in serialized + assert "super-secret-token" not in serialized + assert "[REDACTED]" in serialized + + +def test_tracer_remote_mode_configures_traceloop_export(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + + class FakeTraceloop: + init_calls: ClassVar[list[dict[str, Any]]] = [] + + @staticmethod + def init(**kwargs: Any) -> None: + FakeTraceloop.init_calls.append(kwargs) + + @staticmethod + def set_association_properties(properties: dict[str, Any]) -> None: # noqa: ARG004 + return None + + monkeypatch.setattr(tracer_module, "Traceloop", FakeTraceloop) + monkeypatch.setenv("TRACELOOP_BASE_URL", "https://otel.example.com") + monkeypatch.setenv("TRACELOOP_API_KEY", "test-api-key") + monkeypatch.setenv("TRACELOOP_HEADERS", '{"x-custom":"header"}') + + tracer = Tracer("remote-observability") + set_global_tracer(tracer) + tracer.log_chat_message("hello", "user", "agent-1") + + assert tracer._remote_export_enabled is True + assert FakeTraceloop.init_calls + init_kwargs = FakeTraceloop.init_calls[-1] + assert init_kwargs["api_endpoint"] == "https://otel.example.com" + assert init_kwargs["api_key"] == "test-api-key" + assert init_kwargs["headers"] == {"x-custom": "header"} + assert isinstance(init_kwargs["processor"], SimpleSpanProcessor) + assert "strix.run_id" not in init_kwargs["resource_attributes"] + assert "strix.run_name" not in init_kwargs["resource_attributes"] + + events_path = tmp_path / "strix_runs" / "remote-observability" / "events.jsonl" + events = _load_events(events_path) + run_started = next(event for event in events if event["event_type"] == "run.started") + assert run_started["payload"]["remote_export_enabled"] is True + + +def test_tracer_local_mode_avoids_traceloop_remote_endpoint(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + + class FakeTraceloop: + init_calls: 
ClassVar[list[dict[str, Any]]] = [] + + @staticmethod + def init(**kwargs: Any) -> None: + FakeTraceloop.init_calls.append(kwargs) + + @staticmethod + def set_association_properties(properties: dict[str, Any]) -> None: # noqa: ARG004 + return None + + monkeypatch.setattr(tracer_module, "Traceloop", FakeTraceloop) + + tracer = Tracer("local-traceloop") + set_global_tracer(tracer) + tracer.log_chat_message("hello", "user", "agent-1") + + assert FakeTraceloop.init_calls + init_kwargs = FakeTraceloop.init_calls[-1] + assert "api_endpoint" not in init_kwargs + assert "api_key" not in init_kwargs + assert "headers" not in init_kwargs + assert isinstance(init_kwargs["processor"], SimpleSpanProcessor) + assert tracer._remote_export_enabled is False + + +def test_otlp_fallback_includes_auth_and_custom_headers(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + monkeypatch.setattr(tracer_module, "Traceloop", None) + monkeypatch.setenv("TRACELOOP_BASE_URL", "https://otel.example.com") + monkeypatch.setenv("TRACELOOP_API_KEY", "test-api-key") + monkeypatch.setenv("TRACELOOP_HEADERS", '{"x-custom":"header"}') + + captured: dict[str, Any] = {} + + class FakeOTLPSpanExporter: + def __init__(self, endpoint: str, headers: dict[str, str] | None = None, **kwargs: Any): + captured["endpoint"] = endpoint + captured["headers"] = headers or {} + captured["kwargs"] = kwargs + + def export(self, spans: Any) -> SpanExportResult: # noqa: ARG002 + return SpanExportResult.SUCCESS + + def shutdown(self) -> None: + return None + + def force_flush(self, timeout_millis: int = 30_000) -> bool: # noqa: ARG002 + return True + + fake_module = types.ModuleType("opentelemetry.exporter.otlp.proto.http.trace_exporter") + fake_module.OTLPSpanExporter = FakeOTLPSpanExporter + monkeypatch.setitem( + sys.modules, + "opentelemetry.exporter.otlp.proto.http.trace_exporter", + fake_module, + ) + + tracer = Tracer("otlp-fallback") + set_global_tracer(tracer) + + assert tracer._remote_export_enabled is 
True + assert captured["endpoint"] == "https://otel.example.com/v1/traces" + assert captured["headers"]["Authorization"] == "Bearer test-api-key" + assert captured["headers"]["x-custom"] == "header" + + +def test_traceloop_init_failure_does_not_mark_bootstrapped_on_provider_failure( + monkeypatch, tmp_path +) -> None: + monkeypatch.chdir(tmp_path) + + class FakeTraceloop: + @staticmethod + def init(**kwargs: Any) -> None: # noqa: ARG004 + raise RuntimeError("traceloop init failed") + + @staticmethod + def set_association_properties(properties: dict[str, Any]) -> None: # noqa: ARG004 + return None + + monkeypatch.setattr(tracer_module, "Traceloop", FakeTraceloop) + + def _raise_provider_error(provider: Any) -> None: + raise RuntimeError("provider setup failed") + + monkeypatch.setattr(tracer_module.trace, "set_tracer_provider", _raise_provider_error) + + tracer = Tracer("bootstrap-failure") + set_global_tracer(tracer) + + assert tracer_module._OTEL_BOOTSTRAPPED is False + assert tracer._remote_export_enabled is False + + +def test_run_completed_event_emitted_once(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + + tracer = Tracer("single-complete") + set_global_tracer(tracer) + tracer.save_run_data(mark_complete=True) + tracer.save_run_data(mark_complete=True) + + events_path = tmp_path / "strix_runs" / "single-complete" / "events.jsonl" + events = _load_events(events_path) + run_completed = [event for event in events if event["event_type"] == "run.completed"] + assert len(run_completed) == 1 + + +def test_events_with_agent_id_include_agent_name(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + + tracer = Tracer("agent-name-enrichment") + set_global_tracer(tracer) + tracer.log_agent_creation("agent-1", "Root Agent", "scan auth") + tracer.log_chat_message("hello", "assistant", "agent-1") + + events_path = tmp_path / "strix_runs" / "agent-name-enrichment" / "events.jsonl" + events = _load_events(events_path) + chat_event = next(event for 
event in events if event["event_type"] == "chat.message") + + assert chat_event["actor"]["agent_id"] == "agent-1" + assert chat_event["actor"]["agent_name"] == "Root Agent" + + +def test_run_metadata_is_only_on_run_lifecycle_events(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + + tracer = Tracer("metadata-scope") + set_global_tracer(tracer) + tracer.log_chat_message("hello", "assistant", "agent-1") + tracer.save_run_data(mark_complete=True) + + events_path = tmp_path / "strix_runs" / "metadata-scope" / "events.jsonl" + events = _load_events(events_path) + + run_started = next(event for event in events if event["event_type"] == "run.started") + run_completed = next(event for event in events if event["event_type"] == "run.completed") + chat_event = next(event for event in events if event["event_type"] == "chat.message") + + assert "run_metadata" in run_started + assert "run_metadata" in run_completed + assert "run_metadata" not in chat_event + + +def test_set_run_name_resets_cached_paths(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + + tracer = Tracer() + set_global_tracer(tracer) + old_events_path = tracer.events_file_path + + tracer.set_run_name("renamed-run") + tracer.log_chat_message("hello", "assistant", "agent-1") + + new_events_path = tracer.events_file_path + assert new_events_path != old_events_path + assert new_events_path == tmp_path / "strix_runs" / "renamed-run" / "events.jsonl" + + events = _load_events(new_events_path) + assert any(event["event_type"] == "run.started" for event in events) + assert any(event["event_type"] == "chat.message" for event in events) + + +def test_set_run_name_resets_run_completed_flag(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + + tracer = Tracer() + set_global_tracer(tracer) + + tracer.save_run_data(mark_complete=True) + tracer.set_run_name("renamed-complete") + tracer.save_run_data(mark_complete=True) + + events_path = tmp_path / "strix_runs" / "renamed-complete" / 
"events.jsonl" + events = _load_events(events_path) + run_completed = [event for event in events if event["event_type"] == "run.completed"] + + assert any(event["event_type"] == "run.started" for event in events) + assert len(run_completed) == 1 + + +def test_set_run_name_updates_traceloop_association_properties(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + + class FakeTraceloop: + associations: ClassVar[list[dict[str, Any]]] = [] + + @staticmethod + def init(**kwargs: Any) -> None: # noqa: ARG004 + return None + + @staticmethod + def set_association_properties(properties: dict[str, Any]) -> None: + FakeTraceloop.associations.append(properties) + + monkeypatch.setattr(tracer_module, "Traceloop", FakeTraceloop) + + tracer = Tracer() + set_global_tracer(tracer) + tracer.set_run_name("renamed-run") + + assert FakeTraceloop.associations + assert FakeTraceloop.associations[-1]["run_id"] == "renamed-run" + assert FakeTraceloop.associations[-1]["run_name"] == "renamed-run" + + +def test_events_write_locks_are_scoped_by_events_file(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + monkeypatch.setenv("STRIX_TELEMETRY", "0") + + tracer_one = Tracer("lock-run-a") + tracer_two = Tracer("lock-run-b") + + lock_a_from_one = tracer_one._get_events_write_lock(tracer_one.events_file_path) + lock_a_from_two = tracer_two._get_events_write_lock(tracer_one.events_file_path) + lock_b = tracer_two._get_events_write_lock(tracer_two.events_file_path) + + assert lock_a_from_one is lock_a_from_two + assert lock_a_from_one is not lock_b + + +def test_tracer_skips_jsonl_when_telemetry_disabled(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + monkeypatch.setenv("STRIX_TELEMETRY", "0") + + tracer = Tracer("telemetry-disabled") + set_global_tracer(tracer) + tracer.log_chat_message("hello", "assistant", "agent-1") + tracer.save_run_data(mark_complete=True) + + events_path = tmp_path / "strix_runs" / "telemetry-disabled" / "events.jsonl" + assert not 
events_path.exists() + + +def test_tracer_otel_flag_overrides_global_telemetry(monkeypatch, tmp_path) -> None: + monkeypatch.chdir(tmp_path) + monkeypatch.setenv("STRIX_TELEMETRY", "0") + monkeypatch.setenv("STRIX_OTEL_TELEMETRY", "1") + + tracer = Tracer("otel-enabled") + set_global_tracer(tracer) + tracer.log_chat_message("hello", "assistant", "agent-1") + tracer.save_run_data(mark_complete=True) + + events_path = tmp_path / "strix_runs" / "otel-enabled" / "events.jsonl" + assert events_path.exists() diff --git a/tests/telemetry/test_utils.py b/tests/telemetry/test_utils.py new file mode 100644 index 0000000..3e039ac --- /dev/null +++ b/tests/telemetry/test_utils.py @@ -0,0 +1,39 @@ +from strix.telemetry.utils import prune_otel_span_attributes + + +def test_prune_otel_span_attributes_drops_high_volume_prompt_content() -> None: + attributes = { + "gen_ai.operation.name": "openai.chat", + "gen_ai.request.model": "gpt-5.2", + "gen_ai.prompt.0.role": "system", + "gen_ai.prompt.0.content": "a" * 20_000, + "gen_ai.completion.0.content": "b" * 10_000, + "llm.input_messages.0.content": "c" * 5_000, + "llm.output_messages.0.content": "d" * 5_000, + "llm.input": "x" * 3_000, + "llm.output": "y" * 3_000, + } + + pruned = prune_otel_span_attributes(attributes) + + assert "gen_ai.prompt.0.content" not in pruned + assert "gen_ai.completion.0.content" not in pruned + assert "llm.input_messages.0.content" not in pruned + assert "llm.output_messages.0.content" not in pruned + assert "llm.input" not in pruned + assert "llm.output" not in pruned + assert pruned["gen_ai.operation.name"] == "openai.chat" + assert pruned["gen_ai.prompt.0.role"] == "system" + assert pruned["strix.filtered_attributes_count"] == 6 + + +def test_prune_otel_span_attributes_keeps_metadata_when_nothing_is_dropped() -> None: + attributes = { + "gen_ai.operation.name": "openai.chat", + "gen_ai.request.model": "gpt-5.2", + "gen_ai.prompt.0.role": "user", + } + + pruned = prune_otel_span_attributes(attributes) + 
+ assert pruned == attributes From f860b2f8e2d3a1ed0261534a97dfe331a58c3db7 Mon Sep 17 00:00:00 2001 From: Alex <94366726+Hurleveur@users.noreply.github.com> Date: Wed, 11 Mar 2026 14:33:54 +0100 Subject: [PATCH 38/43] Change VERTEXAI_LOCATION from 'us-central1' to 'global' us-central1 doesn't have access to the latest gemini models like gemini-3-flash-preview --- docs/llm-providers/vertex.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/llm-providers/vertex.mdx b/docs/llm-providers/vertex.mdx index 18c6ecc..d7ed971 100644 --- a/docs/llm-providers/vertex.mdx +++ b/docs/llm-providers/vertex.mdx @@ -44,7 +44,7 @@ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json" ```bash export VERTEXAI_PROJECT="your-project-id" -export VERTEXAI_LOCATION="us-central1" +export VERTEXAI_LOCATION="global" ``` ## Prerequisites From f71e34dd0f0149843de1c4345c5ce9799493fbcd Mon Sep 17 00:00:00 2001 From: Ahmed Allam <49919286+0xallam@users.noreply.github.com> Date: Wed, 11 Mar 2026 14:16:59 -0700 Subject: [PATCH 39/43] Update web search model name to 'sonar-reasoning-pro' --- strix/tools/web_search/web_search_actions.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/strix/tools/web_search/web_search_actions.py b/strix/tools/web_search/web_search_actions.py index 52f00a9..f2b6fcf 100644 --- a/strix/tools/web_search/web_search_actions.py +++ b/strix/tools/web_search/web_search_actions.py @@ -46,7 +46,7 @@ def web_search(query: str) -> dict[str, Any]: headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"} payload = { - "model": "sonar-reasoning", + "model": "sonar-reasoning-pro", "messages": [ {"role": "system", "content": SYSTEM_PROMPT}, {"role": "user", "content": query}, From 7dde988efcf6ca3a413a8ed03dacda05031fffdd Mon Sep 17 00:00:00 2001 From: 0xallam Date: Sat, 14 Mar 2026 11:31:40 -0700 Subject: [PATCH 40/43] fix: web_search tool not loading when API key is in config file The perplexity API 
key check in strix/tools/__init__.py used Config.get() which only checks os.environ. At import time, the config file (~/.strix/cli-config.json) hasn't been applied to env vars yet, so the check always returned False. Replace with _has_perplexity_api() that checks os.environ first (fast path for SaaS/env var), then falls back to Config.load() which reads the config file directly. --- strix/tools/__init__.py | 19 ++++++++++++++++--- 1 file changed, 16 insertions(+), 3 deletions(-) diff --git a/strix/tools/__init__.py b/strix/tools/__init__.py index 1c49472..8e92f6c 100644 --- a/strix/tools/__init__.py +++ b/strix/tools/__init__.py @@ -24,9 +24,22 @@ from .registry import ( SANDBOX_MODE = os.getenv("STRIX_SANDBOX_MODE", "false").lower() == "true" -HAS_PERPLEXITY_API = bool(Config.get("perplexity_api_key")) -DISABLE_BROWSER = (Config.get("strix_disable_browser") or "false").lower() == "true" +def _is_browser_disabled() -> bool: + if os.getenv("STRIX_DISABLE_BROWSER", "").lower() == "true": + return True + val: str = Config.load().get("env", {}).get("STRIX_DISABLE_BROWSER", "") + return str(val).lower() == "true" + + +DISABLE_BROWSER = _is_browser_disabled() + + +def _has_perplexity_api() -> bool: + if os.getenv("PERPLEXITY_API_KEY"): + return True + return bool(Config.load().get("env", {}).get("PERPLEXITY_API_KEY")) + if not SANDBOX_MODE: from .agents_graph import * # noqa: F403 @@ -43,7 +56,7 @@ if not SANDBOX_MODE: from .thinking import * # noqa: F403 from .todo import * # noqa: F403 - if HAS_PERPLEXITY_API: + if _has_perplexity_api(): from .web_search import * # noqa: F403 else: if not DISABLE_BROWSER: From 140486409795bf0c23d6f4065ad4cf20026a8330 Mon Sep 17 00:00:00 2001 From: 0xallam Date: Sat, 14 Mar 2026 11:21:04 -0700 Subject: [PATCH 41/43] feat: add interactive mode for agent loop MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Re-architects the agent loop to support interactive (chat-like) mode where text-only 
responses pause execution and wait for user input, while tool-call responses continue looping autonomously. - Add `interactive` flag to LLMConfig (default False, no regression) - Add configurable `waiting_timeout` to AgentState (0 = disabled) - _process_iteration returns None for text-only → agent_loop pauses - Conditional system prompt: interactive allows natural text responses - Skip Continue the task. injection in interactive mode - Sub-agents inherit interactive from parent (300s auto-resume timeout) - Root interactive agents wait indefinitely for user input (timeout=0) - TUI sets interactive=True; CLI unchanged (non_interactive=True) --- strix/agents/StrixAgent/system_prompt.jinja | 25 ++++++++++++++ strix/agents/base_agent.py | 34 +++++++++++++------ strix/agents/state.py | 6 +++- strix/interface/cli.py | 1 - strix/interface/tui.py | 2 +- strix/llm/config.py | 3 ++ strix/llm/llm.py | 3 +- .../agents_graph/agents_graph_actions.py | 21 +++++++++--- 8 files changed, 75 insertions(+), 20 deletions(-) diff --git a/strix/agents/StrixAgent/system_prompt.jinja b/strix/agents/StrixAgent/system_prompt.jinja index 36c8850..9edd62e 100644 --- a/strix/agents/StrixAgent/system_prompt.jinja +++ b/strix/agents/StrixAgent/system_prompt.jinja @@ -21,6 +21,18 @@ INTER-AGENT MESSAGES: - NEVER echo agent_identity blocks; treat them as internal metadata for identity only. Do not include them in outputs or tool calls. - Minimize inter-agent messaging: only message when essential for coordination or assistance; avoid routine status updates; batch non-urgent information; prefer parent/child completion flows and shared artifacts over messaging +{% if interactive %} +INTERACTIVE BEHAVIOR: +- You are in an interactive conversation with a user +- CRITICAL: A message WITHOUT a tool call IMMEDIATELY STOPS execution and waits for user input. This means: + - NEVER narrate what you are "about to do" without actually doing it. Statements like "I'll now launch the browser..." 
or "Let me scan the target..." WITHOUT a tool call will HALT your work. + - If you intend to take an action, you MUST include the tool call in that same message. Describe what you're doing AND call the tool together. + - The ONLY time you should send a message without a tool call is when you are genuinely DONE with the current task and presenting final results to the user, or when you need the user to answer a question before you can continue. +- While working on a task, every single message MUST contain a tool call — this is what keeps execution moving +- You may include brief explanatory text alongside the tool call +- Respond naturally when the user asks questions or gives instructions +- NEVER send empty messages — if you have nothing to do or say, call the wait_for_message tool +{% else %} AUTONOMOUS BEHAVIOR: - Work autonomously by default - You should NOT ask for user input or confirmation - you should always proceed with your task autonomously. @@ -28,6 +40,7 @@ AUTONOMOUS BEHAVIOR: - NEVER send an empty or blank message. If you have no content to output or need to wait (for user input, subagent results, or any other reason), you MUST call the wait_for_message tool (or another appropriate tool) instead of emitting an empty response. - If there is nothing to execute and no user query to answer any more: do NOT send filler/repetitive text — either call wait_for_message or finish your work (subagents: agent_finish; root: finish_scan) - While the agent loop is running, almost every output MUST be a tool call. Do NOT send plain text messages; act via tools. If idle, use wait_for_message; when done, use agent_finish (subagents) or finish_scan (root) +{% endif %} @@ -307,7 +320,11 @@ Tool call format: CRITICAL RULES: +{% if interactive %} +0. When using tools, include exactly one tool call per message. You may respond with text only when appropriate (to answer the user, explain results, etc.). +{% else %} 0. 
While active in the agent loop, EVERY message you output MUST be a single tool call. Do not send plain text-only responses. +{% endif %} 1. Exactly one tool call per message — never include more than one ... block in a single LLM message. 2. Tool call must be last in message 3. EVERY tool call MUST end with . This is MANDATORY. Never omit the closing tag. End your response immediately after . @@ -315,7 +332,11 @@ CRITICAL RULES: 5. When sending ANY multi-line content in tool parameters, use real newlines (actual line breaks). Do NOT emit literal "\n" sequences. Literal "\n" instead of real line breaks will cause tools to fail. 6. Tool names must match exactly the tool "name" defined (no module prefixes, dots, or variants). 7. Parameters must use value exactly. Do NOT pass parameters as JSON or key:value lines. Do NOT add quotes/braces around values. +{% if interactive %} +8. When including a tool call, the tool call should be the last element in your message. You may include brief explanatory text before it. +{% else %} 8. Do NOT wrap tool calls in markdown/code fences or add any text before or after the tool block. +{% endif %} CORRECT format — use this EXACTLY: @@ -336,7 +357,11 @@ Do NOT emit any extra XML tags in your output. In particular: - NO ... or ... blocks - NO ... or ... blocks - NO ... or ... wrappers +{% if not interactive %} If you need to reason, use the think tool. Your raw output must contain ONLY the tool call — no surrounding XML tags. +{% else %} +If you need to reason, use the think tool. When using tools, do not add surrounding XML tags. +{% endif %} Notice: use NOT , use NOT , use NOT . 
diff --git a/strix/agents/base_agent.py b/strix/agents/base_agent.py index 99d0332..74fe21e 100644 --- a/strix/agents/base_agent.py +++ b/strix/agents/base_agent.py @@ -56,7 +56,6 @@ class BaseAgent(metaclass=AgentMeta): self.config = config self.local_sources = config.get("local_sources", []) - self.non_interactive = config.get("non_interactive", False) if "max_iterations" in config: self.max_iterations = config["max_iterations"] @@ -74,6 +73,9 @@ class BaseAgent(metaclass=AgentMeta): max_iterations=self.max_iterations, ) + self.interactive = getattr(self.llm_config, "interactive", False) + if self.interactive and self.state.parent_id is None: + self.state.waiting_timeout = 0 self.llm = LLM(self.llm_config, agent_name=self.agent_name) with contextlib.suppress(Exception): @@ -169,7 +171,7 @@ class BaseAgent(metaclass=AgentMeta): continue if self.state.should_stop(): - if self.non_interactive: + if not self.interactive: return self.state.final_result or {} await self._enter_waiting_state(tracer) continue @@ -213,8 +215,12 @@ class BaseAgent(metaclass=AgentMeta): should_finish = await iteration_task self._current_task = None + if should_finish is None and self.interactive: + await self._enter_waiting_state(tracer, text_response=True) + continue + if should_finish: - if self.non_interactive: + if not self.interactive: self.state.set_completed({"success": True}) if tracer: tracer.update_agent_status(self.state.agent_id, "completed") @@ -230,7 +236,7 @@ class BaseAgent(metaclass=AgentMeta): self.state.add_message( "assistant", f"{partial_content}\n\n[ABORTED BY USER]" ) - if self.non_interactive: + if not self.interactive: raise await self._enter_waiting_state(tracer, error_occurred=False, was_cancelled=True) continue @@ -243,7 +249,7 @@ class BaseAgent(metaclass=AgentMeta): except (RuntimeError, ValueError, TypeError) as e: if not await self._handle_iteration_error(e, tracer): - if self.non_interactive: + if not self.interactive: self.state.set_completed({"success": 
False, "error": str(e)}) if tracer: tracer.update_agent_status(self.state.agent_id, "failed") @@ -283,11 +289,14 @@ class BaseAgent(metaclass=AgentMeta): task_completed: bool = False, error_occurred: bool = False, was_cancelled: bool = False, + text_response: bool = False, ) -> None: self.state.enter_waiting_state() if tracer: - if task_completed: + if text_response: + tracer.update_agent_status(self.state.agent_id, "waiting_for_input") + elif task_completed: tracer.update_agent_status(self.state.agent_id, "completed") elif error_occurred: tracer.update_agent_status(self.state.agent_id, "error") @@ -296,6 +305,9 @@ class BaseAgent(metaclass=AgentMeta): else: tracer.update_agent_status(self.state.agent_id, "stopped") + if text_response: + return + if task_completed: self.state.add_message( "assistant", @@ -352,7 +364,7 @@ class BaseAgent(metaclass=AgentMeta): self.state.add_message("user", task) - async def _process_iteration(self, tracer: Optional["Tracer"]) -> bool: + async def _process_iteration(self, tracer: Optional["Tracer"]) -> bool | None: final_response = None async for response in self.llm.generate(self.state.get_conversation_history()): @@ -398,7 +410,7 @@ class BaseAgent(metaclass=AgentMeta): if actions: return await self._execute_actions(actions, tracer) - return False + return None async def _execute_actions(self, actions: list[Any], tracer: Optional["Tracer"]) -> bool: """Execute actions and return True if agent should finish.""" @@ -426,7 +438,7 @@ class BaseAgent(metaclass=AgentMeta): self.state.set_completed({"success": True}) if tracer: tracer.update_agent_status(self.state.agent_id, "completed") - if self.non_interactive and self.state.parent_id is None: + if not self.interactive and self.state.parent_id is None: return True return True @@ -526,7 +538,7 @@ class BaseAgent(metaclass=AgentMeta): error_details = error.details self.state.add_error(error_msg) - if self.non_interactive: + if not self.interactive: self.state.set_completed({"success": 
False, "error": error_msg}) if tracer: tracer.update_agent_status(self.state.agent_id, "failed", error_msg) @@ -561,7 +573,7 @@ class BaseAgent(metaclass=AgentMeta): error_details = getattr(error, "details", None) self.state.add_error(error_msg) - if self.non_interactive: + if not self.interactive: self.state.set_completed({"success": False, "error": error_msg}) if tracer: tracer.update_agent_status(self.state.agent_id, "failed", error_msg) diff --git a/strix/agents/state.py b/strix/agents/state.py index 6af402e..da04ee7 100644 --- a/strix/agents/state.py +++ b/strix/agents/state.py @@ -25,6 +25,7 @@ class AgentState(BaseModel): waiting_for_input: bool = False llm_failed: bool = False waiting_start_time: datetime | None = None + waiting_timeout: int = 600 final_result: dict[str, Any] | None = None max_iterations_warning_sent: bool = False @@ -116,6 +117,9 @@ class AgentState(BaseModel): return self.iteration >= int(self.max_iterations * threshold) def has_waiting_timeout(self) -> bool: + if self.waiting_timeout == 0: + return False + if not self.waiting_for_input or not self.waiting_start_time: return False @@ -128,7 +132,7 @@ class AgentState(BaseModel): return False elapsed = (datetime.now(UTC) - self.waiting_start_time).total_seconds() - return elapsed > 600 + return elapsed > self.waiting_timeout def has_empty_last_messages(self, count: int = 3) -> bool: if len(self.messages) < count: diff --git a/strix/interface/cli.py b/strix/interface/cli.py index 4b5d109..430eebc 100644 --- a/strix/interface/cli.py +++ b/strix/interface/cli.py @@ -78,7 +78,6 @@ async def run_cli(args: Any) -> None: # noqa: PLR0915 agent_config = { "llm_config": llm_config, "max_iterations": 300, - "non_interactive": True, } if getattr(args, "local_sources", None): diff --git a/strix/interface/tui.py b/strix/interface/tui.py index 1a62255..7f453ba 100644 --- a/strix/interface/tui.py +++ b/strix/interface/tui.py @@ -747,7 +747,7 @@ class StrixTUIApp(App): # type: ignore[misc] def 
_build_agent_config(self, args: argparse.Namespace) -> dict[str, Any]: scan_mode = getattr(args, "scan_mode", "deep") - llm_config = LLMConfig(scan_mode=scan_mode) + llm_config = LLMConfig(scan_mode=scan_mode, interactive=True) config = { "llm_config": llm_config, diff --git a/strix/llm/config.py b/strix/llm/config.py index a2217bb..c3a371d 100644 --- a/strix/llm/config.py +++ b/strix/llm/config.py @@ -11,6 +11,7 @@ class LLMConfig: skills: list[str] | None = None, timeout: int | None = None, scan_mode: str = "deep", + interactive: bool = False, ): resolved_model, self.api_key, self.api_base = resolve_llm_config() self.model_name = model_name or resolved_model @@ -28,3 +29,5 @@ class LLMConfig: self.timeout = timeout or int(Config.get("llm_timeout") or "300") self.scan_mode = scan_mode if scan_mode in ["quick", "standard", "deep"] else "deep" + + self.interactive = interactive diff --git a/strix/llm/llm.py b/strix/llm/llm.py index d941361..1091f3b 100644 --- a/strix/llm/llm.py +++ b/strix/llm/llm.py @@ -97,6 +97,7 @@ class LLM: result = env.get_template("system_prompt.jinja").render( get_tools_prompt=get_tools_prompt, loaded_skill_names=list(skill_content.keys()), + interactive=self.config.interactive, **skill_content, ) return str(result) @@ -186,7 +187,7 @@ class LLM: conversation_history.extend(compressed) messages.extend(compressed) - if messages[-1].get("role") == "assistant": + if messages[-1].get("role") == "assistant" and not self.config.interactive: messages.append({"role": "user", "content": "Continue the task."}) if self._is_anthropic() and self.config.enable_prompt_caching: diff --git a/strix/tools/agents_graph/agents_graph_actions.py b/strix/tools/agents_graph/agents_graph_actions.py index dd0e569..a351ee3 100644 --- a/strix/tools/agents_graph/agents_graph_actions.py +++ b/strix/tools/agents_graph/agents_graph_actions.py @@ -227,26 +227,37 @@ def create_agent( from strix.agents.state import AgentState from strix.llm.config import LLMConfig - state = 
AgentState(task=task, agent_name=name, parent_id=parent_id, max_iterations=300) - parent_agent = _agent_instances.get(parent_id) timeout = None scan_mode = "deep" + interactive = False if parent_agent and hasattr(parent_agent, "llm_config"): if hasattr(parent_agent.llm_config, "timeout"): timeout = parent_agent.llm_config.timeout if hasattr(parent_agent.llm_config, "scan_mode"): scan_mode = parent_agent.llm_config.scan_mode + interactive = getattr(parent_agent.llm_config, "interactive", False) - llm_config = LLMConfig(skills=skill_list, timeout=timeout, scan_mode=scan_mode) + state = AgentState( + task=task, + agent_name=name, + parent_id=parent_id, + max_iterations=300, + waiting_timeout=300 if interactive else 600, + ) + + llm_config = LLMConfig( + skills=skill_list, + timeout=timeout, + scan_mode=scan_mode, + interactive=interactive, + ) agent_config = { "llm_config": llm_config, "state": state, } - if parent_agent and hasattr(parent_agent, "non_interactive"): - agent_config["non_interactive"] = parent_agent.non_interactive agent = StrixAgent(agent_config) From f0f8f3d4cc3289900060db6f5822327bd72e7a1d Mon Sep 17 00:00:00 2001 From: Ahmed Allam <49919286+0xallam@users.noreply.github.com> Date: Tue, 17 Mar 2026 22:00:34 -0700 Subject: [PATCH 42/43] Add tip about Strix integration with GitHub Actions --- README.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/README.md b/README.md index e1e8818..8dbfa9b 100644 --- a/README.md +++ b/README.md @@ -32,6 +32,10 @@ + +> [!TIP] +> **New!** Strix integrates seamlessly with GitHub Actions and CI/CD pipelines. Automatically scan for vulnerabilities on every pull request and block insecure code before it reaches production! 
+ --- From 86341597c15198dc4d9418f2b2391583e4fe79f5 Mon Sep 17 00:00:00 2001 From: alex s <46074070+bearsyankees@users.noreply.github.com> Date: Thu, 19 Mar 2026 17:47:29 -0600 Subject: [PATCH 43/43] feat: add skills for specific tools (#366) Co-authored-by: 0xallam --- docs/advanced/skills.mdx | 15 ++ strix/interface/tool_components/__init__.py | 2 + .../tool_components/load_skill_renderer.py | 33 +++++ strix/llm/llm.py | 36 ++++- strix/skills/README.md | 1 + strix/skills/__init__.py | 24 +++ strix/skills/tooling/ffuf.md | 66 +++++++++ strix/skills/tooling/httpx.md | 77 ++++++++++ strix/skills/tooling/katana.md | 76 ++++++++++ strix/skills/tooling/naabu.md | 68 +++++++++ strix/skills/tooling/nmap.md | 66 +++++++++ strix/skills/tooling/nuclei.md | 67 +++++++++ strix/skills/tooling/semgrep.md | 72 +++++++++ strix/skills/tooling/sqlmap.md | 67 +++++++++ strix/skills/tooling/subfinder.md | 66 +++++++++ strix/tools/__init__.py | 1 + .../agents_graph/agents_graph_actions.py | 27 +--- strix/tools/load_skill/__init__.py | 4 + strix/tools/load_skill/load_skill_actions.py | 71 +++++++++ .../load_skill/load_skill_actions_schema.xml | 33 +++++ tests/skills/__init__.py | 1 + tests/tools/test_load_skill_tool.py | 139 ++++++++++++++++++ 22 files changed, 986 insertions(+), 26 deletions(-) create mode 100644 strix/interface/tool_components/load_skill_renderer.py create mode 100644 strix/skills/tooling/ffuf.md create mode 100644 strix/skills/tooling/httpx.md create mode 100644 strix/skills/tooling/katana.md create mode 100644 strix/skills/tooling/naabu.md create mode 100644 strix/skills/tooling/nmap.md create mode 100644 strix/skills/tooling/nuclei.md create mode 100644 strix/skills/tooling/semgrep.md create mode 100644 strix/skills/tooling/sqlmap.md create mode 100644 strix/skills/tooling/subfinder.md create mode 100644 strix/tools/load_skill/__init__.py create mode 100644 strix/tools/load_skill/load_skill_actions.py create mode 100644 
strix/tools/load_skill/load_skill_actions_schema.xml create mode 100644 tests/skills/__init__.py create mode 100644 tests/tools/test_load_skill_tool.py diff --git a/docs/advanced/skills.mdx b/docs/advanced/skills.mdx index 5345600..38aacd0 100644 --- a/docs/advanced/skills.mdx +++ b/docs/advanced/skills.mdx @@ -81,6 +81,21 @@ Protocol-specific testing techniques. | --------- | ------------------------------------------------ | | `graphql` | GraphQL introspection, batching, resolver issues | +### Tooling + +Sandbox CLI playbooks for core recon and scanning tools. + +| Skill | Coverage | +| ----------- | ------------------------------------------------------- | +| `nmap` | Port/service scan syntax and high-signal scan patterns | +| `nuclei` | Template selection, severity filtering, and rate tuning | +| `httpx` | HTTP probing and fingerprint output patterns | +| `ffuf` | Wordlist fuzzing, matcher/filter strategy, recursion | +| `subfinder` | Passive subdomain enumeration and source control | +| `naabu` | Fast port scanning with explicit rate/verify controls | +| `katana` | Crawl depth/JS/known-files behavior and pitfalls | +| `sqlmap` | SQLi workflow for enumeration and controlled extraction | + ## Skill Structure Each skill is a Markdown file with YAML frontmatter for metadata: diff --git a/strix/interface/tool_components/__init__.py b/strix/interface/tool_components/__init__.py index cb8aeea..c8b6007 100644 --- a/strix/interface/tool_components/__init__.py +++ b/strix/interface/tool_components/__init__.py @@ -4,6 +4,7 @@ from . 
import ( browser_renderer, file_edit_renderer, finish_renderer, + load_skill_renderer, notes_renderer, proxy_renderer, python_renderer, @@ -28,6 +29,7 @@ __all__ = [ "file_edit_renderer", "finish_renderer", "get_tool_renderer", + "load_skill_renderer", "notes_renderer", "proxy_renderer", "python_renderer", diff --git a/strix/interface/tool_components/load_skill_renderer.py b/strix/interface/tool_components/load_skill_renderer.py new file mode 100644 index 0000000..41a1868 --- /dev/null +++ b/strix/interface/tool_components/load_skill_renderer.py @@ -0,0 +1,33 @@ +from typing import Any, ClassVar + +from rich.text import Text +from textual.widgets import Static + +from .base_renderer import BaseToolRenderer +from .registry import register_tool_renderer + + +@register_tool_renderer +class LoadSkillRenderer(BaseToolRenderer): + tool_name: ClassVar[str] = "load_skill" + css_classes: ClassVar[list[str]] = ["tool-call", "load-skill-tool"] + + @classmethod + def render(cls, tool_data: dict[str, Any]) -> Static: + args = tool_data.get("args", {}) + status = tool_data.get("status", "completed") + + requested = args.get("skills", "") + + text = Text() + text.append("◇ ", style="#10b981") + text.append("loading skill", style="dim") + + if requested: + text.append(" ") + text.append(requested, style="#10b981") + elif not tool_data.get("result"): + text.append("\n ") + text.append("Loading...", style="dim") + + return Static(text, classes=cls.get_css_classes(status)) diff --git a/strix/llm/llm.py b/strix/llm/llm.py index 1091f3b..fe1758f 100644 --- a/strix/llm/llm.py +++ b/strix/llm/llm.py @@ -63,6 +63,7 @@ class LLM: self.config = config self.agent_name = agent_name self.agent_id: str | None = None + self._active_skills: list[str] = list(config.skills or []) self._total_stats = RequestStats() self.memory_compressor = MemoryCompressor(model_name=config.litellm_model) self.system_prompt = self._load_system_prompt(agent_name) @@ -87,10 +88,7 @@ class LLM: 
autoescape=select_autoescape(enabled_extensions=(), default_for_string=False), ) - skills_to_load = [ - *list(self.config.skills or []), - f"scan_modes/{self.config.scan_mode}", - ] + skills_to_load = self._get_skills_to_load() skill_content = load_skills(skills_to_load) env.globals["get_skill"] = lambda name: skill_content.get(name, "") @@ -104,6 +102,36 @@ class LLM: except Exception: # noqa: BLE001 return "" + def _get_skills_to_load(self) -> list[str]: + ordered_skills = [*self._active_skills] + ordered_skills.append(f"scan_modes/{self.config.scan_mode}") + + deduped: list[str] = [] + seen: set[str] = set() + for skill_name in ordered_skills: + if skill_name not in seen: + deduped.append(skill_name) + seen.add(skill_name) + + return deduped + + def add_skills(self, skill_names: list[str]) -> list[str]: + added: list[str] = [] + for skill_name in skill_names: + if not skill_name or skill_name in self._active_skills: + continue + self._active_skills.append(skill_name) + added.append(skill_name) + + if not added: + return [] + + updated_prompt = self._load_system_prompt(self.agent_name) + if updated_prompt: + self.system_prompt = updated_prompt + + return added + def set_agent_identity(self, agent_name: str | None, agent_id: str | None) -> None: if agent_name: self.agent_name = agent_name diff --git a/strix/skills/README.md b/strix/skills/README.md index 4543cd5..1d4f71d 100644 --- a/strix/skills/README.md +++ b/strix/skills/README.md @@ -33,6 +33,7 @@ The skills are dynamically injected into the agent's system prompt, allowing it | **`/frameworks`** | Specific testing methods for popular frameworks e.g. 
Django, Express, FastAPI, and Next.js | | **`/technologies`** | Specialized techniques for third-party services such as Supabase, Firebase, Auth0, and payment gateways | | **`/protocols`** | Protocol-specific testing patterns for GraphQL, WebSocket, OAuth, and other communication standards | +| **`/tooling`** | Command-line playbooks for core sandbox tools (nmap, nuclei, httpx, ffuf, subfinder, naabu, katana, sqlmap) | | **`/cloud`** | Cloud provider security testing for AWS, Azure, GCP, and Kubernetes environments | | **`/reconnaissance`** | Advanced information gathering and enumeration techniques for comprehensive attack surface mapping | | **`/custom`** | Community-contributed skills for specialized or industry-specific testing scenarios | diff --git a/strix/skills/__init__.py b/strix/skills/__init__.py index c9cdf03..37ffc58 100644 --- a/strix/skills/__init__.py +++ b/strix/skills/__init__.py @@ -54,6 +54,30 @@ def validate_skill_names(skill_names: list[str]) -> dict[str, list[str]]: return {"valid": valid_skills, "invalid": invalid_skills} +def parse_skill_list(skills: str | None) -> list[str]: + if not skills: + return [] + return [s.strip() for s in skills.split(",") if s.strip()] + + +def validate_requested_skills(skill_list: list[str], max_skills: int = 5) -> str | None: + if len(skill_list) > max_skills: + return "Cannot specify more than 5 skills for an agent (use comma-separated format)" + + if not skill_list: + return None + + validation = validate_skill_names(skill_list) + if validation["invalid"]: + available_skills = list(get_all_skill_names()) + return ( + f"Invalid skills: {validation['invalid']}. 
" + f"Available skills: {', '.join(available_skills)}" + ) + + return None + + def generate_skills_description() -> str: available_skills = get_available_skills() diff --git a/strix/skills/tooling/ffuf.md b/strix/skills/tooling/ffuf.md new file mode 100644 index 0000000..0c4d1f0 --- /dev/null +++ b/strix/skills/tooling/ffuf.md @@ -0,0 +1,66 @@ +--- +name: ffuf +description: ffuf fuzzing syntax with matcher/filter strategy and non-interactive defaults. +--- + +# ffuf CLI Playbook + +Official docs: +- https://github.com/ffuf/ffuf + +Canonical syntax: +`ffuf -w -u [flags]` + +High-signal flags: +- `-u ` target URL containing `FUZZ` +- `-w ` wordlist input (supports `KEYWORD` mapping via `-w file:KEYWORD`) +- `-mc ` match status codes +- `-fc ` filter status codes +- `-fs ` filter by body size +- `-ac` auto-calibration +- `-t ` threads +- `-rate ` request rate +- `-timeout ` HTTP timeout +- `-x ` upstream proxy (HTTP/SOCKS) +- `-ignore-body` skip downloading response body +- `-noninteractive` disable interactive console mode +- `-recursion` and `-recursion-depth ` recursive discovery +- `-H
` custom headers +- `-X ` and `-d ` for non-GET fuzzing +- `-o -of ` structured output + +Agent-safe baseline for automation: +`ffuf -w wordlist.txt -u https://target.tld/FUZZ -mc 200,204,301,302,307,401,403,405 -ac -t 20 -rate 50 -timeout 10 -noninteractive -of json -o ffuf.json` + +Common patterns: +- Basic path fuzzing: + `ffuf -w /path/wordlist.txt -u https://target.tld/FUZZ -mc 200,204,301,302,307,401,403 -ac -t 40 -rate 200 -noninteractive` +- Vhost fuzzing: + `ffuf -w vhosts.txt -u https://target.tld -H 'Host: FUZZ.target.tld' -fs 0 -ac -noninteractive` +- Parameter value fuzzing: + `ffuf -w values.txt -u 'https://target.tld/search?q=FUZZ' -mc all -fs 0 -ac -t 30 -noninteractive` +- POST body fuzzing: + `ffuf -w payloads.txt -u https://target.tld/login -X POST -H 'Content-Type: application/x-www-form-urlencoded' -d 'username=admin&password=FUZZ' -fc 401 -noninteractive` +- Recursive discovery: + `ffuf -w dirs.txt -u https://target.tld/FUZZ -recursion -recursion-depth 2 -ac -t 30 -noninteractive` +- Proxy-instrumented run: + `ffuf -w wordlist.txt -u https://target.tld/FUZZ -x http://127.0.0.1:48080 -mc 200,301,302,403 -ac -noninteractive` + +Critical correctness rules: +- `FUZZ` must appear exactly at the mutation point in URL/header/body. +- If using `-w file:KEYWORD`, that same `KEYWORD` must be present in URL/header/body. +- Always include `-noninteractive` in agent/script execution to prevent ffuf console mode from swallowing subsequent shell commands. +- Save structured output with `-of json -o ` for deterministic parsing. + +Usage rules: +- Prefer explicit matcher/filter strategy (`-mc`/`-fc`/`-fs`) over default-only output. +- Start conservative (`-rate`, `-t`) and scale only if target tolerance is known. +- Do not use `-h`/`--help` during normal execution unless absolutely necessary. + +Failure recovery: +- If ffuf drops into interactive mode, send `C-c` and rerun with `-noninteractive`. 
+- If response noise is too high, tighten `-mc/-fc/-fs` instead of increasing load. +- If runtime is too long, lower `-rate/-t` and tighten scope. + +If uncertain, query web_search with: +`site:github.com/ffuf/ffuf README` diff --git a/strix/skills/tooling/httpx.md b/strix/skills/tooling/httpx.md new file mode 100644 index 0000000..50fcf53 --- /dev/null +++ b/strix/skills/tooling/httpx.md @@ -0,0 +1,77 @@ +--- +name: httpx +description: ProjectDiscovery httpx probing syntax, exact probe flags, and automation-safe output patterns. +--- + +# httpx CLI Playbook + +Official docs: +- https://docs.projectdiscovery.io/opensource/httpx/usage +- https://docs.projectdiscovery.io/opensource/httpx/running +- https://github.com/projectdiscovery/httpx + +Canonical syntax: +`httpx [flags]` + +High-signal flags: +- `-u, -target <target>` single target +- `-l, -list <file>` target list +- `-nf, -no-fallback` probe both HTTP and HTTPS +- `-nfs, -no-fallback-scheme` do not auto-switch schemes +- `-sc` status code +- `-title` page title +- `-server, -web-server` server header +- `-td, -tech-detect` technology detection +- `-fr, -follow-redirects` follow redirects +- `-mc <codes>` / `-fc <codes>` match or filter status codes +- `-path <paths>` probe specific paths +- `-p, -ports <ports>` probe custom ports +- `-proxy, -http-proxy <url>` proxy target requests +- `-tlsi, -tls-impersonate` experimental TLS impersonation +- `-j, -json` JSONL output +- `-sr, -store-response` store request/response artifacts +- `-srd, -store-response-dir <dir>` custom directory for stored artifacts +- `-silent` compact output +- `-rl <rate>` requests/second cap +- `-t <threads>` threads +- `-timeout <seconds>` request timeout +- `-retries <n>` retry attempts +- `-o <file>` output file + +Agent-safe baseline for automation: +`httpx -l hosts.txt -sc -title -server -td -fr -timeout 10 -retries 1 -rl 50 -t 25 -silent -j -o httpx.jsonl` + +Common patterns: +- Quick live+fingerprint check: + `httpx -l hosts.txt -sc -title -server -td -silent -o httpx.txt` +- Probe known admin paths: + `httpx -l
hosts.txt -path /,/login,/admin -sc -title -silent -j -o httpx_paths.jsonl` +- Probe both schemes explicitly: + `httpx -l hosts.txt -nf -sc -title -silent` +- Vhost detection pass: + `httpx -l hosts.txt -vhost -sc -title -silent -j -o httpx_vhost.jsonl` +- Proxy-instrumented probing: + `httpx -l hosts.txt -sc -title -proxy http://127.0.0.1:48080 -silent -j -o httpx_proxy.jsonl` +- Response-storage pass for downstream content parsing: + `httpx -l hosts.txt -fr -sr -srd recon/httpx_store -sc -title -server -cl -ct -location -probe -silent` + +Critical correctness rules: +- For machine parsing, prefer `-j -o <file>`. +- Keep `-rl` and `-t` explicit for reproducible throughput. +- Use `-nf` when you need dual-scheme probing from host-only input. +- When using `-path` or `-ports`, keep scope tight to avoid accidental scan inflation. +- Use `-sr -srd <dir>` when later steps need raw response artifacts (JS/route extraction, grepping, replay). + +Usage rules: +- Use `-silent` for pipeline-friendly output. +- Use `-mc/-fc` when downstream steps depend on specific response classes. +- Prefer `-proxy` flag over global proxy env vars when only httpx traffic should be proxied. +- Do not use `-h`/`--help` for routine runs unless absolutely necessary. + +Failure recovery: +- If too many timeouts occur, reduce `-rl/-t` and/or increase `-timeout`. +- If output is noisy, add `-fc` filters or `-fd` duplicate filtering. +- If HTTPS-only probing misses HTTP services, rerun with `-nf` (and avoid `-nfs`). + +If uncertain, query web_search with: +`site:docs.projectdiscovery.io httpx usage` diff --git a/strix/skills/tooling/katana.md b/strix/skills/tooling/katana.md new file mode 100644 index 0000000..258e8e0 --- /dev/null +++ b/strix/skills/tooling/katana.md @@ -0,0 +1,76 @@ +--- +name: katana +description: Katana crawler syntax, depth/js/known-files behavior, and stable concurrency controls.
+--- + +# Katana CLI Playbook + +Official docs: +- https://docs.projectdiscovery.io/opensource/katana/usage +- https://docs.projectdiscovery.io/opensource/katana/running +- https://github.com/projectdiscovery/katana + +Canonical syntax: +`katana [flags]` + +High-signal flags: +- `-u, -list <url>` target URL(s) +- `-d, -depth <depth>` crawl depth +- `-jc, -js-crawl` parse JavaScript-discovered endpoints +- `-jsl, -jsluice` deeper JS parsing (memory intensive) +- `-kf, -known-files <mode>` known-file crawling mode +- `-proxy <url>` explicit proxy setting +- `-c, -concurrency <n>` concurrent fetchers +- `-p, -parallelism <n>` concurrent input targets +- `-rl, -rate-limit <rate>` request rate limit +- `-timeout <seconds>` request timeout +- `-retry <n>` retry count +- `-ef, -extension-filter <extensions>` extension exclusions +- `-tlsi, -tls-impersonate` experimental JA3/TLS impersonation +- `-hl, -headless` enable hybrid headless crawling +- `-sc, -system-chrome` use local Chrome for headless mode +- `-ho, -headless-options <options>` extra Chrome options (for example proxy-server) +- `-nos, -no-sandbox` run Chrome headless with no-sandbox +- `-noi, -no-incognito` disable incognito in headless mode +- `-cdd, -chrome-data-dir <dir>` persist browser profile/session +- `-xhr, -xhr-extraction` include XHR endpoints in JSONL output +- `-silent`, `-j, -jsonl`, `-o <file>` output controls + +Agent-safe baseline for automation: +`mkdir -p crawl && katana -u https://target.tld -d 3 -jc -kf robotstxt -c 10 -p 10 -rl 50 -timeout 10 -retry 1 -ef png,jpg,jpeg,gif,svg,css,woff,woff2,ttf,eot,map -silent -j -o crawl/katana.jsonl` + +Common patterns: +- Fast crawl baseline: + `katana -u https://target.tld -d 3 -jc -silent` +- Deeper JS-aware crawl: + `katana -u https://target.tld -d 5 -jc -jsl -kf all -c 10 -p 10 -rl 50 -o katana_urls.txt` +- Multi-target run with JSONL output: + `katana -list urls.txt -d 3 -jc -silent -j -o katana.jsonl` +- Headless crawl with local Chrome: + `katana -u https://target.tld -hl -sc -nos -xhr -j -o crawl/katana_headless.jsonl` +-
Headless crawl through proxy: + `katana -u https://target.tld -hl -sc -ho proxy-server=http://127.0.0.1:48080 -j -o crawl/katana_proxy.jsonl` + +Critical correctness rules: +- `-kf` must be followed by one of `all`, `robotstxt`, or `sitemapxml`. +- Use documented `-hl` for headless mode. +- `-proxy` expects a single proxy URL string (for example `http://127.0.0.1:8080`). +- `-ho` expects comma-separated Chrome options (example: `-ho --disable-gpu,proxy-server=http://127.0.0.1:8080`). +- For `-kf`, keep depth at least `-d 3` so known files are fully covered. +- If writing to a file, ensure parent directory exists before `-o`. + +Usage rules: +- Keep `-d`, `-c`, `-p`, and `-rl` explicit for reproducible runs. +- Use `-ef` early to reduce static-file noise before fuzzing. +- Prefer `-proxy` over environment proxy variables when proxying only Katana traffic. +- Use `-hc` only for one-time diagnostics, not routine crawling loops. +- Do not use `-h`/`--help` for routine runs unless absolutely necessary. + +Failure recovery: +- If crawl runs too long, lower `-d` and optionally add `-ct`. +- If memory spikes, disable `-jsl` and lower `-c/-p`. +- If headless fails with Chrome errors, drop `-sc` or install system Chrome. +- If output is noisy, tighten scope and add `-ef` filters. + +If uncertain, query web_search with: +`site:docs.projectdiscovery.io katana usage` diff --git a/strix/skills/tooling/naabu.md b/strix/skills/tooling/naabu.md new file mode 100644 index 0000000..f39d44b --- /dev/null +++ b/strix/skills/tooling/naabu.md @@ -0,0 +1,68 @@ +--- +name: naabu +description: Naabu port-scanning syntax with host input, scan-type, verification, and rate controls. 
+---
+
+# Naabu CLI Playbook
+
+Official docs:
+- https://docs.projectdiscovery.io/opensource/naabu/usage
+- https://docs.projectdiscovery.io/opensource/naabu/running
+- https://github.com/projectdiscovery/naabu
+
+Canonical syntax:
+`naabu [flags]`
+
+High-signal flags:
+- `-host <host>` single host
+- `-list, -l <file>` hosts list
+- `-p <ports>` explicit ports (supports ranges)
+- `-top-ports <n>` top ports profile
+- `-exclude-ports <ports>` exclusions
+- `-scan-type <s|c>` SYN (`s`) or CONNECT (`c`) scan
+- `-Pn` skip host discovery
+- `-rate <pps>` packets per second
+- `-c <n>` worker count
+- `-timeout <ms>` per-probe timeout in milliseconds
+- `-retries <n>` retry attempts
+- `-proxy <host:port>` SOCKS5 proxy
+- `-verify` verify discovered open ports
+- `-j, -json` JSONL output
+- `-silent` compact output
+- `-o <file>` output file
+
+Agent-safe baseline for automation:
+`naabu -list hosts.txt -top-ports 100 -scan-type c -Pn -rate 300 -c 25 -timeout 1000 -retries 1 -verify -silent -j -o naabu.jsonl`
+
+Common patterns:
+- Top ports with controlled rate:
+  `naabu -list hosts.txt -top-ports 100 -scan-type c -rate 300 -c 25 -timeout 1000 -retries 1 -verify -silent -o naabu.txt`
+- Focused web-ports sweep:
+  `naabu -list hosts.txt -p 80,443,8080,8443 -scan-type c -rate 300 -c 25 -timeout 1000 -retries 1 -verify -silent`
+- Single-host quick check:
+  `naabu -host target.tld -p 22,80,443 -scan-type c -rate 300 -c 25 -timeout 1000 -retries 1 -verify`
+- Root SYN mode (if available):
+  `sudo naabu -list hosts.txt -top-ports 100 -scan-type s -rate 500 -c 25 -timeout 1000 -retries 1 -verify -silent`
+
+Critical correctness rules:
+- Use `-scan-type c` (CONNECT) when running without root/privileged raw-socket access; SYN (`-scan-type s`) requires privileges.
+- Always set `-timeout` explicitly.
+- Set `-rate` explicitly to avoid unstable or noisy scans.
+- `-timeout` is in milliseconds, not seconds.
+- Keep port scope tight: prefer explicit important ports or a small `-top-ports` value unless broader coverage is explicitly required.
+- Do not spam traffic; start with the smallest useful port set and conservative rate/worker settings.
+- Prefer `-verify` before handing ports to follow-up scanners.
+
+Usage rules:
+- Keep host discovery behavior explicit (`-Pn` or default discovery).
+- Use `-j -o <file>` for automation pipelines.
+- Prefer `-p 22,80,443,8080,8443` or `-top-ports 100` before considering larger sweeps.
+- Do not use `-h`/`--help` for normal flow unless absolutely necessary.
+
+Failure recovery:
+- If privileged socket errors occur, switch to `-scan-type c`.
+- If scans are slow or lossy, lower `-rate`, lower `-c`, and tighten `-p`/`-top-ports`.
+- If many hosts appear down, compare runs with and without `-Pn`.
+
+If uncertain, query web_search with:
+`site:docs.projectdiscovery.io naabu usage`
diff --git a/strix/skills/tooling/nmap.md b/strix/skills/tooling/nmap.md
new file mode 100644
index 0000000..831b4c6
--- /dev/null
+++ b/strix/skills/tooling/nmap.md
@@ -0,0 +1,66 @@
+---
+name: nmap
+description: Canonical Nmap CLI syntax, two-pass scanning workflow, and sandbox-safe bounded scan patterns.
+---
+
+# Nmap CLI Playbook
+
+Official docs:
+- https://nmap.org/book/man-briefoptions.html
+- https://nmap.org/book/man.html
+- https://nmap.org/book/man-performance.html
+
+Canonical syntax:
+`nmap [Scan Type(s)] [Options] {target specification}`
+
+High-signal flags:
+- `-n` skip DNS resolution
+- `-Pn` skip host discovery when ICMP/ping is filtered
+- `-sS` SYN scan (root/privileged)
+- `-sT` TCP connect scan (no raw-socket privilege)
+- `-sV` detect service versions
+- `-sC` run default NSE scripts
+- `-p <ports>` explicit ports (`-p-` for all TCP ports)
+- `--top-ports <n>` quick common-port sweep
+- `--open` show only hosts with open ports
+- `-T<0-5>` timing template (`-T4` common)
+- `--max-retries <n>` cap retransmissions
+- `--host-timeout