diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
index f9b9106..85919ce 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.md
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -27,7 +27,7 @@ If applicable, add screenshots to help explain your problem.
- OS: [e.g. Ubuntu 22.04]
- Strix Version or Commit: [e.g. 0.1.18]
- Python Version: [e.g. 3.12]
-- LLM Used: [e.g. GPT-5, Claude Sonnet 4]
+- LLM Used: [e.g. GPT-5, Claude Sonnet 4.6]
**Additional context**
Add any other context about the problem here.
diff --git a/README.md b/README.md
index f164532..e15d766 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@
-[](https://discord.gg/strix-ai)
+[](https://discord.gg/strix-ai)
@@ -32,6 +32,7 @@
+
> [!TIP]
> **New!** Strix integrates seamlessly with GitHub Actions and CI/CD pipelines. Automatically scan for vulnerabilities on every pull request and block insecure code before it reaches production!
@@ -72,7 +73,9 @@ Strix are autonomous AI agents that act just like real hackers - they run your c
**Prerequisites:**
- Docker (running)
-- An LLM provider key (e.g. [get OpenAI API key](https://platform.openai.com/api-keys) or use a local LLM)
+- An LLM API key:
+ - Any [supported provider](https://docs.strix.ai/llm-providers/overview) (OpenAI, Anthropic, Google, etc.)
+ - Or [Strix Router](https://models.strix.ai) — single API key for multiple providers
### Installation & First Scan
@@ -80,11 +83,8 @@ Strix are autonomous AI agents that act just like real hackers - they run your c
# Install Strix
curl -sSL https://strix.ai/install | bash
-# Or via pipx
-pipx install strix-agent
-
# Configure your AI provider
-export STRIX_LLM="openai/gpt-5"
+export STRIX_LLM="openai/gpt-5" # or "strix/gpt-5" via Strix Router (https://models.strix.ai)
export LLM_API_KEY="your-api-key"
# Run your first security assessment
@@ -96,6 +96,20 @@ strix --target ./app-directory
---
+## ☁️ Strix Platform
+
+Try the Strix full-stack security platform at **[app.strix.ai](https://app.strix.ai)** — sign up for free, connect your repos and domains, and launch a pentest in minutes.
+
+- **Validated findings with PoCs** and reproduction steps
+- **One-click autofix** as ready-to-merge pull requests
+- **Continuous monitoring** across code, cloud, and infrastructure
+- **Integrations** with GitHub, Slack, Jira, Linear, and CI/CD pipelines
+- **Continuous learning** that builds on past findings and remediations
+
+[**Start your first pentest →**](https://app.strix.ai)
+
+---
+
## ✨ Features
### Agentic Security Tools
@@ -229,11 +243,15 @@ export STRIX_REASONING_EFFORT="high" # control thinking effort (default: high,
**Recommended models for best results:**
- [OpenAI GPT-5](https://openai.com/api/) — `openai/gpt-5`
-- [Anthropic Claude Sonnet 4.5](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-5`
+- [Anthropic Claude Sonnet 4.6](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-6`
- [Google Gemini 3 Pro Preview](https://cloud.google.com/vertex-ai) — `vertex_ai/gemini-3-pro-preview`
See the [LLM Providers documentation](https://docs.strix.ai/llm-providers/overview) for all supported providers including Vertex AI, Bedrock, Azure, and local models.
+## Enterprise
+
+Get the same Strix experience with enterprise-grade controls: SSO (SAML/OIDC), custom compliance reports, dedicated support and SLA, custom deployment options (VPC/self-hosted), BYOK model support, and tailored agents optimized for your environment. [Learn more](https://strix.ai/demo).
+
## Documentation
Full documentation is available at **[docs.strix.ai](https://docs.strix.ai)** — including detailed guides for usage, CI/CD integrations, skills, and advanced configuration.
diff --git a/containers/docker-entrypoint.sh b/containers/docker-entrypoint.sh
index 8d6fc58..cbef471 100644
--- a/containers/docker-entrypoint.sh
+++ b/containers/docker-entrypoint.sh
@@ -9,7 +9,7 @@ if [ ! -f /app/certs/ca.p12 ]; then
exit 1
fi
-caido-cli --listen 127.0.0.1:${CAIDO_PORT} \
+caido-cli --listen 0.0.0.0:${CAIDO_PORT} \
--allow-guests \
--no-logging \
--no-open \
diff --git a/docs/advanced/configuration.mdx b/docs/advanced/configuration.mdx
index 91f19bb..cf8eb93 100644
--- a/docs/advanced/configuration.mdx
+++ b/docs/advanced/configuration.mdx
@@ -8,7 +8,7 @@ Configure Strix using environment variables or a config file.
## LLM Configuration
- Model name in LiteLLM format (e.g., `openai/gpt-5`, `anthropic/claude-sonnet-4-5`).
+ Model name in LiteLLM format (e.g., `openai/gpt-5`, `anthropic/claude-sonnet-4-6`).
@@ -46,9 +46,37 @@ Configure Strix using environment variables or a config file.
- Enable/disable anonymous telemetry. Set to `0`, `false`, `no`, or `off` to disable.
+ Global default for all telemetry channels. Set to `0`, `false`, `no`, or `off` to disable both PostHog and OTEL unless overridden by the per-channel flags below.
+
+ Enable/disable OpenTelemetry run observability independently. When unset, falls back to `STRIX_TELEMETRY`.
+
+
+
+ Enable/disable PostHog product telemetry independently. When unset, falls back to `STRIX_TELEMETRY`.
+
+
+
+ OTLP/Traceloop base URL for remote OpenTelemetry export. If unset, Strix keeps traces local only.
+
+
+
+ API key used for remote trace export. Remote export is enabled only when both `TRACELOOP_BASE_URL` and `TRACELOOP_API_KEY` are set.
+
+
+
+ Optional custom OTEL headers (JSON object or `key=value,key2=value2`). Useful for Langfuse or custom/self-hosted OTLP gateways.
+
+
+When remote OTEL vars are not set, Strix still writes complete run telemetry locally to:
+
+```bash
+strix_runs//events.jsonl
+```
+
+When remote vars are set, Strix dual-writes telemetry to both local JSONL and the remote OTEL endpoint.
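+Putting the flags above together, a minimal sketch of a remote-export setup (the endpoint URL and key below are placeholders, not real Strix defaults):
+
+```bash
+# Keep the global default on (or set "off" to silence both channels)
+export STRIX_TELEMETRY="on"
+
+# Remote export activates only when BOTH of these are set;
+# otherwise traces stay in the local JSONL file only.
+export TRACELOOP_BASE_URL="https://otel.example.internal"   # placeholder endpoint
+export TRACELOOP_API_KEY="your-otel-api-key"                # placeholder key
+```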
+
## Docker Configuration
@@ -106,4 +134,5 @@ export PERPLEXITY_API_KEY="pplx-..."
# Optional: Custom timeouts
export LLM_TIMEOUT="600"
export STRIX_SANDBOX_EXECUTION_TIMEOUT="300"
+
```
diff --git a/docs/advanced/skills.mdx b/docs/advanced/skills.mdx
index 5345600..38aacd0 100644
--- a/docs/advanced/skills.mdx
+++ b/docs/advanced/skills.mdx
@@ -81,6 +81,21 @@ Protocol-specific testing techniques.
| --------- | ------------------------------------------------ |
| `graphql` | GraphQL introspection, batching, resolver issues |
+### Tooling
+
+Sandbox CLI playbooks for core recon and scanning tools.
+
+| Skill | Coverage |
+| ----------- | ------------------------------------------------------- |
+| `nmap` | Port/service scan syntax and high-signal scan patterns |
+| `nuclei` | Template selection, severity filtering, and rate tuning |
+| `httpx` | HTTP probing and fingerprint output patterns |
+| `ffuf` | Wordlist fuzzing, matcher/filter strategy, recursion |
+| `subfinder` | Passive subdomain enumeration and source control |
+| `naabu` | Fast port scanning with explicit rate/verify controls |
+| `katana` | Crawl depth/JS/known-files behavior and pitfalls |
+| `sqlmap` | SQLi workflow for enumeration and controlled extraction |
+
## Skill Structure
Each skill is a Markdown file with YAML frontmatter for metadata:
diff --git a/docs/docs.json b/docs/docs.json
index e15b496..27ee5dc 100644
--- a/docs/docs.json
+++ b/docs/docs.json
@@ -32,6 +32,7 @@
"group": "LLM Providers",
"pages": [
"llm-providers/overview",
+ "llm-providers/models",
"llm-providers/openai",
"llm-providers/anthropic",
"llm-providers/openrouter",
diff --git a/docs/llm-providers/anthropic.mdx b/docs/llm-providers/anthropic.mdx
index 81680a1..47a94be 100644
--- a/docs/llm-providers/anthropic.mdx
+++ b/docs/llm-providers/anthropic.mdx
@@ -6,7 +6,7 @@ description: "Configure Strix with Claude models"
## Setup
```bash
-export STRIX_LLM="anthropic/claude-sonnet-4-5"
+export STRIX_LLM="anthropic/claude-sonnet-4-6"
export LLM_API_KEY="sk-ant-..."
```
@@ -14,8 +14,8 @@ export LLM_API_KEY="sk-ant-..."
| Model | Description |
|-------|-------------|
-| `anthropic/claude-sonnet-4-5` | Best balance of intelligence and speed (recommended) |
-| `anthropic/claude-opus-4-5` | Maximum capability for deep analysis |
+| `anthropic/claude-sonnet-4-6` | Best balance of intelligence and speed |
+| `anthropic/claude-opus-4-6` | Maximum capability for deep analysis |
## Get API Key
diff --git a/docs/llm-providers/models.mdx b/docs/llm-providers/models.mdx
new file mode 100644
index 0000000..758679b
--- /dev/null
+++ b/docs/llm-providers/models.mdx
@@ -0,0 +1,75 @@
+---
+title: "Strix Router"
+description: "Access top LLMs through a single API with high rate limits and zero data retention"
+---
+
+Strix Router gives you access to the best LLMs through a single API key.
+
+
+Strix Router is currently in **beta**. It's completely optional — Strix works with any [LiteLLM-compatible provider](/llm-providers/overview) using your own API keys, or with [local models](/llm-providers/local). Strix Router is just the setup we test and optimize for.
+
+
+## Why Use Strix Router?
+
+- **High rate limits** — No throttling during long-running scans
+- **Zero data retention** — Routes to providers with zero data retention policies enabled
+- **Failover & load balancing** — Automatic fallback across providers for reliability
+- **Simple setup** — One API key, one environment variable, no provider accounts needed
+- **No markup** — Same token pricing as the underlying providers, no extra fees
+
+## Quick Start
+
+1. Get your API key at [models.strix.ai](https://models.strix.ai)
+2. Set your environment:
+
+```bash
+export LLM_API_KEY='your-strix-api-key'
+export STRIX_LLM='strix/gpt-5'
+```
+
+3. Run a scan:
+
+```bash
+strix --target ./your-app
+```
+
+## Available Models
+
+### Anthropic
+
+| Model | ID |
+|-------|-----|
+| Claude Sonnet 4.6 | `strix/claude-sonnet-4.6` |
+| Claude Opus 4.6 | `strix/claude-opus-4.6` |
+
+### OpenAI
+
+| Model | ID |
+|-------|-----|
+| GPT-5.2 | `strix/gpt-5.2` |
+| GPT-5.1 | `strix/gpt-5.1` |
+| GPT-5 | `strix/gpt-5` |
+
+### Google
+
+| Model | ID |
+|-------|-----|
+| Gemini 3 Pro | `strix/gemini-3-pro-preview` |
+| Gemini 3 Flash | `strix/gemini-3-flash-preview` |
+
+### Other
+
+| Model | ID |
+|-------|-----|
+| GLM-5 | `strix/glm-5` |
+| GLM-4.7 | `strix/glm-4.7` |
+
+## Configuration Reference
+
+
+ Your Strix API key from [models.strix.ai](https://models.strix.ai).
+
+
+
+ Model ID from the tables above. Must be prefixed with `strix/`.
+
diff --git a/docs/llm-providers/openrouter.mdx b/docs/llm-providers/openrouter.mdx
index 31919c1..d4d36bf 100644
--- a/docs/llm-providers/openrouter.mdx
+++ b/docs/llm-providers/openrouter.mdx
@@ -19,7 +19,7 @@ Access any model on OpenRouter using the format `openrouter/<provider>/<model>`:
| Model | Configuration |
|-------|---------------|
| GPT-5 | `openrouter/openai/gpt-5` |
-| Claude 4.5 Sonnet | `openrouter/anthropic/claude-sonnet-4.5` |
+| Claude Sonnet 4.6 | `openrouter/anthropic/claude-sonnet-4.6` |
| Gemini 3 Pro | `openrouter/google/gemini-3-pro-preview` |
| GLM-4.7 | `openrouter/z-ai/glm-4.7` |
diff --git a/docs/llm-providers/overview.mdx b/docs/llm-providers/overview.mdx
index 9027aac..153ad0c 100644
--- a/docs/llm-providers/overview.mdx
+++ b/docs/llm-providers/overview.mdx
@@ -5,31 +5,54 @@ description: "Configure your AI model for Strix"
Strix uses [LiteLLM](https://docs.litellm.ai/docs/providers) for model compatibility, supporting 100+ LLM providers.
-## Recommended Models
+## Strix Router (Recommended)
-For best results, use one of these models:
+The fastest way to get started. [Strix Router](/llm-providers/models) gives you access to tested models with the highest rate limits and zero data retention.
+
+```bash
+export STRIX_LLM="strix/gpt-5"
+export LLM_API_KEY="your-strix-api-key"
+```
+
+Get your API key at [models.strix.ai](https://models.strix.ai).
+
+## Bring Your Own Key
+
+You can also use any LiteLLM-compatible provider with your own API keys:
| Model | Provider | Configuration |
| ----------------- | ------------- | -------------------------------- |
| GPT-5 | OpenAI | `openai/gpt-5` |
-| Claude 4.5 Sonnet | Anthropic | `anthropic/claude-sonnet-4-5` |
+| Claude Sonnet 4.6 | Anthropic | `anthropic/claude-sonnet-4-6` |
| Gemini 3 Pro | Google Vertex | `vertex_ai/gemini-3-pro-preview` |
-## Quick Setup
-
```bash
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="your-api-key"
```
+## Local Models
+
+Run models locally with [Ollama](https://ollama.com), [LM Studio](https://lmstudio.ai), or any OpenAI-compatible server:
+
+```bash
+export STRIX_LLM="ollama/llama4"
+export LLM_API_BASE="http://localhost:11434"
+```
+
+See the [Local Models guide](/llm-providers/local) for setup instructions and recommended models.
+
## Provider Guides
+
+ Recommended model router with high rate limits.
+
- GPT-5 and Codex models.
+ GPT-5 models.
- Claude 4.5 Sonnet, Opus, and Haiku.
+ Claude Opus, Sonnet, and Haiku.
Access 100+ models through a single API.
@@ -38,7 +61,7 @@ export LLM_API_KEY="your-api-key"
Gemini 3 models via Google Cloud.
- Claude 4.5 and Titan models via AWS.
+ Claude and Titan models via AWS.
GPT-5 via Azure.
@@ -54,7 +77,7 @@ Use LiteLLM's `provider/model-name` format:
```
openai/gpt-5
-anthropic/claude-sonnet-4-5
+anthropic/claude-sonnet-4-6
vertex_ai/gemini-3-pro-preview
bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0
ollama/llama4
diff --git a/docs/llm-providers/vertex.mdx b/docs/llm-providers/vertex.mdx
index 18c6ecc..d7ed971 100644
--- a/docs/llm-providers/vertex.mdx
+++ b/docs/llm-providers/vertex.mdx
@@ -44,7 +44,7 @@ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
```bash
export VERTEXAI_PROJECT="your-project-id"
-export VERTEXAI_LOCATION="us-central1"
+export VERTEXAI_LOCATION="global"
```
## Prerequisites
diff --git a/docs/quickstart.mdx b/docs/quickstart.mdx
index 487caae..bd7a8d9 100644
--- a/docs/quickstart.mdx
+++ b/docs/quickstart.mdx
@@ -6,7 +6,7 @@ description: "Install Strix and run your first security scan"
## Prerequisites
- Docker (running)
-- An LLM provider API key (OpenAI, Anthropic, or local model)
+- An LLM API key — use [Strix Router](/llm-providers/models) for the easiest setup, or bring your own key from any [supported provider](/llm-providers/overview)
## Installation
@@ -27,13 +27,23 @@ description: "Install Strix and run your first security scan"
Set your LLM provider:
-```bash
-export STRIX_LLM="openai/gpt-5"
-export LLM_API_KEY="your-api-key"
-```
+
+
+ ```bash
+ export STRIX_LLM="strix/gpt-5"
+ export LLM_API_KEY="your-strix-api-key"
+ ```
+
+
+ ```bash
+ export STRIX_LLM="openai/gpt-5"
+ export LLM_API_KEY="your-api-key"
+ ```
+
+
-For best results, use `openai/gpt-5`, `anthropic/claude-sonnet-4-5`, or `vertex_ai/gemini-3-pro-preview`.
+For best results, use `strix/gpt-5`, `strix/claude-opus-4.6`, or `strix/gpt-5.2`.
## Run Your First Scan
diff --git a/docs/tools/proxy.mdx b/docs/tools/proxy.mdx
index f870bad..39b7be6 100644
--- a/docs/tools/proxy.mdx
+++ b/docs/tools/proxy.mdx
@@ -80,6 +80,27 @@ for req in user_requests.get('requests', []):
print(f"Potential IDOR: {test_id} returned 200")
```
+## Human-in-the-Loop
+
+Strix exposes the Caido proxy to your host machine, so you can interact with it alongside the automated scan. When the sandbox starts, the Caido URL is displayed in the TUI sidebar — click it to copy, then open it in Caido Desktop.
+
+### Accessing Caido
+
+1. Start a scan as usual
+2. Look for the **Caido** URL in the sidebar stats panel (e.g. `localhost:52341`)
+3. Open the URL in Caido Desktop
+4. Click **Continue as guest** to access the instance
+
+### What You Can Do
+
+- **Inspect traffic** — Browse all HTTP/HTTPS requests the agent is making in real time
+- **Replay requests** — Take any captured request and resend it with your own modifications
+- **Intercept and modify** — Pause requests mid-flight, edit them, then forward
+- **Explore the sitemap** — See the full attack surface the agent has discovered
+- **Manual testing** — Use Caido's tools to test findings the agent reports, or explore areas it hasn't reached
+
+This turns Strix from a fully automated scanner into a collaborative tool — the agent handles the heavy lifting while you focus on the interesting parts.
+
## Scope
Create scopes to filter traffic to relevant domains:
diff --git a/poetry.lock b/poetry.lock
index 37a393c..5930b42 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 2.2.1 and should not be changed by hand.
+# This file is automatically @generated by Poetry 2.3.2 and should not be changed by hand.
[[package]]
name = "aiohappyeyeballs"
@@ -220,6 +220,34 @@ files = [
{file = "annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89"},
]
+[[package]]
+name = "anthropic"
+version = "0.84.0"
+description = "The official Python library for the anthropic API"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "anthropic-0.84.0-py3-none-any.whl", hash = "sha256:861c4c50f91ca45f942e091d83b60530ad6d4f98733bfe648065364da05d29e7"},
+ {file = "anthropic-0.84.0.tar.gz", hash = "sha256:72f5f90e5aebe62dca316cb013629cfa24996b0f5a4593b8c3d712bc03c43c37"},
+]
+
+[package.dependencies]
+anyio = ">=3.5.0,<5"
+distro = ">=1.7.0,<2"
+docstring-parser = ">=0.15,<1"
+httpx = ">=0.25.0,<1"
+jiter = ">=0.4.0,<1"
+pydantic = ">=1.9.0,<3"
+sniffio = "*"
+typing-extensions = ">=4.10,<5"
+
+[package.extras]
+aiohttp = ["aiohttp", "httpx-aiohttp (>=0.1.9)"]
+bedrock = ["boto3 (>=1.28.57)", "botocore (>=1.31.57)"]
+mcp = ["mcp (>=1.0) ; python_version >= \"3.10\""]
+vertex = ["google-auth[requests] (>=2,<3)"]
+
[[package]]
name = "anyio"
version = "4.10.0"
@@ -622,12 +650,24 @@ description = "Extensible memoizing collections and decorators"
optional = true
python-versions = ">=3.7"
groups = ["main"]
-markers = "extra == \"vertex\" or extra == \"sandbox\""
+markers = "extra == \"sandbox\""
files = [
{file = "cachetools-5.5.2-py3-none-any.whl", hash = "sha256:d26a22bcc62eb95c3beabd9f1ee5e820d3d2704fe2967cbe350e20c8ffcd3f0a"},
{file = "cachetools-5.5.2.tar.gz", hash = "sha256:1a661caa9175d26759571b2e19580f9d6393969e5dfca11fdb1f947a23e640d4"},
]
+[[package]]
+name = "catalogue"
+version = "2.0.10"
+description = "Super lightweight function registries for your library"
+optional = false
+python-versions = ">=3.6"
+groups = ["main"]
+files = [
+ {file = "catalogue-2.0.10-py3-none-any.whl", hash = "sha256:58c2de0020aa90f4a2da7dfad161bf7b3b054c86a5f09fcedc0b2b740c109a9f"},
+ {file = "catalogue-2.0.10.tar.gz", hash = "sha256:4f56daa940913d3f09d589c191c74e5a6d51762b3a9e37dd53b7437afd6cda15"},
+]
+
[[package]]
name = "certifi"
version = "2025.8.3"
@@ -890,7 +930,7 @@ files = [
{file = "colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6"},
{file = "colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44"},
]
-markers = {main = "sys_platform == \"win32\" and extra == \"sandbox\" or platform_system == \"Windows\"", dev = "platform_system == \"Windows\" or sys_platform == \"win32\""}
+markers = {dev = "platform_system == \"Windows\" or sys_platform == \"win32\""}
[[package]]
name = "contourpy"
@@ -1174,6 +1214,17 @@ ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==46.0.5)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]
+[[package]]
+name = "cuid"
+version = "0.4"
+description = "Fast, scalable unique ID generation"
+optional = false
+python-versions = "*"
+groups = ["main"]
+files = [
+ {file = "cuid-0.4.tar.gz", hash = "sha256:74eaba154916a2240405c3631acee708c263ef8fa05a86820b87d0f59f84e978"},
+]
+
[[package]]
name = "cvss"
version = "3.6"
@@ -1203,6 +1254,29 @@ files = [
docs = ["ipython", "matplotlib", "numpydoc", "sphinx"]
tests = ["pytest", "pytest-cov", "pytest-xdist"]
+[[package]]
+name = "dateparser"
+version = "1.3.0"
+description = "Date parsing library designed to parse dates from HTML pages"
+optional = false
+python-versions = ">=3.10"
+groups = ["main"]
+files = [
+ {file = "dateparser-1.3.0-py3-none-any.whl", hash = "sha256:8dc678b0a526e103379f02ae44337d424bd366aac727d3c6cf52ce1b01efbb5a"},
+ {file = "dateparser-1.3.0.tar.gz", hash = "sha256:5bccf5d1ec6785e5be71cc7ec80f014575a09b4923e762f850e57443bddbf1a5"},
+]
+
+[package.dependencies]
+python-dateutil = ">=2.7.0"
+pytz = ">=2024.2"
+regex = ">=2024.9.11"
+tzlocal = ">=0.2"
+
+[package.extras]
+calendars = ["convertdate (>=2.2.1)", "hijridate"]
+fasttext = ["fasttext (>=0.9.1)", "numpy (>=1.22.0,<2)"]
+langdetect = ["langdetect (>=1.0.0)"]
+
[[package]]
name = "decorator"
version = "5.2.1"
@@ -1228,6 +1302,24 @@ files = [
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
+[[package]]
+name = "deprecated"
+version = "1.3.1"
+description = "Python @deprecated decorator to deprecate old python classes, functions or methods."
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
+groups = ["main"]
+files = [
+ {file = "deprecated-1.3.1-py2.py3-none-any.whl", hash = "sha256:597bfef186b6f60181535a29fbe44865ce137a5079f295b479886c82729d5f3f"},
+ {file = "deprecated-1.3.1.tar.gz", hash = "sha256:b1b50e0ff0c1fddaa5708a2c6b0a6588bb09b892825ab2b214ac9ea9d92a5223"},
+]
+
+[package.dependencies]
+wrapt = ">=1.10,<3"
+
+[package.extras]
+dev = ["PyTest", "PyTest-Cov", "bump2version (<1)", "setuptools ; python_version >= \"3.12\"", "tox"]
+
[[package]]
name = "dill"
version = "0.4.0"
@@ -1316,10 +1408,9 @@ websockets = ["websocket-client (>=1.3.0)"]
name = "docstring-parser"
version = "0.17.0"
description = "Parse Python docstrings in reST, Google and Numpydoc format"
-optional = true
+optional = false
python-versions = ">=3.8"
groups = ["main"]
-markers = "extra == \"vertex\""
files = [
{file = "docstring_parser-0.17.0-py3-none-any.whl", hash = "sha256:cf2569abd23dce8099b300f9b4fa8191e9582dda731fd533daf54c4551658708"},
{file = "docstring_parser-0.17.0.tar.gz", hash = "sha256:583de4a309722b3315439bb31d64ba3eebada841f2e2cee23b99df001434c912"},
@@ -1388,6 +1479,24 @@ files = [
[package.extras]
tests = ["asttokens (>=2.1.0)", "coverage", "coverage-enable-subprocess", "ipython", "littleutils", "pytest", "rich ; python_version >= \"3.11\""]
+[[package]]
+name = "faker"
+version = "40.8.0"
+description = "Faker is a Python package that generates fake data for you."
+optional = false
+python-versions = ">=3.10"
+groups = ["main"]
+files = [
+ {file = "faker-40.8.0-py3-none-any.whl", hash = "sha256:eb21bdba18f7a8375382eb94fb436fce07046893dc94cb20817d28deb0c3d579"},
+ {file = "faker-40.8.0.tar.gz", hash = "sha256:936a3c9be6c004433f20aa4d99095df5dec82b8c7ad07459756041f8c1728875"},
+]
+
+[package.dependencies]
+tzdata = {version = "*", markers = "platform_system == \"Windows\""}
+
+[package.extras]
+tzdata = ["tzdata"]
+
[[package]]
name = "fastapi"
version = "0.121.0"
@@ -1850,50 +1959,51 @@ grpcio-gcp = ["grpcio-gcp (>=0.2.2,<1.0.0)"]
[[package]]
name = "google-auth"
-version = "2.43.0"
+version = "2.48.0"
description = "Google Authentication Library"
optional = true
-python-versions = ">=3.7"
+python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"vertex\""
files = [
- {file = "google_auth-2.43.0-py2.py3-none-any.whl", hash = "sha256:af628ba6fa493f75c7e9dbe9373d148ca9f4399b5ea29976519e0a3848eddd16"},
- {file = "google_auth-2.43.0.tar.gz", hash = "sha256:88228eee5fc21b62a1b5fe773ca15e67778cb07dc8363adcb4a8827b52d81483"},
+ {file = "google_auth-2.48.0-py3-none-any.whl", hash = "sha256:2e2a537873d449434252a9632c28bfc268b0adb1e53f9fb62afc5333a975903f"},
+ {file = "google_auth-2.48.0.tar.gz", hash = "sha256:4f7e706b0cd3208a3d940a19a822c37a476ddba5450156c3e6624a71f7c841ce"},
]
[package.dependencies]
-cachetools = ">=2.0.0,<7.0"
+cryptography = ">=38.0.3"
pyasn1-modules = ">=0.2.1"
requests = {version = ">=2.20.0,<3.0.0", optional = true, markers = "extra == \"requests\""}
rsa = ">=3.1.4,<5"
[package.extras]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0)", "requests (>=2.20.0,<3.0.0)"]
-enterprise-cert = ["cryptography", "pyopenssl"]
-pyjwt = ["cryptography (<39.0.0) ; python_version < \"3.8\"", "cryptography (>=38.0.3)", "pyjwt (>=2.0)"]
-pyopenssl = ["cryptography (<39.0.0) ; python_version < \"3.8\"", "cryptography (>=38.0.3)", "pyopenssl (>=20.0.0)"]
+cryptography = ["cryptography (>=38.0.3)"]
+enterprise-cert = ["pyopenssl"]
+pyjwt = ["pyjwt (>=2.0)"]
+pyopenssl = ["pyopenssl (>=20.0.0)"]
reauth = ["pyu2f (>=0.1.5)"]
requests = ["requests (>=2.20.0,<3.0.0)"]
-testing = ["aiohttp (<3.10.0)", "aiohttp (>=3.6.2,<4.0.0)", "aioresponses", "cryptography (<39.0.0) ; python_version < \"3.8\"", "cryptography (<39.0.0) ; python_version < \"3.8\"", "cryptography (>=38.0.3)", "cryptography (>=38.0.3)", "flask", "freezegun", "grpcio", "mock", "oauth2client", "packaging", "pyjwt (>=2.0)", "pyopenssl (<24.3.0)", "pyopenssl (>=20.0.0)", "pytest", "pytest-asyncio", "pytest-cov", "pytest-localserver", "pyu2f (>=0.1.5)", "requests (>=2.20.0,<3.0.0)", "responses", "urllib3"]
+testing = ["aiohttp (<3.10.0)", "aiohttp (>=3.6.2,<4.0.0)", "aioresponses", "flask", "freezegun", "grpcio", "oauth2client", "packaging", "pyjwt (>=2.0)", "pyopenssl (<24.3.0)", "pyopenssl (>=20.0.0)", "pytest", "pytest-asyncio", "pytest-cov", "pytest-localserver", "pyu2f (>=0.1.5)", "requests (>=2.20.0,<3.0.0)", "responses", "urllib3"]
urllib3 = ["packaging", "urllib3"]
[[package]]
name = "google-cloud-aiplatform"
-version = "1.129.0"
+version = "1.133.0"
description = "Vertex AI API client library"
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"vertex\""
files = [
- {file = "google_cloud_aiplatform-1.129.0-py2.py3-none-any.whl", hash = "sha256:b0052143a1bc05894e59fc6f910e84c504e194fadf877f84fc790b38a2267739"},
- {file = "google_cloud_aiplatform-1.129.0.tar.gz", hash = "sha256:c53b9d6c529b4de2962b34425b0116f7a382a926b26e02c2196e372f9a31d196"},
+ {file = "google_cloud_aiplatform-1.133.0-py2.py3-none-any.whl", hash = "sha256:dfc81228e987ca10d1c32c7204e2131b3c8d6b7c8e0b4e23bf7c56816bc4c566"},
+ {file = "google_cloud_aiplatform-1.133.0.tar.gz", hash = "sha256:3a6540711956dd178daaab3c2c05db476e46d94ac25912b8cf4f59b00b058ae0"},
]
[package.dependencies]
docstring_parser = "<1"
google-api-core = {version = ">=1.34.1,<2.0.dev0 || >=2.8.dev0,<3.0.0", extras = ["grpc"]}
-google-auth = ">=2.14.1,<3.0.0"
+google-auth = ">=2.47.0,<3.0.0"
google-cloud-bigquery = ">=1.15.0,<3.20.0 || >3.20.0,<4.0.0"
google-cloud-resource-manager = ">=1.3.3,<3.0.0"
google-cloud-storage = [
@@ -1905,7 +2015,6 @@ packaging = ">=14.3"
proto-plus = ">=1.22.3,<2.0.0"
protobuf = ">=3.20.2,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<7.0.0"
pydantic = "<3"
-shapely = "<3.0.0"
typing_extensions = "*"
[package.extras]
@@ -1918,21 +2027,21 @@ cloud-profiler = ["tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "werkzeug (>=2
datasets = ["pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\""]
endpoint = ["requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)"]
evaluation = ["jsonschema", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "pandas (>=1.0.0)", "pyyaml", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "tqdm (>=4.23.0)"]
-full = ["docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "fastapi (>=0.71.0,<=0.114.0)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-vizier (>=0.1.6)", "httpx (>=0.23.0,<=0.28.1)", "immutabledict", "jsonschema", "lit-nlp (==0.4.0) ; python_version < \"3.14\"", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "mlflow (>=1.27.0) ; python_version >= \"3.13\"", "mlflow (>=1.27.0,<=2.16.0) ; python_version < \"3.13\"", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\"", "pyarrow (>=6.0.1)", "pyyaml", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "starlette (>=0.17.1)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<4.0.0)"]
+full = ["docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "fastapi (>=0.71.0,<=0.124.4)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-vizier (>=0.1.6)", "httpx (>=0.23.0,<=0.28.1)", "immutabledict", "jsonschema", "lit-nlp (==0.4.0) ; python_version < \"3.13\"", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "mlflow (>=1.27.0) ; python_version >= \"3.13\"", "mlflow (>=1.27.0,<=2.16.0) ; python_version < \"3.13\"", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\"", "pyarrow (>=6.0.1)", "pyyaml", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "starlette (>=0.17.1)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<4.0.0)"]
langchain = ["langchain (>=0.3,<0.4)", "langchain-core (>=0.3,<0.4)", "langchain-google-vertexai (>=2.0.22,<3)", "langgraph (>=0.2.45,<0.4)", "openinference-instrumentation-langchain (>=0.1.19,<0.2)"]
langchain-testing = ["absl-py", "cloudpickle (>=3.0,<4.0)", "google-cloud-trace (<2)", "langchain (>=0.3,<0.4)", "langchain-core (>=0.3,<0.4)", "langchain-google-vertexai (>=2.0.22,<3)", "langgraph (>=0.2.45,<0.4)", "openinference-instrumentation-langchain (>=0.1.19,<0.2)", "opentelemetry-exporter-gcp-logging (>=1.11.0a0,<2.0.0)", "opentelemetry-exporter-gcp-trace (<2)", "opentelemetry-exporter-otlp-proto-http (<2)", "opentelemetry-sdk (<2)", "pydantic (>=2.11.1,<3)", "pytest-xdist", "typing_extensions"]
-lit = ["explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "lit-nlp (==0.4.0) ; python_version < \"3.14\"", "pandas (>=1.0.0)", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\""]
+lit = ["explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "lit-nlp (==0.4.0) ; python_version < \"3.13\"", "pandas (>=1.0.0)", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\""]
llama-index = ["llama-index", "llama-index-llms-google-genai", "openinference-instrumentation-llama-index (>=3.0,<4.0)"]
llama-index-testing = ["absl-py", "cloudpickle (>=3.0,<4.0)", "google-cloud-trace (<2)", "llama-index", "llama-index-llms-google-genai", "openinference-instrumentation-llama-index (>=3.0,<4.0)", "opentelemetry-exporter-gcp-logging (>=1.11.0a0,<2.0.0)", "opentelemetry-exporter-gcp-trace (<2)", "opentelemetry-exporter-otlp-proto-http (<2)", "opentelemetry-sdk (<2)", "pydantic (>=2.11.1,<3)", "pytest-xdist", "typing_extensions"]
metadata = ["numpy (>=1.15.0)", "pandas (>=1.0.0)"]
pipelines = ["pyyaml (>=5.3.1,<7)"]
-prediction = ["docker (>=5.0.3)", "fastapi (>=0.71.0,<=0.114.0)", "httpx (>=0.23.0,<=0.28.1)", "starlette (>=0.17.1)", "uvicorn[standard] (>=0.16.0)"]
+prediction = ["docker (>=5.0.3)", "fastapi (>=0.71.0,<=0.124.4)", "httpx (>=0.23.0,<=0.28.1)", "starlette (>=0.17.1)", "uvicorn[standard] (>=0.16.0)"]
private-endpoints = ["requests (>=2.28.1)", "urllib3 (>=1.21.1,<1.27)"]
ray = ["google-cloud-bigquery", "google-cloud-bigquery-storage", "immutabledict", "pandas (>=1.0.0)", "pyarrow (>=6.0.1)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\""]
ray-testing = ["google-cloud-bigquery", "google-cloud-bigquery-storage", "immutabledict", "pandas (>=1.0.0)", "pyarrow (>=6.0.1)", "pytest-xdist", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "ray[train]", "scikit-learn (<1.6.0)", "tensorflow ; python_version < \"3.13\"", "torch (>=2.0.0,<2.1.0)", "xgboost", "xgboost_ray"]
reasoningengine = ["cloudpickle (>=3.0,<4.0)", "google-cloud-trace (<2)", "opentelemetry-exporter-gcp-logging (>=1.11.0a0,<2.0.0)", "opentelemetry-exporter-gcp-trace (<2)", "opentelemetry-exporter-otlp-proto-http (<2)", "opentelemetry-sdk (<2)", "pydantic (>=2.11.1,<3)", "typing_extensions"]
tensorboard = ["tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "werkzeug (>=2.0.0,<4.0.0)"]
-testing = ["Pillow", "aiohttp", "bigframes ; python_version >= \"3.10\" and python_version < \"3.14\"", "docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "fastapi (>=0.71.0,<=0.114.0)", "google-api-core (>=2.11,<3.0.0)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-vizier (>=0.1.6)", "google-vizier (>=0.1.6)", "grpcio-testing", "grpcio-tools (>=1.63.0) ; python_version >= \"3.13\"", "httpx (>=0.23.0,<=0.28.1)", "immutabledict", "immutabledict", "ipython", "jsonschema", "kfp (>=2.6.0,<3.0.0) ; python_version < \"3.13\"", "lit-nlp (==0.4.0) ; python_version < \"3.14\"", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "mlflow (>=1.27.0) ; python_version >= \"3.13\"", "mlflow (>=1.27.0,<=2.16.0) ; python_version < \"3.13\"", "mock", "nltk", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "protobuf (<=5.29.4)", "pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\"", "pyarrow (>=6.0.1)", "pytest-asyncio", "pytest-cov", "pytest-xdist", "pyyaml", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)", "requests-toolbelt (<=1.0.0)", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "sentencepiece (>=0.2.0)", "starlette (>=0.17.1)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorflow (==2.14.1) ; python_version <= \"3.11\"", "tensorflow (==2.19.0) ; python_version > \"3.11\" and python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "torch (>=2.0.0,<2.1.0) ; python_version <= \"3.11\"", "torch (>=2.2.0) ; python_version > \"3.11\" and python_version < \"3.13\"", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<4.0.0)", "werkzeug (>=2.0.0,<4.0.0)", "xgboost"]
+testing = ["Pillow", "aiohttp", "bigframes ; python_version >= \"3.10\" and python_version < \"3.14\"", "docker (>=5.0.3)", "explainable-ai-sdk (>=1.0.0) ; python_version < \"3.13\"", "fastapi (>=0.71.0,<=0.124.4)", "google-api-core (>=2.11,<3.0.0)", "google-cloud-bigquery", "google-cloud-bigquery-storage", "google-vizier (>=0.1.6)", "google-vizier (>=0.1.6)", "grpcio-testing", "grpcio-tools (>=1.63.0) ; python_version >= \"3.13\"", "httpx (>=0.23.0,<=0.28.1)", "immutabledict", "immutabledict", "ipython", "jsonschema", "kfp (>=2.6.0,<3.0.0) ; python_version < \"3.13\"", "lit-nlp (==0.4.0) ; python_version < \"3.13\"", "litellm (>=1.72.4,!=1.77.2,!=1.77.3,!=1.77.4)", "mlflow (>=1.27.0) ; python_version >= \"3.13\"", "mlflow (>=1.27.0,<=2.16.0) ; python_version < \"3.13\"", "mock", "nltk", "numpy (>=1.15.0)", "pandas (>=1.0.0)", "protobuf (<=5.29.4)", "pyarrow (>=10.0.1) ; python_version == \"3.11\"", "pyarrow (>=14.0.0) ; python_version >= \"3.12\"", "pyarrow (>=3.0.0,<8.0.0) ; python_version < \"3.11\"", "pyarrow (>=6.0.1)", "pytest-asyncio", "pytest-cov", "pytest-xdist", "pyyaml", "pyyaml (>=5.3.1,<7)", "ray[default] (>=2.4,<2.5.dev0 || >2.9.0,!=2.9.1,!=2.9.2,<2.10.dev0 || ==2.33.* || >=2.42.dev0,<=2.42.0) ; python_version < \"3.11\"", "ray[default] (>=2.5,<=2.47.1) ; python_version == \"3.11\"", "requests (>=2.28.1)", "requests-toolbelt (<=1.0.0)", "requests-toolbelt (<=1.0.0)", "ruamel.yaml", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn (<1.6.0) ; python_version <= \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "scikit-learn ; python_version > \"3.10\"", "sentencepiece (>=0.2.0)", "starlette (>=0.17.1)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorboard-plugin-profile (>=2.4.0,<2.18.0)", "tensorflow (==2.14.1) ; python_version <= \"3.11\"", "tensorflow (==2.19.0) ; python_version > \"3.11\" and python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\"", "torch (>=2.0.0,<2.1.0) ; python_version <= \"3.11\"", "torch (>=2.2.0) ; python_version > \"3.11\" and python_version < \"3.13\"", "tqdm (>=4.23.0)", "urllib3 (>=1.21.1,<1.27)", "uvicorn[standard] (>=0.16.0)", "werkzeug (>=2.0.0,<4.0.0)", "werkzeug (>=2.0.0,<4.0.0)", "xgboost"]
tokenization = ["sentencepiece (>=0.2.0)"]
vizier = ["google-vizier (>=0.1.6)"]
xai = ["tensorflow (>=2.3.0,<3.0.0) ; python_version < \"3.13\""]
@@ -2142,10 +2251,9 @@ requests = ["requests (>=2.18.0,<3.0.0)"]
name = "googleapis-common-protos"
version = "1.72.0"
description = "Common protobufs used in Google APIs"
-optional = true
+optional = false
python-versions = ">=3.7"
groups = ["main"]
-markers = "extra == \"vertex\""
files = [
{file = "googleapis_common_protos-1.72.0-py3-none-any.whl", hash = "sha256:4299c5a82d5ae1a9702ada957347726b167f9f8d1fc352477702a1e851ff4038"},
{file = "googleapis_common_protos-1.72.0.tar.gz", hash = "sha256:e55a601c1b32b52d7a3e65f43563e2aa61bcd737998ee672ac9b951cd49319f5"},
@@ -2635,6 +2743,18 @@ perf = ["ipython"]
test = ["flufl.flake8", "importlib_resources (>=1.3) ; python_version < \"3.9\"", "jaraco.test (>=5.4)", "packaging", "pyfakefs", "pytest (>=6,!=8.1.*)", "pytest-perf (>=0.9.2)"]
type = ["pytest-mypy"]
+[[package]]
+name = "inflection"
+version = "0.5.1"
+description = "A port of Ruby on Rails inflector to Python"
+optional = false
+python-versions = ">=3.5"
+groups = ["main"]
+files = [
+ {file = "inflection-0.5.1-py2.py3-none-any.whl", hash = "sha256:f38b2b640938a4f35ade69ac3d053042959b62a0f1076a5bbaa1b9526605a8a2"},
+ {file = "inflection-0.5.1.tar.gz", hash = "sha256:1a29730d366e996aaacffb2f1f1cb9593dc38e2ddd30c91250c6dde09ea9b417"},
+]
+
[[package]]
name = "iniconfig"
version = "2.1.0"
@@ -2862,6 +2982,18 @@ files = [
{file = "jmespath-1.0.1.tar.gz", hash = "sha256:90261b206d6defd58fdd5e85f478bf633a2901798906be2ad389150c5c60edbe"},
]
+[[package]]
+name = "joblib"
+version = "1.5.3"
+description = "Lightweight pipelining with Python functions"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "joblib-1.5.3-py3-none-any.whl", hash = "sha256:5fc3c5039fc5ca8c0276333a188bbd59d6b7ab37fe6632daa76bc7f9ec18e713"},
+ {file = "joblib-1.5.3.tar.gz", hash = "sha256:8561a3269e6801106863fd0d6d84bb737be9e7631e33aaed3fb9ce5953688da3"},
+]
+
[[package]]
name = "jsonschema"
version = "4.25.1"
@@ -2876,7 +3008,7 @@ files = [
[package.dependencies]
attrs = ">=22.2.0"
-jsonschema-specifications = ">=2023.03.6"
+jsonschema-specifications = ">=2023.3.6"
referencing = ">=0.28.4"
rpds-py = ">=0.7.1"
@@ -3863,6 +3995,32 @@ extra = ["lxml (>=4.6)", "pydot (>=3.0.1)", "pygraphviz (>=1.14)", "sympy (>=1.1
test = ["pytest (>=7.2)", "pytest-cov (>=4.0)", "pytest-xdist (>=3.0)"]
test-extras = ["pytest-mpl", "pytest-randomly"]
+[[package]]
+name = "nltk"
+version = "3.9.3"
+description = "Natural Language Toolkit"
+optional = false
+python-versions = ">=3.10"
+groups = ["main"]
+files = [
+ {file = "nltk-3.9.3-py3-none-any.whl", hash = "sha256:60b3db6e9995b3dd976b1f0fa7dec22069b2677e759c28eb69b62ddd44870522"},
+ {file = "nltk-3.9.3.tar.gz", hash = "sha256:cb5945d6424a98d694c2b9a0264519fab4363711065a46aa0ae7a2195b92e71f"},
+]
+
+[package.dependencies]
+click = "*"
+joblib = "*"
+regex = ">=2021.8.3"
+tqdm = "*"
+
+[package.extras]
+all = ["matplotlib", "numpy", "pyparsing", "python-crfsuite", "requests", "scikit-learn", "scipy", "twython"]
+corenlp = ["requests"]
+machine-learning = ["numpy", "python-crfsuite", "scikit-learn", "scipy"]
+plot = ["matplotlib"]
+tgrep = ["pyparsing"]
+twitter = ["twython"]
+
[[package]]
name = "nodeenv"
version = "1.9.1"
@@ -3879,10 +4037,9 @@ files = [
name = "numpy"
version = "2.3.2"
description = "Fundamental package for array computing in Python"
-optional = true
+optional = false
python-versions = ">=3.11"
groups = ["main"]
-markers = "extra == \"sandbox\" or extra == \"vertex\""
files = [
{file = "numpy-2.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:852ae5bed3478b92f093e30f785c98e0cb62fa0a939ed057c31716e18a7a22b9"},
{file = "numpy-2.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7a0e27186e781a69959d0230dd9909b5e26024f8da10683bd6344baea1885168"},
@@ -4084,6 +4241,957 @@ files = [
[package.dependencies]
et-xmlfile = "*"
+[[package]]
+name = "opentelemetry-api"
+version = "1.40.0"
+description = "OpenTelemetry Python API"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_api-1.40.0-py3-none-any.whl", hash = "sha256:82dd69331ae74b06f6a874704be0cfaa49a1650e1537d4a813b86ecef7d0ecf9"},
+ {file = "opentelemetry_api-1.40.0.tar.gz", hash = "sha256:159be641c0b04d11e9ecd576906462773eb97ae1b657730f0ecf64d32071569f"},
+]
+
+[package.dependencies]
+importlib-metadata = ">=6.0,<8.8.0"
+typing-extensions = ">=4.5.0"
+
+[[package]]
+name = "opentelemetry-exporter-otlp-proto-common"
+version = "1.40.0"
+description = "OpenTelemetry Protobuf encoding"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_exporter_otlp_proto_common-1.40.0-py3-none-any.whl", hash = "sha256:7081ff453835a82417bf38dccf122c827c3cbc94f2079b03bba02a3165f25149"},
+ {file = "opentelemetry_exporter_otlp_proto_common-1.40.0.tar.gz", hash = "sha256:1cbee86a4064790b362a86601ee7934f368b81cd4cc2f2e163902a6e7818a0fa"},
+]
+
+[package.dependencies]
+opentelemetry-proto = "1.40.0"
+
+[[package]]
+name = "opentelemetry-exporter-otlp-proto-grpc"
+version = "1.40.0"
+description = "OpenTelemetry Collector Protobuf over gRPC Exporter"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_exporter_otlp_proto_grpc-1.40.0-py3-none-any.whl", hash = "sha256:2aa0ca53483fe0cf6405087a7491472b70335bc5c7944378a0a8e72e86995c52"},
+ {file = "opentelemetry_exporter_otlp_proto_grpc-1.40.0.tar.gz", hash = "sha256:bd4015183e40b635b3dab8da528b27161ba83bf4ef545776b196f0fb4ec47740"},
+]
+
+[package.dependencies]
+googleapis-common-protos = ">=1.57,<2.0"
+grpcio = [
+ {version = ">=1.63.2,<2.0.0", markers = "python_version < \"3.13\""},
+ {version = ">=1.66.2,<2.0.0", markers = "python_version == \"3.13\""},
+ {version = ">=1.75.1,<2.0.0", markers = "python_version >= \"3.14\""},
+]
+opentelemetry-api = ">=1.15,<2.0"
+opentelemetry-exporter-otlp-proto-common = "1.40.0"
+opentelemetry-proto = "1.40.0"
+opentelemetry-sdk = ">=1.40.0,<1.41.0"
+typing-extensions = ">=4.6.0"
+
+[package.extras]
+gcp-auth = ["opentelemetry-exporter-credential-provider-gcp (>=0.59b0)"]
+
+[[package]]
+name = "opentelemetry-exporter-otlp-proto-http"
+version = "1.40.0"
+description = "OpenTelemetry Collector Protobuf over HTTP Exporter"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_exporter_otlp_proto_http-1.40.0-py3-none-any.whl", hash = "sha256:a8d1dab28f504c5d96577d6509f80a8150e44e8f45f82cdbe0e34c99ab040069"},
+ {file = "opentelemetry_exporter_otlp_proto_http-1.40.0.tar.gz", hash = "sha256:db48f5e0f33217588bbc00274a31517ba830da576e59503507c839b38fa0869c"},
+]
+
+[package.dependencies]
+googleapis-common-protos = ">=1.52,<2.0"
+opentelemetry-api = ">=1.15,<2.0"
+opentelemetry-exporter-otlp-proto-common = "1.40.0"
+opentelemetry-proto = "1.40.0"
+opentelemetry-sdk = ">=1.40.0,<1.41.0"
+requests = ">=2.7,<3.0"
+typing-extensions = ">=4.5.0"
+
+[package.extras]
+gcp-auth = ["opentelemetry-exporter-credential-provider-gcp (>=0.59b0)"]
+
+[[package]]
+name = "opentelemetry-instrumentation"
+version = "0.61b0"
+description = "Instrumentation Tools & Auto Instrumentation for OpenTelemetry Python"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation-0.61b0-py3-none-any.whl", hash = "sha256:92a93a280e69788e8f88391247cc530fd81f16f2b011979d4d6398f805cfbc63"},
+ {file = "opentelemetry_instrumentation-0.61b0.tar.gz", hash = "sha256:cb21b48db738c9de196eba6b805b4ff9de3b7f187e4bbf9a466fa170514f1fc7"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.4,<2.0"
+opentelemetry-semantic-conventions = "0.61b0"
+packaging = ">=18.0"
+wrapt = ">=1.0.0,<2.0.0"
+
+[[package]]
+name = "opentelemetry-instrumentation-agno"
+version = "0.53.0"
+description = "OpenTelemetry Agno instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_agno-0.53.0-py3-none-any.whl", hash = "sha256:bab72e73e12dfcfae6440d6d47f124d6cdd9d6a5ef391ef896b79742696595d1"},
+ {file = "opentelemetry_instrumentation_agno-0.53.0.tar.gz", hash = "sha256:67ff165475ca1c48ea41fe9db2d9f89d72430b8e995ea1aa8b329f04473b7a0c"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.28.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["agno"]
+
+[[package]]
+name = "opentelemetry-instrumentation-alephalpha"
+version = "0.53.0"
+description = "OpenTelemetry Aleph Alpha instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_alephalpha-0.53.0-py3-none-any.whl", hash = "sha256:905d97267097c4d35426fda6893590908a4f15c58f50fdfbe9b59f8cfef266ea"},
+ {file = "opentelemetry_instrumentation_alephalpha-0.53.0.tar.gz", hash = "sha256:e558d0c5aa17c4278619242d06792f272a32297ab1bb6dce61498863f40ee270"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["aleph-alpha-client"]
+
+[[package]]
+name = "opentelemetry-instrumentation-anthropic"
+version = "0.53.0"
+description = "OpenTelemetry Anthropic instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_anthropic-0.53.0-py3-none-any.whl", hash = "sha256:e89f19457cb697fd94d63f29883f38d640603a7a0351c25052f3674f41af1c99"},
+ {file = "opentelemetry_instrumentation_anthropic-0.53.0.tar.gz", hash = "sha256:de8d405f5ed2f6af5f368e028e6ad07504acecd20b133b84a9fa45827deaba15"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.14,<0.5.0"
+
+[package.extras]
+instruments = ["anthropic"]
+
+[[package]]
+name = "opentelemetry-instrumentation-bedrock"
+version = "0.53.0"
+description = "OpenTelemetry Bedrock instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_bedrock-0.53.0-py3-none-any.whl", hash = "sha256:1e13877d1bcf31e4617b0801f0369f2c2aa42fca17e9174d3cbf23b0c1a63315"},
+ {file = "opentelemetry_instrumentation_bedrock-0.53.0.tar.gz", hash = "sha256:0bf17a81fdeddeeee2baf567b30ea42853c9dfd2ba8dca55fcbdb7c306aa0825"},
+]
+
+[package.dependencies]
+anthropic = ">=0.17.0"
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+tokenizers = ">=0.13.0"
+
+[package.extras]
+instruments = ["boto3"]
+
+[[package]]
+name = "opentelemetry-instrumentation-chromadb"
+version = "0.53.0"
+description = "OpenTelemetry Chroma DB instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_chromadb-0.53.0-py3-none-any.whl", hash = "sha256:5c1c17dc07ae94b4dec01022e2c5f9c51d31c8912d9ddde7ac392dd97094d317"},
+ {file = "opentelemetry_instrumentation_chromadb-0.53.0.tar.gz", hash = "sha256:131495c56fdc6131abb8d8a31addcf86e9ab10e63e86927bb74380da351f1b5a"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["chromadb"]
+
+[[package]]
+name = "opentelemetry-instrumentation-cohere"
+version = "0.53.0"
+description = "OpenTelemetry Cohere instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_cohere-0.53.0-py3-none-any.whl", hash = "sha256:7a1483c99db7f30c4dde1763834ee6844f0d2ba1a986b52eb740c5c4e68ed926"},
+ {file = "opentelemetry_instrumentation_cohere-0.53.0.tar.gz", hash = "sha256:51a128e317d0ec09c1b42fb1b955258c2bb337150e55c23a70dbad627dac5097"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["cohere"]
+
+[[package]]
+name = "opentelemetry-instrumentation-crewai"
+version = "0.53.0"
+description = "OpenTelemetry crewAI instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_crewai-0.53.0-py3-none-any.whl", hash = "sha256:348b9214f2557f33057a49fb648402cb46a231a063a9ffa7469047c1b2383afe"},
+ {file = "opentelemetry_instrumentation_crewai-0.53.0.tar.gz", hash = "sha256:9b50cd375ca0b366f1f23e8f7e8d8a8baac61792fe1d3f515e41ef45a7dc360f"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["crewai"]
+
+[[package]]
+name = "opentelemetry-instrumentation-google-generativeai"
+version = "0.53.0"
+description = "OpenTelemetry Google Generative AI instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_google_generativeai-0.53.0-py3-none-any.whl", hash = "sha256:8f3b14ac2bcf348502f039f9b0a1440b9e8a041280c4ee8c6e7ffb79e35f7bd8"},
+ {file = "opentelemetry_instrumentation_google_generativeai-0.53.0.tar.gz", hash = "sha256:c30ed87c3ebb9b52558c97e465a36451e5dc6f40e18d1dbfef482ecbdadcf42f"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["google-genai"]
+
+[[package]]
+name = "opentelemetry-instrumentation-groq"
+version = "0.53.0"
+description = "OpenTelemetry Groq instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_groq-0.53.0-py3-none-any.whl", hash = "sha256:40efe9df236e785ae31a498f3fe5b2287afa7465b4b7786f2ca36cfa70943aa3"},
+ {file = "opentelemetry_instrumentation_groq-0.53.0.tar.gz", hash = "sha256:19065150a7236a2c99f1bcea6056456922a6997102198642285d3c7e80b011e4"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["groq"]
+
+[[package]]
+name = "opentelemetry-instrumentation-haystack"
+version = "0.53.0"
+description = "OpenTelemetry Haystack instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_haystack-0.53.0-py3-none-any.whl", hash = "sha256:782daac342840f3c63194c6655258fc2c80b03b399458a30b6b332727e5a9d57"},
+ {file = "opentelemetry_instrumentation_haystack-0.53.0.tar.gz", hash = "sha256:62307cf41d613b69fe1495e233ff4ec0f86e83fd9b5c8fe208eefc229ebde010"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["haystack-ai"]
+
+[[package]]
+name = "opentelemetry-instrumentation-lancedb"
+version = "0.53.0"
+description = "OpenTelemetry Lancedb instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_lancedb-0.53.0-py3-none-any.whl", hash = "sha256:30e6b1b4b83c3513101931531919b650ea61ab65b8594f9966159f4eeaf436a8"},
+ {file = "opentelemetry_instrumentation_lancedb-0.53.0.tar.gz", hash = "sha256:e646e8e850e4f646199dbf2c62d3bb3e495c00ab093303e5b4dbbd4c76f0738f"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["lancedb"]
+
+[[package]]
+name = "opentelemetry-instrumentation-langchain"
+version = "0.53.0"
+description = "OpenTelemetry Langchain instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_langchain-0.53.0-py3-none-any.whl", hash = "sha256:5426917b76ffc5e9765c0b2eaac516ac7b30f70bd53bbbee51d65364ae668276"},
+ {file = "opentelemetry_instrumentation_langchain-0.53.0.tar.gz", hash = "sha256:47d9ad0baa6b3f2e44b9b31bd655b87eac2d86794dc38079d61a2eb24b747f51"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["langchain"]
+
+[[package]]
+name = "opentelemetry-instrumentation-llamaindex"
+version = "0.53.0"
+description = "OpenTelemetry LlamaIndex instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_llamaindex-0.53.0-py3-none-any.whl", hash = "sha256:c4a0043bc0305b860b0da4840466ffb5fae83595a52a49212a85fb46ddbb6617"},
+ {file = "opentelemetry_instrumentation_llamaindex-0.53.0.tar.gz", hash = "sha256:c7b0bd1fe818002286d0122f6a57c516c6a4b248813ca3a4adff61a547f83050"},
+]
+
+[package.dependencies]
+inflection = ">=0.5.1,<0.6.0"
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["llama-index"]
+llamaparse = ["llama-parse"]
+
+[[package]]
+name = "opentelemetry-instrumentation-logging"
+version = "0.61b0"
+description = "OpenTelemetry Logging instrumentation"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_logging-0.61b0-py3-none-any.whl", hash = "sha256:6d87e5ded6a0128d775d41511f8380910a1b610671081d16efb05ac3711c0074"},
+ {file = "opentelemetry_instrumentation_logging-0.61b0.tar.gz", hash = "sha256:feaa30b700acd2a37cc81db5f562ab0c3a5b6cc2453595e98b72c01dcf649584"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.12,<2.0"
+opentelemetry-instrumentation = "0.61b0"
+
+[[package]]
+name = "opentelemetry-instrumentation-marqo"
+version = "0.53.0"
+description = "OpenTelemetry Marqo instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_marqo-0.53.0-py3-none-any.whl", hash = "sha256:7e3ffb849d45ffade704a24118d4f05df13217a13bb421489a2765dd8996df9a"},
+ {file = "opentelemetry_instrumentation_marqo-0.53.0.tar.gz", hash = "sha256:c2756ca5f2dbdbb48140174119e7e6637d7b6af84ae8125aba4fbf58915cd08b"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["marqo"]
+
+[[package]]
+name = "opentelemetry-instrumentation-mcp"
+version = "0.53.0"
+description = "OpenTelemetry mcp instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_mcp-0.53.0-py3-none-any.whl", hash = "sha256:39172f541a9f74035a1e3108fd1760921962a2e8627f01ba3b9e4822e4d25f37"},
+ {file = "opentelemetry_instrumentation_mcp-0.53.0.tar.gz", hash = "sha256:95bb08cd628ea8d347fb243a831a1ddc104cd4b5d88401885da327345b8e890f"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["mcp"]
+
+[[package]]
+name = "opentelemetry-instrumentation-milvus"
+version = "0.53.0"
+description = "OpenTelemetry Milvus instrumentation"
+optional = false
+python-versions = "<4,>=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_milvus-0.53.0-py3-none-any.whl", hash = "sha256:26e74998bd735cea4d31d02137a65b8dbc15dd857acdeea2a23af020f2e4cbe6"},
+ {file = "opentelemetry_instrumentation_milvus-0.53.0.tar.gz", hash = "sha256:613b32bee958dacb05ff3325050b87eedb4697eda9c75c304d1438bbb47f929c"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["pymilvus"]
+
+[[package]]
+name = "opentelemetry-instrumentation-mistralai"
+version = "0.53.0"
+description = "OpenTelemetry Mistral AI instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_mistralai-0.53.0-py3-none-any.whl", hash = "sha256:f23c892366262be6c0011105167e7db455a73a72675ce4529258f66aa24f7fb3"},
+ {file = "opentelemetry_instrumentation_mistralai-0.53.0.tar.gz", hash = "sha256:1d05ab9b303efe32dc3e6fb7c7cc844b32b33355535b3a5f03d0d5100b0db36e"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["mistralai"]
+
+[[package]]
+name = "opentelemetry-instrumentation-ollama"
+version = "0.53.0"
+description = "OpenTelemetry Ollama instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_ollama-0.53.0-py3-none-any.whl", hash = "sha256:44aa9e53b9359b9571e2f84ee5313ea39cb49626db42fa0a27c77441b6f7fe1b"},
+ {file = "opentelemetry_instrumentation_ollama-0.53.0.tar.gz", hash = "sha256:2039ac601ff68f2a1fa97e8af5de94f00ccae67797d07c04a3cc706979bcb4cb"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["ollama"]
+
+[[package]]
+name = "opentelemetry-instrumentation-openai"
+version = "0.53.0"
+description = "OpenTelemetry OpenAI instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_openai-0.53.0-py3-none-any.whl", hash = "sha256:91d9f69673636f5f7d50e5a4782e4526d6df3a1ddfd6ac2d9e15a957f8fd9ad8"},
+ {file = "opentelemetry_instrumentation_openai-0.53.0.tar.gz", hash = "sha256:c0cd83d223d138309af3cc5f53c9c6d22136374bfa00e8f66dff31cd322ef547"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["openai"]
+
+[[package]]
+name = "opentelemetry-instrumentation-openai-agents"
+version = "0.53.0"
+description = "OpenTelemetry OpenAI Agents instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_openai_agents-0.53.0-py3-none-any.whl", hash = "sha256:2f19e3348359de73cef8a97865cad82f6ba3820ab52bba671e83e091b1dca6d4"},
+ {file = "opentelemetry_instrumentation_openai_agents-0.53.0.tar.gz", hash = "sha256:f8877927da7de87bafc9757173ff3ce63b487f952260017299678d290c1c432f"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["openai-agents"]
+
+[[package]]
+name = "opentelemetry-instrumentation-pinecone"
+version = "0.53.0"
+description = "OpenTelemetry Pinecone instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_pinecone-0.53.0-py3-none-any.whl", hash = "sha256:b972992b8dae9af5fb811c52333c54d4ac5d0eff0a71e6a9220b4905aa94eee3"},
+ {file = "opentelemetry_instrumentation_pinecone-0.53.0.tar.gz", hash = "sha256:c7918da22d719d15ad6c0148d79f2d25bfeef3ddb3a10800222d8d8491575fd4"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["pinecone (>=5.1.0,<9)"]
+
+[[package]]
+name = "opentelemetry-instrumentation-qdrant"
+version = "0.53.0"
+description = "OpenTelemetry Qdrant instrumentation"
+optional = false
+python-versions = "<4,>=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_qdrant-0.53.0-py3-none-any.whl", hash = "sha256:448bca5e4ce4061fbb760a51a9732dbb91c07193bb1774a3eb6579d79007e2b3"},
+ {file = "opentelemetry_instrumentation_qdrant-0.53.0.tar.gz", hash = "sha256:4a739516f3864963cab42f8c67c632cb276861b590b852df91124585031e07dc"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["qdrant-client"]
+
+[[package]]
+name = "opentelemetry-instrumentation-redis"
+version = "0.61b0"
+description = "OpenTelemetry Redis instrumentation"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_redis-0.61b0-py3-none-any.whl", hash = "sha256:8d4e850bbb5f8eeafa44c0eac3a007990c7125de187bc9c3659e29ff7e091172"},
+ {file = "opentelemetry_instrumentation_redis-0.61b0.tar.gz", hash = "sha256:ae0fbb56be9a641e621d55b02a7d62977a2c77c5ee760addd79b9b266e46e523"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.12,<2.0"
+opentelemetry-instrumentation = "0.61b0"
+opentelemetry-semantic-conventions = "0.61b0"
+wrapt = ">=1.12.1"
+
+[package.extras]
+instruments = ["redis (>=2.6)"]
+
+[[package]]
+name = "opentelemetry-instrumentation-replicate"
+version = "0.53.0"
+description = "OpenTelemetry Replicate instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_replicate-0.53.0-py3-none-any.whl", hash = "sha256:318b9f59acb6b83b51075d1fbdc5fee1a79867fb24268a030c4e27953ed283b2"},
+ {file = "opentelemetry_instrumentation_replicate-0.53.0.tar.gz", hash = "sha256:ca348b6dd57267d15e715d27eaf33c52113bbb9c27875c479fd868228a812941"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["replicate"]
+
+[[package]]
+name = "opentelemetry-instrumentation-requests"
+version = "0.61b0"
+description = "OpenTelemetry requests instrumentation"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_requests-0.61b0-py3-none-any.whl", hash = "sha256:cce19b379949fe637eb73ba39b02c57d2d0805447ca6d86534aa33fcb141f683"},
+ {file = "opentelemetry_instrumentation_requests-0.61b0.tar.gz", hash = "sha256:15f879ce8fb206bd7e6fdc61663ea63481040a845218c0cf42902ce70bd7e9d9"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.12,<2.0"
+opentelemetry-instrumentation = "0.61b0"
+opentelemetry-semantic-conventions = "0.61b0"
+opentelemetry-util-http = "0.61b0"
+
+[package.extras]
+instruments = ["requests (>=2.0,<3.0)"]
+
+[[package]]
+name = "opentelemetry-instrumentation-sagemaker"
+version = "0.53.0"
+description = "OpenTelemetry SageMaker instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_sagemaker-0.53.0-py3-none-any.whl", hash = "sha256:d20e07fe7765908bbd58a6e00ac970a38482bf05ac7bd737027abd92507fc367"},
+ {file = "opentelemetry_instrumentation_sagemaker-0.53.0.tar.gz", hash = "sha256:08d34be9f9cf6a12457b90713c8589ec5cbc3c87ddff862543f5590549fd202a"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["boto3"]
+
+[[package]]
+name = "opentelemetry-instrumentation-sqlalchemy"
+version = "0.61b0"
+description = "OpenTelemetry SQLAlchemy instrumentation"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_sqlalchemy-0.61b0-py3-none-any.whl", hash = "sha256:f115e0be54116ba4c327b8d7b68db4045ee18d44439d888ab8130a549c50d1c1"},
+ {file = "opentelemetry_instrumentation_sqlalchemy-0.61b0.tar.gz", hash = "sha256:13a3a159a2043a52f0180b3757fbaa26741b0e08abb50deddce4394c118956e6"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.12,<2.0"
+opentelemetry-instrumentation = "0.61b0"
+opentelemetry-semantic-conventions = "0.61b0"
+packaging = ">=21.0"
+wrapt = ">=1.11.2"
+
+[package.extras]
+instruments = ["sqlalchemy (>=1.0.0,<2.1.0)"]
+
+[[package]]
+name = "opentelemetry-instrumentation-threading"
+version = "0.61b0"
+description = "Thread context propagation support for OpenTelemetry"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_threading-0.61b0-py3-none-any.whl", hash = "sha256:735f4a1dc964202fc8aff475efc12bb64e6566f22dff52d5cb5de864b3fe1a70"},
+ {file = "opentelemetry_instrumentation_threading-0.61b0.tar.gz", hash = "sha256:38e0263c692d15a7a458b3fa0286d29290448fa4ac4c63045edac438c6113433"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.12,<2.0"
+opentelemetry-instrumentation = "0.61b0"
+wrapt = ">=1.0.0,<2.0.0"
+
+[[package]]
+name = "opentelemetry-instrumentation-together"
+version = "0.53.0"
+description = "OpenTelemetry Together AI instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_together-0.53.0-py3-none-any.whl", hash = "sha256:686ebf9b181aa942355f44fed2fbb2c7e04174f0622127f7a80c41730fe1bc8c"},
+ {file = "opentelemetry_instrumentation_together-0.53.0.tar.gz", hash = "sha256:f34c411bdc0ed1f72d33ca05ef4d16fcd8935b2ce18b6d9f625cec91a290b3b9"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["together"]
+
+[[package]]
+name = "opentelemetry-instrumentation-transformers"
+version = "0.53.0"
+description = "OpenTelemetry transformers instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_transformers-0.53.0-py3-none-any.whl", hash = "sha256:c2dff5f32579f702842d98dd53b626f25e859a6d9cb9e46f4807a46647f8d6a5"},
+ {file = "opentelemetry_instrumentation_transformers-0.53.0.tar.gz", hash = "sha256:c29c2fd97b01e0ca111996e22a4d4fa5da023b61c643e385e6ce62f2a46b18a1"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["transformers"]
+
+[[package]]
+name = "opentelemetry-instrumentation-urllib3"
+version = "0.61b0"
+description = "OpenTelemetry urllib3 instrumentation"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_urllib3-0.61b0-py3-none-any.whl", hash = "sha256:9644f8c07870266e52f129e6226859ff3a35192555abe46fa0ef9bbbf5b6b46d"},
+ {file = "opentelemetry_instrumentation_urllib3-0.61b0.tar.gz", hash = "sha256:f00037bc8ff813153c4b79306f55a14618c40469a69c6c03a3add29dc7e8b928"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.12,<2.0"
+opentelemetry-instrumentation = "0.61b0"
+opentelemetry-semantic-conventions = "0.61b0"
+opentelemetry-util-http = "0.61b0"
+wrapt = ">=1.0.0,<2.0.0"
+
+[package.extras]
+instruments = ["urllib3 (>=1.0.0,<3.0.0)"]
+
+[[package]]
+name = "opentelemetry-instrumentation-vertexai"
+version = "0.53.0"
+description = "OpenTelemetry Vertex AI instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_vertexai-0.53.0-py3-none-any.whl", hash = "sha256:8f2d610e3da3e717069a439d61a3adfa2b375d4658de03f2e05131a3cbbd4681"},
+ {file = "opentelemetry_instrumentation_vertexai-0.53.0.tar.gz", hash = "sha256:436ebbb284af8c067d5ea98e349c53692d801989f61769481b45b75774756fc8"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["google-cloud-aiplatform"]
+
+[[package]]
+name = "opentelemetry-instrumentation-voyageai"
+version = "0.53.0"
+description = "OpenTelemetry Voyage AI instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_voyageai-0.53.0-py3-none-any.whl", hash = "sha256:43342c73dc6cafe4e7d7c6ce66fc5964481d43d1dd71de55ef1fcd5d6c72c6e3"},
+ {file = "opentelemetry_instrumentation_voyageai-0.53.0.tar.gz", hash = "sha256:8382bbbf00d32dcf38d6b0faabff6bd933163d46a5a4de3e86c49114bb00c9b5"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["voyageai"]
+
+[[package]]
+name = "opentelemetry-instrumentation-watsonx"
+version = "0.53.0"
+description = "OpenTelemetry IBM Watsonx Instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_watsonx-0.53.0-py3-none-any.whl", hash = "sha256:d7567f1f58fb78e37aee04a154f5aedd116628930835d10e78267e122f7f5589"},
+ {file = "opentelemetry_instrumentation_watsonx-0.53.0.tar.gz", hash = "sha256:e0064eb9f173cd06e685c2a55f8afc12a603306ca22d946864ba7db34920edd3"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["ibm-watson-machine-learning"]
+
+[[package]]
+name = "opentelemetry-instrumentation-weaviate"
+version = "0.53.0"
+description = "OpenTelemetry Weaviate instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_weaviate-0.53.0-py3-none-any.whl", hash = "sha256:2d825fe52e83db0c3db8cc5536ea8cede80844e51d2c64a88eb4b3531c55731a"},
+ {file = "opentelemetry_instrumentation_weaviate-0.53.0.tar.gz", hash = "sha256:f843fdac67d07ac99039d889f4f20e36e69358df26de943f490cccaa47da79bd"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+
+[package.extras]
+instruments = ["weaviate-client"]
+
+[[package]]
+name = "opentelemetry-instrumentation-writer"
+version = "0.53.0"
+description = "OpenTelemetry Writer instrumentation"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_instrumentation_writer-0.53.0-py3-none-any.whl", hash = "sha256:04a1c1840ba170fae53b48d80462cb572166ad1e3434969a1293a1dfc68f9dfe"},
+ {file = "opentelemetry_instrumentation_writer-0.53.0.tar.gz", hash = "sha256:802598df8ba6a131fdd2912aa0b7fc4082f541e2d79a57a0ef7fbec78691158d"},
+]
+
+[package.dependencies]
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-instrumentation = ">=0.59b0"
+opentelemetry-semantic-conventions = ">=0.59b0"
+opentelemetry-semantic-conventions-ai = ">=0.4.11"
+
+[package.extras]
+instruments = ["writer"]
+
+[[package]]
+name = "opentelemetry-proto"
+version = "1.40.0"
+description = "OpenTelemetry Python Proto"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_proto-1.40.0-py3-none-any.whl", hash = "sha256:266c4385d88923a23d63e353e9761af0f47a6ed0d486979777fe4de59dc9b25f"},
+ {file = "opentelemetry_proto-1.40.0.tar.gz", hash = "sha256:03f639ca129ba513f5819810f5b1f42bcb371391405d99c168fe6937c62febcd"},
+]
+
+[package.dependencies]
+protobuf = ">=5.0,<7.0"
+
+[[package]]
+name = "opentelemetry-sdk"
+version = "1.40.0"
+description = "OpenTelemetry Python SDK"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_sdk-1.40.0-py3-none-any.whl", hash = "sha256:787d2154a71f4b3d81f20524a8ce061b7db667d24e46753f32a7bc48f1c1f3f1"},
+ {file = "opentelemetry_sdk-1.40.0.tar.gz", hash = "sha256:18e9f5ec20d859d268c7cb3c5198c8d105d073714db3de50b593b8c1345a48f2"},
+]
+
+[package.dependencies]
+opentelemetry-api = "1.40.0"
+opentelemetry-semantic-conventions = "0.61b0"
+typing-extensions = ">=4.5.0"
+
+[[package]]
+name = "opentelemetry-semantic-conventions"
+version = "0.61b0"
+description = "OpenTelemetry Semantic Conventions"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_semantic_conventions-0.61b0-py3-none-any.whl", hash = "sha256:fa530a96be229795f8cef353739b618148b0fe2b4b3f005e60e262926c4d38e2"},
+ {file = "opentelemetry_semantic_conventions-0.61b0.tar.gz", hash = "sha256:072f65473c5d7c6dc0355b27d6c9d1a679d63b6d4b4b16a9773062cb7e31192a"},
+]
+
+[package.dependencies]
+opentelemetry-api = "1.40.0"
+typing-extensions = ">=4.5.0"
+
+[[package]]
+name = "opentelemetry-semantic-conventions-ai"
+version = "0.4.15"
+description = "OpenTelemetry Semantic Conventions Extension for Large Language Models"
+optional = false
+python-versions = "<4,>=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_semantic_conventions_ai-0.4.15-py3-none-any.whl", hash = "sha256:011461f1fba30f27035c49ab3b8344367adc72da0a6c8d3c7428303c6779edc9"},
+ {file = "opentelemetry_semantic_conventions_ai-0.4.15.tar.gz", hash = "sha256:12de172d1e11d21c6e82bbf578c7e8a713589a7fda76af9ed785632564a28b81"},
+]
+
+[package.dependencies]
+opentelemetry-sdk = ">=1.38.0,<2"
+opentelemetry-semantic-conventions = ">=0.59b0"
+
+[[package]]
+name = "opentelemetry-util-http"
+version = "0.61b0"
+description = "Web util for OpenTelemetry"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_util_http-0.61b0-py3-none-any.whl", hash = "sha256:8e715e848233e9527ea47e275659ea60a57a75edf5206a3b937e236a6da5fc33"},
+ {file = "opentelemetry_util_http-0.61b0.tar.gz", hash = "sha256:1039cb891334ad2731affdf034d8fb8b48c239af9b6dd295e5fabd07f1c95572"},
+]
+
[[package]]
name = "orjson"
version = "3.11.2"
@@ -4369,13 +5477,26 @@ files = [
[package.dependencies]
ptyprocess = ">=0.5"
+[[package]]
+name = "phonenumbers"
+version = "9.0.25"
+description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
+optional = false
+python-versions = ">=2.5"
+groups = ["main"]
+files = [
+ {file = "phonenumbers-9.0.25-py2.py3-none-any.whl", hash = "sha256:b1fd6c20d588f5bcd40af3899d727a9f536364211ec6eac554fcd75ca58992a3"},
+ {file = "phonenumbers-9.0.25.tar.gz", hash = "sha256:a5f236fa384c6a77378d7836c8e486ade5f984ad2e8e6cc0dbe5124315cdc81b"},
+]
+
[[package]]
name = "pillow"
version = "12.1.1"
description = "Python Imaging Library (fork)"
-optional = false
+optional = true
python-versions = ">=3.10"
groups = ["main"]
+markers = "extra == \"sandbox\""
files = [
{file = "pillow-12.1.1-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1f1625b72740fdda5d77b4def688eb8fd6490975d06b909fd19f13f391e077e0"},
{file = "pillow-12.1.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:178aa072084bd88ec759052feca8e56cbb14a60b39322b99a049e58090479713"},
@@ -5235,21 +6356,21 @@ diagrams = ["jinja2", "railroad-diagrams"]
[[package]]
name = "pypdf"
-version = "6.6.2"
+version = "6.7.5"
description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files"
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"sandbox\""
files = [
- {file = "pypdf-6.6.2-py3-none-any.whl", hash = "sha256:44c0c9811cfb3b83b28f1c3d054531d5b8b81abaedee0d8cb403650d023832ba"},
- {file = "pypdf-6.6.2.tar.gz", hash = "sha256:0a3ea3b3303982333404e22d8f75d7b3144f9cf4b2970b96856391a516f9f016"},
+ {file = "pypdf-6.7.5-py3-none-any.whl", hash = "sha256:07ba7f1d6e6d9aa2a17f5452e320a84718d4ce863367f7ede2fd72280349ab13"},
+ {file = "pypdf-6.7.5.tar.gz", hash = "sha256:40bb2e2e872078655f12b9b89e2f900888bb505e88a82150b64f9f34fa25651d"},
]
[package.extras]
crypto = ["cryptography"]
cryptodome = ["PyCryptodome"]
-dev = ["black", "flit", "pip-tools", "pre-commit", "pytest-cov", "pytest-socket", "pytest-timeout", "pytest-xdist", "wheel"]
+dev = ["flit", "pip-tools", "pre-commit", "pytest-cov", "pytest-socket", "pytest-timeout", "pytest-xdist", "wheel"]
docs = ["myst_parser", "sphinx", "sphinx_rtd_theme"]
full = ["Pillow (>=8.0.0)", "cryptography"]
image = ["Pillow (>=8.0.0)"]
@@ -5451,6 +6572,21 @@ Pillow = ">=3.3.2"
typing-extensions = ">=4.9.0"
XlsxWriter = ">=0.5.7"
+[[package]]
+name = "python-stdnum"
+version = "2.2"
+description = "Python module to handle standardized numbers and codes"
+optional = false
+python-versions = ">=3.8"
+groups = ["main"]
+files = [
+ {file = "python_stdnum-2.2-py3-none-any.whl", hash = "sha256:bdf98fd117a0ca152e4047aa8ad254bae63853d4e915ddd4e0effb33ba0e9260"},
+ {file = "python_stdnum-2.2.tar.gz", hash = "sha256:e95fcfa858a703d4a40130cb3eaac133c60d8808a7f3c98efeedac968c2479b9"},
+]
+
+[package.extras]
+soap = ["zeep"]
+
[[package]]
name = "pytz"
version = "2025.2"
@@ -6122,10 +7258,173 @@ files = [
]
[package.dependencies]
-botocore = ">=1.37.4,<2.0a.0"
+botocore = ">=1.37.4,<2.0a0"
[package.extras]
-crt = ["botocore[crt] (>=1.37.4,<2.0a.0)"]
+crt = ["botocore[crt] (>=1.37.4,<2.0a0)"]
+
+[[package]]
+name = "scikit-learn"
+version = "1.8.0"
+description = "A set of python modules for machine learning and data mining"
+optional = false
+python-versions = ">=3.11"
+groups = ["main"]
+files = [
+ {file = "scikit_learn-1.8.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:146b4d36f800c013d267b29168813f7a03a43ecd2895d04861f1240b564421da"},
+ {file = "scikit_learn-1.8.0-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:f984ca4b14914e6b4094c5d52a32ea16b49832c03bd17a110f004db3c223e8e1"},
+ {file = "scikit_learn-1.8.0-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5e30adb87f0cc81c7690a84f7932dd66be5bac57cfe16b91cb9151683a4a2d3b"},
+ {file = "scikit_learn-1.8.0-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ada8121bcb4dac28d930febc791a69f7cb1673c8495e5eee274190b73a4559c1"},
+ {file = "scikit_learn-1.8.0-cp311-cp311-win_amd64.whl", hash = "sha256:c57b1b610bd1f40ba43970e11ce62821c2e6569e4d74023db19c6b26f246cb3b"},
+ {file = "scikit_learn-1.8.0-cp311-cp311-win_arm64.whl", hash = "sha256:2838551e011a64e3053ad7618dda9310175f7515f1742fa2d756f7c874c05961"},
+ {file = "scikit_learn-1.8.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:5fb63362b5a7ddab88e52b6dbb47dac3fd7dafeee740dc6c8d8a446ddedade8e"},
+ {file = "scikit_learn-1.8.0-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:5025ce924beccb28298246e589c691fe1b8c1c96507e6d27d12c5fadd85bfd76"},
+ {file = "scikit_learn-1.8.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4496bb2cf7a43ce1a2d7524a79e40bc5da45cf598dbf9545b7e8316ccba47bb4"},
+ {file = "scikit_learn-1.8.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a0bcfe4d0d14aec44921545fd2af2338c7471de9cb701f1da4c9d85906ab847a"},
+ {file = "scikit_learn-1.8.0-cp312-cp312-win_amd64.whl", hash = "sha256:35c007dedb2ffe38fe3ee7d201ebac4a2deccd2408e8621d53067733e3c74809"},
+ {file = "scikit_learn-1.8.0-cp312-cp312-win_arm64.whl", hash = "sha256:8c497fff237d7b4e07e9ef1a640887fa4fb765647f86fbe00f969ff6280ce2bb"},
+ {file = "scikit_learn-1.8.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:0d6ae97234d5d7079dc0040990a6f7aeb97cb7fa7e8945f1999a429b23569e0a"},
+ {file = "scikit_learn-1.8.0-cp313-cp313-macosx_12_0_arm64.whl", hash = "sha256:edec98c5e7c128328124a029bceb09eda2d526997780fef8d65e9a69eead963e"},
+ {file = "scikit_learn-1.8.0-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:74b66d8689d52ed04c271e1329f0c61635bcaf5b926db9b12d58914cdc01fe57"},
+ {file = "scikit_learn-1.8.0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8fdf95767f989b0cfedb85f7ed8ca215d4be728031f56ff5a519ee1e3276dc2e"},
+ {file = "scikit_learn-1.8.0-cp313-cp313-win_amd64.whl", hash = "sha256:2de443b9373b3b615aec1bb57f9baa6bb3a9bd093f1269ba95c17d870422b271"},
+ {file = "scikit_learn-1.8.0-cp313-cp313-win_arm64.whl", hash = "sha256:eddde82a035681427cbedded4e6eff5e57fa59216c2e3e90b10b19ab1d0a65c3"},
+ {file = "scikit_learn-1.8.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:7cc267b6108f0a1499a734167282c00c4ebf61328566b55ef262d48e9849c735"},
+ {file = "scikit_learn-1.8.0-cp313-cp313t-macosx_12_0_arm64.whl", hash = "sha256:fe1c011a640a9f0791146011dfd3c7d9669785f9fed2b2a5f9e207536cf5c2fd"},
+ {file = "scikit_learn-1.8.0-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:72358cce49465d140cc4e7792015bb1f0296a9742d5622c67e31399b75468b9e"},
+ {file = "scikit_learn-1.8.0-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:80832434a6cc114f5219211eec13dcbc16c2bac0e31ef64c6d346cde3cf054cb"},
+ {file = "scikit_learn-1.8.0-cp313-cp313t-win_amd64.whl", hash = "sha256:ee787491dbfe082d9c3013f01f5991658b0f38aa8177e4cd4bf434c58f551702"},
+ {file = "scikit_learn-1.8.0-cp313-cp313t-win_arm64.whl", hash = "sha256:bf97c10a3f5a7543f9b88cbf488d33d175e9146115a451ae34568597ba33dcde"},
+ {file = "scikit_learn-1.8.0-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:c22a2da7a198c28dd1a6e1136f19c830beab7fdca5b3e5c8bba8394f8a5c45b3"},
+ {file = "scikit_learn-1.8.0-cp314-cp314-macosx_12_0_arm64.whl", hash = "sha256:6b595b07a03069a2b1740dc08c2299993850ea81cce4fe19b2421e0c970de6b7"},
+ {file = "scikit_learn-1.8.0-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:29ffc74089f3d5e87dfca4c2c8450f88bdc61b0fc6ed5d267f3988f19a1309f6"},
+ {file = "scikit_learn-1.8.0-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fb65db5d7531bccf3a4f6bec3462223bea71384e2cda41da0f10b7c292b9e7c4"},
+ {file = "scikit_learn-1.8.0-cp314-cp314-win_amd64.whl", hash = "sha256:56079a99c20d230e873ea40753102102734c5953366972a71d5cb39a32bc40c6"},
+ {file = "scikit_learn-1.8.0-cp314-cp314-win_arm64.whl", hash = "sha256:3bad7565bc9cf37ce19a7c0d107742b320c1285df7aab1a6e2d28780df167242"},
+ {file = "scikit_learn-1.8.0-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:4511be56637e46c25721e83d1a9cea9614e7badc7040c4d573d75fbe257d6fd7"},
+ {file = "scikit_learn-1.8.0-cp314-cp314t-macosx_12_0_arm64.whl", hash = "sha256:a69525355a641bf8ef136a7fa447672fb54fe8d60cab5538d9eb7c6438543fb9"},
+ {file = "scikit_learn-1.8.0-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c2656924ec73e5939c76ac4c8b026fc203b83d8900362eb2599d8aee80e4880f"},
+ {file = "scikit_learn-1.8.0-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:15fc3b5d19cc2be65404786857f2e13c70c83dd4782676dd6814e3b89dc8f5b9"},
+ {file = "scikit_learn-1.8.0-cp314-cp314t-win_amd64.whl", hash = "sha256:00d6f1d66fbcf4eba6e356e1420d33cc06c70a45bb1363cd6f6a8e4ebbbdece2"},
+ {file = "scikit_learn-1.8.0-cp314-cp314t-win_arm64.whl", hash = "sha256:f28dd15c6bb0b66ba09728cf09fd8736c304be29409bd8445a080c1280619e8c"},
+ {file = "scikit_learn-1.8.0.tar.gz", hash = "sha256:9bccbb3b40e3de10351f8f5068e105d0f4083b1a65fa07b6634fbc401a6287fd"},
+]
+
+[package.dependencies]
+joblib = ">=1.3.0"
+numpy = ">=1.24.1"
+scipy = ">=1.10.0"
+threadpoolctl = ">=3.2.0"
+
+[package.extras]
+benchmark = ["matplotlib (>=3.6.1)", "memory_profiler (>=0.57.0)", "pandas (>=1.5.0)"]
+build = ["cython (>=3.1.2)", "meson-python (>=0.17.1)", "numpy (>=1.24.1)", "scipy (>=1.10.0)"]
+docs = ["Pillow (>=10.1.0)", "matplotlib (>=3.6.1)", "memory_profiler (>=0.57.0)", "numpydoc (>=1.2.0)", "pandas (>=1.5.0)", "plotly (>=5.18.0)", "polars (>=0.20.30)", "pooch (>=1.8.0)", "pydata-sphinx-theme (>=0.15.3)", "scikit-image (>=0.22.0)", "seaborn (>=0.13.0)", "sphinx (>=7.3.7)", "sphinx-copybutton (>=0.5.2)", "sphinx-design (>=0.6.0)", "sphinx-gallery (>=0.17.1)", "sphinx-prompt (>=1.4.0)", "sphinx-remove-toctrees (>=1.0.0.post1)", "sphinxcontrib-sass (>=0.3.4)", "sphinxext-opengraph (>=0.9.1)", "towncrier (>=24.8.0)"]
+examples = ["matplotlib (>=3.6.1)", "pandas (>=1.5.0)", "plotly (>=5.18.0)", "pooch (>=1.8.0)", "scikit-image (>=0.22.0)", "seaborn (>=0.13.0)"]
+install = ["joblib (>=1.3.0)", "numpy (>=1.24.1)", "scipy (>=1.10.0)", "threadpoolctl (>=3.2.0)"]
+maintenance = ["conda-lock (==3.0.1)"]
+tests = ["matplotlib (>=3.6.1)", "mypy (>=1.15)", "numpydoc (>=1.2.0)", "pandas (>=1.5.0)", "polars (>=0.20.30)", "pooch (>=1.8.0)", "pyamg (>=5.0.0)", "pyarrow (>=12.0.0)", "pytest (>=7.1.2)", "pytest-cov (>=2.9.0)", "ruff (>=0.11.7)"]
+
+[[package]]
+name = "scipy"
+version = "1.17.1"
+description = "Fundamental algorithms for scientific computing in Python"
+optional = false
+python-versions = ">=3.11"
+groups = ["main"]
+files = [
+ {file = "scipy-1.17.1-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:1f95b894f13729334fb990162e911c9e5dc1ab390c58aa6cbecb389c5b5e28ec"},
+ {file = "scipy-1.17.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:e18f12c6b0bc5a592ed23d3f7b891f68fd7f8241d69b7883769eb5d5dfb52696"},
+ {file = "scipy-1.17.1-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:a3472cfbca0a54177d0faa68f697d8ba4c80bbdc19908c3465556d9f7efce9ee"},
+ {file = "scipy-1.17.1-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:766e0dc5a616d026a3a1cffa379af959671729083882f50307e18175797b3dfd"},
+ {file = "scipy-1.17.1-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:744b2bf3640d907b79f3fd7874efe432d1cf171ee721243e350f55234b4cec4c"},
+ {file = "scipy-1.17.1-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:43af8d1f3bea642559019edfe64e9b11192a8978efbd1539d7bc2aaa23d92de4"},
+ {file = "scipy-1.17.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:cd96a1898c0a47be4520327e01f874acfd61fb48a9420f8aa9f6483412ffa444"},
+ {file = "scipy-1.17.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:4eb6c25dd62ee8d5edf68a8e1c171dd71c292fdae95d8aeb3dd7d7de4c364082"},
+ {file = "scipy-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:d30e57c72013c2a4fe441c2fcb8e77b14e152ad48b5464858e07e2ad9fbfceff"},
+ {file = "scipy-1.17.1-cp311-cp311-win_arm64.whl", hash = "sha256:9ecb4efb1cd6e8c4afea0daa91a87fbddbce1b99d2895d151596716c0b2e859d"},
+ {file = "scipy-1.17.1-cp312-cp312-macosx_10_14_x86_64.whl", hash = "sha256:35c3a56d2ef83efc372eaec584314bd0ef2e2f0d2adb21c55e6ad5b344c0dcb8"},
+ {file = "scipy-1.17.1-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:fcb310ddb270a06114bb64bbe53c94926b943f5b7f0842194d585c65eb4edd76"},
+ {file = "scipy-1.17.1-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:cc90d2e9c7e5c7f1a482c9875007c095c3194b1cfedca3c2f3291cdc2bc7c086"},
+ {file = "scipy-1.17.1-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:c80be5ede8f3f8eded4eff73cc99a25c388ce98e555b17d31da05287015ffa5b"},
+ {file = "scipy-1.17.1-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e19ebea31758fac5893a2ac360fedd00116cbb7628e650842a6691ba7ca28a21"},
+ {file = "scipy-1.17.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:02ae3b274fde71c5e92ac4d54bc06c42d80e399fec704383dcd99b301df37458"},
+ {file = "scipy-1.17.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:8a604bae87c6195d8b1045eddece0514d041604b14f2727bbc2b3020172045eb"},
+ {file = "scipy-1.17.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:f590cd684941912d10becc07325a3eeb77886fe981415660d9265c4c418d0bea"},
+ {file = "scipy-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:41b71f4a3a4cab9d366cd9065b288efc4d4f3c0b37a91a8e0947fb5bd7f31d87"},
+ {file = "scipy-1.17.1-cp312-cp312-win_arm64.whl", hash = "sha256:f4115102802df98b2b0db3cce5cb9b92572633a1197c77b7553e5203f284a5b3"},
+ {file = "scipy-1.17.1-cp313-cp313-macosx_10_14_x86_64.whl", hash = "sha256:5e3c5c011904115f88a39308379c17f91546f77c1667cea98739fe0fccea804c"},
+ {file = "scipy-1.17.1-cp313-cp313-macosx_12_0_arm64.whl", hash = "sha256:6fac755ca3d2c3edcb22f479fceaa241704111414831ddd3bc6056e18516892f"},
+ {file = "scipy-1.17.1-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:7ff200bf9d24f2e4d5dc6ee8c3ac64d739d3a89e2326ba68aaf6c4a2b838fd7d"},
+ {file = "scipy-1.17.1-cp313-cp313-macosx_14_0_x86_64.whl", hash = "sha256:4b400bdc6f79fa02a4d86640310dde87a21fba0c979efff5248908c6f15fad1b"},
+ {file = "scipy-1.17.1-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:2b64ca7d4aee0102a97f3ba22124052b4bd2152522355073580bf4845e2550b6"},
+ {file = "scipy-1.17.1-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:581b2264fc0aa555f3f435a5944da7504ea3a065d7029ad60e7c3d1ae09c5464"},
+ {file = "scipy-1.17.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:beeda3d4ae615106d7094f7e7cef6218392e4465cc95d25f900bebabfded0950"},
+ {file = "scipy-1.17.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6609bc224e9568f65064cfa72edc0f24ee6655b47575954ec6339534b2798369"},
+ {file = "scipy-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:37425bc9175607b0268f493d79a292c39f9d001a357bebb6b88fdfaff13f6448"},
+ {file = "scipy-1.17.1-cp313-cp313-win_arm64.whl", hash = "sha256:5cf36e801231b6a2059bf354720274b7558746f3b1a4efb43fcf557ccd484a87"},
+ {file = "scipy-1.17.1-cp313-cp313t-macosx_10_14_x86_64.whl", hash = "sha256:d59c30000a16d8edc7e64152e30220bfbd724c9bbb08368c054e24c651314f0a"},
+ {file = "scipy-1.17.1-cp313-cp313t-macosx_12_0_arm64.whl", hash = "sha256:010f4333c96c9bb1a4516269e33cb5917b08ef2166d5556ca2fd9f082a9e6ea0"},
+ {file = "scipy-1.17.1-cp313-cp313t-macosx_14_0_arm64.whl", hash = "sha256:2ceb2d3e01c5f1d83c4189737a42d9cb2fc38a6eeed225e7515eef71ad301dce"},
+ {file = "scipy-1.17.1-cp313-cp313t-macosx_14_0_x86_64.whl", hash = "sha256:844e165636711ef41f80b4103ed234181646b98a53c8f05da12ca5ca289134f6"},
+ {file = "scipy-1.17.1-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:158dd96d2207e21c966063e1635b1063cd7787b627b6f07305315dd73d9c679e"},
+ {file = "scipy-1.17.1-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:74cbb80d93260fe2ffa334efa24cb8f2f0f622a9b9febf8b483c0b865bfb3475"},
+ {file = "scipy-1.17.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:dbc12c9f3d185f5c737d801da555fb74b3dcfa1a50b66a1a93e09190f41fab50"},
+ {file = "scipy-1.17.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:94055a11dfebe37c656e70317e1996dc197e1a15bbcc351bcdd4610e128fe1ca"},
+ {file = "scipy-1.17.1-cp313-cp313t-win_amd64.whl", hash = "sha256:e30bdeaa5deed6bc27b4cc490823cd0347d7dae09119b8803ae576ea0ce52e4c"},
+ {file = "scipy-1.17.1-cp313-cp313t-win_arm64.whl", hash = "sha256:a720477885a9d2411f94a93d16f9d89bad0f28ca23c3f8daa521e2dcc3f44d49"},
+ {file = "scipy-1.17.1-cp314-cp314-macosx_10_14_x86_64.whl", hash = "sha256:a48a72c77a310327f6a3a920092fa2b8fd03d7deaa60f093038f22d98e096717"},
+ {file = "scipy-1.17.1-cp314-cp314-macosx_12_0_arm64.whl", hash = "sha256:45abad819184f07240d8a696117a7aacd39787af9e0b719d00285549ed19a1e9"},
+ {file = "scipy-1.17.1-cp314-cp314-macosx_14_0_arm64.whl", hash = "sha256:3fd1fcdab3ea951b610dc4cef356d416d5802991e7e32b5254828d342f7b7e0b"},
+ {file = "scipy-1.17.1-cp314-cp314-macosx_14_0_x86_64.whl", hash = "sha256:7bdf2da170b67fdf10bca777614b1c7d96ae3ca5794fd9587dce41eb2966e866"},
+ {file = "scipy-1.17.1-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:adb2642e060a6549c343603a3851ba76ef0b74cc8c079a9a58121c7ec9fe2350"},
+ {file = "scipy-1.17.1-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:eee2cfda04c00a857206a4330f0c5e3e56535494e30ca445eb19ec624ae75118"},
+ {file = "scipy-1.17.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:d2650c1fb97e184d12d8ba010493ee7b322864f7d3d00d3f9bb97d9c21de4068"},
+ {file = "scipy-1.17.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:08b900519463543aa604a06bec02461558a6e1cef8fdbb8098f77a48a83c8118"},
+ {file = "scipy-1.17.1-cp314-cp314-win_amd64.whl", hash = "sha256:3877ac408e14da24a6196de0ddcace62092bfc12a83823e92e49e40747e52c19"},
+ {file = "scipy-1.17.1-cp314-cp314-win_arm64.whl", hash = "sha256:f8885db0bc2bffa59d5c1b72fad7a6a92d3e80e7257f967dd81abb553a90d293"},
+ {file = "scipy-1.17.1-cp314-cp314t-macosx_10_14_x86_64.whl", hash = "sha256:1cc682cea2ae55524432f3cdff9e9a3be743d52a7443d0cba9017c23c87ae2f6"},
+ {file = "scipy-1.17.1-cp314-cp314t-macosx_12_0_arm64.whl", hash = "sha256:2040ad4d1795a0ae89bfc7e8429677f365d45aa9fd5e4587cf1ea737f927b4a1"},
+ {file = "scipy-1.17.1-cp314-cp314t-macosx_14_0_arm64.whl", hash = "sha256:131f5aaea57602008f9822e2115029b55d4b5f7c070287699fe45c661d051e39"},
+ {file = "scipy-1.17.1-cp314-cp314t-macosx_14_0_x86_64.whl", hash = "sha256:9cdc1a2fcfd5c52cfb3045feb399f7b3ce822abdde3a193a6b9a60b3cb5854ca"},
+ {file = "scipy-1.17.1-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6e3dcd57ab780c741fde8dc68619de988b966db759a3c3152e8e9142c26295ad"},
+ {file = "scipy-1.17.1-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:a9956e4d4f4a301ebf6cde39850333a6b6110799d470dbbb1e25326ac447f52a"},
+ {file = "scipy-1.17.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:a4328d245944d09fd639771de275701ccadf5f781ba0ff092ad141e017eccda4"},
+ {file = "scipy-1.17.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:a77cbd07b940d326d39a1d1b37817e2ee4d79cb30e7338f3d0cddffae70fcaa2"},
+ {file = "scipy-1.17.1-cp314-cp314t-win_amd64.whl", hash = "sha256:eb092099205ef62cd1782b006658db09e2fed75bffcae7cc0d44052d8aa0f484"},
+ {file = "scipy-1.17.1-cp314-cp314t-win_arm64.whl", hash = "sha256:200e1050faffacc162be6a486a984a0497866ec54149a01270adc8a59b7c7d21"},
+ {file = "scipy-1.17.1.tar.gz", hash = "sha256:95d8e012d8cb8816c226aef832200b1d45109ed4464303e997c5b13122b297c0"},
+]
+
+[package.dependencies]
+numpy = ">=1.26.4,<2.7"
+
+[package.extras]
+dev = ["click (<8.3.0)", "cython-lint (>=0.12.2)", "mypy (==1.10.0)", "pycodestyle", "ruff (>=0.12.0)", "spin", "types-psutil", "typing_extensions"]
+doc = ["intersphinx_registry", "jupyterlite-pyodide-kernel", "jupyterlite-sphinx (>=0.19.1)", "jupytext", "linkify-it-py", "matplotlib (>=3.5)", "myst-nb (>=1.2.0)", "numpydoc", "pooch", "pydata-sphinx-theme (>=0.15.2)", "sphinx (>=5.0.0,<8.2.0)", "sphinx-copybutton", "sphinx-design (>=0.4.0)", "tabulate"]
+test = ["Cython", "array-api-strict (>=2.3.1)", "asv", "gmpy2", "hypothesis (>=6.30)", "meson", "mpmath", "ninja ; sys_platform != \"emscripten\"", "pooch", "pytest (>=8.0.0)", "pytest-cov", "pytest-timeout", "pytest-xdist", "scikit-umfpack", "threadpoolctl"]
+
+[[package]]
+name = "scrubadub"
+version = "2.0.1"
+description = "Clean personally identifiable information from dirty dirty text."
+optional = false
+python-versions = "*"
+groups = ["main"]
+files = [
+ {file = "scrubadub-2.0.1-py3-none-any.whl", hash = "sha256:44b9004998a03aff4c6b5d9073a52895081742f994470083a7be610b373e62b7"},
+ {file = "scrubadub-2.0.1.tar.gz", hash = "sha256:52a1fb8aa9bc0226043e02c3ec22d450bd4ebeede9e7e8db2def7c89b37c5aad"},
+]
+
+[package.dependencies]
+catalogue = "*"
+dateparser = "*"
+faker = "*"
+phonenumbers = "*"
+python-stdnum = "*"
+scikit-learn = "*"
+textblob = "0.15.3"
+typing-extensions = "*"
[[package]]
name = "setuptools"
@@ -6148,81 +7447,6 @@ enabler = ["pytest-enabler (>=2.2)"]
test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "ini2toml[lite] (>=0.14)", "jaraco.develop (>=7.21) ; python_version >= \"3.9\" and sys_platform != \"cygwin\"", "jaraco.envs (>=2.2)", "jaraco.path (>=3.7.2)", "jaraco.test (>=5.5)", "packaging (>=24.2)", "pip (>=19.1)", "pyproject-hooks (!=1.1)", "pytest (>=6,!=8.1.*)", "pytest-home (>=0.5)", "pytest-perf ; sys_platform != \"cygwin\"", "pytest-subprocess", "pytest-timeout", "pytest-xdist (>=3)", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel (>=0.44.0)"]
type = ["importlib_metadata (>=7.0.2) ; python_version < \"3.10\"", "jaraco.develop (>=7.21) ; sys_platform != \"cygwin\"", "mypy (==1.14.*)", "pytest-mypy"]
-[[package]]
-name = "shapely"
-version = "2.1.2"
-description = "Manipulation and analysis of geometric objects"
-optional = true
-python-versions = ">=3.10"
-groups = ["main"]
-markers = "extra == \"vertex\""
-files = [
- {file = "shapely-2.1.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7ae48c236c0324b4e139bea88a306a04ca630f49be66741b340729d380d8f52f"},
- {file = "shapely-2.1.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:eba6710407f1daa8e7602c347dfc94adc02205ec27ed956346190d66579eb9ea"},
- {file = "shapely-2.1.2-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ef4a456cc8b7b3d50ccec29642aa4aeda959e9da2fe9540a92754770d5f0cf1f"},
- {file = "shapely-2.1.2-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:e38a190442aacc67ff9f75ce60aec04893041f16f97d242209106d502486a142"},
- {file = "shapely-2.1.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:40d784101f5d06a1fd30b55fc11ea58a61be23f930d934d86f19a180909908a4"},
- {file = "shapely-2.1.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:f6f6cd5819c50d9bcf921882784586aab34a4bd53e7553e175dece6db513a6f0"},
- {file = "shapely-2.1.2-cp310-cp310-win32.whl", hash = "sha256:fe9627c39c59e553c90f5bc3128252cb85dc3b3be8189710666d2f8bc3a5503e"},
- {file = "shapely-2.1.2-cp310-cp310-win_amd64.whl", hash = "sha256:1d0bfb4b8f661b3b4ec3565fa36c340bfb1cda82087199711f86a88647d26b2f"},
- {file = "shapely-2.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:91121757b0a36c9aac3427a651a7e6567110a4a67c97edf04f8d55d4765f6618"},
- {file = "shapely-2.1.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:16a9c722ba774cf50b5d4541242b4cce05aafd44a015290c82ba8a16931ff63d"},
- {file = "shapely-2.1.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cc4f7397459b12c0b196c9efe1f9d7e92463cbba142632b4cc6d8bbbbd3e2b09"},
- {file = "shapely-2.1.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:136ab87b17e733e22f0961504d05e77e7be8c9b5a8184f685b4a91a84efe3c26"},
- {file = "shapely-2.1.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:16c5d0fc45d3aa0a69074979f4f1928ca2734fb2e0dde8af9611e134e46774e7"},
- {file = "shapely-2.1.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:6ddc759f72b5b2b0f54a7e7cde44acef680a55019eb52ac63a7af2cf17cb9cd2"},
- {file = "shapely-2.1.2-cp311-cp311-win32.whl", hash = "sha256:2fa78b49485391224755a856ed3b3bd91c8455f6121fee0db0e71cefb07d0ef6"},
- {file = "shapely-2.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:c64d5c97b2f47e3cd9b712eaced3b061f2b71234b3fc263e0fcf7d889c6559dc"},
- {file = "shapely-2.1.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:fe2533caae6a91a543dec62e8360fe86ffcdc42a7c55f9dfd0128a977a896b94"},
- {file = "shapely-2.1.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:ba4d1333cc0bc94381d6d4308d2e4e008e0bd128bdcff5573199742ee3634359"},
- {file = "shapely-2.1.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:0bd308103340030feef6c111d3eb98d50dc13feea33affc8a6f9fa549e9458a3"},
- {file = "shapely-2.1.2-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:1e7d4d7ad262a48bb44277ca12c7c78cb1b0f56b32c10734ec9a1d30c0b0c54b"},
- {file = "shapely-2.1.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e9eddfe513096a71896441a7c37db72da0687b34752c4e193577a145c71736fc"},
- {file = "shapely-2.1.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:980c777c612514c0cf99bc8a9de6d286f5e186dcaf9091252fcd444e5638193d"},
- {file = "shapely-2.1.2-cp312-cp312-win32.whl", hash = "sha256:9111274b88e4d7b54a95218e243282709b330ef52b7b86bc6aaf4f805306f454"},
- {file = "shapely-2.1.2-cp312-cp312-win_amd64.whl", hash = "sha256:743044b4cfb34f9a67205cee9279feaf60ba7d02e69febc2afc609047cb49179"},
- {file = "shapely-2.1.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:b510dda1a3672d6879beb319bc7c5fd302c6c354584690973c838f46ec3e0fa8"},
- {file = "shapely-2.1.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:8cff473e81017594d20ec55d86b54bc635544897e13a7cfc12e36909c5309a2a"},
- {file = "shapely-2.1.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:fe7b77dc63d707c09726b7908f575fc04ff1d1ad0f3fb92aec212396bc6cfe5e"},
- {file = "shapely-2.1.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:7ed1a5bbfb386ee8332713bf7508bc24e32d24b74fc9a7b9f8529a55db9f4ee6"},
- {file = "shapely-2.1.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a84e0582858d841d54355246ddfcbd1fce3179f185da7470f41ce39d001ee1af"},
- {file = "shapely-2.1.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:dc3487447a43d42adcdf52d7ac73804f2312cbfa5d433a7d2c506dcab0033dfd"},
- {file = "shapely-2.1.2-cp313-cp313-win32.whl", hash = "sha256:9c3a3c648aedc9f99c09263b39f2d8252f199cb3ac154fadc173283d7d111350"},
- {file = "shapely-2.1.2-cp313-cp313-win_amd64.whl", hash = "sha256:ca2591bff6645c216695bdf1614fca9c82ea1144d4a7591a466fef64f28f0715"},
- {file = "shapely-2.1.2-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:2d93d23bdd2ed9dc157b46bc2f19b7da143ca8714464249bef6771c679d5ff40"},
- {file = "shapely-2.1.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:01d0d304b25634d60bd7cf291828119ab55a3bab87dc4af1e44b07fb225f188b"},
- {file = "shapely-2.1.2-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:8d8382dd120d64b03698b7298b89611a6ea6f55ada9d39942838b79c9bc89801"},
- {file = "shapely-2.1.2-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:19efa3611eef966e776183e338b2d7ea43569ae99ab34f8d17c2c054d3205cc0"},
- {file = "shapely-2.1.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:346ec0c1a0fcd32f57f00e4134d1200e14bf3f5ae12af87ba83ca275c502498c"},
- {file = "shapely-2.1.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:6305993a35989391bd3476ee538a5c9a845861462327efe00dd11a5c8c709a99"},
- {file = "shapely-2.1.2-cp313-cp313t-win32.whl", hash = "sha256:c8876673449f3401f278c86eb33224c5764582f72b653a415d0e6672fde887bf"},
- {file = "shapely-2.1.2-cp313-cp313t-win_amd64.whl", hash = "sha256:4a44bc62a10d84c11a7a3d7c1c4fe857f7477c3506e24c9062da0db0ae0c449c"},
- {file = "shapely-2.1.2-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:9a522f460d28e2bf4e12396240a5fc1518788b2fcd73535166d748399ef0c223"},
- {file = "shapely-2.1.2-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1ff629e00818033b8d71139565527ced7d776c269a49bd78c9df84e8f852190c"},
- {file = "shapely-2.1.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:f67b34271dedc3c653eba4e3d7111aa421d5be9b4c4c7d38d30907f796cb30df"},
- {file = "shapely-2.1.2-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:21952dc00df38a2c28375659b07a3979d22641aeb104751e769c3ee825aadecf"},
- {file = "shapely-2.1.2-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:1f2f33f486777456586948e333a56ae21f35ae273be99255a191f5c1fa302eb4"},
- {file = "shapely-2.1.2-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:cf831a13e0d5a7eb519e96f58ec26e049b1fad411fc6fc23b162a7ce04d9cffc"},
- {file = "shapely-2.1.2-cp314-cp314-win32.whl", hash = "sha256:61edcd8d0d17dd99075d320a1dd39c0cb9616f7572f10ef91b4b5b00c4aeb566"},
- {file = "shapely-2.1.2-cp314-cp314-win_amd64.whl", hash = "sha256:a444e7afccdb0999e203b976adb37ea633725333e5b119ad40b1ca291ecf311c"},
- {file = "shapely-2.1.2-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:5ebe3f84c6112ad3d4632b1fd2290665aa75d4cef5f6c5d77c4c95b324527c6a"},
- {file = "shapely-2.1.2-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:5860eb9f00a1d49ebb14e881f5caf6c2cf472c7fd38bd7f253bbd34f934eb076"},
- {file = "shapely-2.1.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b705c99c76695702656327b819c9660768ec33f5ce01fa32b2af62b56ba400a1"},
- {file = "shapely-2.1.2-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:a1fd0ea855b2cf7c9cddaf25543e914dd75af9de08785f20ca3085f2c9ca60b0"},
- {file = "shapely-2.1.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:df90e2db118c3671a0754f38e36802db75fe0920d211a27481daf50a711fdf26"},
- {file = "shapely-2.1.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:361b6d45030b4ac64ddd0a26046906c8202eb60d0f9f53085f5179f1d23021a0"},
- {file = "shapely-2.1.2-cp314-cp314t-win32.whl", hash = "sha256:b54df60f1fbdecc8ebc2c5b11870461a6417b3d617f555e5033f1505d36e5735"},
- {file = "shapely-2.1.2-cp314-cp314t-win_amd64.whl", hash = "sha256:0036ac886e0923417932c2e6369b6c52e38e0ff5d9120b90eef5cd9a5fc5cae9"},
- {file = "shapely-2.1.2.tar.gz", hash = "sha256:2ed4ecb28320a433db18a5bf029986aa8afcfd740745e78847e330d5d94922a9"},
-]
-
-[package.dependencies]
-numpy = ">=1.21"
-
-[package.extras]
-docs = ["matplotlib", "numpydoc (==1.1.*)", "sphinx", "sphinx-book-theme", "sphinx-remove-toctrees"]
-test = ["pytest", "pytest-cov", "scipy-doctest"]
-
[[package]]
name = "six"
version = "1.17.0"
@@ -6604,6 +7828,21 @@ files = [
doc = ["reno", "sphinx"]
test = ["pytest", "tornado (>=4.5)", "typeguard"]
+[[package]]
+name = "textblob"
+version = "0.15.3"
+description = "Simple, Pythonic text processing. Sentiment analysis, part-of-speech tagging, noun phrase parsing, and more."
+optional = false
+python-versions = "*"
+groups = ["main"]
+files = [
+ {file = "textblob-0.15.3-py2.py3-none-any.whl", hash = "sha256:b0eafd8b129c9b196c8128056caed891d64b7fa20ba570e1fcde438f4f7dd312"},
+ {file = "textblob-0.15.3.tar.gz", hash = "sha256:7ff3c00cb5a85a30132ee6768b8c68cb2b9d76432fec18cd1b3ffe2f8594ec8c"},
+]
+
+[package.dependencies]
+nltk = ">=3.1"
+
[[package]]
name = "textual"
version = "4.0.0"
@@ -6625,6 +7864,18 @@ typing-extensions = ">=4.4.0,<5.0.0"
[package.extras]
syntax = ["tree-sitter (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-bash (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-css (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-go (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-html (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-java (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-javascript (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-json (>=0.24.0) ; python_version >= \"3.9\"", "tree-sitter-markdown (>=0.3.0) ; python_version >= \"3.9\"", "tree-sitter-python (>=0.23.0) ; python_version >= \"3.9\"", "tree-sitter-regex (>=0.24.0) ; python_version >= \"3.9\"", "tree-sitter-rust (>=0.23.0,<=0.23.2) ; python_version >= \"3.9\"", "tree-sitter-sql (>=0.3.0,<0.3.8) ; python_version >= \"3.9\"", "tree-sitter-toml (>=0.6.0) ; python_version >= \"3.9\"", "tree-sitter-xml (>=0.7.0) ; python_version >= \"3.9\"", "tree-sitter-yaml (>=0.6.0) ; python_version >= \"3.9\""]
+[[package]]
+name = "threadpoolctl"
+version = "3.6.0"
+description = "threadpoolctl"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "threadpoolctl-3.6.0-py3-none-any.whl", hash = "sha256:43a0b8fd5a2928500110039e43a5eed8480b918967083ea48dc3ab9f13c4a7fb"},
+ {file = "threadpoolctl-3.6.0.tar.gz", hash = "sha256:8ab8b4aa3491d812b623328249fab5302a68d2d71745c8a4c719a2fcaba9f44e"},
+]
+
[[package]]
name = "tiktoken"
version = "0.11.0"
@@ -6740,6 +7991,72 @@ notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
+[[package]]
+name = "traceloop-sdk"
+version = "0.53.0"
+description = "Traceloop Software Development Kit (SDK) for Python"
+optional = false
+python-versions = "<4,>=3.10"
+groups = ["main"]
+files = [
+ {file = "traceloop_sdk-0.53.0-py3-none-any.whl", hash = "sha256:29cee493dda92c872b4578a7f570794669a64f51ab09d61a0893749d616bfcfd"},
+ {file = "traceloop_sdk-0.53.0.tar.gz", hash = "sha256:3cd761733eea055d0dc87b5a22c8cc8a6350eca896a80acb5a7e11d089aee3fb"},
+]
+
+[package.dependencies]
+aiohttp = ">=3.11.11,<4"
+colorama = ">=0.4.6,<0.5.0"
+cuid = ">=0.4,<0.5"
+deprecated = ">=1.2.14,<2"
+jinja2 = ">=3.1.5,<4"
+opentelemetry-api = ">=1.38.0,<2"
+opentelemetry-exporter-otlp-proto-grpc = ">=1.38.0,<2"
+opentelemetry-exporter-otlp-proto-http = ">=1.38.0,<2"
+opentelemetry-instrumentation-agno = "*"
+opentelemetry-instrumentation-alephalpha = "*"
+opentelemetry-instrumentation-anthropic = "*"
+opentelemetry-instrumentation-bedrock = "*"
+opentelemetry-instrumentation-chromadb = "*"
+opentelemetry-instrumentation-cohere = "*"
+opentelemetry-instrumentation-crewai = "*"
+opentelemetry-instrumentation-google-generativeai = "*"
+opentelemetry-instrumentation-groq = "*"
+opentelemetry-instrumentation-haystack = "*"
+opentelemetry-instrumentation-lancedb = "*"
+opentelemetry-instrumentation-langchain = "*"
+opentelemetry-instrumentation-llamaindex = "*"
+opentelemetry-instrumentation-logging = ">=0.59b0"
+opentelemetry-instrumentation-marqo = "*"
+opentelemetry-instrumentation-mcp = "*"
+opentelemetry-instrumentation-milvus = "*"
+opentelemetry-instrumentation-mistralai = "*"
+opentelemetry-instrumentation-ollama = "*"
+opentelemetry-instrumentation-openai = "*"
+opentelemetry-instrumentation-openai-agents = "*"
+opentelemetry-instrumentation-pinecone = "*"
+opentelemetry-instrumentation-qdrant = "*"
+opentelemetry-instrumentation-redis = ">=0.59b0"
+opentelemetry-instrumentation-replicate = "*"
+opentelemetry-instrumentation-requests = ">=0.59b0"
+opentelemetry-instrumentation-sagemaker = "*"
+opentelemetry-instrumentation-sqlalchemy = ">=0.59b0"
+opentelemetry-instrumentation-threading = ">=0.59b0"
+opentelemetry-instrumentation-together = "*"
+opentelemetry-instrumentation-transformers = "*"
+opentelemetry-instrumentation-urllib3 = ">=0.59b0"
+opentelemetry-instrumentation-vertexai = "*"
+opentelemetry-instrumentation-voyageai = "*"
+opentelemetry-instrumentation-watsonx = "*"
+opentelemetry-instrumentation-weaviate = "*"
+opentelemetry-instrumentation-writer = "*"
+opentelemetry-sdk = ">=1.38.0,<2"
+opentelemetry-semantic-conventions-ai = ">=0.4.13,<0.5.0"
+pydantic = ">=1"
+tenacity = ">=8.2.3,<10.0"
+
+[package.extras]
+datasets = ["pandas"]
+
[[package]]
name = "traitlets"
version = "5.14.3"
@@ -7178,6 +8495,97 @@ files = [
{file = "whatthepatch-1.0.7.tar.gz", hash = "sha256:9eefb4ebea5200408e02d413d2b4bc28daea6b78bb4b4d53431af7245f7d7edf"},
]
+[[package]]
+name = "wrapt"
+version = "1.17.3"
+description = "Module for decorators, wrappers and monkey patching."
+optional = false
+python-versions = ">=3.8"
+groups = ["main"]
+files = [
+ {file = "wrapt-1.17.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:88bbae4d40d5a46142e70d58bf664a89b6b4befaea7b2ecc14e03cedb8e06c04"},
+ {file = "wrapt-1.17.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e6b13af258d6a9ad602d57d889f83b9d5543acd471eee12eb51f5b01f8eb1bc2"},
+ {file = "wrapt-1.17.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:fd341868a4b6714a5962c1af0bd44f7c404ef78720c7de4892901e540417111c"},
+ {file = "wrapt-1.17.3-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:f9b2601381be482f70e5d1051a5965c25fb3625455a2bf520b5a077b22afb775"},
+ {file = "wrapt-1.17.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:343e44b2a8e60e06a7e0d29c1671a0d9951f59174f3709962b5143f60a2a98bd"},
+ {file = "wrapt-1.17.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:33486899acd2d7d3066156b03465b949da3fd41a5da6e394ec49d271baefcf05"},
+ {file = "wrapt-1.17.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:e6f40a8aa5a92f150bdb3e1c44b7e98fb7113955b2e5394122fa5532fec4b418"},
+ {file = "wrapt-1.17.3-cp310-cp310-win32.whl", hash = "sha256:a36692b8491d30a8c75f1dfee65bef119d6f39ea84ee04d9f9311f83c5ad9390"},
+ {file = "wrapt-1.17.3-cp310-cp310-win_amd64.whl", hash = "sha256:afd964fd43b10c12213574db492cb8f73b2f0826c8df07a68288f8f19af2ebe6"},
+ {file = "wrapt-1.17.3-cp310-cp310-win_arm64.whl", hash = "sha256:af338aa93554be859173c39c85243970dc6a289fa907402289eeae7543e1ae18"},
+ {file = "wrapt-1.17.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:273a736c4645e63ac582c60a56b0acb529ef07f78e08dc6bfadf6a46b19c0da7"},
+ {file = "wrapt-1.17.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5531d911795e3f935a9c23eb1c8c03c211661a5060aab167065896bbf62a5f85"},
+ {file = "wrapt-1.17.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:0610b46293c59a3adbae3dee552b648b984176f8562ee0dba099a56cfbe4df1f"},
+ {file = "wrapt-1.17.3-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:b32888aad8b6e68f83a8fdccbf3165f5469702a7544472bdf41f582970ed3311"},
+ {file = "wrapt-1.17.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8cccf4f81371f257440c88faed6b74f1053eef90807b77e31ca057b2db74edb1"},
+ {file = "wrapt-1.17.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d8a210b158a34164de8bb68b0e7780041a903d7b00c87e906fb69928bf7890d5"},
+ {file = "wrapt-1.17.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:79573c24a46ce11aab457b472efd8d125e5a51da2d1d24387666cd85f54c05b2"},
+ {file = "wrapt-1.17.3-cp311-cp311-win32.whl", hash = "sha256:c31eebe420a9a5d2887b13000b043ff6ca27c452a9a22fa71f35f118e8d4bf89"},
+ {file = "wrapt-1.17.3-cp311-cp311-win_amd64.whl", hash = "sha256:0b1831115c97f0663cb77aa27d381237e73ad4f721391a9bfb2fe8bc25fa6e77"},
+ {file = "wrapt-1.17.3-cp311-cp311-win_arm64.whl", hash = "sha256:5a7b3c1ee8265eb4c8f1b7d29943f195c00673f5ab60c192eba2d4a7eae5f46a"},
+ {file = "wrapt-1.17.3-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:ab232e7fdb44cdfbf55fc3afa31bcdb0d8980b9b95c38b6405df2acb672af0e0"},
+ {file = "wrapt-1.17.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:9baa544e6acc91130e926e8c802a17f3b16fbea0fd441b5a60f5cf2cc5c3deba"},
+ {file = "wrapt-1.17.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6b538e31eca1a7ea4605e44f81a48aa24c4632a277431a6ed3f328835901f4fd"},
+ {file = "wrapt-1.17.3-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:042ec3bb8f319c147b1301f2393bc19dba6e176b7da446853406d041c36c7828"},
+ {file = "wrapt-1.17.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3af60380ba0b7b5aeb329bc4e402acd25bd877e98b3727b0135cb5c2efdaefe9"},
+ {file = "wrapt-1.17.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:0b02e424deef65c9f7326d8c19220a2c9040c51dc165cddb732f16198c168396"},
+ {file = "wrapt-1.17.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:74afa28374a3c3a11b3b5e5fca0ae03bef8450d6aa3ab3a1e2c30e3a75d023dc"},
+ {file = "wrapt-1.17.3-cp312-cp312-win32.whl", hash = "sha256:4da9f45279fff3543c371d5ababc57a0384f70be244de7759c85a7f989cb4ebe"},
+ {file = "wrapt-1.17.3-cp312-cp312-win_amd64.whl", hash = "sha256:e71d5c6ebac14875668a1e90baf2ea0ef5b7ac7918355850c0908ae82bcb297c"},
+ {file = "wrapt-1.17.3-cp312-cp312-win_arm64.whl", hash = "sha256:604d076c55e2fdd4c1c03d06dc1a31b95130010517b5019db15365ec4a405fc6"},
+ {file = "wrapt-1.17.3-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:a47681378a0439215912ef542c45a783484d4dd82bac412b71e59cf9c0e1cea0"},
+ {file = "wrapt-1.17.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:54a30837587c6ee3cd1a4d1c2ec5d24e77984d44e2f34547e2323ddb4e22eb77"},
+ {file = "wrapt-1.17.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:16ecf15d6af39246fe33e507105d67e4b81d8f8d2c6598ff7e3ca1b8a37213f7"},
+ {file = "wrapt-1.17.3-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:6fd1ad24dc235e4ab88cda009e19bf347aabb975e44fd5c2fb22a3f6e4141277"},
+ {file = "wrapt-1.17.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0ed61b7c2d49cee3c027372df5809a59d60cf1b6c2f81ee980a091f3afed6a2d"},
+ {file = "wrapt-1.17.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:423ed5420ad5f5529db9ce89eac09c8a2f97da18eb1c870237e84c5a5c2d60aa"},
+ {file = "wrapt-1.17.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:e01375f275f010fcbf7f643b4279896d04e571889b8a5b3f848423d91bf07050"},
+ {file = "wrapt-1.17.3-cp313-cp313-win32.whl", hash = "sha256:53e5e39ff71b3fc484df8a522c933ea2b7cdd0d5d15ae82e5b23fde87d44cbd8"},
+ {file = "wrapt-1.17.3-cp313-cp313-win_amd64.whl", hash = "sha256:1f0b2f40cf341ee8cc1a97d51ff50dddb9fcc73241b9143ec74b30fc4f44f6cb"},
+ {file = "wrapt-1.17.3-cp313-cp313-win_arm64.whl", hash = "sha256:7425ac3c54430f5fc5e7b6f41d41e704db073309acfc09305816bc6a0b26bb16"},
+ {file = "wrapt-1.17.3-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:cf30f6e3c077c8e6a9a7809c94551203c8843e74ba0c960f4a98cd80d4665d39"},
+ {file = "wrapt-1.17.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:e228514a06843cae89621384cfe3a80418f3c04aadf8a3b14e46a7be704e4235"},
+ {file = "wrapt-1.17.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:5ea5eb3c0c071862997d6f3e02af1d055f381b1d25b286b9d6644b79db77657c"},
+ {file = "wrapt-1.17.3-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:281262213373b6d5e4bb4353bc36d1ba4084e6d6b5d242863721ef2bf2c2930b"},
+ {file = "wrapt-1.17.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:dc4a8d2b25efb6681ecacad42fca8859f88092d8732b170de6a5dddd80a1c8fa"},
+ {file = "wrapt-1.17.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:373342dd05b1d07d752cecbec0c41817231f29f3a89aa8b8843f7b95992ed0c7"},
+ {file = "wrapt-1.17.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d40770d7c0fd5cbed9d84b2c3f2e156431a12c9a37dc6284060fb4bec0b7ffd4"},
+ {file = "wrapt-1.17.3-cp314-cp314-win32.whl", hash = "sha256:fbd3c8319de8e1dc79d346929cd71d523622da527cca14e0c1d257e31c2b8b10"},
+ {file = "wrapt-1.17.3-cp314-cp314-win_amd64.whl", hash = "sha256:e1a4120ae5705f673727d3253de3ed0e016f7cd78dc463db1b31e2463e1f3cf6"},
+ {file = "wrapt-1.17.3-cp314-cp314-win_arm64.whl", hash = "sha256:507553480670cab08a800b9463bdb881b2edeed77dc677b0a5915e6106e91a58"},
+ {file = "wrapt-1.17.3-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:ed7c635ae45cfbc1a7371f708727bf74690daedc49b4dba310590ca0bd28aa8a"},
+ {file = "wrapt-1.17.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:249f88ed15503f6492a71f01442abddd73856a0032ae860de6d75ca62eed8067"},
+ {file = "wrapt-1.17.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:5a03a38adec8066d5a37bea22f2ba6bbf39fcdefbe2d91419ab864c3fb515454"},
+ {file = "wrapt-1.17.3-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:5d4478d72eb61c36e5b446e375bbc49ed002430d17cdec3cecb36993398e1a9e"},
+ {file = "wrapt-1.17.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:223db574bb38637e8230eb14b185565023ab624474df94d2af18f1cdb625216f"},
+ {file = "wrapt-1.17.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:e405adefb53a435f01efa7ccdec012c016b5a1d3f35459990afc39b6be4d5056"},
+ {file = "wrapt-1.17.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:88547535b787a6c9ce4086917b6e1d291aa8ed914fdd3a838b3539dc95c12804"},
+ {file = "wrapt-1.17.3-cp314-cp314t-win32.whl", hash = "sha256:41b1d2bc74c2cac6f9074df52b2efbef2b30bdfe5f40cb78f8ca22963bc62977"},
+ {file = "wrapt-1.17.3-cp314-cp314t-win_amd64.whl", hash = "sha256:73d496de46cd2cdbdbcce4ae4bcdb4afb6a11234a1df9c085249d55166b95116"},
+ {file = "wrapt-1.17.3-cp314-cp314t-win_arm64.whl", hash = "sha256:f38e60678850c42461d4202739f9bf1e3a737c7ad283638251e79cc49effb6b6"},
+ {file = "wrapt-1.17.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:70d86fa5197b8947a2fa70260b48e400bf2ccacdcab97bb7de47e3d1e6312225"},
+ {file = "wrapt-1.17.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:df7d30371a2accfe4013e90445f6388c570f103d61019b6b7c57e0265250072a"},
+ {file = "wrapt-1.17.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:caea3e9c79d5f0d2c6d9ab96111601797ea5da8e6d0723f77eabb0d4068d2b2f"},
+ {file = "wrapt-1.17.3-cp38-cp38-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:758895b01d546812d1f42204bd443b8c433c44d090248bf22689df673ccafe00"},
+ {file = "wrapt-1.17.3-cp38-cp38-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:02b551d101f31694fc785e58e0720ef7d9a10c4e62c1c9358ce6f63f23e30a56"},
+ {file = "wrapt-1.17.3-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:656873859b3b50eeebe6db8b1455e99d90c26ab058db8e427046dbc35c3140a5"},
+ {file = "wrapt-1.17.3-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:a9a2203361a6e6404f80b99234fe7fb37d1fc73487b5a78dc1aa5b97201e0f22"},
+ {file = "wrapt-1.17.3-cp38-cp38-win32.whl", hash = "sha256:55cbbc356c2842f39bcc553cf695932e8b30e30e797f961860afb308e6b1bb7c"},
+ {file = "wrapt-1.17.3-cp38-cp38-win_amd64.whl", hash = "sha256:ad85e269fe54d506b240d2d7b9f5f2057c2aa9a2ea5b32c66f8902f768117ed2"},
+ {file = "wrapt-1.17.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:30ce38e66630599e1193798285706903110d4f057aab3168a34b7fdc85569afc"},
+ {file = "wrapt-1.17.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:65d1d00fbfb3ea5f20add88bbc0f815150dbbde3b026e6c24759466c8b5a9ef9"},
+ {file = "wrapt-1.17.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a7c06742645f914f26c7f1fa47b8bc4c91d222f76ee20116c43d5ef0912bba2d"},
+ {file = "wrapt-1.17.3-cp39-cp39-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:7e18f01b0c3e4a07fe6dfdb00e29049ba17eadbc5e7609a2a3a4af83ab7d710a"},
+ {file = "wrapt-1.17.3-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0f5f51a6466667a5a356e6381d362d259125b57f059103dd9fdc8c0cf1d14139"},
+ {file = "wrapt-1.17.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:59923aa12d0157f6b82d686c3fd8e1166fa8cdfb3e17b42ce3b6147ff81528df"},
+ {file = "wrapt-1.17.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:46acc57b331e0b3bcb3e1ca3b421d65637915cfcd65eb783cb2f78a511193f9b"},
+ {file = "wrapt-1.17.3-cp39-cp39-win32.whl", hash = "sha256:3e62d15d3cfa26e3d0788094de7b64efa75f3a53875cdbccdf78547aed547a81"},
+ {file = "wrapt-1.17.3-cp39-cp39-win_amd64.whl", hash = "sha256:1f23fa283f51c890eda8e34e4937079114c74b4c81d2b2f1f1d94948f5cc3d7f"},
+ {file = "wrapt-1.17.3-cp39-cp39-win_arm64.whl", hash = "sha256:24c2ed34dc222ed754247a2702b1e1e89fdbaa4016f324b4b8f1a802d4ffe87f"},
+ {file = "wrapt-1.17.3-py3-none-any.whl", hash = "sha256:7171ae35d2c33d326ac19dd8facb1e82e5fd04ef8c6c0e394d7af55a55051c22"},
+ {file = "wrapt-1.17.3.tar.gz", hash = "sha256:f66eb08feaa410fe4eebd17f2a2c8e2e46d3476e9f8c783daa8e09e0faa666d0"},
+]
+
[[package]]
name = "xlrd"
version = "2.0.2"
@@ -7383,4 +8791,4 @@ vertex = ["google-cloud-aiplatform"]
[metadata]
lock-version = "2.1"
python-versions = "^3.12"
-content-hash = "4a67311f830ccf488e636a127723741d5de84d7368131ccb99afb065ca4a12b1"
+content-hash = "d6a1cc4aac053c720cd224f72c4bac24371559ab0725a1fb9eb6ab4ed8d64b06"
diff --git a/pyproject.toml b/pyproject.toml
index 6c32647..48f9196 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "strix-agent"
-version = "0.7.0"
+version = "0.8.2"
description = "Open-source AI Hackers for your apps"
authors = ["Strix "]
readme = "README.md"
@@ -56,6 +56,9 @@ textual = "^4.0.0"
xmltodict = "^0.13.0"
requests = "^2.32.0"
cvss = "^3.2"
+traceloop-sdk = "^0.53.0"
+opentelemetry-exporter-otlp-proto-http = "^1.40.0"
+scrubadub = "^2.0.1"
# Optional LLM provider dependencies
google-cloud-aiplatform = { version = ">=1.38", optional = true }
@@ -148,6 +151,9 @@ module = [
"libtmux.*",
"pytest.*",
"cvss.*",
+ "opentelemetry.*",
+ "scrubadub.*",
+ "traceloop.*",
]
ignore_missing_imports = true
@@ -155,6 +161,7 @@ ignore_missing_imports = true
[[tool.mypy.overrides]]
module = ["tests.*"]
disallow_untyped_decorators = false
+disallow_untyped_defs = false
# ============================================================================
# Ruff Configuration (Fast Python Linter & Formatter)
diff --git a/scripts/install.sh b/scripts/install.sh
index ae5b11d..868a95e 100755
--- a/scripts/install.sh
+++ b/scripts/install.sh
@@ -4,7 +4,7 @@ set -euo pipefail
APP=strix
REPO="usestrix/strix"
-STRIX_IMAGE="ghcr.io/usestrix/strix-sandbox:0.1.11"
+STRIX_IMAGE="ghcr.io/usestrix/strix-sandbox:0.1.12"
MUTED='\033[0;2m'
RED='\033[0;31m'
@@ -335,14 +335,18 @@ echo -e "${MUTED} AI Penetration Testing Agent${NC}"
echo ""
echo -e "${MUTED}To get started:${NC}"
echo ""
-echo -e " ${CYAN}1.${NC} Set your LLM provider:"
-echo -e " ${MUTED}export STRIX_LLM='openai/gpt-5'${NC}"
-echo -e " ${MUTED}export LLM_API_KEY='your-api-key'${NC}"
+echo -e " ${CYAN}1.${NC} Get your Strix API key:"
+echo -e " ${MUTED}https://models.strix.ai${NC}"
echo ""
-echo -e " ${CYAN}2.${NC} Run a penetration test:"
+echo -e " ${CYAN}2.${NC} Set your environment:"
+echo -e " ${MUTED}export LLM_API_KEY='your-api-key'${NC}"
+echo -e " ${MUTED}export STRIX_LLM='strix/gpt-5'${NC}"
+echo ""
+echo -e " ${CYAN}3.${NC} Run a penetration test:"
echo -e " ${MUTED}strix --target https://example.com${NC}"
echo ""
echo -e "${MUTED}For more information visit ${NC}https://strix.ai"
+echo -e "${MUTED}Supported models ${NC}https://docs.strix.ai/llm-providers/overview"
echo -e "${MUTED}Join our community ${NC}https://discord.gg/strix-ai"
echo ""
diff --git a/strix/agents/StrixAgent/system_prompt.jinja b/strix/agents/StrixAgent/system_prompt.jinja
index 5f8f35c..2097aae 100644
--- a/strix/agents/StrixAgent/system_prompt.jinja
+++ b/strix/agents/StrixAgent/system_prompt.jinja
@@ -21,6 +21,18 @@ INTER-AGENT MESSAGES:
- NEVER echo agent_identity blocks; treat them as internal metadata for identity only. Do not include them in outputs or tool calls.
- Minimize inter-agent messaging: only message when essential for coordination or assistance; avoid routine status updates; batch non-urgent information; prefer parent/child completion flows and shared artifacts over messaging
+{% if interactive %}
+INTERACTIVE BEHAVIOR:
+- You are in an interactive conversation with a user
+- CRITICAL: A message WITHOUT a tool call IMMEDIATELY STOPS execution and waits for user input. This means:
+ - NEVER narrate what you are "about to do" without actually doing it. Statements like "I'll now launch the browser..." or "Let me scan the target..." WITHOUT a tool call will HALT your work.
+ - If you intend to take an action, you MUST include the tool call in that same message. Describe what you're doing AND call the tool together.
+ - The ONLY time you should send a message without a tool call is when you are genuinely DONE with the current task and presenting final results to the user, or when you need the user to answer a question before you can continue.
+- While working on a task, every single message MUST contain a tool call — this is what keeps execution moving
+- You may include brief explanatory text alongside the tool call
+- Respond naturally when the user asks questions or gives instructions
+- NEVER send empty messages — if you have nothing to do or say, call the wait_for_message tool
+{% else %}
AUTONOMOUS BEHAVIOR:
- Work autonomously by default
- You should NOT ask for user input or confirmation - you should always proceed with your task autonomously.
@@ -28,6 +40,7 @@ AUTONOMOUS BEHAVIOR:
- NEVER send an empty or blank message. If you have no content to output or need to wait (for user input, subagent results, or any other reason), you MUST call the wait_for_message tool (or another appropriate tool) instead of emitting an empty response.
- If there is nothing to execute and no user query to answer any more: do NOT send filler/repetitive text — either call wait_for_message or finish your work (subagents: agent_finish; root: finish_scan)
- While the agent loop is running, almost every output MUST be a tool call. Do NOT send plain text messages; act via tools. If idle, use wait_for_message; when done, use agent_finish (subagents) or finish_scan (root)
+{% endif %}
@@ -308,19 +321,55 @@ Tool call format:
CRITICAL RULES:
+{% if interactive %}
+0. When using tools, include exactly one tool call per message. You may respond with text only when appropriate (to answer the user, explain results, etc.).
+{% else %}
0. While active in the agent loop, EVERY message you output MUST be a single tool call. Do not send plain text-only responses.
+{% endif %}
1. Exactly one tool call per message — never include more than one ... block in a single LLM message.
2. Tool call must be last in message
3. EVERY tool call MUST end with . This is MANDATORY. Never omit the closing tag. End your response immediately after .
4. Use ONLY the exact format shown above. NEVER use JSON/YAML/INI or any other syntax for tools or parameters.
5. When sending ANY multi-line content in tool parameters, use real newlines (actual line breaks). Do NOT emit literal "\n" sequences. Literal "\n" instead of real line breaks will cause tools to fail.
6. Tool names must match exactly the tool "name" defined (no module prefixes, dots, or variants).
- - Correct: ...
- - Incorrect: ...
- - Incorrect: ...
- - Incorrect: {"think": {...}}
7. Parameters must use value exactly. Do NOT pass parameters as JSON or key:value lines. Do NOT add quotes/braces around values.
+{% if interactive %}
+8. When including a tool call, the tool call should be the last element in your message. You may include brief explanatory text before it.
+{% else %}
8. Do NOT wrap tool calls in markdown/code fences or add any text before or after the tool block.
+{% endif %}
+
+CORRECT format — use this EXACTLY:
+
+value
+
+
+WRONG formats — NEVER use these:
+- value
+- ...
+- ...
+- {"tool_name": {"param_name": "value"}}
+- ```...```
+- value_without_parameter_tags
+
+EVERY argument MUST be wrapped in ... tags. NEVER put values directly in the function body without parameter tags. This WILL cause the tool call to fail.
+
+Do NOT emit any extra XML tags in your output. In particular:
+- NO ... or ... blocks
+- NO ... or ... blocks
+- NO ... or ... wrappers
+{% if not interactive %}
+If you need to reason, use the think tool. Your raw output must contain ONLY the tool call — no surrounding XML tags.
+{% else %}
+If you need to reason, use the think tool. When using tools, do not add surrounding XML tags.
+{% endif %}
+
+Notice: use NOT , use NOT , use NOT .
+
+Example (terminal tool):
+
+nmap -sV -p 1-1000 target.com
+
Example (agent creation tool):
diff --git a/strix/agents/base_agent.py b/strix/agents/base_agent.py
index f955892..74fe21e 100644
--- a/strix/agents/base_agent.py
+++ b/strix/agents/base_agent.py
@@ -56,7 +56,6 @@ class BaseAgent(metaclass=AgentMeta):
self.config = config
self.local_sources = config.get("local_sources", [])
- self.non_interactive = config.get("non_interactive", False)
if "max_iterations" in config:
self.max_iterations = config["max_iterations"]
@@ -74,6 +73,9 @@ class BaseAgent(metaclass=AgentMeta):
max_iterations=self.max_iterations,
)
+ self.interactive = getattr(self.llm_config, "interactive", False)
+ if self.interactive and self.state.parent_id is None:
+ self.state.waiting_timeout = 0
self.llm = LLM(self.llm_config, agent_name=self.agent_name)
with contextlib.suppress(Exception):
@@ -169,7 +171,7 @@ class BaseAgent(metaclass=AgentMeta):
continue
if self.state.should_stop():
- if self.non_interactive:
+ if not self.interactive:
return self.state.final_result or {}
await self._enter_waiting_state(tracer)
continue
@@ -213,8 +215,12 @@ class BaseAgent(metaclass=AgentMeta):
should_finish = await iteration_task
self._current_task = None
+ if should_finish is None and self.interactive:
+ await self._enter_waiting_state(tracer, text_response=True)
+ continue
+
if should_finish:
- if self.non_interactive:
+ if not self.interactive:
self.state.set_completed({"success": True})
if tracer:
tracer.update_agent_status(self.state.agent_id, "completed")
@@ -230,7 +236,7 @@ class BaseAgent(metaclass=AgentMeta):
self.state.add_message(
"assistant", f"{partial_content}\n\n[ABORTED BY USER]"
)
- if self.non_interactive:
+ if not self.interactive:
raise
await self._enter_waiting_state(tracer, error_occurred=False, was_cancelled=True)
continue
@@ -243,7 +249,7 @@ class BaseAgent(metaclass=AgentMeta):
except (RuntimeError, ValueError, TypeError) as e:
if not await self._handle_iteration_error(e, tracer):
- if self.non_interactive:
+ if not self.interactive:
self.state.set_completed({"success": False, "error": str(e)})
if tracer:
tracer.update_agent_status(self.state.agent_id, "failed")
@@ -283,11 +289,14 @@ class BaseAgent(metaclass=AgentMeta):
task_completed: bool = False,
error_occurred: bool = False,
was_cancelled: bool = False,
+ text_response: bool = False,
) -> None:
self.state.enter_waiting_state()
if tracer:
- if task_completed:
+ if text_response:
+ tracer.update_agent_status(self.state.agent_id, "waiting_for_input")
+ elif task_completed:
tracer.update_agent_status(self.state.agent_id, "completed")
elif error_occurred:
tracer.update_agent_status(self.state.agent_id, "error")
@@ -296,6 +305,9 @@ class BaseAgent(metaclass=AgentMeta):
else:
tracer.update_agent_status(self.state.agent_id, "stopped")
+ if text_response:
+ return
+
if task_completed:
self.state.add_message(
"assistant",
@@ -333,6 +345,14 @@ class BaseAgent(metaclass=AgentMeta):
if "agent_id" in sandbox_info:
self.state.sandbox_info["agent_id"] = sandbox_info["agent_id"]
+
+ caido_port = sandbox_info.get("caido_port")
+ if caido_port:
+ from strix.telemetry.tracer import get_global_tracer
+
+ tracer = get_global_tracer()
+ if tracer:
+ tracer.caido_url = f"localhost:{caido_port}"
except Exception as e:
from strix.telemetry import posthog
@@ -344,7 +364,7 @@ class BaseAgent(metaclass=AgentMeta):
self.state.add_message("user", task)
- async def _process_iteration(self, tracer: Optional["Tracer"]) -> bool:
+ async def _process_iteration(self, tracer: Optional["Tracer"]) -> bool | None:
final_response = None
async for response in self.llm.generate(self.state.get_conversation_history()):
@@ -390,7 +410,7 @@ class BaseAgent(metaclass=AgentMeta):
if actions:
return await self._execute_actions(actions, tracer)
- return False
+ return None
async def _execute_actions(self, actions: list[Any], tracer: Optional["Tracer"]) -> bool:
"""Execute actions and return True if agent should finish."""
@@ -418,7 +438,7 @@ class BaseAgent(metaclass=AgentMeta):
self.state.set_completed({"success": True})
if tracer:
tracer.update_agent_status(self.state.agent_id, "completed")
- if self.non_interactive and self.state.parent_id is None:
+ if not self.interactive and self.state.parent_id is None:
return True
return True
@@ -518,7 +538,7 @@ class BaseAgent(metaclass=AgentMeta):
error_details = error.details
self.state.add_error(error_msg)
- if self.non_interactive:
+ if not self.interactive:
self.state.set_completed({"success": False, "error": error_msg})
if tracer:
tracer.update_agent_status(self.state.agent_id, "failed", error_msg)
@@ -553,7 +573,7 @@ class BaseAgent(metaclass=AgentMeta):
error_details = getattr(error, "details", None)
self.state.add_error(error_msg)
- if self.non_interactive:
+ if not self.interactive:
self.state.set_completed({"success": False, "error": error_msg})
if tracer:
tracer.update_agent_status(self.state.agent_id, "failed", error_msg)
diff --git a/strix/agents/state.py b/strix/agents/state.py
index 6af402e..da04ee7 100644
--- a/strix/agents/state.py
+++ b/strix/agents/state.py
@@ -25,6 +25,7 @@ class AgentState(BaseModel):
waiting_for_input: bool = False
llm_failed: bool = False
waiting_start_time: datetime | None = None
+ waiting_timeout: int = 600
final_result: dict[str, Any] | None = None
max_iterations_warning_sent: bool = False
@@ -116,6 +117,9 @@ class AgentState(BaseModel):
return self.iteration >= int(self.max_iterations * threshold)
def has_waiting_timeout(self) -> bool:
+ if self.waiting_timeout == 0:
+ return False
+
if not self.waiting_for_input or not self.waiting_start_time:
return False
@@ -128,7 +132,7 @@ class AgentState(BaseModel):
return False
elapsed = (datetime.now(UTC) - self.waiting_start_time).total_seconds()
- return elapsed > 600
+ return elapsed > self.waiting_timeout
def has_empty_last_messages(self, count: int = 3) -> bool:
if len(self.messages) < count:
diff --git a/strix/config/config.py b/strix/config/config.py
index aba5343..bad994a 100644
--- a/strix/config/config.py
+++ b/strix/config/config.py
@@ -5,6 +5,9 @@ from pathlib import Path
from typing import Any
+STRIX_API_BASE = "https://models.strix.ai/api/v1"
+
+
class Config:
"""Configuration Manager for Strix."""
@@ -44,6 +47,11 @@ class Config:
# Telemetry
strix_telemetry = "1"
+ strix_otel_telemetry = None
+ strix_posthog_telemetry = None
+ traceloop_base_url = None
+ traceloop_api_key = None
+ traceloop_headers = None
# Config file override (set via --config CLI arg)
_config_file_override: Path | None = None
@@ -177,3 +185,31 @@ def apply_saved_config(force: bool = False) -> dict[str, str]:
def save_current_config() -> bool:
return Config.save_current()
+
+
+def resolve_llm_config() -> tuple[str | None, str | None, str | None]:
+ """Resolve LLM model, api_key, and api_base based on STRIX_LLM prefix.
+
+ Returns:
+ tuple: (model_name, api_key, api_base)
+ - model_name: Original model name (strix/ prefix preserved for display)
+ - api_key: LLM API key
+ - api_base: API base URL (auto-set to STRIX_API_BASE for strix/ models)
+ """
+ model = Config.get("strix_llm")
+ if not model:
+ return None, None, None
+
+ api_key = Config.get("llm_api_key")
+
+ if model.startswith("strix/"):
+ api_base: str | None = STRIX_API_BASE
+ else:
+ api_base = (
+ Config.get("llm_api_base")
+ or Config.get("openai_api_base")
+ or Config.get("litellm_base_url")
+ or Config.get("ollama_api_base")
+ )
+
+ return model, api_key, api_base
diff --git a/strix/interface/assets/tui_styles.tcss b/strix/interface/assets/tui_styles.tcss
index 7ebefd2..d1097de 100644
--- a/strix/interface/assets/tui_styles.tcss
+++ b/strix/interface/assets/tui_styles.tcss
@@ -77,12 +77,21 @@ Toast.-information .toast--title {
margin-bottom: 0;
}
-#stats_display {
+#stats_scroll {
height: auto;
max-height: 15;
background: transparent;
padding: 0;
margin: 0;
+ border: round #333333;
+ scrollbar-size: 0 0;
+}
+
+#stats_display {
+ height: auto;
+ background: transparent;
+ padding: 0 1;
+ margin: 0;
}
#vulnerabilities_panel {
diff --git a/strix/interface/cli.py b/strix/interface/cli.py
index fe0992b..ec853b3 100644
--- a/strix/interface/cli.py
+++ b/strix/interface/cli.py
@@ -82,7 +82,6 @@ async def run_cli(args: Any) -> None: # noqa: PLR0915
agent_config = {
"llm_config": llm_config,
"max_iterations": 300,
- "non_interactive": True,
}
if getattr(args, "local_sources", None):
diff --git a/strix/interface/main.py b/strix/interface/main.py
index 044c887..9d49212 100644
--- a/strix/interface/main.py
+++ b/strix/interface/main.py
@@ -18,6 +18,8 @@ from rich.panel import Panel
from rich.text import Text
from strix.config import Config, apply_saved_config, save_current_config
+from strix.config.config import resolve_llm_config
+from strix.llm.utils import resolve_strix_model
apply_saved_config()
@@ -52,10 +54,13 @@ def validate_environment() -> None: # noqa: PLR0912, PLR0915
missing_required_vars = []
missing_optional_vars = []
- if not Config.get("strix_llm"):
+ strix_llm = Config.get("strix_llm")
+ uses_strix_models = strix_llm and strix_llm.startswith("strix/")
+
+ if not strix_llm:
missing_required_vars.append("STRIX_LLM")
- has_base_url = any(
+ has_base_url = uses_strix_models or any(
[
Config.get("llm_api_base"),
Config.get("openai_api_base"),
@@ -136,7 +141,10 @@ def validate_environment() -> None: # noqa: PLR0912, PLR0915
)
error_text.append("\nExample setup:\n", style="white")
- error_text.append("export STRIX_LLM='openai/gpt-5'\n", style="dim white")
+ if uses_strix_models:
+ error_text.append("export STRIX_LLM='strix/gpt-5'\n", style="dim white")
+ else:
+ error_text.append("export STRIX_LLM='openai/gpt-5'\n", style="dim white")
if missing_optional_vars:
for var in missing_optional_vars:
@@ -202,14 +210,9 @@ async def warm_up_llm() -> None:
console = Console()
try:
- model_name = Config.get("strix_llm")
- api_key = Config.get("llm_api_key")
- api_base = (
- Config.get("llm_api_base")
- or Config.get("openai_api_base")
- or Config.get("litellm_base_url")
- or Config.get("ollama_api_base")
- )
+ model_name, api_key, api_base = resolve_llm_config()
+ litellm_model, _ = resolve_strix_model(model_name)
+ litellm_model = litellm_model or model_name
test_messages = [
{"role": "system", "content": "You are a helpful assistant."},
@@ -219,7 +222,7 @@ async def warm_up_llm() -> None:
llm_timeout = int(Config.get("llm_timeout") or "300")
completion_kwargs: dict[str, Any] = {
- "model": model_name,
+ "model": litellm_model,
"messages": test_messages,
"timeout": llm_timeout,
}
@@ -433,8 +436,6 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) ->
if tracer and tracer.scan_results:
scan_completed = tracer.scan_results.get("scan_completed", False)
- has_vulnerabilities = tracer and len(tracer.vulnerability_reports) > 0
-
completion_text = Text()
if scan_completed:
completion_text.append("Penetration test completed", style="bold #22c55e")
@@ -459,13 +460,12 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) ->
if stats_text.plain:
panel_parts.extend(["\n", stats_text])
- if scan_completed or has_vulnerabilities:
- results_text = Text()
- results_text.append("\n")
- results_text.append("Output", style="dim")
- results_text.append(" ")
- results_text.append(str(results_path), style="#60a5fa")
- panel_parts.extend(["\n", results_text])
+ results_text = Text()
+ results_text.append("\n")
+ results_text.append("Output", style="dim")
+ results_text.append(" ")
+ results_text.append(str(results_path), style="#60a5fa")
+ panel_parts.extend(["\n", results_text])
panel_content = Text.assemble(*panel_parts)
@@ -482,7 +482,7 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) ->
console.print("\n")
console.print(panel)
console.print()
- console.print("[#60a5fa]strix.ai[/] [dim]·[/] [#60a5fa]discord.gg/strix-ai[/]")
+ console.print("[#60a5fa]models.strix.ai[/] [dim]·[/] [#60a5fa]discord.gg/strix-ai[/]")
console.print()
diff --git a/strix/interface/streaming_parser.py b/strix/interface/streaming_parser.py
index 95e9523..2ea69fa 100644
--- a/strix/interface/streaming_parser.py
+++ b/strix/interface/streaming_parser.py
@@ -3,8 +3,11 @@ import re
from dataclasses import dataclass
from typing import Literal
+from strix.llm.utils import normalize_tool_format
+
_FUNCTION_TAG_PREFIX = "<function="
_FUNC_PATTERN = re.compile(r"<function=([^>]+)>")
_FUNC_END_PATTERN = re.compile(r"</function>")
@@ -21,9 +24,8 @@ def _get_safe_content(content: str) -> tuple[str, str]:
return content, ""
suffix = content[last_lt:]
- target = _FUNCTION_TAG_PREFIX # " list[StreamSegment]:
if not content:
return []
+ content = normalize_tool_format(content)
+
segments: list[StreamSegment] = []
func_matches = list(_FUNC_PATTERN.finditer(content))
diff --git a/strix/interface/tool_components/__init__.py b/strix/interface/tool_components/__init__.py
index cb8aeea..c8b6007 100644
--- a/strix/interface/tool_components/__init__.py
+++ b/strix/interface/tool_components/__init__.py
@@ -4,6 +4,7 @@ from . import (
browser_renderer,
file_edit_renderer,
finish_renderer,
+ load_skill_renderer,
notes_renderer,
proxy_renderer,
python_renderer,
@@ -28,6 +29,7 @@ __all__ = [
"file_edit_renderer",
"finish_renderer",
"get_tool_renderer",
+ "load_skill_renderer",
"notes_renderer",
"proxy_renderer",
"python_renderer",
diff --git a/strix/interface/tool_components/load_skill_renderer.py b/strix/interface/tool_components/load_skill_renderer.py
new file mode 100644
index 0000000..41a1868
--- /dev/null
+++ b/strix/interface/tool_components/load_skill_renderer.py
@@ -0,0 +1,33 @@
+from typing import Any, ClassVar
+
+from rich.text import Text
+from textual.widgets import Static
+
+from .base_renderer import BaseToolRenderer
+from .registry import register_tool_renderer
+
+
+@register_tool_renderer
+class LoadSkillRenderer(BaseToolRenderer):
+ tool_name: ClassVar[str] = "load_skill"
+ css_classes: ClassVar[list[str]] = ["tool-call", "load-skill-tool"]
+
+ @classmethod
+ def render(cls, tool_data: dict[str, Any]) -> Static:
+ args = tool_data.get("args", {})
+ status = tool_data.get("status", "completed")
+
+ requested = args.get("skills", "")
+
+ text = Text()
+ text.append("◇ ", style="#10b981")
+ text.append("loading skill", style="dim")
+
+ if requested:
+ text.append(" ")
+ text.append(requested, style="#10b981")
+ elif not tool_data.get("result"):
+ text.append("\n ")
+ text.append("Loading...", style="dim")
+
+ return Static(text, classes=cls.get_css_classes(status))
diff --git a/strix/interface/tui.py b/strix/interface/tui.py
index 4cd0eec..1f98283 100644
--- a/strix/interface/tui.py
+++ b/strix/interface/tui.py
@@ -687,7 +687,7 @@ class StrixTUIApp(App): # type: ignore[misc]
CSS_PATH = "assets/tui_styles.tcss"
ALLOW_SELECT = True
- SIDEBAR_MIN_WIDTH = 140
+ SIDEBAR_MIN_WIDTH = 120
selected_agent_id: reactive[str | None] = reactive(default=None)
show_splash: reactive[bool] = reactive(default=True)
@@ -749,7 +749,9 @@ class StrixTUIApp(App): # type: ignore[misc]
def _build_agent_config(self, args: argparse.Namespace) -> dict[str, Any]:
scan_mode = getattr(args, "scan_mode", "deep")
llm_config = LLMConfig(
- scan_mode=scan_mode, is_whitebox=bool(getattr(args, "local_sources", []))
+ scan_mode=scan_mode,
+ interactive=True,
+ is_whitebox=bool(getattr(args, "local_sources", [])),
)
config = {
@@ -832,11 +834,11 @@ class StrixTUIApp(App): # type: ignore[misc]
agents_tree.guide_style = "dashed"
stats_display = Static("", id="stats_display")
- stats_display.ALLOW_SELECT = False
+ stats_scroll = VerticalScroll(stats_display, id="stats_scroll")
vulnerabilities_panel = VulnerabilitiesPanel(id="vulnerabilities_panel")
- sidebar = Vertical(agents_tree, vulnerabilities_panel, stats_display, id="sidebar")
+ sidebar = Vertical(agents_tree, vulnerabilities_panel, stats_scroll, id="sidebar")
content_container.mount(chat_area_container)
content_container.mount(sidebar)
@@ -1275,6 +1277,9 @@ class StrixTUIApp(App): # type: ignore[misc]
if not self._is_widget_safe(stats_display):
return
+ if self.screen.selections:
+ return
+
stats_content = Text()
stats_text = build_tui_stats_text(self.tracer, self.agent_config)
@@ -1284,15 +1289,7 @@ class StrixTUIApp(App): # type: ignore[misc]
version = get_package_version()
stats_content.append(f"\nv{version}", style="white")
- from rich.panel import Panel
-
- stats_panel = Panel(
- stats_content,
- border_style="#333333",
- padding=(0, 1),
- )
-
- self._safe_widget_operation(stats_display.update, stats_panel)
+ self._safe_widget_operation(stats_display.update, stats_content)
def _update_vulnerabilities_panel(self) -> None:
"""Update the vulnerabilities panel with current vulnerability data."""
diff --git a/strix/interface/utils.py b/strix/interface/utils.py
index 8dcfcb6..12a013b 100644
--- a/strix/interface/utils.py
+++ b/strix/interface/utils.py
@@ -392,6 +392,12 @@ def build_tui_stats_text(tracer: Any, agent_config: dict[str, Any] | None = None
stats_text.append(" · ", style="white")
stats_text.append(f"${total_stats['cost']:.2f}", style="white")
+ caido_url = getattr(tracer, "caido_url", None)
+ if caido_url:
+ stats_text.append("\n")
+ stats_text.append("Caido: ", style="bold white")
+ stats_text.append(caido_url, style="white")
+
return stats_text
diff --git a/strix/llm/config.py b/strix/llm/config.py
index f3a2ac9..5206175 100644
--- a/strix/llm/config.py
+++ b/strix/llm/config.py
@@ -1,4 +1,6 @@
from strix.config import Config
+from strix.config.config import resolve_llm_config
+from strix.llm.utils import resolve_strix_model
class LLMConfig:
@@ -10,12 +12,18 @@ class LLMConfig:
timeout: int | None = None,
scan_mode: str = "deep",
is_whitebox: bool = False,
+ interactive: bool = False,
):
- self.model_name = model_name or Config.get("strix_llm")
+ resolved_model, self.api_key, self.api_base = resolve_llm_config()
+ self.model_name = model_name or resolved_model
if not self.model_name:
raise ValueError("STRIX_LLM environment variable must be set and not empty")
+ api_model, canonical = resolve_strix_model(self.model_name)
+ self.litellm_model: str = api_model or self.model_name
+ self.canonical_model: str = canonical or self.model_name
+
self.enable_prompt_caching = enable_prompt_caching
self.skills = skills or []
@@ -23,3 +31,4 @@ class LLMConfig:
self.scan_mode = scan_mode if scan_mode in ["quick", "standard", "deep"] else "deep"
self.is_whitebox = is_whitebox
+ self.interactive = interactive
diff --git a/strix/llm/dedupe.py b/strix/llm/dedupe.py
index 9edd6b7..0ea6088 100644
--- a/strix/llm/dedupe.py
+++ b/strix/llm/dedupe.py
@@ -5,7 +5,8 @@ from typing import Any
import litellm
-from strix.config import Config
+from strix.config.config import resolve_llm_config
+from strix.llm.utils import resolve_strix_model
logger = logging.getLogger(__name__)
@@ -155,14 +156,9 @@ def check_duplicate(
comparison_data = {"candidate": candidate_cleaned, "existing_reports": existing_cleaned}
- model_name = Config.get("strix_llm")
- api_key = Config.get("llm_api_key")
- api_base = (
- Config.get("llm_api_base")
- or Config.get("openai_api_base")
- or Config.get("litellm_base_url")
- or Config.get("ollama_api_base")
- )
+ model_name, api_key, api_base = resolve_llm_config()
+ litellm_model, _ = resolve_strix_model(model_name)
+ litellm_model = litellm_model or model_name
messages = [
{"role": "system", "content": DEDUPE_SYSTEM_PROMPT},
@@ -177,7 +173,7 @@ def check_duplicate(
]
completion_kwargs: dict[str, Any] = {
- "model": model_name,
+ "model": litellm_model,
"messages": messages,
"timeout": 120,
}
diff --git a/strix/llm/llm.py b/strix/llm/llm.py
index f19461b..6387f6e 100644
--- a/strix/llm/llm.py
+++ b/strix/llm/llm.py
@@ -14,6 +14,7 @@ from strix.llm.memory_compressor import MemoryCompressor
from strix.llm.utils import (
_truncate_to_first_function,
fix_incomplete_tool_call,
+ normalize_tool_format,
parse_tool_invocations,
)
from strix.skills import load_skills
@@ -62,8 +63,9 @@ class LLM:
self.config = config
self.agent_name = agent_name
self.agent_id: str | None = None
+ self._active_skills: list[str] = list(config.skills or [])
self._total_stats = RequestStats()
- self.memory_compressor = MemoryCompressor(model_name=config.model_name)
+ self.memory_compressor = MemoryCompressor(model_name=config.litellm_model)
self.system_prompt = self._load_system_prompt(agent_name)
reasoning = Config.get("strix_reasoning_effort")
@@ -86,24 +88,52 @@ class LLM:
autoescape=select_autoescape(enabled_extensions=(), default_for_string=False),
)
- skills_to_load = [
- *list(self.config.skills or []),
- f"scan_modes/{self.config.scan_mode}",
- ]
- if self.config.is_whitebox:
- skills_to_load.append("coordination/source_aware_whitebox")
+ skills_to_load = self._get_skills_to_load()
skill_content = load_skills(skills_to_load)
env.globals["get_skill"] = lambda name: skill_content.get(name, "")
result = env.get_template("system_prompt.jinja").render(
get_tools_prompt=get_tools_prompt,
loaded_skill_names=list(skill_content.keys()),
+ interactive=self.config.interactive,
**skill_content,
)
return str(result)
except Exception: # noqa: BLE001
return ""
+ def _get_skills_to_load(self) -> list[str]:
+ ordered_skills = [*self._active_skills]
+ ordered_skills.append(f"scan_modes/{self.config.scan_mode}")
+ if self.config.is_whitebox:
+ ordered_skills.append("coordination/source_aware_whitebox")
+
+ deduped: list[str] = []
+ seen: set[str] = set()
+ for skill_name in ordered_skills:
+ if skill_name not in seen:
+ deduped.append(skill_name)
+ seen.add(skill_name)
+
+ return deduped
+
+ def add_skills(self, skill_names: list[str]) -> list[str]:
+ added: list[str] = []
+ for skill_name in skill_names:
+ if not skill_name or skill_name in self._active_skills:
+ continue
+ self._active_skills.append(skill_name)
+ added.append(skill_name)
+
+ if not added:
+ return []
+
+ updated_prompt = self._load_system_prompt(self.agent_name)
+ if updated_prompt:
+ self.system_prompt = updated_prompt
+
+ return added
+
def set_agent_identity(self, agent_name: str | None, agent_id: str | None) -> None:
if agent_name:
self.agent_name = agent_name
@@ -145,10 +175,10 @@ class LLM:
delta = self._get_chunk_content(chunk)
if delta:
accumulated += delta
- if "" in accumulated:
- accumulated = accumulated[
- : accumulated.find("") + len("")
- ]
+ if "" in accumulated or "" in accumulated:
+ end_tag = "" if "" in accumulated else ""
+ pos = accumulated.find(end_tag)
+ accumulated = accumulated[: pos + len(end_tag)]
yield LLMResponse(content=accumulated)
done_streaming = 1
continue
@@ -157,6 +187,7 @@ class LLM:
if chunks:
self._update_usage_stats(stream_chunk_builder(chunks))
+ accumulated = normalize_tool_format(accumulated)
accumulated = fix_incomplete_tool_call(_truncate_to_first_function(accumulated))
yield LLMResponse(
content=accumulated,
@@ -186,6 +217,9 @@ class LLM:
conversation_history.extend(compressed)
messages.extend(compressed)
+ if messages[-1].get("role") == "assistant" and not self.config.interactive:
+ messages.append({"role": "user", "content": "Continue the task."})
+
if self._is_anthropic() and self.config.enable_prompt_caching:
messages = self._add_cache_control(messages)
@@ -196,21 +230,16 @@ class LLM:
messages = self._strip_images(messages)
args: dict[str, Any] = {
- "model": self.config.model_name,
+ "model": self.config.litellm_model,
"messages": messages,
"timeout": self.config.timeout,
"stream_options": {"include_usage": True},
}
- if api_key := Config.get("llm_api_key"):
- args["api_key"] = api_key
- if api_base := (
- Config.get("llm_api_base")
- or Config.get("openai_api_base")
- or Config.get("litellm_base_url")
- or Config.get("ollama_api_base")
- ):
- args["api_base"] = api_base
+ if self.config.api_key:
+ args["api_key"] = self.config.api_key
+ if self.config.api_base:
+ args["api_base"] = self.config.api_base
if self._supports_reasoning():
args["reasoning_effort"] = self._reasoning_effort
@@ -236,8 +265,8 @@ class LLM:
def _update_usage_stats(self, response: Any) -> None:
try:
if hasattr(response, "usage") and response.usage:
- input_tokens = getattr(response.usage, "prompt_tokens", 0)
- output_tokens = getattr(response.usage, "completion_tokens", 0)
+ input_tokens = getattr(response.usage, "prompt_tokens", 0) or 0
+ output_tokens = getattr(response.usage, "completion_tokens", 0) or 0
cached_tokens = 0
if hasattr(response.usage, "prompt_tokens_details"):
@@ -245,14 +274,11 @@ class LLM:
if hasattr(prompt_details, "cached_tokens"):
cached_tokens = prompt_details.cached_tokens or 0
+ cost = self._extract_cost(response)
else:
input_tokens = 0
output_tokens = 0
cached_tokens = 0
-
- try:
- cost = completion_cost(response) or 0.0
- except Exception: # noqa: BLE001
cost = 0.0
self._total_stats.input_tokens += input_tokens
@@ -263,6 +289,18 @@ class LLM:
except Exception: # noqa: BLE001, S110 # nosec B110
pass
+ def _extract_cost(self, response: Any) -> float:
+ if hasattr(response, "usage") and response.usage:
+ direct_cost = getattr(response.usage, "cost", None)
+ if direct_cost is not None:
+ return float(direct_cost)
+ try:
+ if hasattr(response, "_hidden_params"):
+ response._hidden_params.pop("custom_llm_provider", None)
+ return completion_cost(response, model=self.config.canonical_model) or 0.0
+ except Exception: # noqa: BLE001
+ return 0.0
+
def _should_retry(self, e: Exception) -> bool:
code = getattr(e, "status_code", None) or getattr(
getattr(e, "response", None), "status_code", None
@@ -282,13 +320,13 @@ class LLM:
def _supports_vision(self) -> bool:
try:
- return bool(supports_vision(model=self.config.model_name))
+ return bool(supports_vision(model=self.config.canonical_model))
except Exception: # noqa: BLE001
return False
def _supports_reasoning(self) -> bool:
try:
- return bool(supports_reasoning(model=self.config.model_name))
+ return bool(supports_reasoning(model=self.config.canonical_model))
except Exception: # noqa: BLE001
return False
@@ -309,7 +347,7 @@ class LLM:
return result
def _add_cache_control(self, messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
- if not messages or not supports_prompt_caching(self.config.model_name):
+ if not messages or not supports_prompt_caching(self.config.canonical_model):
return messages
result = list(messages)
diff --git a/strix/llm/memory_compressor.py b/strix/llm/memory_compressor.py
index ef0b9ab..8cad510 100644
--- a/strix/llm/memory_compressor.py
+++ b/strix/llm/memory_compressor.py
@@ -3,7 +3,7 @@ from typing import Any
import litellm
-from strix.config import Config
+from strix.config.config import Config, resolve_llm_config
logger = logging.getLogger(__name__)
@@ -91,7 +91,7 @@ def _summarize_messages(
if not messages:
empty_summary = "{text}"
return {
- "role": "assistant",
+ "role": "user",
"content": empty_summary.format(text="No messages to summarize"),
}
@@ -104,13 +104,7 @@ def _summarize_messages(
conversation = "\n".join(formatted)
prompt = SUMMARY_PROMPT_TEMPLATE.format(conversation=conversation)
- api_key = Config.get("llm_api_key")
- api_base = (
- Config.get("llm_api_base")
- or Config.get("openai_api_base")
- or Config.get("litellm_base_url")
- or Config.get("ollama_api_base")
- )
+ _, api_key, api_base = resolve_llm_config()
try:
completion_args: dict[str, Any] = {
@@ -129,7 +123,7 @@ def _summarize_messages(
return messages[0]
summary_msg = "{text}"
return {
- "role": "assistant",
+ "role": "user",
"content": summary_msg.format(count=len(messages), text=summary),
}
except Exception:
@@ -164,7 +158,7 @@ class MemoryCompressor:
):
self.max_images = max_images
self.model_name = model_name or Config.get("strix_llm")
- self.timeout = timeout or int(Config.get("strix_memory_compressor_timeout") or "30")
+ self.timeout = timeout or int(Config.get("strix_memory_compressor_timeout") or "120")
if not self.model_name:
raise ValueError("STRIX_LLM environment variable must be set and not empty")
diff --git a/strix/llm/utils.py b/strix/llm/utils.py
index 81431f0..cb61a81 100644
--- a/strix/llm/utils.py
+++ b/strix/llm/utils.py
@@ -3,11 +3,71 @@ import re
from typing import Any
+_INVOKE_OPEN = re.compile(r'<invoke\s+name\s*=\s*["\']([^"\']+)["\']\s*>')
+_PARAM_NAME_ATTR = re.compile(r'<parameter\s+name\s*=\s*["\']([^"\']+)["\']\s*>')
+_FUNCTION_CALLS_TAG = re.compile(r"</?function_calls>")
+_STRIP_TAG_QUOTES = re.compile(r"<(function|parameter)\s*=\s*([^>]*?)>")
+
+
+def normalize_tool_format(content: str) -> str:
+    """Convert alternative tool-call XML formats to the expected one.
+
+    Handles:
+    <function_calls>...</function_calls> → stripped
+    <invoke name="tool"> → <function=tool>
+    </invoke> → </function>
+    <parameter name="x"> → <parameter=x>
+    <function="tool"> → <function=tool>
+    <parameter="x"> → <parameter=x>
+    """
+    if "<invoke" not in content and "<function" not in content:
+        return content
+
+    content = _FUNCTION_CALLS_TAG.sub("", content)
+    content = _INVOKE_OPEN.sub(r"<function=\1>", content)
+    content = _PARAM_NAME_ATTR.sub(r"<parameter=\1>", content)
+    content = content.replace("</invoke>", "</function>")
+
+    return _STRIP_TAG_QUOTES.sub(
+        lambda m: f"<{m.group(1)}={m.group(2).strip().strip(chr(34) + chr(39))}>", content
+    )
+
+
+STRIX_MODEL_MAP: dict[str, str] = {
+ "claude-sonnet-4.6": "anthropic/claude-sonnet-4-6",
+ "claude-opus-4.6": "anthropic/claude-opus-4-6",
+ "gpt-5.2": "openai/gpt-5.2",
+ "gpt-5.1": "openai/gpt-5.1",
+ "gpt-5": "openai/gpt-5",
+ "gemini-3-pro-preview": "gemini/gemini-3-pro-preview",
+ "gemini-3-flash-preview": "gemini/gemini-3-flash-preview",
+ "glm-5": "openrouter/z-ai/glm-5",
+ "glm-4.7": "openrouter/z-ai/glm-4.7",
+}
+
+
+def resolve_strix_model(model_name: str | None) -> tuple[str | None, str | None]:
+ """Resolve a strix/ model into names for API calls and capability lookups.
+
+ Returns (api_model, canonical_model):
+ - api_model: openai/ for API calls (Strix API is OpenAI-compatible)
+ - canonical_model: actual provider model name for litellm capability lookups
+ Non-strix models return the same name for both.
+ """
+ if not model_name or not model_name.startswith("strix/"):
+ return model_name, model_name
+
+ base_model = model_name[6:]
+ api_model = f"openai/{base_model}"
+ canonical_model = STRIX_MODEL_MAP.get(base_model, api_model)
+ return api_model, canonical_model
+
+
def _truncate_to_first_function(content: str) -> str:
if not content:
return content
-    function_starts = [match.start() for match in re.finditer(r"<function=", content)]
+    function_starts = [match.start() for match in re.finditer(r"<function=|<invoke", content)]
     if len(function_starts) >= 2:
second_function_start = function_starts[1]
@@ -18,6 +78,7 @@ def _truncate_to_first_function(content: str) -> str:
def parse_tool_invocations(content: str) -> list[dict[str, Any]] | None:
+ content = normalize_tool_format(content)
content = fix_incomplete_tool_call(content)
tool_invocations: list[dict[str, Any]] = []
@@ -47,12 +108,14 @@ def parse_tool_invocations(content: str) -> list[dict[str, Any]] | None:
def fix_incomplete_tool_call(content: str) -> str:
- """Fix incomplete tool calls by adding missing tag."""
- if (
- "" not in content
- ):
+ """Fix incomplete tool calls by adding missing closing tag.
+
+ Handles both ```` and ```` formats.
+ """
+ has_open = "" in content
+ if has_open and count_open == 1 and not has_close:
content = content.rstrip()
content = content + "function>" if content.endswith("") else content + "\n"
return content
@@ -73,6 +136,7 @@ def clean_content(content: str) -> str:
if not content:
return ""
+ content = normalize_tool_format(content)
content = fix_incomplete_tool_call(content)
tool_pattern = r"]+>.*?"
diff --git a/strix/runtime/docker_runtime.py b/strix/runtime/docker_runtime.py
index b783dcc..d57d358 100644
--- a/strix/runtime/docker_runtime.py
+++ b/strix/runtime/docker_runtime.py
@@ -22,6 +22,7 @@ from .runtime import AbstractRuntime, SandboxInfo
HOST_GATEWAY_HOSTNAME = "host.docker.internal"
DOCKER_TIMEOUT = 60
CONTAINER_TOOL_SERVER_PORT = 48081
+CONTAINER_CAIDO_PORT = 48080
class DockerRuntime(AbstractRuntime):
@@ -37,6 +38,7 @@ class DockerRuntime(AbstractRuntime):
self._scan_container: Container | None = None
self._tool_server_port: int | None = None
self._tool_server_token: str | None = None
+ self._caido_port: int | None = None
def _find_available_port(self) -> int:
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
@@ -78,6 +80,10 @@ class DockerRuntime(AbstractRuntime):
if port_bindings.get(port_key):
self._tool_server_port = int(port_bindings[port_key][0]["HostPort"])
+ caido_port_key = f"{CONTAINER_CAIDO_PORT}/tcp"
+ if port_bindings.get(caido_port_key):
+ self._caido_port = int(port_bindings[caido_port_key][0]["HostPort"])
+
def _wait_for_tool_server(self, max_retries: int = 30, timeout: int = 5) -> None:
host = self._resolve_docker_host()
health_url = f"http://{host}:{self._tool_server_port}/health"
@@ -121,6 +127,7 @@ class DockerRuntime(AbstractRuntime):
time.sleep(1)
self._tool_server_port = self._find_available_port()
+ self._caido_port = self._find_available_port()
self._tool_server_token = secrets.token_urlsafe(32)
execution_timeout = Config.get("strix_sandbox_execution_timeout") or "120"
@@ -130,7 +137,10 @@ class DockerRuntime(AbstractRuntime):
detach=True,
name=container_name,
hostname=container_name,
- ports={f"{CONTAINER_TOOL_SERVER_PORT}/tcp": self._tool_server_port},
+ ports={
+ f"{CONTAINER_TOOL_SERVER_PORT}/tcp": self._tool_server_port,
+ f"{CONTAINER_CAIDO_PORT}/tcp": self._caido_port,
+ },
cap_add=["NET_ADMIN", "NET_RAW"],
labels={"strix-scan-id": scan_id},
environment={
@@ -152,6 +162,7 @@ class DockerRuntime(AbstractRuntime):
if attempt < max_retries:
self._tool_server_port = None
self._tool_server_token = None
+ self._caido_port = None
time.sleep(2**attempt)
else:
return container
@@ -173,6 +184,7 @@ class DockerRuntime(AbstractRuntime):
self._scan_container = None
self._tool_server_port = None
self._tool_server_token = None
+ self._caido_port = None
try:
container = self.client.containers.get(container_name)
@@ -260,7 +272,7 @@ class DockerRuntime(AbstractRuntime):
raise RuntimeError("Docker container ID is unexpectedly None")
token = existing_token or self._tool_server_token
- if self._tool_server_port is None or token is None:
+ if self._tool_server_port is None or self._caido_port is None or token is None:
raise RuntimeError("Tool server not initialized")
host = self._resolve_docker_host()
@@ -273,6 +285,7 @@ class DockerRuntime(AbstractRuntime):
"api_url": api_url,
"auth_token": token,
"tool_server_port": self._tool_server_port,
+ "caido_port": self._caido_port,
"agent_id": agent_id,
}
@@ -314,6 +327,7 @@ class DockerRuntime(AbstractRuntime):
self._scan_container = None
self._tool_server_port = None
self._tool_server_token = None
+ self._caido_port = None
except (NotFound, DockerException):
pass
@@ -323,6 +337,7 @@ class DockerRuntime(AbstractRuntime):
self._scan_container = None
self._tool_server_port = None
self._tool_server_token = None
+ self._caido_port = None
if container_name is None:
return
diff --git a/strix/runtime/runtime.py b/strix/runtime/runtime.py
index e33c08d..e523d51 100644
--- a/strix/runtime/runtime.py
+++ b/strix/runtime/runtime.py
@@ -7,6 +7,7 @@ class SandboxInfo(TypedDict):
api_url: str
auth_token: str | None
tool_server_port: int
+ caido_port: int
agent_id: str
diff --git a/strix/skills/README.md b/strix/skills/README.md
index 5509192..501c8b0 100644
--- a/strix/skills/README.md
+++ b/strix/skills/README.md
@@ -33,6 +33,7 @@ The skills are dynamically injected into the agent's system prompt, allowing it
| **`/frameworks`** | Specific testing methods for popular frameworks e.g. Django, Express, FastAPI, and Next.js |
| **`/technologies`** | Specialized techniques for third-party services such as Supabase, Firebase, Auth0, and payment gateways |
| **`/protocols`** | Protocol-specific testing patterns for GraphQL, WebSocket, OAuth, and other communication standards |
+| **`/tooling`** | Command-line playbooks for core sandbox tools (nmap, nuclei, httpx, ffuf, subfinder, naabu, katana, sqlmap) |
| **`/cloud`** | Cloud provider security testing for AWS, Azure, GCP, and Kubernetes environments |
| **`/reconnaissance`** | Advanced information gathering and enumeration techniques for comprehensive attack surface mapping |
| **`/custom`** | Community-contributed skills for specialized or industry-specific testing scenarios |
diff --git a/strix/skills/__init__.py b/strix/skills/__init__.py
index c9cdf03..37ffc58 100644
--- a/strix/skills/__init__.py
+++ b/strix/skills/__init__.py
@@ -54,6 +54,30 @@ def validate_skill_names(skill_names: list[str]) -> dict[str, list[str]]:
return {"valid": valid_skills, "invalid": invalid_skills}
+def parse_skill_list(skills: str | None) -> list[str]:
+ if not skills:
+ return []
+ return [s.strip() for s in skills.split(",") if s.strip()]
+
+
+def validate_requested_skills(skill_list: list[str], max_skills: int = 5) -> str | None:
+ if len(skill_list) > max_skills:
+ return "Cannot specify more than 5 skills for an agent (use comma-separated format)"
+
+ if not skill_list:
+ return None
+
+ validation = validate_skill_names(skill_list)
+ if validation["invalid"]:
+ available_skills = list(get_all_skill_names())
+ return (
+ f"Invalid skills: {validation['invalid']}. "
+ f"Available skills: {', '.join(available_skills)}"
+ )
+
+ return None
+
+
def generate_skills_description() -> str:
available_skills = get_available_skills()
diff --git a/strix/skills/frameworks/nestjs.md b/strix/skills/frameworks/nestjs.md
new file mode 100644
index 0000000..51cf924
--- /dev/null
+++ b/strix/skills/frameworks/nestjs.md
@@ -0,0 +1,225 @@
+---
+name: nestjs
+description: Security testing playbook for NestJS applications covering guards, pipes, decorators, module boundaries, and multi-transport auth
+---
+
+# NestJS
+
+Security testing for NestJS applications. Focus on guard gaps across decorator stacks, validation pipe bypasses, module boundary leaks, and inconsistent auth enforcement across HTTP, WebSocket, and microservice transports.
+
+## Attack Surface
+
+**Decorator Pipeline**
+- Guards: `@UseGuards`, `CanActivate`, execution context (HTTP/WS/RPC), `Reflector` metadata
+- Pipes: `ValidationPipe` (whitelist, transform, forbidNonWhitelisted), `ParseIntPipe`, custom pipes
+- Interceptors: response mapping, caching, logging, timeout — can modify request/response flow
+- Filters: exception filters that may leak information
+- Metadata: `@SetMetadata`, `@Public()`, `@Roles()`, `@Permissions()`
+
+**Module System**
+- `@Module` boundaries, provider scoping (DEFAULT/REQUEST/TRANSIENT)
+- Dynamic modules: `forRoot`/`forRootAsync`, global modules
+- DI container: provider overrides, custom providers
+
+**Controllers & Transports**
+- REST: `@Controller`, versioning (URI/Header/MediaType)
+- GraphQL: `@Resolver`, playground/sandbox exposure
+- WebSocket: `@WebSocketGateway`, gateway guards, room authorization
+- Microservices: TCP, Redis, NATS, MQTT, gRPC, Kafka — often lack HTTP-level auth
+
+**Data Layer**
+- TypeORM: repositories, QueryBuilder, raw queries, relations
+- Prisma: `$queryRaw`, `$queryRawUnsafe`
+- Mongoose: operator injection, `$where`, `$regex`
+
+**Auth & Config**
+- `@nestjs/passport` strategies, `@nestjs/jwt`, session-based auth
+- `@nestjs/config`, ConfigService, `.env` files
+- `@nestjs/throttler`, rate limiting with `@SkipThrottle`
+
+**API Documentation**
+- `@nestjs/swagger`: OpenAPI exposure, DTO schemas, auth schemes
+
+## High-Value Targets
+
+- Swagger/OpenAPI endpoints in production (`/api`, `/api-docs`, `/api-json`, `/swagger`)
+- Auth endpoints: login, register, token refresh, password reset, OAuth callbacks
+- Admin controllers decorated with `@Roles('admin')` — test with user-level tokens
+- File upload endpoints using `FileInterceptor`/`FilesInterceptor`
+- WebSocket gateways sharing business logic with HTTP controllers
+- Microservice handlers (`@MessagePattern`, `@EventPattern`) — often unguarded
+- CRUD generators (`@nestjsx/crud`) with auto-generated endpoints
+- Background jobs and scheduled tasks (`@nestjs/schedule`)
+- Health/metrics endpoints (`@nestjs/terminus`, `/health`, `/metrics`)
+- GraphQL playground/sandbox in production (`/graphql`)
+
+## Reconnaissance
+
+**Swagger Discovery**
+```
+GET /api
+GET /api-docs
+GET /api-json
+GET /swagger
+GET /docs
+GET /v1/api-docs
+GET /api/v2/docs
+```
+
+Extract: paths, parameter schemas, DTOs, auth schemes, example values. Swagger may reveal internal endpoints, deprecated routes, and admin-only paths not visible in the UI.
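The extraction step can be scripted; a minimal `jq` sketch, assuming the spec was already saved from `/api-json` (the inline `spec.json` here is a stand-in sample, not real target output):

```shell
# Stand-in for a spec fetched with: curl -s https://target.tld/api-json -o spec.json
cat > spec.json <<'EOF'
{"paths":{"/admin/users":{"get":{},"delete":{}},"/login":{"post":{}}}}
EOF

# List every METHOD PATH pair declared in the spec.
jq -r '.paths | to_entries[] | .key as $p | .value | keys[] | ascii_upcase + " " + $p' spec.json
```

The resulting method/path list feeds directly into the guard-audit matrix below.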
+
+**Guard Mapping**
+
+For each controller and method, identify:
+- Global guards (applied in `main.ts` or app module)
+- Controller-level guards (`@UseGuards` on the class)
+- Method-level guards (`@UseGuards` on individual handlers)
+- `@Public()` or `@SkipThrottle()` decorators that bypass protection
+
+## Key Vulnerabilities
+
+### Guard Bypass
+
+**Decorator Stack Gaps**
+- Guards execute: global → controller → method. A method missing `@UseGuards` when siblings have it is the #1 finding.
+- `@Public()` metadata causing global `AuthGuard` to skip enforcement — check if applied too broadly.
+- New methods added to existing controllers without inheriting the expected guard.
+
+**ExecutionContext Switching**
+- Guards handling only HTTP context (`getRequest()`) may fail silently on WebSocket or RPC, returning `true` by default.
+- Test same business logic through alternate transports to find context-specific bypasses.
+
+**Reflector Mismatches**
+- Guard reads `SetMetadata('roles', [...])` but decorator sets `'role'` (singular) — guard sees no metadata, defaults to allow.
+- `applyDecorators()` compositions accidentally overriding stricter guards with permissive ones.
+
+### Validation Pipe Exploits
+
+**Whitelist Bypass**
+- `whitelist: true` without `forbidNonWhitelisted: true`: extra properties silently stripped but may have been processed by earlier middleware/interceptors.
+- Missing `@Type(() => ChildDto)` on nested objects: `@ValidateNested()` without `@Type` means nested payload is never validated.
+- Array elements: `@IsArray()` doesn't validate elements without `@ValidateNested({ each: true })` and `@Type`.
+
+**Type Coercion**
+- `transform: true` enables implicit coercion: strings → numbers, `"true"` → `true`, `"null"` → `null`.
+- Exploit truthiness assumptions in business logic downstream.
+
+**Conditional Validation**
+- `@ValidateIf()` and validation groups creating paths where fields skip validation entirely.
+
+**Missing Parse Pipes**
+- `@Param('id')` without `ParseIntPipe`/`ParseUUIDPipe` — string values reach ORM queries directly.
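The probes above can be sketched as concrete payloads; a minimal sketch against a hypothetical `POST /users` endpoint (all field names are illustrative):

```shell
# Extra property: silently stripped by whitelist:true, but only rejected
# outright when forbidNonWhitelisted:true is also set.
extra='{"name":"alice","isAdmin":true}'

# Nested object: if the DTO lacks @Type(() => ProfileDto), keys inside
# "profile" are never validated even with @ValidateNested() present.
nested='{"name":"alice","profile":{"role":"admin"}}'

# Coercion probe: with transform:true, string values may coerce to
# numbers/booleans in downstream business logic.
coerce='{"name":"alice","age":"0"}'

# Each payload would be sent as:
#   curl -s -X POST https://target.tld/users \
#        -H 'Content-Type: application/json' -d "$payload"
printf '%s\n' "$extra" "$nested" "$coerce"
```

Compare the stored/echoed entity against what was sent to confirm which fields survived validation.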
+
+### Auth & Passport
+
+**JWT Strategy**
+- Check `ignoreExpiration` is false, `algorithms` is pinned (no `none` or HS/RS confusion)
+- Weak `secretOrKey` values
+- Cross-service token reuse when audience/issuer not enforced
+
+**Passport Strategy Issues**
+- `validate()` return value becomes `req.user` — if it returns full DB record, sensitive fields leak downstream
+- Multiple strategies (JWT + session): one may bypass restrictions of the other
+- Custom guards returning `true` for unauthenticated as "optional auth"
+
+**Timing Attacks**
+- Plain string comparison instead of bcrypt/argon2 in local strategy
+
+### Serialization Leaks
+
+**Missing ClassSerializerInterceptor**
+- If not applied globally, `@Exclude()` fields (passwords, internal IDs) returned in responses.
+- `@Expose()` with groups: admin-only fields exposed when groups not enforced per-request.
+
+**Circular Relations**
+- Eager-loaded TypeORM/Prisma relations exposing entire object graph without careful serialization.
+
+### Interceptor Abuse
+
+**Cache Poisoning**
+- `CacheInterceptor` without user/tenant identity in cache key — responses from one user served to another.
+- Test: authenticated request, then unauthenticated request returning cached data.
+
+**Response Mapping**
+- Transformation interceptors may leak internal entity fields if mapping is incomplete.
+
+### Module Boundary Leaks
+
+**Global Module Exposure**
+- `@Global()` modules expose all providers to every module without explicit imports.
+- Sensitive services (admin operations, internal APIs) accessible from untrusted modules.
+
+**Config Leaks**
+- `forRoot`/`forRootAsync` configuration secrets accessible via `ConfigService` injection in any module.
+
+**Scope Issues**
+- Request-scoped providers (`Scope.REQUEST`) incorrectly scoped as DEFAULT (singleton) — request context leaks across concurrent requests.
+
+### WebSocket Gateway
+
+- HTTP guards don't automatically apply to WebSocket gateways — `@UseGuards` must be explicit.
+- Authentication deferred from `handleConnection` to message handlers allows unauthenticated message sending.
+- Room/namespace authorization: users joining rooms they shouldn't access.
+- `@SubscribeMessage()` handlers relying on connection-level auth instead of per-message validation.
+
+### Microservice Transport
+
+- `@MessagePattern`/`@EventPattern` handlers often lack guards (considered "internal").
+- If transport (Redis, NATS, Kafka) is network-accessible, messages can be injected bypassing all HTTP security.
+- `ValidationPipe` may only be configured for HTTP — microservice payloads skip validation.
+
+### ORM Injection
+
+**TypeORM**
+- `QueryBuilder` and `.query()` with template literal interpolation → SQL injection.
+- Relations: API allowing specification of which relations to load via query params.
+
+**Mongoose**
+- Query operator injection: `{ password: { $gt: "" } }` via unsanitized request body.
+- `$where` and `$regex` operators from user input.
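A minimal sketch of the operator-injection probe, assuming a hypothetical login handler that passes the request body straight into `User.findOne(...)`:

```shell
# Baseline credential guess vs. operator injection: {"$gt":""} matches
# any non-empty password string, so the query can succeed without knowing it.
legit='{"username":"admin","password":"guess"}'
inject='{"username":"admin","password":{"$gt":""}}'

# Sent as:
#   curl -s -X POST https://target.tld/auth/login \
#        -H 'Content-Type: application/json' -d "$inject"
printf '%s\n' "$legit" "$inject"
```

A differing response between the two requests (200 vs 401) is the signal that operators reach the query layer.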
+
+**Prisma**
+- `$queryRaw`/`$executeRaw` with string interpolation (but not tagged template).
+- `$queryRawUnsafe` usage.
+
+### Rate Limiting
+
+- `@SkipThrottle()` on sensitive endpoints (login, password reset, OTP).
+- In-memory throttler storage: resets on restart, doesn't work across instances.
+- Behind proxy without `trust proxy`: all requests share same IP, or header spoofable.
+
+### CRUD Generators
+
+- Auto-generated CRUD endpoints may not inherit manual guard configurations.
+- Bulk operations (`createMany`, `updateMany`) bypassing per-entity authorization.
+- Query parameter injection in CRUD libraries: `filter`, `sort`, `join`, `select` exposing unauthorized data.
+
+## Bypass Techniques
+
+- `@Public()` / skip-metadata applied via composed decorators at method level causing global guards to skip via `Reflector` metadata checks
+- Route param pollution: `/users/123?id=456` — which `id` wins in guards vs handlers?
+- Version routing: v1 of endpoint may still be registered without the guard added to v2
+- `X-HTTP-Method-Override` or `_method` processed by Express before guards
+- Content-type switching: `application/x-www-form-urlencoded` instead of JSON to bypass JSON-specific validation
+- Exception filter differences: guard throwing results in generic error that leaks route existence info
+
+## Testing Methodology
+
+1. **Enumerate** — Fetch Swagger/OpenAPI, map all controllers, resolvers, and gateways
+2. **Guard audit** — Map decorator stack per method: which guards, pipes, interceptors are applied at each level
+3. **Matrix testing** — Test each endpoint across: unauth/user/admin × HTTP/WS/microservice
+4. **Validation probing** — Send extra fields, wrong types, nested objects, arrays to find pipe gaps
+5. **Transport parity** — Same operation via HTTP, WebSocket, and microservice transport
+6. **Module boundaries** — Check if providers from one module are accessible without proper imports
+7. **Serialization check** — Compare raw entity fields with API response fields
+
+## Validation Requirements
+
+- Guard bypass: request to guarded endpoint succeeding without auth, showing guard chain break point
+- Validation bypass: payload with extra/malformed fields affecting business logic
+- Cross-transport inconsistency: same action authorized via HTTP but exploitable via WebSocket/microservice
+- Module boundary leak: accessing provider or data across unauthorized module boundaries
+- Serialization leak: response containing excluded fields (passwords, internal metadata)
+- IDOR: side-by-side requests from different users showing unauthorized data access
+- ORM injection: raw query with user-controlled input returning unauthorized data, or error-based evidence of query structure
+- Cache poisoning: response from unauthenticated or different-user request matching a prior authenticated user's cached response
diff --git a/strix/skills/tooling/ffuf.md b/strix/skills/tooling/ffuf.md
new file mode 100644
index 0000000..0c4d1f0
--- /dev/null
+++ b/strix/skills/tooling/ffuf.md
@@ -0,0 +1,66 @@
+---
+name: ffuf
+description: ffuf fuzzing syntax with matcher/filter strategy and non-interactive defaults.
+---
+
+# ffuf CLI Playbook
+
+Official docs:
+- https://github.com/ffuf/ffuf
+
+Canonical syntax:
+`ffuf -w <wordlist> -u <url> [flags]`
+
+High-signal flags:
+- `-u <url>` target URL containing `FUZZ`
+- `-w <path>` wordlist input (supports `KEYWORD` mapping via `-w file:KEYWORD`)
+- `-mc <codes>` match status codes
+- `-fc <codes>` filter status codes
+- `-fs <sizes>` filter by body size
+- `-ac` auto-calibration
+- `-t <n>` threads
+- `-rate <n>` request rate
+- `-timeout <seconds>` HTTP timeout
+- `-x <proxy-url>` upstream proxy (HTTP/SOCKS)
+- `-ignore-body` skip downloading response body
+- `-noninteractive` disable interactive console mode
+- `-recursion` and `-recursion-depth <n>` recursive discovery
+- `-H <header>` custom headers
+- `-X <method>` and `-d <data>` for non-GET fuzzing
+- `-o <file> -of <format>` structured output
+
+Agent-safe baseline for automation:
+`ffuf -w wordlist.txt -u https://target.tld/FUZZ -mc 200,204,301,302,307,401,403,405 -ac -t 20 -rate 50 -timeout 10 -noninteractive -of json -o ffuf.json`
+
+Common patterns:
+- Basic path fuzzing:
+ `ffuf -w /path/wordlist.txt -u https://target.tld/FUZZ -mc 200,204,301,302,307,401,403 -ac -t 40 -rate 200 -noninteractive`
+- Vhost fuzzing:
+ `ffuf -w vhosts.txt -u https://target.tld -H 'Host: FUZZ.target.tld' -fs 0 -ac -noninteractive`
+- Parameter value fuzzing:
+ `ffuf -w values.txt -u 'https://target.tld/search?q=FUZZ' -mc all -fs 0 -ac -t 30 -noninteractive`
+- POST body fuzzing:
+ `ffuf -w payloads.txt -u https://target.tld/login -X POST -H 'Content-Type: application/x-www-form-urlencoded' -d 'username=admin&password=FUZZ' -fc 401 -noninteractive`
+- Recursive discovery:
+ `ffuf -w dirs.txt -u https://target.tld/FUZZ -recursion -recursion-depth 2 -ac -t 30 -noninteractive`
+- Proxy-instrumented run:
+ `ffuf -w wordlist.txt -u https://target.tld/FUZZ -x http://127.0.0.1:48080 -mc 200,301,302,403 -ac -noninteractive`
+
+Critical correctness rules:
+- `FUZZ` must appear exactly at the mutation point in URL/header/body.
+- If using `-w file:KEYWORD`, that same `KEYWORD` must be present in URL/header/body.
+- Always include `-noninteractive` in agent/script execution to prevent ffuf console mode from swallowing subsequent shell commands.
+- Save structured output with `-of json -o <file>` for deterministic parsing.
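Parsing the structured output downstream can look like this; a sketch using `jq`, with an inline stand-in for a real `ffuf.json` (same key fields: `url`, `status`, `length`):

```shell
# Stand-in for output of: ffuf ... -of json -o ffuf.json
cat > ffuf.json <<'EOF'
{"results":[
 {"url":"https://target.tld/admin","status":403,"length":287},
 {"url":"https://target.tld/backup","status":200,"length":5120}
]}
EOF

# Keep only 200s, one "status length url" line per hit.
jq -r '.results[] | select(.status == 200) | "\(.status) \(.length) \(.url)"' ffuf.json
```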
+
+Usage rules:
+- Prefer explicit matcher/filter strategy (`-mc`/`-fc`/`-fs`) over default-only output.
+- Start conservative (`-rate`, `-t`) and scale only if target tolerance is known.
+- Do not use `-h`/`--help` during normal execution unless absolutely necessary.
+
+Failure recovery:
+- If ffuf drops into interactive mode, send `C-c` and rerun with `-noninteractive`.
+- If response noise is too high, tighten `-mc/-fc/-fs` instead of increasing load.
+- If runtime is too long, lower `-rate/-t` and tighten scope.
+
+If uncertain, query web_search with:
+`site:github.com/ffuf/ffuf README`
diff --git a/strix/skills/tooling/httpx.md b/strix/skills/tooling/httpx.md
new file mode 100644
index 0000000..50fcf53
--- /dev/null
+++ b/strix/skills/tooling/httpx.md
@@ -0,0 +1,77 @@
+---
+name: httpx
+description: ProjectDiscovery httpx probing syntax, exact probe flags, and automation-safe output patterns.
+---
+
+# httpx CLI Playbook
+
+Official docs:
+- https://docs.projectdiscovery.io/opensource/httpx/usage
+- https://docs.projectdiscovery.io/opensource/httpx/running
+- https://github.com/projectdiscovery/httpx
+
+Canonical syntax:
+`httpx [flags]`
+
+High-signal flags:
+- `-u, -target <target>` single target
+- `-l, -list <file>` target list
+- `-nf, -no-fallback` probe both HTTP and HTTPS
+- `-nfs, -no-fallback-scheme` do not auto-switch schemes
+- `-sc` status code
+- `-title` page title
+- `-server, -web-server` server header
+- `-td, -tech-detect` technology detection
+- `-fr, -follow-redirects` follow redirects
+- `-mc <codes>` / `-fc <codes>` match or filter status codes
+- `-path <paths>` probe specific paths
+- `-p, -ports <ports>` probe custom ports
+- `-proxy, -http-proxy <url>` proxy target requests
+- `-tlsi, -tls-impersonate` experimental TLS impersonation
+- `-j, -json` JSONL output
+- `-sr, -store-response` store request/response artifacts
+- `-srd, -store-response-dir <dir>` custom directory for stored artifacts
+- `-silent` compact output
+- `-rl <n>` requests/second cap
+- `-t <n>` threads
+- `-timeout <seconds>` request timeout
+- `-retries <n>` retry attempts
+- `-o <file>` output file
+
+Agent-safe baseline for automation:
+`httpx -l hosts.txt -sc -title -server -td -fr -timeout 10 -retries 1 -rl 50 -t 25 -silent -j -o httpx.jsonl`
+
+Common patterns:
+- Quick live+fingerprint check:
+ `httpx -l hosts.txt -sc -title -server -td -silent -o httpx.txt`
+- Probe known admin paths:
+ `httpx -l hosts.txt -path /,/login,/admin -sc -title -silent -j -o httpx_paths.jsonl`
+- Probe both schemes explicitly:
+ `httpx -l hosts.txt -nf -sc -title -silent`
+- Vhost detection pass:
+ `httpx -l hosts.txt -vhost -sc -title -silent -j -o httpx_vhost.jsonl`
+- Proxy-instrumented probing:
+ `httpx -l hosts.txt -sc -title -proxy http://127.0.0.1:48080 -silent -j -o httpx_proxy.jsonl`
+- Response-storage pass for downstream content parsing:
+ `httpx -l hosts.txt -fr -sr -srd recon/httpx_store -sc -title -server -cl -ct -location -probe -silent`
+
+Critical correctness rules:
+- For machine parsing, prefer `-j -o <file>`.
+- Keep `-rl` and `-t` explicit for reproducible throughput.
+- Use `-nf` when you need dual-scheme probing from host-only input.
+- When using `-path` or `-ports`, keep scope tight to avoid accidental scan inflation.
+- Use `-sr -srd <dir>` when later steps need raw response artifacts (JS/route extraction, grepping, replay).
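Downstream filtering of the JSONL stream can be sketched with `jq`; the sample below is a stand-in (real output carries `url`, `status_code`, and `title` among other fields):

```shell
# Stand-in for output of: httpx ... -j -o httpx.jsonl
cat > httpx.jsonl <<'EOF'
{"url":"https://a.target.tld","status_code":200,"title":"Login"}
{"url":"https://b.target.tld","status_code":404,"title":""}
EOF

# jq processes one JSON object per line; keep live 200s only.
jq -r 'select(.status_code == 200) | "\(.status_code) \(.url) \(.title)"' httpx.jsonl
```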
+
+Usage rules:
+- Use `-silent` for pipeline-friendly output.
+- Use `-mc/-fc` when downstream steps depend on specific response classes.
+- Prefer `-proxy` flag over global proxy env vars when only httpx traffic should be proxied.
+- Do not use `-h`/`--help` for routine runs unless absolutely necessary.
+
+Failure recovery:
+- If too many timeouts occur, reduce `-rl/-t` and/or increase `-timeout`.
+- If output is noisy, add `-fc` filters or `-fd` duplicate filtering.
+- If HTTPS-only probing misses HTTP services, rerun with `-nf` (and avoid `-nfs`).
+
+If uncertain, query web_search with:
+`site:docs.projectdiscovery.io httpx usage`
diff --git a/strix/skills/tooling/katana.md b/strix/skills/tooling/katana.md
new file mode 100644
index 0000000..258e8e0
--- /dev/null
+++ b/strix/skills/tooling/katana.md
@@ -0,0 +1,76 @@
+---
+name: katana
+description: Katana crawler syntax, depth/js/known-files behavior, and stable concurrency controls.
+---
+
+# Katana CLI Playbook
+
+Official docs:
+- https://docs.projectdiscovery.io/opensource/katana/usage
+- https://docs.projectdiscovery.io/opensource/katana/running
+- https://github.com/projectdiscovery/katana
+
+Canonical syntax:
+`katana [flags]`
+
+High-signal flags:
+- `-u, -list <url/file>` target URL(s)
+- `-d, -depth <n>` crawl depth
+- `-jc, -js-crawl` parse JavaScript-discovered endpoints
+- `-jsl, -jsluice` deeper JS parsing (memory intensive)
+- `-kf, -known-files <all/robotstxt/sitemapxml>` known-file crawling mode
+- `-proxy <url>` explicit proxy setting
+- `-c, -concurrency <n>` concurrent fetchers
+- `-p, -parallelism <n>` concurrent input targets
+- `-rl, -rate-limit <n>` request rate limit
+- `-timeout <seconds>` request timeout
+- `-retry <n>` retry count
+- `-ef, -extension-filter <extensions>` extension exclusions
+- `-tlsi, -tls-impersonate` experimental JA3/TLS impersonation
+- `-hl, -headless` enable hybrid headless crawling
+- `-sc, -system-chrome` use local Chrome for headless mode
+- `-ho, -headless-options <options>` extra Chrome options (for example proxy-server)
+- `-nos, -no-sandbox` run Chrome headless with no-sandbox
+- `-noi, -no-incognito` disable incognito in headless mode
+- `-cdd, -chrome-data-dir <dir>` persist browser profile/session
+- `-xhr, -xhr-extraction` include XHR endpoints in JSONL output
+- `-silent`, `-j, -jsonl`, `-o <file>` output controls
+
+Agent-safe baseline for automation:
+`mkdir -p crawl && katana -u https://target.tld -d 3 -jc -kf robotstxt -c 10 -p 10 -rl 50 -timeout 10 -retry 1 -ef png,jpg,jpeg,gif,svg,css,woff,woff2,ttf,eot,map -silent -j -o crawl/katana.jsonl`
+
+Common patterns:
+- Fast crawl baseline:
+ `katana -u https://target.tld -d 3 -jc -silent`
+- Deeper JS-aware crawl:
+ `katana -u https://target.tld -d 5 -jc -jsl -kf all -c 10 -p 10 -rl 50 -o katana_urls.txt`
+- Multi-target run with JSONL output:
+ `katana -list urls.txt -d 3 -jc -silent -j -o katana.jsonl`
+- Headless crawl with local Chrome:
+ `katana -u https://target.tld -hl -sc -nos -xhr -j -o crawl/katana_headless.jsonl`
+- Headless crawl through proxy:
+ `katana -u https://target.tld -hl -sc -ho proxy-server=http://127.0.0.1:48080 -j -o crawl/katana_proxy.jsonl`
+
+Critical correctness rules:
+- `-kf` must be followed by one of `all`, `robotstxt`, or `sitemapxml`.
+- Use documented `-hl` for headless mode.
+- `-proxy` expects a single proxy URL string (for example `http://127.0.0.1:8080`).
+- `-ho` expects comma-separated Chrome options (example: `-ho --disable-gpu,proxy-server=http://127.0.0.1:8080`).
+- For `-kf`, keep depth at least `-d 3` so known files are fully covered.
+- If writing to a file, ensure parent directory exists before `-o`.
+
+Usage rules:
+- Keep `-d`, `-c`, `-p`, and `-rl` explicit for reproducible runs.
+- Use `-ef` early to reduce static-file noise before fuzzing.
+- Prefer `-proxy` over environment proxy variables when proxying only Katana traffic.
+- Use `-hc` only for one-time diagnostics, not routine crawling loops.
+- Do not use `-h`/`--help` for routine runs unless absolutely necessary.
+
+Failure recovery:
+- If crawl runs too long, lower `-d` and optionally add `-ct`.
+- If memory spikes, disable `-jsl` and lower `-c/-p`.
+- If headless fails with Chrome errors, drop `-sc` or install system Chrome.
+- If output is noisy, tighten scope and add `-ef` filters.
+
+If uncertain, query web_search with:
+`site:docs.projectdiscovery.io katana usage`
diff --git a/strix/skills/tooling/naabu.md b/strix/skills/tooling/naabu.md
new file mode 100644
index 0000000..f39d44b
--- /dev/null
+++ b/strix/skills/tooling/naabu.md
@@ -0,0 +1,68 @@
+---
+name: naabu
+description: Naabu port-scanning syntax with host input, scan-type, verification, and rate controls.
+---
+
+# Naabu CLI Playbook
+
+Official docs:
+- https://docs.projectdiscovery.io/opensource/naabu/usage
+- https://docs.projectdiscovery.io/opensource/naabu/running
+- https://github.com/projectdiscovery/naabu
+
+Canonical syntax:
+`naabu [flags]`
+
+High-signal flags:
+- `-host <host>` single host
+- `-list, -l <file>` hosts list
+- `-p <ports>` explicit ports (supports ranges)
+- `-top-ports <n>` top ports profile
+- `-exclude-ports <ports>` exclusions
+- `-scan-type <s|c>` SYN (`s`) or CONNECT (`c`) scan
+- `-Pn` skip host discovery
+- `-rate <pps>` packets per second
+- `-c <n>` worker count
+- `-timeout <ms>` per-probe timeout in milliseconds
+- `-retries <n>` retry attempts
+- `-proxy <proxy>` SOCKS5 proxy
+- `-verify` verify discovered open ports
+- `-j, -json` JSONL output
+- `-silent` compact output
+- `-o <file>` output file
+
+Agent-safe baseline for automation:
+`naabu -list hosts.txt -top-ports 100 -scan-type c -Pn -rate 300 -c 25 -timeout 1000 -retries 1 -verify -silent -j -o naabu.jsonl`
+
+Common patterns:
+- Top ports with controlled rate:
+ `naabu -list hosts.txt -top-ports 100 -scan-type c -rate 300 -c 25 -timeout 1000 -retries 1 -verify -silent -o naabu.txt`
+- Focused web-ports sweep:
+ `naabu -list hosts.txt -p 80,443,8080,8443 -scan-type c -rate 300 -c 25 -timeout 1000 -retries 1 -verify -silent`
+- Single-host quick check:
+ `naabu -host target.tld -p 22,80,443 -scan-type c -rate 300 -c 25 -timeout 1000 -retries 1 -verify`
+- Root SYN mode (if available):
+ `sudo naabu -list hosts.txt -top-ports 100 -scan-type syn -rate 500 -c 25 -timeout 1000 -retries 1 -verify -silent`
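+
+Before handing results to follow-up scanners, the JSONL output can be flattened to `host:port` pairs. A minimal sketch, assuming each record carries `host` and `port` fields (the sample data is hypothetical):
+
+```shell
+# Sample records in the shape naabu -j is assumed to emit (hypothetical data).
+cat > /tmp/naabu.jsonl <<'EOF'
+{"host":"target.tld","ip":"203.0.113.10","port":443}
+{"host":"target.tld","ip":"203.0.113.10","port":80}
+EOF
+
+# Flatten to host:port pairs for tools that accept a target list.
+sed -n 's/.*"host":"\([^"]*\)".*"port":\([0-9]*\).*/\1:\2/p' /tmp/naabu.jsonl | sort -u > /tmp/targets.txt
+cat /tmp/targets.txt
+```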
+
+Critical correctness rules:
+- Use `-scan-type c` (CONNECT) when running without root/privileged raw socket access.
+- Always set `-timeout` explicitly; it is in milliseconds, not seconds.
+- Set `-rate` explicitly to avoid unstable or noisy scans.
+- Keep port scope tight: prefer explicit important ports or a small `-top-ports` value unless broader coverage is explicitly required.
+- Do not spam traffic; start with the smallest useful port set and conservative rate/worker settings.
+- Prefer `-verify` before handing ports to follow-up scanners.
+
+Usage rules:
+- Keep host discovery behavior explicit (`-Pn` or default discovery).
+- Use `-j -o <file>` for automation pipelines.
+- Prefer `-p 22,80,443,8080,8443` or `-top-ports 100` before considering larger sweeps.
+- Do not use `-h`/`--help` for normal flow unless absolutely necessary.
+
+Failure recovery:
+- If privileged socket errors occur, switch to `-scan-type c`.
+- If scans are slow or lossy, lower `-rate`, lower `-c`, and tighten `-p`/`-top-ports`.
+- If many hosts appear down, compare runs with and without `-Pn`.
+
+If uncertain, query web_search with:
+`site:docs.projectdiscovery.io naabu usage`
diff --git a/strix/skills/tooling/nmap.md b/strix/skills/tooling/nmap.md
new file mode 100644
index 0000000..831b4c6
--- /dev/null
+++ b/strix/skills/tooling/nmap.md
@@ -0,0 +1,66 @@
+---
+name: nmap
+description: Canonical Nmap CLI syntax, two-pass scanning workflow, and sandbox-safe bounded scan patterns.
+---
+
+# Nmap CLI Playbook
+
+Official docs:
+- https://nmap.org/book/man-briefoptions.html
+- https://nmap.org/book/man.html
+- https://nmap.org/book/man-performance.html
+
+Canonical syntax:
+`nmap [Scan Type(s)] [Options] {target specification}`
+
+High-signal flags:
+- `-n` skip DNS resolution
+- `-Pn` skip host discovery when ICMP/ping is filtered
+- `-sS` SYN scan (root/privileged)
+- `-sT` TCP connect scan (no raw-socket privilege)
+- `-sV` detect service versions
+- `-sC` run default NSE scripts
+- `-p <ports>` explicit ports (`-p-` for all TCP ports)
+- `--top-ports <n>` quick common-port sweep
+- `--open` show only hosts with open ports
+- `-T<0-5>` timing template (`-T4` common)
+- `--max-retries <n>` cap retransmissions
+- `--host-timeout