Compare commits

87 Commits

| Author | SHA1 | Date |
|---|---|---|
| | e5104eb93a | |
| | d8a08e9a8c | |
| | f6475cec07 | |
| | 31baa0dfc0 | |
| | 56526cbf90 | |
| | 47faeb1ef3 | |
| | 435ac82d9e | |
| | f08014cf51 | |
| | bc8e14f68a | |
| | eae2b783c0 | |
| | 058cf1abdb | |
| | d16bdb277a | |
| | d7f712581d | |
| | 4818a854d6 | |
| | 9bcb43e713 | |
| | 5672925736 | |
| | 61c94189c6 | |
| | f539e5aafd | |
| | 1ffeedcf55 | |
| | c059f47d01 | |
| | 7dab26cdd5 | |
| | 498032e279 | |
| | b80bb165b9 | |
| | fe456d57fe | |
| | 13e804b7e3 | |
| | 2e3dc0d276 | |
| | 83efe3816f | |
| | 52aa763d47 | |
| | d932602a6b | |
| | 6f4ca95338 | |
| | fb6f6295c5 | |
| | f56f56a7f7 | |
| | 86a687ede8 | |
| | 7b7ea59a37 | |
| | 226678f3f2 | |
| | 49421f50d5 | |
| | b6b0778956 | |
| | 4a58226c9a | |
| | 94bb97143e | |
| | bcd6b8a715 | |
| | c53a0f6b64 | |
| | dc5043452e | |
| | 13ba8746dd | |
| | a31ed36778 | |
| | 740fb3ed40 | |
| | c327ce621f | |
| | e8662fbda9 | |
| | cdf3cca3b7 | |
| | 0159d431ea | |
| | bf04b304e6 | |
| | a1d7c0f810 | |
| | 47e07c8a04 | |
| | ea31e0cc9d | |
| | 9bb8475e2f | |
| | a09d2795e2 | |
| | 17ee6e6e6f | |
| | 01ae348da8 | |
| | 0e9cd9b2a4 | |
| | 2ea5ff6695 | |
| | 06659d98ba | |
| | 7af1180a30 | |
| | f48def1f9e | |
| | af8eeef4ac | |
| | 16c9b05121 | |
| | 6422bfa0b4 | |
| | dd7767c847 | |
| | 2777ae3fe8 | |
| | 45bb0ae8d8 | |
| | 67cfe994be | |
| | 878d6ebf57 | |
| | 48fb48dba3 | |
| | 0954ac208f | |
| | a6dcb7756e | |
| | a2142cc985 | |
| | 7bcdedfb18 | |
| | e6ddcb1801 | |
| | daba3d8b61 | |
| | e6c1aae38d | |
| | 1089aab89e | |
| | 706bb193c0 | |
| | 2ba1d0fe59 | |
| | 8b0bb521ba | |
| | a90082bc53 | |
| | 6fc592b4e8 | |
| | 62cca3f149 | |
| | f25cf9b23d | |
| | 2472d590d5 | |
````diff
@@ -39,14 +39,14 @@ Thank you for your interest in contributing to Strix! This guide will help you g
 poetry run strix --target https://example.com
 ```

-## 📚 Contributing Prompt Modules
+## 📚 Contributing Skills

-Prompt modules are specialized knowledge packages that enhance agent capabilities. See [strix/prompts/README.md](strix/prompts/README.md) for detailed guidelines.
+Skills are specialized knowledge packages that enhance agent capabilities. See [strix/skills/README.md](strix/skills/README.md) for detailed guidelines.

 ### Quick Guide

 1. **Choose the right category** (`/vulnerabilities`, `/frameworks`, `/technologies`, etc.)
-2. **Create a** `.jinja` file with your prompts
+2. **Create a** `.jinja` file with your skill content
 3. **Include practical examples** - Working payloads, commands, or test cases
 4. **Provide validation methods** - How to confirm findings and avoid false positives
 5. **Submit via PR** with clear description
````
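The quick guide above can be made concrete with a minimal sketch of a skill file. This is a hypothetical example — the file name, section layout, and wording are assumptions for illustration, not taken from the repository; see strix/skills/README.md for the actual conventions.

````jinja
{# Hypothetical skill file: strix/skills/vulnerabilities/example-ssti.jinja #}
{# Layout and names are illustrative assumptions, not the repository's schema. #}
## Server-Side Template Injection

### Practical examples
- Probe user-controlled fields with a simple arithmetic payload and check
  whether the response contains the evaluated result rather than the raw input.

### Validation
- Confirm with a second payload that produces a unique computed marker,
  so plain reflection of the input can be ruled out as a false positive.
````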
101
README.md
````diff
@@ -1,55 +1,61 @@
 <p align="center">
-<a href="https://usestrix.com/">
-<img src=".github/logo.png" width="150" alt="Strix Logo">
+<a href="https://strix.ai/">
+<img src="https://github.com/usestrix/.github/raw/main/imgs/cover.png" alt="Strix Banner" width="100%">
 </a>
 </p>

-<h1 align="center">Strix</h1>
-
-<h2 align="center">Open-source AI Hackers to secure your Apps</h2>

 <div align="center">

-[](https://pypi.org/project/strix-agent/)
-[](https://pypi.org/project/strix-agent/)
-[](LICENSE)
+# Strix

-[](https://github.com/usestrix/strix)
-[](https://discord.gg/YjKFvEZSdZ)
-[](https://usestrix.com)
+### Open-source AI hackers to find and fix your app’s vulnerabilities.

-<a href="https://trendshift.io/repositories/15362" target="_blank"><img src="https://trendshift.io/api/badge/repositories/15362" alt="usestrix%2Fstrix | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
-<br/>

-[](https://deepwiki.com/usestrix/strix)
+<a href="https://docs.strix.ai"><img src="https://img.shields.io/badge/Docs-docs.strix.ai-2b9246?style=for-the-badge&logo=gitbook&logoColor=white" alt="Docs"></a>
+<a href="https://strix.ai"><img src="https://img.shields.io/badge/Website-strix.ai-3b82f6?style=for-the-badge&logoColor=white" alt="Website"></a>
+<a href="https://pypi.org/project/strix-agent/"><img src="https://img.shields.io/badge/PyPI-strix--agent-f59e0b?style=for-the-badge&logo=pypi&logoColor=white" alt="PyPI"></a>
+
+<a href="https://deepwiki.com/usestrix/strix"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></a>
+<a href="https://github.com/usestrix/strix"><img src="https://img.shields.io/github/stars/usestrix/strix?style=flat-square" alt="GitHub Stars"></a>
+<a href="LICENSE"><img src="https://img.shields.io/badge/License-Apache%202.0-3b82f6?style=flat-square" alt="License"></a>
+<a href="https://pypi.org/project/strix-agent/"><img src="https://img.shields.io/pypi/v/strix-agent?style=flat-square" alt="PyPI Version"></a>
+
+<a href="https://discord.gg/YjKFvEZSdZ"><img src="https://github.com/usestrix/.github/raw/main/imgs/Discord.png" height="40" alt="Join Discord"></a>
+<a href="https://x.com/strix_ai"><img src="https://github.com/usestrix/.github/raw/main/imgs/X.png" height="40" alt="Follow on X"></a>
+
+<a href="https://trendshift.io/repositories/15362" target="_blank"><img src="https://trendshift.io/api/badge/repositories/15362" alt="usestrix/strix | Trendshift" width="250" height="55"/></a>

 </div>

-<br>
+<br/>

 <div align="center">
-<img src=".github/screenshot.png" alt="Strix Demo" width="800" style="border-radius: 16px;">
+<img src=".github/screenshot.png" alt="Strix Demo" width="900" style="border-radius: 16px;">
 </div>

 <br>

 > [!TIP]
-> **New!** Strix now integrates seamlessly with GitHub Actions and CI/CD pipelines. Automatically scan for vulnerabilities on every pull request and block insecure code before it reaches production!
+> **New!** Strix integrates seamlessly with GitHub Actions and CI/CD pipelines. Automatically scan for vulnerabilities on every pull request and block insecure code before it reaches production!

 ---

-## 🦉 Strix Overview
+## Strix Overview

 Strix are autonomous AI agents that act just like real hackers - they run your code dynamically, find vulnerabilities, and validate them through actual proof-of-concepts. Built for developers and security teams who need fast, accurate security testing without the overhead of manual pentesting or the false positives of static analysis tools.

 **Key Capabilities:**

-- 🔧 **Full hacker toolkit** out of the box
-- 🤝 **Teams of agents** that collaborate and scale
-- ✅ **Real validation** with PoCs, not false positives
-- 💻 **Developer‑first** CLI with actionable reports
-- 🔄 **Auto‑fix & reporting** to accelerate remediation
+- **Full hacker toolkit** out of the box
+- **Teams of agents** that collaborate and scale
+- **Real validation** with PoCs, not false positives
+- **Developer‑first** CLI with actionable reports
+- **Auto‑fix & reporting** to accelerate remediation

 ## 🎯 Use Cases
````
````diff
@@ -87,9 +93,9 @@ strix --target ./app-directory
 > [!NOTE]
 > First run automatically pulls the sandbox Docker image. Results are saved to `strix_runs/<run-name>`

-## ☁️ Run Strix in Cloud
+## Run Strix in Cloud

-Want to skip the local setup, API keys, and unpredictable LLM costs? Run the hosted cloud version of Strix at **[app.usestrix.com](https://usestrix.com)**.
+Want to skip the local setup, API keys, and unpredictable LLM costs? Run the hosted cloud version of Strix at **[app.strix.ai](https://strix.ai)**.

 Launch a scan in just a few minutes—no setup or configuration required—and you’ll get:
````
````diff
@@ -98,13 +104,13 @@ Launch a scan in just a few minutes—no setup or configuration required—and y
 - **CI/CD and GitHub integrations** to block risky changes before production
 - **Continuous monitoring** so new vulnerabilities are caught quickly

-[**Run your first pentest now →**](https://usestrix.com)
+[**Run your first pentest now →**](https://strix.ai)

 ---

 ## ✨ Features

-### 🛠️ Agentic Security Tools
+### Agentic Security Tools

 Strix agents come equipped with a comprehensive security testing toolkit:
````
````diff
@@ -116,7 +122,7 @@ Strix agents come equipped with a comprehensive security testing toolkit:
 - **Code Analysis** - Static and dynamic analysis capabilities
 - **Knowledge Management** - Structured findings and attack documentation

-### 🎯 Comprehensive Vulnerability Detection
+### Comprehensive Vulnerability Detection

 Strix can identify and validate a wide range of security vulnerabilities:
````
````diff
@@ -128,7 +134,7 @@ Strix can identify and validate a wide range of security vulnerabilities:
 - **Authentication** - JWT vulnerabilities, session management
 - **Infrastructure** - Misconfigurations, exposed services

-### 🕸️ Graph of Agents
+### Graph of Agents

 Advanced multi-agent orchestration for comprehensive security testing:
````
````diff
@@ -138,7 +144,7 @@ Advanced multi-agent orchestration for comprehensive security testing:

 ---

-## 💻 Usage Examples
+## Usage Examples

 ### Basic Usage
````
````diff
@@ -169,7 +175,7 @@ strix --target api.your-app.com --instruction "Focus on business logic flaws and
 strix --target api.your-app.com --instruction-file ./instruction.md
 ```

-### 🤖 Headless Mode
+### Headless Mode

 Run Strix programmatically without interactive UI using the `-n/--non-interactive` flag—perfect for servers and automated jobs. The CLI prints real-time vulnerability findings, and the final report before exiting. Exits with non-zero code when vulnerabilities are found.
````
````diff
@@ -177,7 +183,7 @@ Run Strix programmatically without interactive UI using the `-n/--non-interactiv
 strix -n --target https://your-app.com
 ```

-### 🔄 CI/CD (GitHub Actions)
+### CI/CD (GitHub Actions)

 Strix can be added to your pipeline to run a security test on pull requests with a lightweight GitHub Actions workflow:
````
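Only fragments of that workflow survive in this compare view (the `jobs:` key and the `run: strix -n -t ./ --scan-mode quick` step). A minimal sketch of what such a pipeline could look like follows — the workflow file name, job name, action versions, and secret name are assumptions, not taken from the diff:

````yaml
# Hypothetical workflow sketch (e.g. .github/workflows/security.yml).
# Action versions and the LLM_API_KEY secret name are assumptions; only the
# final strix invocation and env variable names appear in this compare view.
name: Strix Security Scan
on: [pull_request]

jobs:
  strix-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install Strix
        run: pip install strix-agent
      - name: Run quick scan
        env:
          STRIX_LLM: openai/gpt-5
          LLM_API_KEY: ${{ secrets.LLM_API_KEY }}
        # Headless mode exits non-zero when vulnerabilities are found,
        # which fails the check and blocks the pull request.
        run: strix -n -t ./ --scan-mode quick
````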
````diff
@@ -204,7 +210,7 @@ jobs:
         run: strix -n -t ./ --scan-mode quick
 ```

-### ⚙️ Configuration
+### Configuration

 ```bash
 export STRIX_LLM="openai/gpt-5"
````
````diff
@@ -213,22 +219,37 @@ export LLM_API_KEY="your-api-key"
 # Optional
 export LLM_API_BASE="your-api-base-url" # if using a local model, e.g. Ollama, LMStudio
 export PERPLEXITY_API_KEY="your-api-key" # for search capabilities
 export STRIX_REASONING_EFFORT="high" # control thinking effort (default: high, quick scan: medium)
 ```

-[OpenAI's GPT-5](https://openai.com/api/) (`openai/gpt-5`) and [Anthropic's Claude Sonnet 4.5](https://claude.com/platform/api) (`anthropic/claude-sonnet-4-5`) are the recommended models for best results with Strix. We also support many [other options](https://docs.litellm.ai/docs/providers), including cloud and local models, though their performance and reliability may vary.
+**Recommended models for best results:**
+
+- [OpenAI GPT-5](https://openai.com/api/) — `openai/gpt-5`
+- [Anthropic Claude Sonnet 4.5](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-5`
+- [Google Gemini 3 Pro Preview](https://cloud.google.com/vertex-ai) — `vertex_ai/gemini-3-pro-preview`
+
+See the [LLM Providers documentation](https://docs.strix.ai/llm-providers/overview) for all supported providers including Vertex AI, Bedrock, Azure, and local models.

 > [!NOTE]
 > Strix automatically saves your configuration to `~/.strix/cli-config.json`, so you don't have to re-enter it on every run.

-## 🤝 Contributing
+## Documentation
+
+Full documentation is available at **[docs.strix.ai](https://docs.strix.ai)** — including detailed guides for usage, CI/CD integrations, skills, and advanced configuration.

-We welcome contributions of code, docs, and new prompt modules - check out our [Contributing Guide](CONTRIBUTING.md) to get started or open a [pull request](https://github.com/usestrix/strix/pulls)/[issue](https://github.com/usestrix/strix/issues).
+## Contributing

-## 👥 Join Our Community
+We welcome contributions of code, docs, and new skills - check out our [Contributing Guide](https://docs.strix.ai/contributing) to get started or open a [pull request](https://github.com/usestrix/strix/pulls)/[issue](https://github.com/usestrix/strix/issues).
+
+## Join Our Community

 Have questions? Found a bug? Want to contribute? **[Join our Discord!](https://discord.gg/YjKFvEZSdZ)**

-## 🌟 Support the Project
+## Support the Project

 **Love Strix?** Give us a ⭐ on GitHub!

-## 🙏 Acknowledgements
+## Acknowledgements

 Strix builds on the incredible work of open-source projects like [LiteLLM](https://github.com/BerriAI/litellm), [Caido](https://github.com/caido/caido), [ProjectDiscovery](https://github.com/projectdiscovery), [Playwright](https://github.com/microsoft/playwright), and [Textual](https://github.com/Textualize/textual). Huge thanks to their maintainers!
````
````diff
@@ -40,10 +40,11 @@ RUN apt-get update && \
 gdb \
 tmux \
 libnss3 libnspr4 libdbus-1-3 libatk1.0-0 libatk-bridge2.0-0 libcups2 libdrm2 libatspi2.0-0 \
-libxcomposite1 libxdamage1 libxfixes3 libxrandr2 libgbm1 libxkbcommon0 libpango-1.0-0 libcairo2 libasound2 \
+libxcomposite1 libxdamage1 libxfixes3 libxrandr2 libgbm1 libxkbcommon0 libpango-1.0-0 libcairo2 libasound2t64 \
 fonts-unifont fonts-noto-color-emoji fonts-freefont-ttf fonts-dejavu-core ttf-bitstream-vera \
 libnss3-tools

+RUN setcap cap_net_raw,cap_net_admin,cap_net_bind_service+eip $(which nmap)
+
 USER pentester
````
474
poetry.lock
generated
````diff
@@ -14,98 +14,132 @@ files = [

 [[package]]
 name = "aiohttp"
-version = "3.12.15"
+version = "3.13.3"
 description = "Async http client/server framework (asyncio)"
 optional = false
 python-versions = ">=3.9"
 groups = ["main"]
 files = [
````
{file = "aiohttp-3.12.15-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:b6fc902bff74d9b1879ad55f5404153e2b33a82e72a95c89cec5eb6cc9e92fbc"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:098e92835b8119b54c693f2f88a1dec690e20798ca5f5fe5f0520245253ee0af"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:40b3fee496a47c3b4a39a731954c06f0bd9bd3e8258c059a4beb76ac23f8e421"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2ce13fcfb0bb2f259fb42106cdc63fa5515fb85b7e87177267d89a771a660b79"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:3beb14f053222b391bf9cf92ae82e0171067cc9c8f52453a0f1ec7c37df12a77"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4c39e87afe48aa3e814cac5f535bc6199180a53e38d3f51c5e2530f5aa4ec58c"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d5f1b4ce5bc528a6ee38dbf5f39bbf11dd127048726323b72b8e85769319ffc4"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1004e67962efabbaf3f03b11b4c43b834081c9e3f9b32b16a7d97d4708a9abe6"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8faa08fcc2e411f7ab91d1541d9d597d3a90e9004180edb2072238c085eac8c2"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:fe086edf38b2222328cdf89af0dde2439ee173b8ad7cb659b4e4c6f385b2be3d"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:79b26fe467219add81d5e47b4a4ba0f2394e8b7c7c3198ed36609f9ba161aecb"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:b761bac1192ef24e16706d761aefcb581438b34b13a2f069a6d343ec8fb693a5"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:e153e8adacfe2af562861b72f8bc47f8a5c08e010ac94eebbe33dc21d677cd5b"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:fc49c4de44977aa8601a00edbf157e9a421f227aa7eb477d9e3df48343311065"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:2776c7ec89c54a47029940177e75c8c07c29c66f73464784971d6a81904ce9d1"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-win32.whl", hash = "sha256:2c7d81a277fa78b2203ab626ced1487420e8c11a8e373707ab72d189fcdad20a"},
|
||||
{file = "aiohttp-3.12.15-cp310-cp310-win_amd64.whl", hash = "sha256:83603f881e11f0f710f8e2327817c82e79431ec976448839f3cd05d7afe8f830"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:d3ce17ce0220383a0f9ea07175eeaa6aa13ae5a41f30bc61d84df17f0e9b1117"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:010cc9bbd06db80fe234d9003f67e97a10fe003bfbedb40da7d71c1008eda0fe"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3f9d7c55b41ed687b9d7165b17672340187f87a773c98236c987f08c858145a9"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc4fbc61bb3548d3b482f9ac7ddd0f18c67e4225aaa4e8552b9f1ac7e6bda9e5"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:7fbc8a7c410bb3ad5d595bb7118147dfbb6449d862cc1125cf8867cb337e8728"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:74dad41b3458dbb0511e760fb355bb0b6689e0630de8a22b1b62a98777136e16"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3b6f0af863cf17e6222b1735a756d664159e58855da99cfe965134a3ff63b0b0"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b5b7fe4972d48a4da367043b8e023fb70a04d1490aa7d68800e465d1b97e493b"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6443cca89553b7a5485331bc9bedb2342b08d073fa10b8c7d1c60579c4a7b9bd"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:6c5f40ec615e5264f44b4282ee27628cea221fcad52f27405b80abb346d9f3f8"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:2abbb216a1d3a2fe86dbd2edce20cdc5e9ad0be6378455b05ec7f77361b3ab50"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:db71ce547012a5420a39c1b744d485cfb823564d01d5d20805977f5ea1345676"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:ced339d7c9b5030abad5854aa5413a77565e5b6e6248ff927d3e174baf3badf7"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:7c7dd29c7b5bda137464dc9bfc738d7ceea46ff70309859ffde8c022e9b08ba7"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:421da6fd326460517873274875c6c5a18ff225b40da2616083c5a34a7570b685"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-win32.whl", hash = "sha256:4420cf9d179ec8dfe4be10e7d0fe47d6d606485512ea2265b0d8c5113372771b"},
|
||||
{file = "aiohttp-3.12.15-cp311-cp311-win_amd64.whl", hash = "sha256:edd533a07da85baa4b423ee8839e3e91681c7bfa19b04260a469ee94b778bf6d"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:802d3868f5776e28f7bf69d349c26fc0efadb81676d0afa88ed00d98a26340b7"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f2800614cd560287be05e33a679638e586a2d7401f4ddf99e304d98878c29444"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8466151554b593909d30a0a125d638b4e5f3836e5aecde85b66b80ded1cb5b0d"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2e5a495cb1be69dae4b08f35a6c4579c539e9b5706f606632102c0f855bcba7c"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:6404dfc8cdde35c69aaa489bb3542fb86ef215fc70277c892be8af540e5e21c0"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3ead1c00f8521a5c9070fcb88f02967b1d8a0544e6d85c253f6968b785e1a2ab"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6990ef617f14450bc6b34941dba4f12d5613cbf4e33805932f853fbd1cf18bfb"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fd736ed420f4db2b8148b52b46b88ed038d0354255f9a73196b7bbce3ea97545"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3c5092ce14361a73086b90c6efb3948ffa5be2f5b6fbcf52e8d8c8b8848bb97c"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:aaa2234bb60c4dbf82893e934d8ee8dea30446f0647e024074237a56a08c01bd"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:6d86a2fbdd14192e2f234a92d3b494dd4457e683ba07e5905a0b3ee25389ac9f"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:a041e7e2612041a6ddf1c6a33b883be6a421247c7afd47e885969ee4cc58bd8d"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:5015082477abeafad7203757ae44299a610e89ee82a1503e3d4184e6bafdd519"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:56822ff5ddfd1b745534e658faba944012346184fbfe732e0d6134b744516eea"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:b2acbbfff69019d9014508c4ba0401822e8bae5a5fdc3b6814285b71231b60f3"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-win32.whl", hash = "sha256:d849b0901b50f2185874b9a232f38e26b9b3d4810095a7572eacea939132d4e1"},
|
||||
{file = "aiohttp-3.12.15-cp312-cp312-win_amd64.whl", hash = "sha256:b390ef5f62bb508a9d67cb3bba9b8356e23b3996da7062f1a57ce1a79d2b3d34"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:9f922ffd05034d439dde1c77a20461cf4a1b0831e6caa26151fe7aa8aaebc315"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:2ee8a8ac39ce45f3e55663891d4b1d15598c157b4d494a4613e704c8b43112cd"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:3eae49032c29d356b94eee45a3f39fdf4b0814b397638c2f718e96cfadf4c4e4"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b97752ff12cc12f46a9b20327104448042fce5c33a624f88c18f66f9368091c7"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:894261472691d6fe76ebb7fcf2e5870a2ac284c7406ddc95823c8598a1390f0d"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5fa5d9eb82ce98959fc1031c28198b431b4d9396894f385cb63f1e2f3f20ca6b"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f0fa751efb11a541f57db59c1dd821bec09031e01452b2b6217319b3a1f34f3d"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5346b93e62ab51ee2a9d68e8f73c7cf96ffb73568a23e683f931e52450e4148d"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:049ec0360f939cd164ecbfd2873eaa432613d5e77d6b04535e3d1fbae5a9e645"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:b52dcf013b57464b6d1e51b627adfd69a8053e84b7103a7cd49c030f9ca44461"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:9b2af240143dd2765e0fb661fd0361a1b469cab235039ea57663cda087250ea9"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:ac77f709a2cde2cc71257ab2d8c74dd157c67a0558a0d2799d5d571b4c63d44d"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:47f6b962246f0a774fbd3b6b7be25d59b06fdb2f164cf2513097998fc6a29693"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:760fb7db442f284996e39cf9915a94492e1896baac44f06ae551974907922b64"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:ad702e57dc385cae679c39d318def49aef754455f237499d5b99bea4ef582e51"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-win32.whl", hash = "sha256:f813c3e9032331024de2eb2e32a88d86afb69291fbc37a3a3ae81cc9917fb3d0"},
|
||||
{file = "aiohttp-3.12.15-cp313-cp313-win_amd64.whl", hash = "sha256:1a649001580bdb37c6fdb1bebbd7e3bc688e8ec2b5c6f52edbb664662b17dc84"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:691d203c2bdf4f4637792efbbcdcd157ae11e55eaeb5e9c360c1206fb03d4d98"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8e995e1abc4ed2a454c731385bf4082be06f875822adc4c6d9eaadf96e20d406"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:bd44d5936ab3193c617bfd6c9a7d8d1085a8dc8c3f44d5f1dcf554d17d04cf7d"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:46749be6e89cd78d6068cdf7da51dbcfa4321147ab8e4116ee6678d9a056a0cf"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:0c643f4d75adea39e92c0f01b3fb83d57abdec8c9279b3078b68a3a52b3933b6"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0a23918fedc05806966a2438489dcffccbdf83e921a1170773b6178d04ade142"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:74bdd8c864b36c3673741023343565d95bfbd778ffe1eb4d412c135a28a8dc89"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0a146708808c9b7a988a4af3821379e379e0f0e5e466ca31a73dbdd0325b0263"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7011a70b56facde58d6d26da4fec3280cc8e2a78c714c96b7a01a87930a9530"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:3bdd6e17e16e1dbd3db74d7f989e8af29c4d2e025f9828e6ef45fbdee158ec75"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-musllinux_1_2_armv7l.whl", hash = "sha256:57d16590a351dfc914670bd72530fd78344b885a00b250e992faea565b7fdc05"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:bc9a0f6569ff990e0bbd75506c8d8fe7214c8f6579cca32f0546e54372a3bb54"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:536ad7234747a37e50e7b6794ea868833d5220b49c92806ae2d7e8a9d6b5de02"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:f0adb4177fa748072546fb650d9bd7398caaf0e15b370ed3317280b13f4083b0"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:14954a2988feae3987f1eb49c706bff39947605f4b6fa4027c1d75743723eb09"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-win32.whl", hash = "sha256:b784d6ed757f27574dca1c336f968f4e81130b27595e458e69457e6878251f5d"},
|
||||
{file = "aiohttp-3.12.15-cp39-cp39-win_amd64.whl", hash = "sha256:86ceded4e78a992f835209e236617bffae649371c4a50d5e5a3987f237db84b8"},
|
||||
{file = "aiohttp-3.12.15.tar.gz", hash = "sha256:4fc61385e9c98d72fcdf47e6dd81833f47b2f77c114c29cd64a361be57a763a2"},
|
||||
{file = "aiohttp-3.13.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d5a372fd5afd301b3a89582817fdcdb6c34124787c70dbcc616f259013e7eef7"},
|
||||
{file = "aiohttp-3.13.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:147e422fd1223005c22b4fe080f5d93ced44460f5f9c105406b753612b587821"},
|
||||
{file = "aiohttp-3.13.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:859bd3f2156e81dd01432f5849fc73e2243d4a487c4fd26609b1299534ee1845"},
|
||||
{file = "aiohttp-3.13.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:dca68018bf48c251ba17c72ed479f4dafe9dbd5a73707ad8d28a38d11f3d42af"},
|
||||
{file = "aiohttp-3.13.3-cp310-cp310-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:fee0c6bc7db1de362252affec009707a17478a00ec69f797d23ca256e36d5940"},
|
||||
{file = "aiohttp-3.13.3-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:c048058117fd649334d81b4b526e94bde3ccaddb20463a815ced6ecbb7d11160"},
|
||||
{file = "aiohttp-3.13.3-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:215a685b6fbbfcf71dfe96e3eba7a6f58f10da1dfdf4889c7dd856abe430dca7"},
{file = "aiohttp-3.13.3-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:de2c184bb1fe2cbd2cefba613e9db29a5ab559323f994b6737e370d3da0ac455"},
{file = "aiohttp-3.13.3-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:75ca857eba4e20ce9f546cd59c7007b33906a4cd48f2ff6ccf1ccfc3b646f279"},
{file = "aiohttp-3.13.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:81e97251d9298386c2b7dbeb490d3d1badbdc69107fb8c9299dd04eb39bddc0e"},
{file = "aiohttp-3.13.3-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:c0e2d366af265797506f0283487223146af57815b388623f0357ef7eac9b209d"},
{file = "aiohttp-3.13.3-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:4e239d501f73d6db1522599e14b9b321a7e3b1de66ce33d53a765d975e9f4808"},
{file = "aiohttp-3.13.3-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:0db318f7a6f065d84cb1e02662c526294450b314a02bd9e2a8e67f0d8564ce40"},
{file = "aiohttp-3.13.3-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:bfc1cc2fe31a6026a8a88e4ecfb98d7f6b1fec150cfd708adbfd1d2f42257c29"},
{file = "aiohttp-3.13.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:af71fff7bac6bb7508956696dce8f6eec2bbb045eceb40343944b1ae62b5ef11"},
{file = "aiohttp-3.13.3-cp310-cp310-win32.whl", hash = "sha256:37da61e244d1749798c151421602884db5270faf479cf0ef03af0ff68954c9dd"},
{file = "aiohttp-3.13.3-cp310-cp310-win_amd64.whl", hash = "sha256:7e63f210bc1b57ef699035f2b4b6d9ce096b5914414a49b0997c839b2bd2223c"},
{file = "aiohttp-3.13.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:5b6073099fb654e0a068ae678b10feff95c5cae95bbfcbfa7af669d361a8aa6b"},
{file = "aiohttp-3.13.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:1cb93e166e6c28716c8c6aeb5f99dfb6d5ccf482d29fe9bf9a794110e6d0ab64"},
{file = "aiohttp-3.13.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:28e027cf2f6b641693a09f631759b4d9ce9165099d2b5d92af9bd4e197690eea"},
{file = "aiohttp-3.13.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3b61b7169ababd7802f9568ed96142616a9118dd2be0d1866e920e77ec8fa92a"},
{file = "aiohttp-3.13.3-cp311-cp311-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:80dd4c21b0f6237676449c6baaa1039abae86b91636b6c91a7f8e61c87f89540"},
{file = "aiohttp-3.13.3-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:65d2ccb7eabee90ce0503c17716fc77226be026dcc3e65cce859a30db715025b"},
{file = "aiohttp-3.13.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5b179331a481cb5529fca8b432d8d3c7001cb217513c94cd72d668d1248688a3"},
{file = "aiohttp-3.13.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9d4c940f02f49483b18b079d1c27ab948721852b281f8b015c058100e9421dd1"},
{file = "aiohttp-3.13.3-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:f9444f105664c4ce47a2a7171a2418bce5b7bae45fb610f4e2c36045d85911d3"},
{file = "aiohttp-3.13.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:694976222c711d1d00ba131904beb60534f93966562f64440d0c9d41b8cdb440"},
{file = "aiohttp-3.13.3-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:f33ed1a2bf1997a36661874b017f5c4b760f41266341af36febaf271d179f6d7"},
{file = "aiohttp-3.13.3-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:e636b3c5f61da31a92bf0d91da83e58fdfa96f178ba682f11d24f31944cdd28c"},
{file = "aiohttp-3.13.3-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:5d2d94f1f5fcbe40838ac51a6ab5704a6f9ea42e72ceda48de5e6b898521da51"},
{file = "aiohttp-3.13.3-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:2be0e9ccf23e8a94f6f0650ce06042cefc6ac703d0d7ab6c7a917289f2539ad4"},
{file = "aiohttp-3.13.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9af5e68ee47d6534d36791bbe9b646d2a7c7deb6fc24d7943628edfbb3581f29"},
{file = "aiohttp-3.13.3-cp311-cp311-win32.whl", hash = "sha256:a2212ad43c0833a873d0fb3c63fa1bacedd4cf6af2fee62bf4b739ceec3ab239"},
{file = "aiohttp-3.13.3-cp311-cp311-win_amd64.whl", hash = "sha256:642f752c3eb117b105acbd87e2c143de710987e09860d674e068c4c2c441034f"},
{file = "aiohttp-3.13.3-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:b903a4dfee7d347e2d87697d0713be59e0b87925be030c9178c5faa58ea58d5c"},
{file = "aiohttp-3.13.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:a45530014d7a1e09f4a55f4f43097ba0fd155089372e105e4bff4ca76cb1b168"},
{file = "aiohttp-3.13.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:27234ef6d85c914f9efeb77ff616dbf4ad2380be0cda40b4db086ffc7ddd1b7d"},
{file = "aiohttp-3.13.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d32764c6c9aafb7fb55366a224756387cd50bfa720f32b88e0e6fa45b27dcf29"},
{file = "aiohttp-3.13.3-cp312-cp312-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:b1a6102b4d3ebc07dad44fbf07b45bb600300f15b552ddf1851b5390202ea2e3"},
{file = "aiohttp-3.13.3-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:c014c7ea7fb775dd015b2d3137378b7be0249a448a1612268b5a90c2d81de04d"},
{file = "aiohttp-3.13.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2b8d8ddba8f95ba17582226f80e2de99c7a7948e66490ef8d947e272a93e9463"},
{file = "aiohttp-3.13.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9ae8dd55c8e6c4257eae3a20fd2c8f41edaea5992ed67156642493b8daf3cecc"},
{file = "aiohttp-3.13.3-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:01ad2529d4b5035578f5081606a465f3b814c542882804e2e8cda61adf5c71bf"},
{file = "aiohttp-3.13.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:bb4f7475e359992b580559e008c598091c45b5088f28614e855e42d39c2f1033"},
{file = "aiohttp-3.13.3-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:c19b90316ad3b24c69cd78d5c9b4f3aa4497643685901185b65166293d36a00f"},
{file = "aiohttp-3.13.3-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:96d604498a7c782cb15a51c406acaea70d8c027ee6b90c569baa6e7b93073679"},
{file = "aiohttp-3.13.3-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:084911a532763e9d3dd95adf78a78f4096cd5f58cdc18e6fdbc1b58417a45423"},
{file = "aiohttp-3.13.3-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:7a4a94eb787e606d0a09404b9c38c113d3b099d508021faa615d70a0131907ce"},
{file = "aiohttp-3.13.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:87797e645d9d8e222e04160ee32aa06bc5c163e8499f24db719e7852ec23093a"},
{file = "aiohttp-3.13.3-cp312-cp312-win32.whl", hash = "sha256:b04be762396457bef43f3597c991e192ee7da460a4953d7e647ee4b1c28e7046"},
{file = "aiohttp-3.13.3-cp312-cp312-win_amd64.whl", hash = "sha256:e3531d63d3bdfa7e3ac5e9b27b2dd7ec9df3206a98e0b3445fa906f233264c57"},
{file = "aiohttp-3.13.3-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:5dff64413671b0d3e7d5918ea490bdccb97a4ad29b3f311ed423200b2203e01c"},
{file = "aiohttp-3.13.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:87b9aab6d6ed88235aa2970294f496ff1a1f9adcd724d800e9b952395a80ffd9"},
{file = "aiohttp-3.13.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:425c126c0dc43861e22cb1c14ba4c8e45d09516d0a3ae0a3f7494b79f5f233a3"},
{file = "aiohttp-3.13.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7f9120f7093c2a32d9647abcaf21e6ad275b4fbec5b55969f978b1a97c7c86bf"},
{file = "aiohttp-3.13.3-cp313-cp313-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:697753042d57f4bf7122cab985bf15d0cef23c770864580f5af4f52023a56bd6"},
{file = "aiohttp-3.13.3-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:6de499a1a44e7de70735d0b39f67c8f25eb3d91eb3103be99ca0fa882cdd987d"},
{file = "aiohttp-3.13.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:37239e9f9a7ea9ac5bf6b92b0260b01f8a22281996da609206a84df860bc1261"},
{file = "aiohttp-3.13.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:f76c1e3fe7d7c8afad7ed193f89a292e1999608170dcc9751a7462a87dfd5bc0"},
{file = "aiohttp-3.13.3-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:fc290605db2a917f6e81b0e1e0796469871f5af381ce15c604a3c5c7e51cb730"},
{file = "aiohttp-3.13.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:4021b51936308aeea0367b8f006dc999ca02bc118a0cc78c303f50a2ff6afb91"},
{file = "aiohttp-3.13.3-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:49a03727c1bba9a97d3e93c9f93ca03a57300f484b6e935463099841261195d3"},
{file = "aiohttp-3.13.3-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:3d9908a48eb7416dc1f4524e69f1d32e5d90e3981e4e37eb0aa1cd18f9cfa2a4"},
{file = "aiohttp-3.13.3-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:2712039939ec963c237286113c68dbad80a82a4281543f3abf766d9d73228998"},
{file = "aiohttp-3.13.3-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:7bfdc049127717581866fa4708791220970ce291c23e28ccf3922c700740fdc0"},
{file = "aiohttp-3.13.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:8057c98e0c8472d8846b9c79f56766bcc57e3e8ac7bfd510482332366c56c591"},
{file = "aiohttp-3.13.3-cp313-cp313-win32.whl", hash = "sha256:1449ceddcdbcf2e0446957863af03ebaaa03f94c090f945411b61269e2cb5daf"},
{file = "aiohttp-3.13.3-cp313-cp313-win_amd64.whl", hash = "sha256:693781c45a4033d31d4187d2436f5ac701e7bbfe5df40d917736108c1cc7436e"},
{file = "aiohttp-3.13.3-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:ea37047c6b367fd4bd632bff8077449b8fa034b69e812a18e0132a00fae6e808"},
{file = "aiohttp-3.13.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:6fc0e2337d1a4c3e6acafda6a78a39d4c14caea625124817420abceed36e2415"},
{file = "aiohttp-3.13.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c685f2d80bb67ca8c3837823ad76196b3694b0159d232206d1e461d3d434666f"},
{file = "aiohttp-3.13.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:48e377758516d262bde50c2584fc6c578af272559c409eecbdd2bae1601184d6"},
{file = "aiohttp-3.13.3-cp314-cp314-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:34749271508078b261c4abb1767d42b8d0c0cc9449c73a4df494777dc55f0687"},
{file = "aiohttp-3.13.3-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:82611aeec80eb144416956ec85b6ca45a64d76429c1ed46ae1b5f86c6e0c9a26"},
{file = "aiohttp-3.13.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2fff83cfc93f18f215896e3a190e8e5cb413ce01553901aca925176e7568963a"},
{file = "aiohttp-3.13.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:bbe7d4cecacb439e2e2a8a1a7b935c25b812af7a5fd26503a66dadf428e79ec1"},
{file = "aiohttp-3.13.3-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:b928f30fe49574253644b1ca44b1b8adbd903aa0da4b9054a6c20fc7f4092a25"},
{file = "aiohttp-3.13.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7b5e8fe4de30df199155baaf64f2fcd604f4c678ed20910db8e2c66dc4b11603"},
{file = "aiohttp-3.13.3-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:8542f41a62bcc58fc7f11cf7c90e0ec324ce44950003feb70640fc2a9092c32a"},
{file = "aiohttp-3.13.3-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:5e1d8c8b8f1d91cd08d8f4a3c2b067bfca6ec043d3ff36de0f3a715feeedf926"},
{file = "aiohttp-3.13.3-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:90455115e5da1c3c51ab619ac57f877da8fd6d73c05aacd125c5ae9819582aba"},
{file = "aiohttp-3.13.3-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:042e9e0bcb5fba81886c8b4fbb9a09d6b8a00245fd8d88e4d989c1f96c74164c"},
{file = "aiohttp-3.13.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:2eb752b102b12a76ca02dff751a801f028b4ffbbc478840b473597fc91a9ed43"},
{file = "aiohttp-3.13.3-cp314-cp314-win32.whl", hash = "sha256:b556c85915d8efaed322bf1bdae9486aa0f3f764195a0fb6ee962e5c71ef5ce1"},
{file = "aiohttp-3.13.3-cp314-cp314-win_amd64.whl", hash = "sha256:9bf9f7a65e7aa20dd764151fb3d616c81088f91f8df39c3893a536e279b4b984"},
{file = "aiohttp-3.13.3-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:05861afbbec40650d8a07ea324367cb93e9e8cc7762e04dd4405df99fa65159c"},
{file = "aiohttp-3.13.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:2fc82186fadc4a8316768d61f3722c230e2c1dcab4200d52d2ebdf2482e47592"},
{file = "aiohttp-3.13.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:0add0900ff220d1d5c5ebbf99ed88b0c1bbf87aa7e4262300ed1376a6b13414f"},
{file = "aiohttp-3.13.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:568f416a4072fbfae453dcf9a99194bbb8bdeab718e08ee13dfa2ba0e4bebf29"},
{file = "aiohttp-3.13.3-cp314-cp314t-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:add1da70de90a2569c5e15249ff76a631ccacfe198375eead4aadf3b8dc849dc"},
{file = "aiohttp-3.13.3-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:10b47b7ba335d2e9b1239fa571131a87e2d8ec96b333e68b2a305e7a98b0bae2"},
{file = "aiohttp-3.13.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:3dd4dce1c718e38081c8f35f323209d4c1df7d4db4bab1b5c88a6b4d12b74587"},
{file = "aiohttp-3.13.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:34bac00a67a812570d4a460447e1e9e06fae622946955f939051e7cc895cfab8"},
{file = "aiohttp-3.13.3-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:a19884d2ee70b06d9204b2727a7b9f983d0c684c650254679e716b0b77920632"},
{file = "aiohttp-3.13.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:5f8ca7f2bb6ba8348a3614c7918cc4bb73268c5ac2a207576b7afea19d3d9f64"},
{file = "aiohttp-3.13.3-cp314-cp314t-musllinux_1_2_armv7l.whl", hash = "sha256:b0d95340658b9d2f11d9697f59b3814a9d3bb4b7a7c20b131df4bcef464037c0"},
{file = "aiohttp-3.13.3-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:a1e53262fd202e4b40b70c3aff944a8155059beedc8a89bba9dc1f9ef06a1b56"},
{file = "aiohttp-3.13.3-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:d60ac9663f44168038586cab2157e122e46bdef09e9368b37f2d82d354c23f72"},
{file = "aiohttp-3.13.3-cp314-cp314t-musllinux_1_2_s390x.whl", hash = "sha256:90751b8eed69435bac9ff4e3d2f6b3af1f57e37ecb0fbeee59c0174c9e2d41df"},
{file = "aiohttp-3.13.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:fc353029f176fd2b3ec6cfc71be166aba1936fe5d73dd1992ce289ca6647a9aa"},
{file = "aiohttp-3.13.3-cp314-cp314t-win32.whl", hash = "sha256:2e41b18a58da1e474a057b3d35248d8320029f61d70a37629535b16a0c8f3767"},
{file = "aiohttp-3.13.3-cp314-cp314t-win_amd64.whl", hash = "sha256:44531a36aa2264a1860089ffd4dce7baf875ee5a6079d5fb42e261c704ef7344"},
{file = "aiohttp-3.13.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:31a83ea4aead760dfcb6962efb1d861db48c34379f2ff72db9ddddd4cda9ea2e"},
{file = "aiohttp-3.13.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:988a8c5e317544fdf0d39871559e67b6341065b87fceac641108c2096d5506b7"},
{file = "aiohttp-3.13.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9b174f267b5cfb9a7dba9ee6859cecd234e9a681841eb85068059bc867fb8f02"},
{file = "aiohttp-3.13.3-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:947c26539750deeaee933b000fb6517cc770bbd064bad6033f1cff4803881e43"},
{file = "aiohttp-3.13.3-cp39-cp39-manylinux2014_armv7l.manylinux_2_17_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:9ebf57d09e131f5323464bd347135a88622d1c0976e88ce15b670e7ad57e4bd6"},
{file = "aiohttp-3.13.3-cp39-cp39-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:4ae5b5a0e1926e504c81c5b84353e7a5516d8778fbbff00429fe7b05bb25cbce"},
{file = "aiohttp-3.13.3-cp39-cp39-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2ba0eea45eb5cc3172dbfc497c066f19c41bac70963ea1a67d51fc92e4cf9a80"},
{file = "aiohttp-3.13.3-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:bae5c2ed2eae26cc382020edad80d01f36cb8e746da40b292e68fec40421dc6a"},
{file = "aiohttp-3.13.3-cp39-cp39-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:8a60e60746623925eab7d25823329941aee7242d559baa119ca2b253c88a7bd6"},
{file = "aiohttp-3.13.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:e50a2e1404f063427c9d027378472316201a2290959a295169bcf25992d04558"},
{file = "aiohttp-3.13.3-cp39-cp39-musllinux_1_2_armv7l.whl", hash = "sha256:9a9dc347e5a3dc7dfdbc1f82da0ef29e388ddb2ed281bfce9dd8248a313e62b7"},
{file = "aiohttp-3.13.3-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:b46020d11d23fe16551466c77823df9cc2f2c1e63cc965daf67fa5eec6ca1877"},
{file = "aiohttp-3.13.3-cp39-cp39-musllinux_1_2_riscv64.whl", hash = "sha256:69c56fbc1993fa17043e24a546959c0178fe2b5782405ad4559e6c13975c15e3"},
{file = "aiohttp-3.13.3-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:b99281b0704c103d4e11e72a76f1b543d4946fea7dd10767e7e1b5f00d4e5704"},
{file = "aiohttp-3.13.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:40c5e40ecc29ba010656c18052b877a1c28f84344825efa106705e835c28530f"},
{file = "aiohttp-3.13.3-cp39-cp39-win32.whl", hash = "sha256:56339a36b9f1fc708260c76c87e593e2afb30d26de9ae1eb445b5e051b98a7a1"},
{file = "aiohttp-3.13.3-cp39-cp39-win_amd64.whl", hash = "sha256:c6b8568a3bb5819a0ad087f16d40e5a3fb6099f39ea1d5625a3edc1e923fc538"},
{file = "aiohttp-3.13.3.tar.gz", hash = "sha256:a949eee43d3782f2daae4f4a2819b2cb9b0c5d3b7f7a927067cc84dafdbb9f88"},
]

[package.dependencies]
@@ -118,7 +152,7 @@ propcache = ">=0.2.0"
yarl = ">=1.17.0,<2.0"

[package.extras]
speedups = ["Brotli ; platform_python_implementation == \"CPython\"", "aiodns (>=3.3.0)", "brotlicffi ; platform_python_implementation != \"CPython\""]
speedups = ["Brotli (>=1.2) ; platform_python_implementation == \"CPython\"", "aiodns (>=3.3.0)", "backports.zstd ; platform_python_implementation == \"CPython\" and python_version < \"3.14\"", "brotlicffi (>=1.2) ; platform_python_implementation != \"CPython\""]

[[package]]
name = "aiosignal"
@@ -345,19 +379,18 @@ files = [

[[package]]
name = "azure-core"
version = "1.35.0"
version = "1.38.0"
description = "Microsoft Azure Core Library for Python"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "azure_core-1.35.0-py3-none-any.whl", hash = "sha256:8db78c72868a58f3de8991eb4d22c4d368fae226dac1002998d6c50437e7dad1"},
{file = "azure_core-1.35.0.tar.gz", hash = "sha256:c0be528489485e9ede59b6971eb63c1eaacf83ef53001bfe3904e475e972be5c"},
{file = "azure_core-1.38.0-py3-none-any.whl", hash = "sha256:ab0c9b2cd71fecb1842d52c965c95285d3cfb38902f6766e4a471f1cd8905335"},
{file = "azure_core-1.38.0.tar.gz", hash = "sha256:8194d2682245a3e4e3151a667c686464c3786fed7918b394d035bdcd61bb5993"},
]

[package.dependencies]
requests = ">=2.21.0"
six = ">=1.11.0"
typing-extensions = ">=4.6.0"

[package.extras]
@@ -609,83 +642,100 @@ files = [

[[package]]
name = "cffi"
version = "1.17.1"
version = "2.0.0"
description = "Foreign Function Interface for Python calling C code."
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "cffi-1.17.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14"},
{file = "cffi-1.17.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be"},
{file = "cffi-1.17.1-cp310-cp310-win32.whl", hash = "sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c"},
{file = "cffi-1.17.1-cp310-cp310-win_amd64.whl", hash = "sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15"},
{file = "cffi-1.17.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401"},
{file = "cffi-1.17.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b"},
{file = "cffi-1.17.1-cp311-cp311-win32.whl", hash = "sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655"},
{file = "cffi-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0"},
{file = "cffi-1.17.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4"},
{file = "cffi-1.17.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93"},
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3"},
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8"},
{file = "cffi-1.17.1-cp312-cp312-win32.whl", hash = "sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65"},
{file = "cffi-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903"},
{file = "cffi-1.17.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e"},
{file = "cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd"},
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed"},
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9"},
{file = "cffi-1.17.1-cp313-cp313-win32.whl", hash = "sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d"},
{file = "cffi-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a"},
{file = "cffi-1.17.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1"},
{file = "cffi-1.17.1-cp38-cp38-win32.whl", hash = "sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8"},
{file = "cffi-1.17.1-cp38-cp38-win_amd64.whl", hash = "sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1"},
{file = "cffi-1.17.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16"},
{file = "cffi-1.17.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e"},
{file = "cffi-1.17.1-cp39-cp39-win32.whl", hash = "sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7"},
{file = "cffi-1.17.1-cp39-cp39-win_amd64.whl", hash = "sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662"},
{file = "cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824"},
{file = "cffi-2.0.0-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:0cf2d91ecc3fcc0625c2c530fe004f82c110405f101548512cce44322fa8ac44"},
{file = "cffi-2.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f73b96c41e3b2adedc34a7356e64c8eb96e03a3782b535e043a986276ce12a49"},
{file = "cffi-2.0.0-cp310-cp310-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:53f77cbe57044e88bbd5ed26ac1d0514d2acf0591dd6bb02a3ae37f76811b80c"},
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:3e837e369566884707ddaf85fc1744b47575005c0a229de3327f8f9a20f4efeb"},
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:5eda85d6d1879e692d546a078b44251cdd08dd1cfb98dfb77b670c97cee49ea0"},
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:9332088d75dc3241c702d852d4671613136d90fa6881da7d770a483fd05248b4"},
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fc7de24befaeae77ba923797c7c87834c73648a05a4bde34b3b7e5588973a453"},
{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:cf364028c016c03078a23b503f02058f1814320a56ad535686f90565636a9495"},
{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:e11e82b744887154b182fd3e7e8512418446501191994dbf9c9fc1f32cc8efd5"},
{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:8ea985900c5c95ce9db1745f7933eeef5d314f0565b27625d9a10ec9881e1bfb"},
{file = "cffi-2.0.0-cp310-cp310-win32.whl", hash = "sha256:1f72fb8906754ac8a2cc3f9f5aaa298070652a0ffae577e0ea9bd480dc3c931a"},
{file = "cffi-2.0.0-cp310-cp310-win_amd64.whl", hash = "sha256:b18a3ed7d5b3bd8d9ef7a8cb226502c6bf8308df1525e1cc676c3680e7176739"},
{file = "cffi-2.0.0-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:b4c854ef3adc177950a8dfc81a86f5115d2abd545751a304c5bcf2c2c7283cfe"},
{file = "cffi-2.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2de9a304e27f7596cd03d16f1b7c72219bd944e99cc52b84d0145aefb07cbd3c"},
{file = "cffi-2.0.0-cp311-cp311-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:baf5215e0ab74c16e2dd324e8ec067ef59e41125d3eade2b863d294fd5035c92"},
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:730cacb21e1bdff3ce90babf007d0a0917cc3e6492f336c2f0134101e0944f93"},
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:6824f87845e3396029f3820c206e459ccc91760e8fa24422f8b0c3d1731cbec5"},
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:9de40a7b0323d889cf8d23d1ef214f565ab154443c42737dfe52ff82cf857664"},
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8941aaadaf67246224cee8c3803777eed332a19d909b47e29c9842ef1e79ac26"},
{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:a05d0c237b3349096d3981b727493e22147f934b20f6f125a3eba8f994bec4a9"},
{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:94698a9c5f91f9d138526b48fe26a199609544591f859c870d477351dc7b2414"},
{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:5fed36fccc0612a53f1d4d9a816b50a36702c28a2aa880cb8a122b3466638743"},
{file = "cffi-2.0.0-cp311-cp311-win32.whl", hash = "sha256:c649e3a33450ec82378822b3dad03cc228b8f5963c0c12fc3b1e0ab940f768a5"},
{file = "cffi-2.0.0-cp311-cp311-win_amd64.whl", hash = "sha256:66f011380d0e49ed280c789fbd08ff0d40968ee7b665575489afa95c98196ab5"},
{file = "cffi-2.0.0-cp311-cp311-win_arm64.whl", hash = "sha256:c6638687455baf640e37344fe26d37c404db8b80d037c3d29f58fe8d1c3b194d"},
{file = "cffi-2.0.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:6d02d6655b0e54f54c4ef0b94eb6be0607b70853c45ce98bd278dc7de718be5d"},
{file = "cffi-2.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8eca2a813c1cb7ad4fb74d368c2ffbbb4789d377ee5bb8df98373c2cc0dee76c"},
{file = "cffi-2.0.0-cp312-cp312-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:21d1152871b019407d8ac3985f6775c079416c282e431a4da6afe7aefd2bccbe"},
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b21e08af67b8a103c71a250401c78d5e0893beff75e28c53c98f4de42f774062"},
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:1e3a615586f05fc4065a8b22b8152f0c1b00cdbc60596d187c2a74f9e3036e4e"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:81afed14892743bbe14dacb9e36d9e0e504cd204e0b165062c488942b9718037"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3e17ed538242334bf70832644a32a7aae3d83b57567f9fd60a26257e992b79ba"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3925dd22fa2b7699ed2617149842d2e6adde22b262fcbfada50e3d195e4b3a94"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:2c8f814d84194c9ea681642fd164267891702542f028a15fc97d4674b6206187"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-win32.whl", hash = "sha256:da902562c3e9c550df360bfa53c035b2f241fed6d9aef119048073680ace4a18"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:da68248800ad6320861f129cd9c1bf96ca849a2771a59e0344e88681905916f5"},
|
||||
{file = "cffi-2.0.0-cp312-cp312-win_arm64.whl", hash = "sha256:4671d9dd5ec934cb9a73e7ee9676f9362aba54f7f34910956b84d727b0d73fb6"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-win32.whl", hash = "sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-win_amd64.whl", hash = "sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75"},
|
||||
{file = "cffi-2.0.0-cp313-cp313-win_arm64.whl", hash = "sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-win32.whl", hash = "sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-win_amd64.whl", hash = "sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25"},
|
||||
{file = "cffi-2.0.0-cp314-cp314-win_arm64.whl", hash = "sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-win32.whl", hash = "sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-win_amd64.whl", hash = "sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6"},
|
||||
{file = "cffi-2.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:fe562eb1a64e67dd297ccc4f5addea2501664954f2692b69a76449ec7913ecbf"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:de8dad4425a6ca6e4e5e297b27b5c824ecc7581910bf9aee86cb6835e6812aa7"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:4647afc2f90d1ddd33441e5b0e85b16b12ddec4fca55f0d9671fef036ecca27c"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:3f4d46d8b35698056ec29bca21546e1551a205058ae1a181d871e278b0b28165"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:e6e73b9e02893c764e7e8d5bb5ce277f1a009cd5243f8228f75f842bf937c534"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:cb527a79772e5ef98fb1d700678fe031e353e765d1ca2d409c92263c6d43e09f"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:61d028e90346df14fedc3d1e5441df818d095f3b87d286825dfcbd6459b7ef63"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:0f6084a0ea23d05d20c3edcda20c3d006f9b6f3fefeac38f59262e10cef47ee2"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:1cd13c99ce269b3ed80b417dcd591415d3372bcac067009b6e0f59c7d4015e65"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:89472c9762729b5ae1ad974b777416bfda4ac5642423fa93bd57a09204712322"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-win32.whl", hash = "sha256:2081580ebb843f759b9f617314a24ed5738c51d2aee65d31e02f6f7a2b97707a"},
|
||||
{file = "cffi-2.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:b882b3df248017dba09d6b16defe9b5c407fe32fc7c65a9c69798e6175601be9"},
|
||||
{file = "cffi-2.0.0.tar.gz", hash = "sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
pycparser = "*"
|
||||
pycparser = {version = "*", markers = "implementation_name != \"PyPy\""}
|
||||
|
||||
[[package]]
|
||||
name = "cfgv"
|
||||
@@ -1106,6 +1156,18 @@ ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==44.0.1)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]

[[package]]
name = "cvss"
version = "3.6"
description = "CVSS2/3/4 library with interactive calculator for Python 2 and Python 3"
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "cvss-3.6-py2.py3-none-any.whl", hash = "sha256:e342c6ad9c7eb69d2aebbbc2768a03cabd57eb947c806e145de5b936219833ea"},
{file = "cvss-3.6.tar.gz", hash = "sha256:f21d18224efcd3c01b44ff1b37dec2e3208d29a6d0ce6c87a599c73c21ee1a99"},
]

[[package]]
name = "cycler"
version = "0.12.1"
@@ -1136,6 +1198,18 @@ files = [
{file = "decorator-5.2.1.tar.gz", hash = "sha256:65f266143752f734b0a7cc83c46f4618af75b8c5911b00ccb61d0ac9b6da0360"},
]

[[package]]
name = "defusedxml"
version = "0.7.1"
description = "XML bomb protection for Python stdlib modules"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
groups = ["main"]
files = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]

[[package]]
name = "dill"
version = "0.4.0"
@@ -1427,14 +1501,14 @@ files = [

[[package]]
name = "filelock"
version = "3.19.1"
version = "3.20.3"
description = "A platform independent file lock."
optional = false
python-versions = ">=3.9"
python-versions = ">=3.10"
groups = ["main", "dev"]
files = [
{file = "filelock-3.19.1-py3-none-any.whl", hash = "sha256:d38e30481def20772f5baf097c122c3babc4fcdb7e14e57049eb9d88c6dc017d"},
{file = "filelock-3.19.1.tar.gz", hash = "sha256:66eda1888b0171c998b35be2bcc0f6d75c388a7ce20c3f3f37aa8e96c2dddf58"},
{file = "filelock-3.20.3-py3-none-any.whl", hash = "sha256:4b0dda527ee31078689fc205ec4f1c1bf7d56cf88b6dc9426c4f230e46c2dce1"},
{file = "filelock-3.20.3.tar.gz", hash = "sha256:18c57ee915c7ec61cff0ecf7f0f869936c7c30191bb0cf406f1341778d0834e1"},
]

[[package]]
@@ -4829,6 +4903,7 @@ description = "C parser in Python"
optional = false
python-versions = ">=3.8"
groups = ["main"]
markers = "implementation_name != \"PyPy\""
files = [
{file = "pycparser-2.22-py3-none-any.whl", hash = "sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc"},
{file = "pycparser-2.22.tar.gz", hash = "sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6"},
@@ -5154,30 +5229,45 @@ testutils = ["gitpython (>3)"]

[[package]]
name = "pynacl"
version = "1.5.0"
version = "1.6.2"
description = "Python binding to the Networking and Cryptography (NaCl) library"
optional = false
python-versions = ">=3.6"
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "PyNaCl-1.5.0-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:401002a4aaa07c9414132aaed7f6836ff98f59277a234704ff66878c2ee4a0d1"},
{file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:52cb72a79269189d4e0dc537556f4740f7f0a9ec41c1322598799b0bdad4ef92"},
{file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a36d4a9dda1f19ce6e03c9a784a2921a4b726b02e1c736600ca9c22029474394"},
{file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:0c84947a22519e013607c9be43706dd42513f9e6ae5d39d3613ca1e142fba44d"},
{file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:06b8f6fa7f5de8d5d2f7573fe8c863c051225a27b61e6860fd047b1775807858"},
{file = "PyNaCl-1.5.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:a422368fc821589c228f4c49438a368831cb5bbc0eab5ebe1d7fac9dded6567b"},
{file = "PyNaCl-1.5.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:61f642bf2378713e2c2e1de73444a3778e5f0a38be6fee0fe532fe30060282ff"},
{file = "PyNaCl-1.5.0-cp36-abi3-win32.whl", hash = "sha256:e46dae94e34b085175f8abb3b0aaa7da40767865ac82c928eeb9e57e1ea8a543"},
{file = "PyNaCl-1.5.0-cp36-abi3-win_amd64.whl", hash = "sha256:20f42270d27e1b6a29f54032090b972d97f0a1b0948cc52392041ef7831fee93"},
{file = "PyNaCl-1.5.0.tar.gz", hash = "sha256:8ac7448f09ab85811607bdd21ec2464495ac8b7c66d146bf545b0f08fb9220ba"},
{file = "pynacl-1.6.2-cp314-cp314t-macosx_10_10_universal2.whl", hash = "sha256:622d7b07cc5c02c666795792931b50c91f3ce3c2649762efb1ef0d5684c81594"},
{file = "pynacl-1.6.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d071c6a9a4c94d79eb665db4ce5cedc537faf74f2355e4d502591d850d3913c0"},
{file = "pynacl-1.6.2-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fe9847ca47d287af41e82be1dd5e23023d3c31a951da134121ab02e42ac218c9"},
{file = "pynacl-1.6.2-cp314-cp314t-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:04316d1fc625d860b6c162fff704eb8426b1a8bcd3abacea11142cbd99a6b574"},
{file = "pynacl-1.6.2-cp314-cp314t-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:44081faff368d6c5553ccf55322ef2819abb40e25afaec7e740f159f74813634"},
{file = "pynacl-1.6.2-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:a9f9932d8d2811ce1a8ffa79dcbdf3970e7355b5c8eb0c1a881a57e7f7d96e88"},
{file = "pynacl-1.6.2-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:bc4a36b28dd72fb4845e5d8f9760610588a96d5a51f01d84d8c6ff9849968c14"},
{file = "pynacl-1.6.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:3bffb6d0f6becacb6526f8f42adfb5efb26337056ee0831fb9a7044d1a964444"},
{file = "pynacl-1.6.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:2fef529ef3ee487ad8113d287a593fa26f48ee3620d92ecc6f1d09ea38e0709b"},
{file = "pynacl-1.6.2-cp314-cp314t-win32.whl", hash = "sha256:a84bf1c20339d06dc0c85d9aea9637a24f718f375d861b2668b2f9f96fa51145"},
{file = "pynacl-1.6.2-cp314-cp314t-win_amd64.whl", hash = "sha256:320ef68a41c87547c91a8b58903c9caa641ab01e8512ce291085b5fe2fcb7590"},
{file = "pynacl-1.6.2-cp314-cp314t-win_arm64.whl", hash = "sha256:d29bfe37e20e015a7d8b23cfc8bd6aa7909c92a1b8f41ee416bbb3e79ef182b2"},
{file = "pynacl-1.6.2-cp38-abi3-macosx_10_10_universal2.whl", hash = "sha256:c949ea47e4206af7c8f604b8278093b674f7c79ed0d4719cc836902bf4517465"},
{file = "pynacl-1.6.2-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:8845c0631c0be43abdd865511c41eab235e0be69c81dc66a50911594198679b0"},
{file = "pynacl-1.6.2-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:22de65bb9010a725b0dac248f353bb072969c94fa8d6b1f34b87d7953cf7bbe4"},
{file = "pynacl-1.6.2-cp38-abi3-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:46065496ab748469cdd999246d17e301b2c24ae2fdf739132e580a0e94c94a87"},
{file = "pynacl-1.6.2-cp38-abi3-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8a66d6fb6ae7661c58995f9c6435bda2b1e68b54b598a6a10247bfcdadac996c"},
{file = "pynacl-1.6.2-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:26bfcd00dcf2cf160f122186af731ae30ab120c18e8375684ec2670dccd28130"},
{file = "pynacl-1.6.2-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:c8a231e36ec2cab018c4ad4358c386e36eede0319a0c41fed24f840b1dac59f6"},
{file = "pynacl-1.6.2-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:68be3a09455743ff9505491220b64440ced8973fe930f270c8e07ccfa25b1f9e"},
{file = "pynacl-1.6.2-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:8b097553b380236d51ed11356c953bf8ce36a29a3e596e934ecabe76c985a577"},
{file = "pynacl-1.6.2-cp38-abi3-win32.whl", hash = "sha256:5811c72b473b2f38f7e2a3dc4f8642e3a3e9b5e7317266e4ced1fba85cae41aa"},
{file = "pynacl-1.6.2-cp38-abi3-win_amd64.whl", hash = "sha256:62985f233210dee6548c223301b6c25440852e13d59a8b81490203c3227c5ba0"},
{file = "pynacl-1.6.2-cp38-abi3-win_arm64.whl", hash = "sha256:834a43af110f743a754448463e8fd61259cd4ab5bbedcf70f9dabad1d28a394c"},
{file = "pynacl-1.6.2.tar.gz", hash = "sha256:018494d6d696ae03c7e656e5e74cdfd8ea1326962cc401bcf018f1ed8436811c"},
]

[package.dependencies]
cffi = ">=1.4.1"
cffi = {version = ">=2.0.0", markers = "platform_python_implementation != \"PyPy\" and python_version >= \"3.9\""}

[package.extras]
docs = ["sphinx (>=1.6.5)", "sphinx-rtd-theme"]
tests = ["hypothesis (>=3.27.0)", "pytest (>=3.2.1,!=3.3.0)"]
docs = ["sphinx (<7)", "sphinx_rtd_theme"]
tests = ["hypothesis (>=3.27.0)", "pytest (>=7.4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]

[[package]]
name = "pyparsing"
@@ -5197,15 +5287,15 @@ diagrams = ["jinja2", "railroad-diagrams"]

[[package]]
name = "pypdf"
version = "6.4.0"
version = "6.6.0"
description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files"
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"sandbox\""
files = [
{file = "pypdf-6.4.0-py3-none-any.whl", hash = "sha256:55ab9837ed97fd7fcc5c131d52fcc2223bc5c6b8a1488bbf7c0e27f1f0023a79"},
{file = "pypdf-6.4.0.tar.gz", hash = "sha256:4769d471f8ddc3341193ecc5d6560fa44cf8cd0abfabf21af4e195cc0c224072"},
{file = "pypdf-6.6.0-py3-none-any.whl", hash = "sha256:bca9091ef6de36c7b1a81e09327c554b7ce51e88dad68f5890c2b4a4417f1fd7"},
{file = "pypdf-6.6.0.tar.gz", hash = "sha256:4c887ef2ea38d86faded61141995a3c7d068c9d6ae8477be7ae5de8a8e16592f"},
]

[package.extras]
@@ -6926,14 +7016,14 @@ test = ["coverage", "pytest", "pytest-cov"]

[[package]]
name = "urllib3"
version = "2.6.0"
version = "2.6.3"
description = "HTTP library with thread-safe connection pooling, file post, and more."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "urllib3-2.6.0-py3-none-any.whl", hash = "sha256:c90f7a39f716c572c4e3e58509581ebd83f9b59cced005b7db7ad2d22b0db99f"},
{file = "urllib3-2.6.0.tar.gz", hash = "sha256:cb9bcef5a4b345d5da5d145dc3e30834f58e8018828cbc724d30b4cb7d4d49f1"},
{file = "urllib3-2.6.3-py3-none-any.whl", hash = "sha256:bf272323e553dfb2e87d9bfd225ca7b0f467b919d7bbd355436d3fd37cb0acd4"},
{file = "urllib3-2.6.3.tar.gz", hash = "sha256:1b62b6884944a57dbe321509ab94fd4d3b307075e0c2eae991ac71ee15ad38ed"},
]

[package.extras]
@@ -7016,19 +7106,19 @@ test = ["aiohttp (>=3.10.5)", "flake8 (>=5.0,<6.0)", "mypy (>=0.800)", "psutil",

[[package]]
name = "virtualenv"
version = "20.34.0"
version = "20.36.1"
description = "Virtual Python Environment builder"
optional = false
python-versions = ">=3.8"
groups = ["dev"]
files = [
{file = "virtualenv-20.34.0-py3-none-any.whl", hash = "sha256:341f5afa7eee943e4984a9207c025feedd768baff6753cd660c857ceb3e36026"},
{file = "virtualenv-20.34.0.tar.gz", hash = "sha256:44815b2c9dee7ed86e387b842a84f20b93f7f417f95886ca1996a72a4138eb1a"},
{file = "virtualenv-20.36.1-py3-none-any.whl", hash = "sha256:575a8d6b124ef88f6f51d56d656132389f961062a9177016a50e4f507bbcc19f"},
{file = "virtualenv-20.36.1.tar.gz", hash = "sha256:8befb5c81842c641f8ee658481e42641c68b5eab3521d8e092d18320902466ba"},
]

[package.dependencies]
distlib = ">=0.3.7,<1"
filelock = ">=3.12.2,<4"
filelock = {version = ">=3.20.1,<4", markers = "python_version >= \"3.10\""}
platformdirs = ">=3.9.1,<5"

[package.extras]
@@ -7345,4 +7435,4 @@ vertex = ["google-cloud-aiplatform"]
[metadata]
lock-version = "2.1"
python-versions = "^3.12"
content-hash = "c33d9ef61601de836c80517ccff66cc57837baaebf22f929c766416c0b0fd818"
content-hash = "0424a0e82fe49501f3a80166676e257a9dae97093d9bc730489789195f523735"
@@ -1,6 +1,6 @@
[tool.poetry]
name = "strix-agent"
version = "0.5.0"
version = "0.6.1"
description = "Open-source AI Hackers for your apps"
authors = ["Strix <hi@usestrix.com>"]
readme = "README.md"
@@ -54,6 +54,7 @@ docker = "^7.1.0"
textual = "^4.0.0"
xmltodict = "^0.13.0"
requests = "^2.32.0"
cvss = "^3.2"

# Optional LLM provider dependencies
google-cloud-aiplatform = { version = ">=1.38", optional = true }
@@ -68,6 +69,7 @@ gql = { version = "^3.5.3", extras = ["requests"], optional = true }
pyte = { version = "^0.8.1", optional = true }
libtmux = { version = "^0.46.2", optional = true }
numpydoc = { version = "^1.8.0", optional = true }
defusedxml = "^0.7.1"

[tool.poetry.extras]
vertex = ["google-cloud-aiplatform"]
@@ -144,6 +146,7 @@ module = [
"pyte.*",
"libtmux.*",
"pytest.*",
"cvss.*",
]
ignore_missing_imports = true

@@ -318,7 +318,7 @@ echo ""
echo -e "  ${CYAN}2.${NC} Run a penetration test:"
echo -e "     ${MUTED}strix --target https://example.com${NC}"
echo ""
echo -e "${MUTED}For more information visit ${NC}https://usestrix.com"
echo -e "${MUTED}For more information visit ${NC}https://strix.ai"
echo -e "${MUTED}Join our community ${NC}https://discord.gg/YjKFvEZSdZ"
echo ""

@@ -111,7 +111,6 @@ hiddenimports = [
'strix.llm.llm',
'strix.llm.config',
'strix.llm.utils',
'strix.llm.request_queue',
'strix.llm.memory_compressor',
'strix.runtime',
'strix.runtime.runtime',
@@ -122,7 +121,7 @@
'strix.tools.registry',
'strix.tools.executor',
'strix.tools.argument_parser',
'strix.prompts',
'strix.skills',
]

hiddenimports += collect_submodules('litellm')

@@ -8,13 +8,13 @@ class StrixAgent(BaseAgent):
    max_iterations = 300

    def __init__(self, config: dict[str, Any]):
        default_modules = []
        default_skills = []

        state = config.get("state")
        if state is None or (hasattr(state, "parent_id") and state.parent_id is None):
            default_modules = ["root_agent"]
            default_skills = ["root_agent"]

        self.default_llm_config = LLMConfig(prompt_modules=default_modules)
        self.default_llm_config = LLMConfig(skills=default_skills)

        super().__init__(config)

@@ -134,6 +134,7 @@ VALIDATION REQUIREMENTS:
- Keep going until you find something that matters
- A vulnerability is ONLY considered reported when a reporting agent uses create_vulnerability_report with full details. Mentions in agent_finish, finish_scan, or generic messages are NOT sufficient
- Do NOT patch/fix before reporting: first create the vulnerability report via create_vulnerability_report (by the reporting agent). Only after reporting is completed should fixing/patching proceed
- DEDUPLICATION: The create_vulnerability_report tool uses LLM-based deduplication. If it rejects your report as a duplicate, DO NOT attempt to re-submit the same vulnerability. Accept the rejection and move on to testing other areas. The vulnerability has already been reported by another agent
</execution_guidelines>

<vulnerability_focus>
@@ -263,25 +264,25 @@ CRITICAL RULES:
- **ONE AGENT = ONE TASK** - Don't let agents do multiple unrelated jobs
- **SPAWN REACTIVELY** - Create new agents based on what you discover
- **ONLY REPORTING AGENTS** can use create_vulnerability_report tool
- **AGENT SPECIALIZATION MANDATORY** - Each agent must be highly specialized; prefer 1–3 prompt modules, up to 5 for complex contexts
- **AGENT SPECIALIZATION MANDATORY** - Each agent must be highly specialized; prefer 1–3 skills, up to 5 for complex contexts
- **NO GENERIC AGENTS** - Avoid creating broad, multi-purpose agents that dilute focus

AGENT SPECIALIZATION EXAMPLES:

GOOD SPECIALIZATION:
- "SQLi Validation Agent" with prompt_modules: sql_injection
- "XSS Discovery Agent" with prompt_modules: xss
- "Auth Testing Agent" with prompt_modules: authentication_jwt, business_logic
- "SSRF + XXE Agent" with prompt_modules: ssrf, xxe, rce (related attack vectors)
- "SQLi Validation Agent" with skills: sql_injection
- "XSS Discovery Agent" with skills: xss
- "Auth Testing Agent" with skills: authentication_jwt, business_logic
- "SSRF + XXE Agent" with skills: ssrf, xxe, rce (related attack vectors)

BAD SPECIALIZATION:
- "General Web Testing Agent" with prompt_modules: sql_injection, xss, csrf, ssrf, authentication_jwt (too broad)
- "Everything Agent" with prompt_modules: all available modules (completely unfocused)
- Any agent with more than 5 prompt modules (violates constraints)
- "General Web Testing Agent" with skills: sql_injection, xss, csrf, ssrf, authentication_jwt (too broad)
- "Everything Agent" with skills: all available skills (completely unfocused)
- Any agent with more than 5 skills (violates constraints)

FOCUS PRINCIPLES:
- Each agent should have deep expertise in 1-3 related vulnerability types
- Agents with single modules have the deepest specialization
- Agents with single skills have the deepest specialization
- Related vulnerabilities (like SSRF+XXE or Auth+Business Logic) can be combined
- Never create "kitchen sink" agents that try to do everything

@@ -307,29 +308,32 @@ Tool calls use XML format:
|
||||
|
||||
CRITICAL RULES:
|
||||
0. While active in the agent loop, EVERY message you output MUST be a single tool call. Do not send plain text-only responses.
|
||||
1. One tool call per message
|
||||
1. Exactly one tool call per message — never include more than one <function>...</function> block in a single LLM message.
|
||||
2. Tool call must be last in message
|
||||
3. End response after </function> tag. It's your stop word. Do not continue after it.
|
||||
3. EVERY tool call MUST end with </function>. This is MANDATORY. Never omit the closing tag. End your response immediately after </function>.
|
||||
4. Use ONLY the exact XML format shown above. NEVER use JSON/YAML/INI or any other syntax for tools or parameters.
|
||||
5. Tool names must match exactly the tool "name" defined (no module prefixes, dots, or variants).
5. When sending ANY multi-line content in tool parameters, use real newlines (actual line breaks). Do NOT emit literal "\n" sequences. If you send "\n" instead of real line breaks inside the XML parameter value, tools may fail or behave incorrectly.
6. Tool names must match exactly the tool "name" defined (no module prefixes, dots, or variants).
   - Correct: <function=think> ... </function>
   - Incorrect: <thinking_tools.think> ... </function>
   - Incorrect: <think> ... </think>
   - Incorrect: {"think": {...}}
6. Parameters must use <parameter=param_name>value</parameter> exactly. Do NOT pass parameters as JSON or key:value lines. Do NOT add quotes/braces around values.
7. Do NOT wrap tool calls in markdown/code fences or add any text before or after the tool block.
7. Parameters must use <parameter=param_name>value</parameter> exactly. Do NOT pass parameters as JSON or key:value lines. Do NOT add quotes/braces around values.
8. Do NOT wrap tool calls in markdown/code fences or add any text before or after the tool block.

Example (agent creation tool):
<function=create_agent>
<parameter=task>Perform targeted XSS testing on the search endpoint</parameter>
<parameter=name>XSS Discovery Agent</parameter>
<parameter=prompt_modules>xss</parameter>
<parameter=skills>xss</parameter>
</function>

SPRAYING EXECUTION NOTE:
- When performing large payload sprays or fuzzing, encapsulate the entire spraying loop inside a single python or terminal tool call (e.g., a Python script using asyncio/aiohttp). Do not issue one tool call per payload.
- Favor batch-mode CLI tools (sqlmap, ffuf, nuclei, zaproxy, arjun) where appropriate, and check traffic via the proxy when beneficial.

REMINDER: Always close each tool call with </function> before going into the next. Incomplete tool calls will fail.

{{ get_tools_prompt() }}
</tool_usage>
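The spraying note above can be sketched as one batched script rather than one tool call per payload. This is a minimal stdlib-only illustration; a real spray would presumably use aiohttp against live endpoints, and the payload list and reflected-check here are hypothetical stand-ins:

```python
import asyncio

# Hypothetical payload set; a real spray would load these from a wordlist.
PAYLOADS = [f"<script>alert({i})</script>" for i in range(50)]

async def send_payload(sem: asyncio.Semaphore, payload: str) -> tuple[str, bool]:
    # Stand-in for a real HTTP request (e.g. an aiohttp session call);
    # here we just simulate a reflected-payload check.
    async with sem:
        await asyncio.sleep(0)  # yield control, as a network call would
        reflected = "alert" in payload
        return payload, reflected

async def spray() -> list[tuple[str, bool]]:
    sem = asyncio.Semaphore(10)  # bound concurrency so the target is not flooded
    return await asyncio.gather(*(send_payload(sem, p) for p in PAYLOADS))

results = asyncio.run(spray())
print(sum(hit for _, hit in results), "payloads reflected")
```

The whole loop runs inside a single script invocation, which is exactly what the note asks the agent to do.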

@@ -392,12 +396,12 @@ Directories:
Default user: pentester (sudo available)
</environment>

{% if loaded_module_names %}
{% if loaded_skill_names %}
<specialized_knowledge>
{# Dynamic prompt modules loaded based on agent specialization #}
{# Dynamic skills loaded based on agent specialization #}

{% for module_name in loaded_module_names %}
{{ get_module(module_name) }}
{% for skill_name in loaded_skill_names %}
{{ get_skill(skill_name) }}

{% endfor %}
</specialized_knowledge>

@@ -1,7 +1,6 @@
import asyncio
import contextlib
import logging
from pathlib import Path
from typing import TYPE_CHECKING, Any, Optional


@@ -16,7 +15,9 @@ from jinja2 import (

from strix.llm import LLM, LLMConfig, LLMRequestFailedError
from strix.llm.utils import clean_content
from strix.runtime import SandboxInitializationError
from strix.tools import process_tool_invocations
from strix.utils.resource_paths import get_strix_resource_path

from .state import AgentState

@@ -34,8 +35,7 @@ class AgentMeta(type):
        if name == "BaseAgent":
            return new_cls

        agents_dir = Path(__file__).parent
        prompt_dir = agents_dir / name
        prompt_dir = get_strix_resource_path("agents", name)

        new_cls.agent_name = name
        new_cls.jinja_env = Environment(
@@ -65,20 +65,21 @@ class BaseAgent(metaclass=AgentMeta):
        self.llm_config = config.get("llm_config", self.default_llm_config)
        if self.llm_config is None:
            raise ValueError("llm_config is required but not provided")
        self.llm = LLM(self.llm_config, agent_name=self.agent_name)

        state_from_config = config.get("state")
        if state_from_config is not None:
            self.state = state_from_config
        else:
            self.state = AgentState(
                agent_name=self.agent_name,
                agent_name="Root Agent",
                max_iterations=self.max_iterations,
            )

        self.llm = LLM(self.llm_config, agent_name=self.agent_name)

        with contextlib.suppress(Exception):
            self.llm.set_agent_identity(self.agent_name, self.state.agent_id)
            self.llm.set_agent_identity(self.state.agent_name, self.state.agent_id)
        self._current_task: asyncio.Task[Any] | None = None
        self._force_stop = False

        from strix.telemetry.tracer import get_global_tracer

@@ -145,19 +146,22 @@ class BaseAgent(metaclass=AgentMeta):
        if self.state.parent_id is None and agents_graph_actions._root_agent_id is None:
            agents_graph_actions._root_agent_id = self.state.agent_id

    def cancel_current_execution(self) -> None:
        if self._current_task and not self._current_task.done():
            self._current_task.cancel()
            self._current_task = None

    async def agent_loop(self, task: str) -> dict[str, Any]:  # noqa: PLR0912, PLR0915
        await self._initialize_sandbox_and_state(task)

        from strix.telemetry.tracer import get_global_tracer

        tracer = get_global_tracer()

        try:
            await self._initialize_sandbox_and_state(task)
        except SandboxInitializationError as e:
            return self._handle_sandbox_error(e, tracer)

        while True:
            if self._force_stop:
                self._force_stop = False
                await self._enter_waiting_state(tracer, was_cancelled=True)
                continue

            self._check_agent_messages(self.state)

            if self.state.is_waiting_for_input():
@@ -204,7 +208,11 @@ class BaseAgent(metaclass=AgentMeta):
                self.state.add_message("user", final_warning_msg)

            try:
                should_finish = await self._process_iteration(tracer)
                iteration_task = asyncio.create_task(self._process_iteration(tracer))
                self._current_task = iteration_task
                should_finish = await iteration_task
                self._current_task = None

                if should_finish:
                    if self.non_interactive:
                        self.state.set_completed({"success": True})
@@ -215,43 +223,22 @@ class BaseAgent(metaclass=AgentMeta):
                continue

            except asyncio.CancelledError:
                self._current_task = None
                if tracer:
                    partial_content = tracer.finalize_streaming_as_interrupted(self.state.agent_id)
                    if partial_content and partial_content.strip():
                        self.state.add_message(
                            "assistant", f"{partial_content}\n\n[ABORTED BY USER]"
                        )
                if self.non_interactive:
                    raise
                await self._enter_waiting_state(tracer, error_occurred=False, was_cancelled=True)
                continue

            except LLMRequestFailedError as e:
                error_msg = str(e)
                error_details = getattr(e, "details", None)
                self.state.add_error(error_msg)

                if self.non_interactive:
                    self.state.set_completed({"success": False, "error": error_msg})
                    if tracer:
                        tracer.update_agent_status(self.state.agent_id, "failed", error_msg)
                        if error_details:
                            tracer.log_tool_execution_start(
                                self.state.agent_id,
                                "llm_error_details",
                                {"error": error_msg, "details": error_details},
                            )
                            tracer.update_tool_execution(
                                tracer._next_execution_id - 1, "failed", error_details
                            )
                    return {"success": False, "error": error_msg}

                self.state.enter_waiting_state(llm_failed=True)
                if tracer:
                    tracer.update_agent_status(self.state.agent_id, "llm_failed", error_msg)
                    if error_details:
                        tracer.log_tool_execution_start(
                            self.state.agent_id,
                            "llm_error_details",
                            {"error": error_msg, "details": error_details},
                        )
                        tracer.update_tool_execution(
                            tracer._next_execution_id - 1, "failed", error_details
                        )
                result = self._handle_llm_error(e, tracer)
                if result is not None:
                    return result
                continue

            except (RuntimeError, ValueError, TypeError) as e:
@@ -265,11 +252,12 @@ class BaseAgent(metaclass=AgentMeta):
                continue

    async def _wait_for_input(self) -> None:
        import asyncio
        if self._force_stop:
            return

        if self.state.has_waiting_timeout():
            self.state.resume_from_waiting()
            self.state.add_message("assistant", "Waiting timeout reached. Resuming execution.")
            self.state.add_message("user", "Waiting timeout reached. Resuming execution.")

        from strix.telemetry.tracer import get_global_tracer

@@ -334,6 +322,7 @@ class BaseAgent(metaclass=AgentMeta):
        if not sandbox_mode and self.state.sandbox_id is None:
            from strix.runtime import get_runtime

            try:
                runtime = get_runtime()
                sandbox_info = await runtime.create_sandbox(
                    self.state.agent_id, self.state.sandbox_token, self.local_sources
@@ -344,6 +333,11 @@ class BaseAgent(metaclass=AgentMeta):

                if "agent_id" in sandbox_info:
                    self.state.sandbox_info["agent_id"] = sandbox_info["agent_id"]
            except Exception as e:
                from strix.telemetry import posthog

                posthog.error("sandbox_init_error", str(e))
                raise

        if not self.state.task:
            self.state.task = task
@@ -351,9 +345,17 @@ class BaseAgent(metaclass=AgentMeta):
            self.state.add_message("user", task)

    async def _process_iteration(self, tracer: Optional["Tracer"]) -> bool:
        response = await self.llm.generate(self.state.get_conversation_history())
        final_response = None

        content_stripped = (response.content or "").strip()
        async for response in self.llm.generate(self.state.get_conversation_history()):
            final_response = response
            if tracer and response.content:
                tracer.update_streaming_content(self.state.agent_id, response.content)

        if final_response is None:
            return False

        content_stripped = (final_response.content or "").strip()

        if not content_stripped:
            corrective_message = (
@@ -369,17 +371,19 @@ class BaseAgent(metaclass=AgentMeta):
            self.state.add_message("user", corrective_message)
            return False

        self.state.add_message("assistant", response.content)
        thinking_blocks = getattr(final_response, "thinking_blocks", None)
        self.state.add_message("assistant", final_response.content, thinking_blocks=thinking_blocks)
        if tracer:
            tracer.clear_streaming_content(self.state.agent_id)
            tracer.log_chat_message(
                content=clean_content(response.content),
                content=clean_content(final_response.content),
                role="assistant",
                agent_id=self.state.agent_id,
            )

        actions = (
            response.tool_invocations
            if hasattr(response, "tool_invocations") and response.tool_invocations
            final_response.tool_invocations
            if hasattr(final_response, "tool_invocations") and final_response.tool_invocations
            else []
        )

@@ -420,18 +424,6 @@ class BaseAgent(metaclass=AgentMeta):

        return False

    async def _handle_iteration_error(
        self,
        error: RuntimeError | ValueError | TypeError | asyncio.CancelledError,
        tracer: Optional["Tracer"],
    ) -> bool:
        error_msg = f"Error in iteration {self.state.iteration}: {error!s}"
        logger.exception(error_msg)
        self.state.add_error(error_msg)
        if tracer:
            tracer.update_agent_status(self.state.agent_id, "error")
        return True

    def _check_agent_messages(self, state: AgentState) -> None:  # noqa: PLR0912
        try:
            from strix.tools.agents_graph.agents_graph_actions import _agent_graph, _agent_messages
@@ -516,3 +508,95 @@ class BaseAgent(metaclass=AgentMeta):
            logger = logging.getLogger(__name__)
            logger.warning(f"Error checking agent messages: {e}")
            return

    def _handle_sandbox_error(
        self,
        error: SandboxInitializationError,
        tracer: Optional["Tracer"],
    ) -> dict[str, Any]:
        error_msg = str(error.message)
        error_details = error.details
        self.state.add_error(error_msg)

        if self.non_interactive:
            self.state.set_completed({"success": False, "error": error_msg})
            if tracer:
                tracer.update_agent_status(self.state.agent_id, "failed", error_msg)
                if error_details:
                    exec_id = tracer.log_tool_execution_start(
                        self.state.agent_id,
                        "sandbox_error_details",
                        {"error": error_msg, "details": error_details},
                    )
                    tracer.update_tool_execution(exec_id, "failed", {"details": error_details})
            return {"success": False, "error": error_msg, "details": error_details}

        self.state.enter_waiting_state()
        if tracer:
            tracer.update_agent_status(self.state.agent_id, "sandbox_failed", error_msg)
            if error_details:
                exec_id = tracer.log_tool_execution_start(
                    self.state.agent_id,
                    "sandbox_error_details",
                    {"error": error_msg, "details": error_details},
                )
                tracer.update_tool_execution(exec_id, "failed", {"details": error_details})

        return {"success": False, "error": error_msg, "details": error_details}

    def _handle_llm_error(
        self,
        error: LLMRequestFailedError,
        tracer: Optional["Tracer"],
    ) -> dict[str, Any] | None:
        error_msg = str(error)
        error_details = getattr(error, "details", None)
        self.state.add_error(error_msg)

        if self.non_interactive:
            self.state.set_completed({"success": False, "error": error_msg})
            if tracer:
                tracer.update_agent_status(self.state.agent_id, "failed", error_msg)
                if error_details:
                    exec_id = tracer.log_tool_execution_start(
                        self.state.agent_id,
                        "llm_error_details",
                        {"error": error_msg, "details": error_details},
                    )
                    tracer.update_tool_execution(exec_id, "failed", {"details": error_details})
            return {"success": False, "error": error_msg}

        self.state.enter_waiting_state(llm_failed=True)
        if tracer:
            tracer.update_agent_status(self.state.agent_id, "llm_failed", error_msg)
            if error_details:
                exec_id = tracer.log_tool_execution_start(
                    self.state.agent_id,
                    "llm_error_details",
                    {"error": error_msg, "details": error_details},
                )
                tracer.update_tool_execution(exec_id, "failed", {"details": error_details})

        return None

    async def _handle_iteration_error(
        self,
        error: RuntimeError | ValueError | TypeError | asyncio.CancelledError,
        tracer: Optional["Tracer"],
    ) -> bool:
        error_msg = f"Error in iteration {self.state.iteration}: {error!s}"
        logger.exception(error_msg)
        self.state.add_error(error_msg)
        if tracer:
            tracer.update_agent_status(self.state.agent_id, "error")
        return True

    def cancel_current_execution(self) -> None:
        self._force_stop = True
        if self._current_task and not self._current_task.done():
            try:
                loop = self._current_task.get_loop()
                loop.call_soon_threadsafe(self._current_task.cancel)
            except RuntimeError:
                self._current_task.cancel()
            self._current_task = None

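The reworked cancel_current_execution marshals Task.cancel onto the event loop's own thread via call_soon_threadsafe, since Task.cancel is not safe to call from another thread. A minimal standalone sketch of that pattern (the coroutine names here are illustrative, not from the Strix codebase):

```python
import asyncio
import threading

async def long_running() -> None:
    # Stand-in for an agent iteration that may take a long time.
    await asyncio.sleep(60)

async def main() -> str:
    task = asyncio.create_task(long_running())
    loop = asyncio.get_running_loop()
    # Cancel from another thread: marshal the cancel call onto the
    # event loop thread instead of calling task.cancel() directly.
    threading.Timer(0.1, lambda: loop.call_soon_threadsafe(task.cancel)).start()
    try:
        await task
    except asyncio.CancelledError:
        return "cancelled"
    return "finished"

print(asyncio.run(main()))  # → cancelled
```

Awaiting the cancelled task raises CancelledError in the awaiter, which is why the agent loop catches it and transitions to a waiting state instead of crashing.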
@@ -43,8 +43,11 @@ class AgentState(BaseModel):
        self.iteration += 1
        self.last_updated = datetime.now(UTC).isoformat()

    def add_message(self, role: str, content: Any) -> None:
        self.messages.append({"role": role, "content": content})
    def add_message(self, role: str, content: Any, thinking_blocks: list[dict[str, Any]] | None = None) -> None:
        message = {"role": role, "content": content}
        if thinking_blocks:
            message["thinking_blocks"] = thinking_blocks
        self.messages.append(message)
        self.last_updated = datetime.now(UTC).isoformat()

    def add_action(self, action: dict[str, Any]) -> None:

12
strix/config/__init__.py
Normal file
@@ -0,0 +1,12 @@
from strix.config.config import (
    Config,
    apply_saved_config,
    save_current_config,
)


__all__ = [
    "Config",
    "apply_saved_config",
    "save_current_config",
]
131
strix/config/config.py
Normal file
@@ -0,0 +1,131 @@
import contextlib
import json
import os
from pathlib import Path
from typing import Any


class Config:
    """Configuration Manager for Strix."""

    # LLM Configuration
    strix_llm = None
    llm_api_key = None
    llm_api_base = None
    openai_api_base = None
    litellm_base_url = None
    ollama_api_base = None
    strix_reasoning_effort = "high"
    strix_llm_max_retries = "5"
    strix_memory_compressor_timeout = "30"
    llm_timeout = "300"

    # Tool & Feature Configuration
    perplexity_api_key = None
    strix_disable_browser = "false"

    # Runtime Configuration
    strix_image = "ghcr.io/usestrix/strix-sandbox:0.1.10"
    strix_runtime_backend = "docker"
    strix_sandbox_execution_timeout = "120"
    strix_sandbox_connect_timeout = "10"

    # Telemetry
    strix_telemetry = "1"

    @classmethod
    def _tracked_names(cls) -> list[str]:
        return [
            k
            for k, v in vars(cls).items()
            if not k.startswith("_") and k[0].islower() and (v is None or isinstance(v, str))
        ]

    @classmethod
    def tracked_vars(cls) -> list[str]:
        return [name.upper() for name in cls._tracked_names()]

    @classmethod
    def get(cls, name: str) -> str | None:
        env_name = name.upper()
        default = getattr(cls, name, None)
        return os.getenv(env_name, default)

    @classmethod
    def config_dir(cls) -> Path:
        return Path.home() / ".strix"

    @classmethod
    def config_file(cls) -> Path:
        return cls.config_dir() / "cli-config.json"

    @classmethod
    def load(cls) -> dict[str, Any]:
        path = cls.config_file()
        if not path.exists():
            return {}
        try:
            with path.open("r", encoding="utf-8") as f:
                data: dict[str, Any] = json.load(f)
                return data
        except (json.JSONDecodeError, OSError):
            return {}

    @classmethod
    def save(cls, config: dict[str, Any]) -> bool:
        try:
            cls.config_dir().mkdir(parents=True, exist_ok=True)
            config_path = cls.config_file()
            with config_path.open("w", encoding="utf-8") as f:
                json.dump(config, f, indent=2)
        except OSError:
            return False
        with contextlib.suppress(OSError):
            config_path.chmod(0o600)  # may fail on Windows
        return True

    @classmethod
    def apply_saved(cls) -> dict[str, str]:
        saved = cls.load()
        env_vars = saved.get("env", {})
        applied = {}

        for var_name, var_value in env_vars.items():
            if var_name in cls.tracked_vars() and not os.getenv(var_name):
                os.environ[var_name] = var_value
                applied[var_name] = var_value

        return applied

    @classmethod
    def capture_current(cls) -> dict[str, Any]:
        env_vars = {}
        for var_name in cls.tracked_vars():
            value = os.getenv(var_name)
            if value:
                env_vars[var_name] = value
        return {"env": env_vars}

    @classmethod
    def save_current(cls) -> bool:
        existing = cls.load().get("env", {})
        merged = dict(existing)

        for var_name in cls.tracked_vars():
            value = os.getenv(var_name)
            if value is None:
                pass
            elif value == "":
                merged.pop(var_name, None)
            else:
                merged[var_name] = value

        return cls.save({"env": merged})


def apply_saved_config() -> dict[str, str]:
    return Config.apply_saved()


def save_current_config() -> bool:
    return Config.save_current()
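The Config class above persists tracked environment variables to a JSON file and re-applies them without overriding variables already set in the environment. A minimal standalone sketch of that save/apply roundtrip, using a temporary directory instead of ~/.strix (the variable name EXAMPLE_STRIX_VAR is purely illustrative):

```python
import json
import os
import tempfile
from pathlib import Path

def save_config(path: Path, env: dict[str, str]) -> None:
    # Like Config.save: write {"env": {...}} and best-effort chmod 0600.
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"env": env}, indent=2), encoding="utf-8")
    try:
        path.chmod(0o600)  # may fail on Windows, as the diff notes
    except OSError:
        pass

def apply_config(path: Path) -> dict[str, str]:
    # Like Config.apply_saved: saved values never override variables
    # that are already set in the process environment.
    applied: dict[str, str] = {}
    data = json.loads(path.read_text(encoding="utf-8")) if path.exists() else {}
    for name, value in data.get("env", {}).items():
        if not os.getenv(name):
            os.environ[name] = value
            applied[name] = value
    return applied

with tempfile.TemporaryDirectory() as d:
    cfg = Path(d) / "cli-config.json"
    save_config(cfg, {"EXAMPLE_STRIX_VAR": "some-value"})
    os.environ.pop("EXAMPLE_STRIX_VAR", None)
    print(apply_config(cfg))
```

The "do not override" check is what lets a user's explicit shell exports win over the saved CLI config.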

@@ -1,13 +1,14 @@
Screen {
    background: #1a1a1a;
    background: #000000;
    color: #d4d4d4;
}

#splash_screen {
    height: 100%;
    width: 100%;
    background: #1a1a1a;
    background: #000000;
    color: #22c55e;
    align: center middle;
    content-align: center middle;
    text-align: center;
}
@@ -17,6 +18,7 @@ Screen {
    height: auto;
    background: transparent;
    text-align: center;
    content-align: center middle;
    padding: 2;
}

@@ -24,7 +26,7 @@ Screen {
    height: 100%;
    padding: 0;
    margin: 0;
    background: #1a1a1a;
    background: #000000;
}

#content_container {
@@ -39,10 +41,14 @@ Screen {
    margin-left: 1;
}

#sidebar.-hidden {
    display: none;
}

#agents_tree {
    height: 1fr;
    background: transparent;
    border: round #262626;
    border: round #333333;
    border-title-color: #a8a29e;
    border-title-style: bold;
    padding: 1;
@@ -57,21 +63,135 @@ Screen {
    margin: 0;
}

#vulnerabilities_panel {
    height: auto;
    max-height: 12;
    background: transparent;
    padding: 0;
    margin: 0;
    border: round #333333;
    overflow-y: auto;
    scrollbar-background: #000000;
    scrollbar-color: #333333;
    scrollbar-corner-color: #000000;
    scrollbar-size-vertical: 1;
}

#vulnerabilities_panel.hidden {
    display: none;
}

.vuln-item {
    height: auto;
    width: 100%;
    padding: 0 1;
    background: transparent;
    color: #d4d4d4;
}

.vuln-item:hover {
    background: #1a1a1a;
    color: #fafaf9;
}

VulnerabilityDetailScreen {
    align: center middle;
    background: #000000 80%;
}

#vuln_detail_dialog {
    grid-size: 1;
    grid-gutter: 1;
    grid-rows: 1fr auto;
    padding: 2 3;
    width: 85%;
    max-width: 110;
    height: 85%;
    max-height: 45;
    border: solid #262626;
    background: #0a0a0a;
}

#vuln_detail_scroll {
    height: 1fr;
    background: transparent;
    scrollbar-background: #0a0a0a;
    scrollbar-color: #404040;
    scrollbar-corner-color: #0a0a0a;
    scrollbar-size: 1 1;
    padding-right: 1;
}

#vuln_detail_content {
    width: 100%;
    background: transparent;
    padding: 0;
}

#vuln_detail_buttons {
    width: 100%;
    height: auto;
    align: right middle;
    padding-top: 1;
    margin: 0;
    border-top: solid #1a1a1a;
}

#copy_vuln_detail {
    width: auto;
    min-width: 12;
    height: auto;
    background: transparent;
    color: #525252;
    border: none;
    text-style: none;
    margin: 0 1;
    padding: 0 2;
}

#close_vuln_detail {
    width: auto;
    min-width: 10;
    height: auto;
    background: transparent;
    color: #a3a3a3;
    border: none;
    text-style: none;
    margin: 0;
    padding: 0 2;
}

#copy_vuln_detail:hover, #copy_vuln_detail:focus {
    background: transparent;
    color: #22c55e;
    border: none;
}

#close_vuln_detail:hover, #close_vuln_detail:focus {
    background: transparent;
    color: #ffffff;
    border: none;
}

#chat_area_container {
    width: 75%;
    background: transparent;
}

#chat_area_container.-full-width {
    width: 100%;
}

#chat_history {
    height: 1fr;
    background: transparent;
    border: round #1a1a1a;
    border: round #0a0a0a;
    padding: 0;
    margin-bottom: 0;
    margin-right: 0;
    scrollbar-background: #0f0f0f;
    scrollbar-color: #262626;
    scrollbar-corner-color: #0f0f0f;
    scrollbar-background: #000000;
    scrollbar-color: #1a1a1a;
    scrollbar-corner-color: #000000;
    scrollbar-size: 1 1;
}

@@ -93,7 +213,7 @@ Screen {
    color: #a3a3a3;
    text-align: left;
    content-align: left middle;
    text-style: italic;
    text-style: none;
    margin: 0;
    padding: 0;
}
@@ -113,11 +233,11 @@ Screen {
#chat_input_container {
    height: 3;
    background: transparent;
    border: round #525252;
    border: round #333333;
    margin-right: 0;
    padding: 0;
    layout: horizontal;
    align-vertical: middle;
    align-vertical: top;
}

#chat_input_container:focus-within {
@@ -134,7 +254,7 @@ Screen {
    height: 100%;
    padding: 0 0 0 1;
    color: #737373;
    content-align-vertical: middle;
    content-align-vertical: top;
}

#chat_history:focus {
@@ -144,7 +264,7 @@ Screen {
#chat_input {
    width: 1fr;
    height: 100%;
    background: #121212;
    background: transparent;
    border: none;
    color: #d4d4d4;
    padding: 0;
@@ -155,6 +275,14 @@ Screen {
    border: none;
}

#chat_input .text-area--cursor-line {
    background: transparent;
}

#chat_input:focus .text-area--cursor-line {
    background: transparent;
}

#chat_input > .text-area--placeholder {
    color: #525252;
    text-style: italic;
@@ -198,39 +326,31 @@ Screen {
}

.tool-call {
    margin: 0 !important;
    margin-top: 0 !important;
    margin-bottom: 0 !important;
    margin-top: 1;
    margin-bottom: 0;
    padding: 0 1;
    background: #0a0a0a;
    border: round #1a1a1a;
    border-left: thick #f59e0b;
    background: transparent;
    border: none;
    width: 100%;
}

.tool-call.status-completed {
    border-left: thick #22c55e;
    background: #0d1f12;
    margin: 0 !important;
    margin-top: 0 !important;
    margin-bottom: 0 !important;
    background: transparent;
    margin-top: 1;
    margin-bottom: 0;
}

.tool-call.status-running {
    border-left: thick #f59e0b;
    background: #1f1611;
    margin: 0 !important;
    margin-top: 0 !important;
    margin-bottom: 0 !important;
    background: transparent;
    margin-top: 1;
    margin-bottom: 0;
}

.tool-call.status-failed,
.tool-call.status-error {
    border-left: thick #ef4444;
    background: #1f0d0d;
    margin: 0 !important;
    margin-top: 0 !important;
    margin-bottom: 0 !important;
    background: transparent;
    margin-top: 1;
    margin-bottom: 0;
}

.browser-tool,
@@ -242,209 +362,54 @@ Screen {
.notes-tool,
.thinking-tool,
.web-search-tool,
.finish-tool,
.reporting-tool,
.scan-info-tool,
.subagent-info-tool {
    margin: 0 !important;
    margin-top: 0 !important;
    margin-bottom: 0 !important;
}

.browser-tool {
    border-left: thick #06b6d4;
}

.browser-tool.status-completed {
    border-left: thick #06b6d4;
    background: transparent;
    margin: 0 !important;
    margin-top: 0 !important;
    margin-bottom: 0 !important;
}

.browser-tool.status-running {
    border-left: thick #0891b2;
    background: transparent;
    margin: 0 !important;
    margin-top: 0 !important;
    margin-bottom: 0 !important;
}

.terminal-tool {
    border-left: thick #22c55e;
}

.terminal-tool.status-completed {
    border-left: thick #22c55e;
    background: transparent;
}

.terminal-tool.status-running {
    border-left: thick #16a34a;
    background: transparent;
}

.python-tool {
    border-left: thick #3b82f6;
}

.python-tool.status-completed {
    border-left: thick #3b82f6;
    background: transparent;
}

.python-tool.status-running {
    border-left: thick #2563eb;
    background: transparent;
}

.agents-graph-tool {
    border-left: thick #fbbf24;
}

.agents-graph-tool.status-completed {
    border-left: thick #fbbf24;
    background: transparent;
}

.agents-graph-tool.status-running {
    border-left: thick #f59e0b;
    background: transparent;
}

.file-edit-tool {
    border-left: thick #10b981;
}

.file-edit-tool.status-completed {
    border-left: thick #10b981;
    background: transparent;
}

.file-edit-tool.status-running {
    border-left: thick #059669;
    background: transparent;
}

.proxy-tool {
    border-left: thick #06b6d4;
}

.proxy-tool.status-completed {
    border-left: thick #06b6d4;
    background: transparent;
}

.proxy-tool.status-running {
    border-left: thick #0891b2;
    background: transparent;
}

.notes-tool {
    border-left: thick #fbbf24;
}

.notes-tool.status-completed {
    border-left: thick #fbbf24;
    background: transparent;
}

.notes-tool.status-running {
    border-left: thick #f59e0b;
    background: transparent;
}

.thinking-tool {
    border-left: thick #a855f7;
}

.thinking-tool.status-completed {
    border-left: thick #a855f7;
    background: transparent;
}

.thinking-tool.status-running {
    border-left: thick #9333ea;
    background: transparent;
}

.web-search-tool {
    border-left: thick #22c55e;
}

.web-search-tool.status-completed {
    border-left: thick #22c55e;
    background: transparent;
}

.web-search-tool.status-running {
    border-left: thick #16a34a;
    background: transparent;
}

.finish-tool {
    border-left: thick #dc2626;
}

.finish-tool.status-completed {
    border-left: thick #dc2626;
    background: transparent;
}

.finish-tool.status-running {
    border-left: thick #b91c1c;
    margin-top: 1;
    margin-bottom: 0;
    background: transparent;
}

.finish-tool,
.reporting-tool {
    border-left: thick #ea580c;
}

.reporting-tool.status-completed {
    border-left: thick #ea580c;
    background: transparent;
}

.reporting-tool.status-running {
    border-left: thick #c2410c;
    background: transparent;
}

.scan-info-tool {
    border-left: thick #22c55e;
    background: transparent;
    margin: 0 !important;
    margin-top: 0 !important;
    margin-bottom: 0 !important;
}

.scan-info-tool.status-completed {
    border-left: thick #22c55e;
    background: transparent;
}

.scan-info-tool.status-running {
    border-left: thick #16a34a;
    background: transparent;
}

.subagent-info-tool {
    border-left: thick #22c55e;
    background: transparent;
    margin: 0 !important;
    margin-top: 0 !important;
    margin-bottom: 0 !important;
}

.subagent-info-tool.status-completed {
    border-left: thick #22c55e;
    margin-top: 1;
    margin-bottom: 0;
    background: transparent;
}

.browser-tool.status-completed,
.browser-tool.status-running,
.terminal-tool.status-completed,
.terminal-tool.status-running,
.python-tool.status-completed,
.python-tool.status-running,
.agents-graph-tool.status-completed,
.agents-graph-tool.status-running,
.file-edit-tool.status-completed,
.file-edit-tool.status-running,
.proxy-tool.status-completed,
.proxy-tool.status-running,
.notes-tool.status-completed,
.notes-tool.status-running,
.thinking-tool.status-completed,
.thinking-tool.status-running,
.web-search-tool.status-completed,
.web-search-tool.status-running,
.scan-info-tool.status-completed,
.scan-info-tool.status-running,
.subagent-info-tool.status-completed,
.subagent-info-tool.status-running {
    border-left: thick #16a34a;
    background: transparent;
    margin-top: 1;
    margin-bottom: 0;
}

.finish-tool.status-completed,
.finish-tool.status-running,
.reporting-tool.status-completed,
.reporting-tool.status-running {
    background: transparent;
    margin-top: 1;
    margin-bottom: 0;
}

Tree {
@@ -462,7 +427,7 @@ Tree > .tree--label {
    background: transparent;
    padding: 0 1;
    margin-bottom: 1;
    border-bottom: solid #262626;
    border-bottom: solid #1a1a1a;
    text-align: center;
}

@@ -502,7 +467,7 @@ Tree > .tree--label {
}

Tree:focus {
    border: round #262626;
    border: round #1a1a1a;
}

Tree:focus > .tree--label {
@@ -546,7 +511,7 @@ StopAgentScreen {
    width: 30;
    height: auto;
    border: round #a3a3a3;
    background: #1a1a1a 98%;
    background: #000000 98%;
}

#stop_agent_title {
@@ -608,8 +573,8 @@ QuitScreen {
    padding: 1;
    width: 24;
    height: auto;
    border: round #525252;
    background: #1a1a1a 98%;
    border: round #333333;
    background: #000000 98%;
}

#quit_title {
@@ -672,7 +637,7 @@ HelpScreen {
    width: 40;
    height: auto;
    border: round #22c55e;
    background: #1a1a1a 98%;
    background: #000000 98%;
}

#help_title {

@@ -14,7 +14,10 @@ from strix.agents.StrixAgent import StrixAgent
from strix.llm.config import LLMConfig
from strix.telemetry.tracer import Tracer, set_global_tracer

from .utils import build_final_stats_text, build_live_stats_text, get_severity_color
from .utils import (
    build_live_stats_text,
    format_vulnerability_report,
)


async def run_cli(args: Any) -> None:  # noqa: PLR0915
@@ -88,28 +91,14 @@ async def run_cli(args: Any) -> None:  # noqa: PLR0915
    tracer = Tracer(args.run_name)
    tracer.set_scan_config(scan_config)

    def display_vulnerability(report_id: str, title: str, content: str, severity: str) -> None:
        severity_color = get_severity_color(severity.lower())
    def display_vulnerability(report: dict[str, Any]) -> None:
        report_id = report.get("id", "unknown")

        vuln_text = Text()
        vuln_text.append("🐞 ", style="bold red")
        vuln_text.append("VULNERABILITY FOUND", style="bold red")
        vuln_text.append(" • ", style="dim white")
        vuln_text.append(title, style="bold white")

        severity_text = Text()
        severity_text.append("Severity: ", style="dim white")
        severity_text.append(severity.upper(), style=f"bold {severity_color}")
        vuln_text = format_vulnerability_report(report)

        vuln_panel = Panel(
            Text.assemble(
                vuln_text,
                "\n\n",
                severity_text,
                "\n\n",
                content,
            ),
            title=f"[bold red]🔍 {report_id.upper()}",
            title=f"[bold red]{report_id.upper()}",
            title_align="left",
            border_style="red",
            padding=(1, 2),
@@ -178,8 +167,11 @@ async def run_cli(args: Any) -> None:  # noqa: PLR0915

        if isinstance(result, dict) and not result.get("success", True):
            error_msg = result.get("error", "Unknown error")
            error_details = result.get("details")
            console.print()
            console.print(f"[bold red]❌ Penetration test failed:[/] {error_msg}")
            if error_details:
                console.print(f"[dim]{error_details}[/]")
            console.print()
            sys.exit(1)
    finally:
@@ -190,25 +182,6 @@ async def run_cli(args: Any) -> None:  # noqa: PLR0915
        console.print(f"[bold red]Error during penetration test:[/] {e}")
        raise

    console.print()
    final_stats_text = Text()
    final_stats_text.append("📊 ", style="bold cyan")
    final_stats_text.append("PENETRATION TEST COMPLETED", style="bold green")
    final_stats_text.append("\n\n")

    stats_text = build_final_stats_text(tracer)
    if stats_text:
        final_stats_text.append(stats_text)

    final_stats_panel = Panel(
        final_stats_text,
        title="[bold green]✅ Final Statistics",
        title_align="center",
        border_style="green",
        padding=(1, 2),
    )
    console.print(final_stats_panel)

    if tracer.final_scan_result:
        console.print()

@@ -6,7 +6,6 @@ Strix Agent Interface
import argparse
import asyncio
import logging
import os
import shutil
import sys
from pathlib import Path
@@ -18,9 +17,14 @@ from rich.console import Console
from rich.panel import Panel
from rich.text import Text

from strix.interface.cli import run_cli
from strix.interface.tui import run_tui
from strix.interface.utils import (
from strix.config import Config, apply_saved_config, save_current_config


apply_saved_config()

from strix.interface.cli import run_cli  # noqa: E402
from strix.interface.tui import run_tui  # noqa: E402
from strix.interface.utils import (  # noqa: E402
    assign_workspace_subdirs,
    build_final_stats_text,
    check_docker_connection,
@@ -30,10 +34,12 @@ from strix.interface.utils import (
    image_exists,
    infer_target_type,
    process_pull_line,
    rewrite_localhost_targets,
    validate_llm_response,
)
from strix.runtime.docker_runtime import STRIX_IMAGE
from strix.telemetry.tracer import get_global_tracer
from strix.runtime.docker_runtime import HOST_GATEWAY_HOSTNAME  # noqa: E402
from strix.telemetry import posthog  # noqa: E402
from strix.telemetry.tracer import get_global_tracer  # noqa: E402


logging.getLogger().setLevel(logging.ERROR)
@@ -44,27 +50,30 @@ def validate_environment() -> None:  # noqa: PLR0912, PLR0915
    missing_required_vars = []
    missing_optional_vars = []

    if not os.getenv("STRIX_LLM"):
    if not Config.get("strix_llm"):
        missing_required_vars.append("STRIX_LLM")

    has_base_url = any(
        [
            os.getenv("LLM_API_BASE"),
            os.getenv("OPENAI_API_BASE"),
            os.getenv("LITELLM_BASE_URL"),
            os.getenv("OLLAMA_API_BASE"),
            Config.get("llm_api_base"),
            Config.get("openai_api_base"),
            Config.get("litellm_base_url"),
            Config.get("ollama_api_base"),
        ]
    )

    if not os.getenv("LLM_API_KEY"):
    if not Config.get("llm_api_key"):
        missing_optional_vars.append("LLM_API_KEY")

    if not has_base_url:
        missing_optional_vars.append("LLM_API_BASE")

    if not os.getenv("PERPLEXITY_API_KEY"):
    if not Config.get("perplexity_api_key"):
        missing_optional_vars.append("PERPLEXITY_API_KEY")

    if not Config.get("strix_reasoning_effort"):
        missing_optional_vars.append("STRIX_REASONING_EFFORT")

    if missing_required_vars:
        error_text = Text()
        error_text.append("❌ ", style="bold red")
@@ -116,6 +125,14 @@ def validate_environment() -> None:  # noqa: PLR0912, PLR0915
                    " - API key for Perplexity AI web search (enables real-time research)\n",
                    style="white",
                )
            elif var == "STRIX_REASONING_EFFORT":
                error_text.append("• ", style="white")
                error_text.append("STRIX_REASONING_EFFORT", style="bold cyan")
                error_text.append(
                    " - Reasoning effort level: none, minimal, low, medium, high, xhigh "
                    "(default: high)\n",
                    style="white",
                )

        error_text.append("\nExample setup:\n", style="white")
        error_text.append("export STRIX_LLM='openai/gpt-5'\n", style="dim white")
@@ -138,6 +155,11 @@ def validate_environment() -> None:  # noqa: PLR0912, PLR0915
                error_text.append(
                    "export PERPLEXITY_API_KEY='your-perplexity-key-here'\n", style="dim white"
                )
            elif var == "STRIX_REASONING_EFFORT":
                error_text.append(
                    "export STRIX_REASONING_EFFORT='high'\n",
                    style="dim white",
                )

        panel = Panel(
            error_text,
@@ -180,13 +202,13 @@ async def warm_up_llm() -> None:
    console = Console()

    try:
        model_name = os.getenv("STRIX_LLM", "openai/gpt-5")
        api_key = os.getenv("LLM_API_KEY")
        model_name = Config.get("strix_llm")
        api_key = Config.get("llm_api_key")
        api_base = (
            os.getenv("LLM_API_BASE")
            or os.getenv("OPENAI_API_BASE")
            or os.getenv("LITELLM_BASE_URL")
            or os.getenv("OLLAMA_API_BASE")
            Config.get("llm_api_base")
            or Config.get("openai_api_base")
            or Config.get("litellm_base_url")
            or Config.get("ollama_api_base")
        )

        test_messages = [
@@ -194,7 +216,7 @@ async def warm_up_llm() -> None:
            {"role": "user", "content": "Reply with just 'OK'."},
        ]

        llm_timeout = int(os.getenv("LLM_TIMEOUT", "600"))
        llm_timeout = int(Config.get("llm_timeout") or "300")

        completion_kwargs: dict[str, Any] = {
            "model": model_name,
@@ -312,12 +334,6 @@ Examples:
        "(e.g., '--instruction-file ./detailed_instructions.txt').",
    )

    parser.add_argument(
        "--run-name",
        type=str,
        help="Custom name for this penetration test run",
    )

    parser.add_argument(
        "-n",
        "--non-interactive",
@@ -377,6 +393,7 @@ Examples:
        parser.error(f"Invalid target '{target}'")

    assign_workspace_subdirs(args.targets_info)
    rewrite_localhost_targets(args.targets_info, HOST_GATEWAY_HOSTNAME)

    return args

@@ -444,7 +461,7 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) ->
    console.print("\n")
    console.print(panel)
    console.print()
    console.print("[dim]🌐 Website:[/] [cyan]https://usestrix.com[/]")
    console.print("[dim]🌐 Website:[/] [cyan]https://strix.ai[/]")
    console.print("[dim]💬 Discord:[/] [cyan]https://discord.gg/YjKFvEZSdZ[/]")
    console.print()

@@ -453,11 +470,11 @@ def pull_docker_image() -> None:
    console = Console()
    client = check_docker_connection()

    if image_exists(client, STRIX_IMAGE):
    if image_exists(client, Config.get("strix_image")):  # type: ignore[arg-type]
        return

    console.print()
    console.print(f"[bold cyan]🐳 Pulling Docker image:[/] {STRIX_IMAGE}")
    console.print(f"[bold cyan]🐳 Pulling Docker image:[/] {Config.get('strix_image')}")
    console.print("[dim yellow]This only happens on first run and may take a few minutes...[/]")
    console.print()

@@ -466,7 +483,7 @@ def pull_docker_image() -> None:
        layers_info: dict[str, str] = {}
        last_update = ""

        for line in client.api.pull(STRIX_IMAGE, stream=True, decode=True):
        for line in client.api.pull(Config.get("strix_image"), stream=True, decode=True):
            last_update = process_pull_line(line, layers_info, status, last_update)

    except DockerException as e:
@@ -475,7 +492,7 @@ def pull_docker_image() -> None:
        error_text.append("❌ ", style="bold red")
        error_text.append("FAILED TO PULL IMAGE", style="bold red")
        error_text.append("\n\n", style="white")
        error_text.append(f"Could not download: {STRIX_IMAGE}\n", style="white")
        error_text.append(f"Could not download: {Config.get('strix_image')}\n", style="white")
        error_text.append(str(e), style="dim red")

        panel = Panel(
@@ -507,7 +524,8 @@ def main() -> None:
    validate_environment()
    asyncio.run(warm_up_llm())

    if not args.run_name:
    save_current_config()

    args.run_name = generate_run_name(args.targets_info)

    for target_info in args.targets_info:
@@ -519,10 +537,32 @@ def main() -> None:

    args.local_sources = collect_local_sources(args.targets_info)

    is_whitebox = bool(args.local_sources)

    posthog.start(
        model=Config.get("strix_llm"),
        scan_mode=args.scan_mode,
        is_whitebox=is_whitebox,
        interactive=not args.non_interactive,
        has_instructions=bool(args.instruction),
    )

    exit_reason = "user_exit"
    try:
        if args.non_interactive:
            asyncio.run(run_cli(args))
        else:
            asyncio.run(run_tui(args))
    except KeyboardInterrupt:
        exit_reason = "interrupted"
    except Exception as e:
        exit_reason = "error"
        posthog.error("unhandled_exception", str(e))
        raise
    finally:
        tracer = get_global_tracer()
        if tracer:
            posthog.end(tracer, exit_reason=exit_reason)

    results_path = Path("strix_runs") / args.run_name
    display_completion_message(args, results_path)

strix/interface/streaming_parser.py (new file, 119 additions)
@@ -0,0 +1,119 @@
import html
import re
from dataclasses import dataclass
from typing import Literal


_FUNCTION_TAG_PREFIX = "<function="


def _get_safe_content(content: str) -> tuple[str, str]:
    if not content:
        return "", ""

    last_lt = content.rfind("<")
    if last_lt == -1:
        return content, ""

    suffix = content[last_lt:]
    target = _FUNCTION_TAG_PREFIX  # "<function="

    if target.startswith(suffix):
        return content[:last_lt], suffix

    return content, ""


@dataclass
class StreamSegment:
    type: Literal["text", "tool"]
    content: str
    tool_name: str | None = None
    args: dict[str, str] | None = None
    is_complete: bool = False


def parse_streaming_content(content: str) -> list[StreamSegment]:
    if not content:
        return []

    segments: list[StreamSegment] = []

    func_pattern = r"<function=([^>]+)>"
    func_matches = list(re.finditer(func_pattern, content))

    if not func_matches:
        safe_content, _ = _get_safe_content(content)
        text = safe_content.strip()
        if text:
            segments.append(StreamSegment(type="text", content=text))
        return segments

    first_func_start = func_matches[0].start()
    if first_func_start > 0:
        text_before = content[:first_func_start].strip()
        if text_before:
            segments.append(StreamSegment(type="text", content=text_before))

    for i, match in enumerate(func_matches):
        tool_name = match.group(1)
        func_start = match.end()

        func_end_match = re.search(r"</function>", content[func_start:])

        if func_end_match:
            func_body = content[func_start : func_start + func_end_match.start()]
            is_complete = True
            end_pos = func_start + func_end_match.end()
        else:
            if i + 1 < len(func_matches):
                next_func_start = func_matches[i + 1].start()
                func_body = content[func_start:next_func_start]
            else:
                func_body = content[func_start:]
            is_complete = False
            end_pos = len(content)

        args = _parse_streaming_params(func_body)

        segments.append(
            StreamSegment(
                type="tool",
                content=func_body,
                tool_name=tool_name,
                args=args,
                is_complete=is_complete,
            )
        )

        if is_complete and i + 1 < len(func_matches):
            next_start = func_matches[i + 1].start()
            text_between = content[end_pos:next_start].strip()
            if text_between:
                segments.append(StreamSegment(type="text", content=text_between))

    return segments


def _parse_streaming_params(func_body: str) -> dict[str, str]:
    args: dict[str, str] = {}

    complete_pattern = r"<parameter=([^>]+)>(.*?)</parameter>"
    complete_matches = list(re.finditer(complete_pattern, func_body, re.DOTALL))
    complete_end_pos = 0

    for match in complete_matches:
        param_name = match.group(1)
        param_value = html.unescape(match.group(2).strip())
        args[param_name] = param_value
        complete_end_pos = max(complete_end_pos, match.end())

    remaining = func_body[complete_end_pos:]
    incomplete_pattern = r"<parameter=([^>]+)>(.*)$"
    incomplete_match = re.search(incomplete_pattern, remaining, re.DOTALL)
    if incomplete_match:
        param_name = incomplete_match.group(1)
        param_value = html.unescape(incomplete_match.group(2).strip())
        args[param_name] = param_value

    return args
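The new streaming_parser.py handles an XML-ish tool-call wire format, including partial tags that arrive mid-stream. As a rough illustration of that format (a simplified standalone sketch that only extracts complete calls; it is not the module's actual API beyond the tag names visible in the diff):

```python
import html
import re


# Simplified sketch of the tag format the parser above consumes:
# <function=NAME><parameter=KEY>VALUE</parameter>...</function>
def parse_complete_calls(content: str) -> list[tuple[str, dict[str, str]]]:
    calls = []
    for m in re.finditer(r"<function=([^>]+)>(.*?)</function>", content, re.DOTALL):
        name, body = m.group(1), m.group(2)
        # Parameter values are HTML-escaped in the stream, so unescape them.
        args = {
            k: html.unescape(v.strip())
            for k, v in re.findall(r"<parameter=([^>]+)>(.*?)</parameter>", body, re.DOTALL)
        }
        calls.append((name, args))
    return calls


sample = "<function=terminal><parameter=cmd>ls &amp;&amp; pwd</parameter></function>"
print(parse_complete_calls(sample))  # [('terminal', {'cmd': 'ls && pwd'})]
```

The real module additionally buffers a trailing `<function=` prefix (`_get_safe_content`) and emits incomplete segments so the TUI can render tool calls while they are still streaming.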
@@ -1,43 +1,163 @@
import re
from functools import cache
from typing import Any, ClassVar

from pygments.lexers import get_lexer_by_name, guess_lexer
from pygments.styles import get_style_by_name
from pygments.util import ClassNotFound
from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
from .registry import register_tool_renderer


def markdown_to_rich(text: str) -> str:
    # Fenced code blocks: ```lang\n...\n``` or ```\n...\n```
    text = re.sub(
        r"```(?:\w*)\n(.*?)```",
        r"[dim]\1[/dim]",
        text,
        flags=re.DOTALL,
    )
_HEADER_STYLES = [
    ("###### ", 7, "bold #4ade80"),
    ("##### ", 6, "bold #22c55e"),
    ("#### ", 5, "bold #16a34a"),
    ("### ", 4, "bold #15803d"),
    ("## ", 3, "bold #22c55e"),
    ("# ", 2, "bold #4ade80"),
]

    # Headers
    text = re.sub(r"^#### (.+)$", r"[bold]\1[/bold]", text, flags=re.MULTILINE)
    text = re.sub(r"^### (.+)$", r"[bold]\1[/bold]", text, flags=re.MULTILINE)
    text = re.sub(r"^## (.+)$", r"[bold]\1[/bold]", text, flags=re.MULTILINE)
    text = re.sub(r"^# (.+)$", r"[bold]\1[/bold]", text, flags=re.MULTILINE)

    # Links
    text = re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r"[underline]\1[/underline] [dim](\2)[/dim]", text)
@cache
def _get_style_colors() -> dict[Any, str]:
    style = get_style_by_name("native")
    return {token: f"#{style_def['color']}" for token, style_def in style if style_def["color"]}

    # Bold
    text = re.sub(r"\*\*(.+?)\*\*", r"[bold]\1[/bold]", text)
    text = re.sub(r"__(.+?)__", r"[bold]\1[/bold]", text)

    # Italic
    text = re.sub(r"(?<!\*)\*(?!\*)(.+?)(?<!\*)\*(?!\*)", r"[italic]\1[/italic]", text)
    text = re.sub(r"(?<![_\w])_(?!_)(.+?)(?<!_)_(?![_\w])", r"[italic]\1[/italic]", text)
def _get_token_color(token_type: Any) -> str | None:
    colors = _get_style_colors()
    while token_type:
        if token_type in colors:
            return colors[token_type]
        token_type = token_type.parent
    return None

    # Inline code
    text = re.sub(r"`([^`]+)`", r"[bold dim]\1[/bold dim]", text)

    # Strikethrough
    return re.sub(r"~~(.+?)~~", r"[strike]\1[/strike]", text)
def _highlight_code(code: str, language: str | None = None) -> Text:
    text = Text()

    try:
        lexer = get_lexer_by_name(language) if language else guess_lexer(code)
    except ClassNotFound:
        text.append(code, style="#d4d4d4")
        return text

    for token_type, token_value in lexer.get_tokens(code):
        if not token_value:
            continue
        color = _get_token_color(token_type)
        text.append(token_value, style=color)

    return text


def _try_parse_header(line: str) -> tuple[str, str] | None:
    for prefix, strip_len, style in _HEADER_STYLES:
        if line.startswith(prefix):
            return (line[strip_len:], style)
    return None


def _apply_markdown_styles(text: str) -> Text:  # noqa: PLR0912
    result = Text()
    lines = text.split("\n")

    in_code_block = False
    code_block_lang: str | None = None
    code_block_lines: list[str] = []

    for i, line in enumerate(lines):
        if i > 0 and not in_code_block:
            result.append("\n")

        if line.startswith("```"):
            if not in_code_block:
                in_code_block = True
                code_block_lang = line[3:].strip() or None
                code_block_lines = []
                if i > 0:
                    result.append("\n")
            else:
                in_code_block = False
                code_content = "\n".join(code_block_lines)
                if code_content:
                    result.append_text(_highlight_code(code_content, code_block_lang))
                code_block_lines = []
                code_block_lang = None
            continue

        if in_code_block:
            code_block_lines.append(line)
            continue

        header = _try_parse_header(line)
        if header:
            result.append(header[0], style=header[1])
        elif line.startswith("> "):
            result.append("┃ ", style="#22c55e")
            result.append_text(_process_inline_formatting(line[2:]))
        elif line.startswith(("- ", "* ")):
            result.append("• ", style="#22c55e")
            result.append_text(_process_inline_formatting(line[2:]))
        elif len(line) > 2 and line[0].isdigit() and line[1:3] in (". ", ") "):
            result.append(line[0] + ". ", style="#22c55e")
            result.append_text(_process_inline_formatting(line[2:]))
        elif line.strip() in ("---", "***", "___"):
            result.append("─" * 40, style="#22c55e")
        else:
            result.append_text(_process_inline_formatting(line))

    if in_code_block and code_block_lines:
        code_content = "\n".join(code_block_lines)
        result.append_text(_highlight_code(code_content, code_block_lang))

    return result


def _process_inline_formatting(line: str) -> Text:
    result = Text()
    i = 0
    n = len(line)

    while i < n:
        if i + 1 < n and line[i : i + 2] in ("**", "__"):
            marker = line[i : i + 2]
            end = line.find(marker, i + 2)
            if end != -1:
                result.append(line[i + 2 : end], style="bold #4ade80")
                i = end + 2
                continue

        if i + 1 < n and line[i : i + 2] == "~~":
            end = line.find("~~", i + 2)
            if end != -1:
                result.append(line[i + 2 : end], style="strike #525252")
                i = end + 2
                continue

        if line[i] == "`":
            end = line.find("`", i + 1)
            if end != -1:
                result.append(line[i + 1 : end], style="bold #22c55e on #0a0a0a")
                i = end + 1
                continue

        if line[i] in ("*", "_"):
            marker = line[i]
            if i + 1 < n and line[i + 1] != marker:
                end = line.find(marker, i + 1)
                if end != -1 and (end + 1 >= n or line[end + 1] != marker):
                    result.append(line[i + 1 : end], style="italic #86efac")
                    i = end + 1
                    continue

        result.append(line[i])
        i += 1

    return result


@register_tool_renderer
@@ -46,25 +166,25 @@ class AgentMessageRenderer(BaseToolRenderer):
    css_classes: ClassVar[list[str]] = ["chat-message", "agent-message"]

    @classmethod
    def render(cls, message_data: dict[str, Any]) -> Static:
        content = message_data.get("content", "")
    def render(cls, tool_data: dict[str, Any]) -> Static:
        content = tool_data.get("content", "")

        if not content:
            return Static("", classes=cls.css_classes)
            return Static(Text(), classes=" ".join(cls.css_classes))

        formatted_content = cls._format_agent_message(content)
        styled_text = _apply_markdown_styles(content)

        css_classes = " ".join(cls.css_classes)
        return Static(formatted_content, classes=css_classes)
        return Static(styled_text, classes=" ".join(cls.css_classes))

    @classmethod
    def render_simple(cls, content: str) -> str:
    def render_simple(cls, content: str) -> Text:
        if not content:
            return ""
            return Text()

        return cls._format_agent_message(content)
        from strix.llm.utils import clean_content

    @classmethod
    def _format_agent_message(cls, content: str) -> str:
        escaped_content = cls.escape_markup(content)
        return markdown_to_rich(escaped_content)
        cleaned = clean_content(content)
        if not cleaned:
            return Text()

        return _apply_markdown_styles(cleaned)

@@ -1,5 +1,6 @@
from typing import Any, ClassVar

from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
@@ -12,11 +13,15 @@ class ViewAgentGraphRenderer(BaseToolRenderer):
    css_classes: ClassVar[list[str]] = ["tool-call", "agents-graph-tool"]

    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:  # noqa: ARG003
        content_text = "🕸️ [bold #fbbf24]Viewing agents graph[/]"
    def render(cls, tool_data: dict[str, Any]) -> Static:
        status = tool_data.get("status", "unknown")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        text = Text()
        text.append("◇ ", style="#a78bfa")
        text.append("viewing agents graph", style="dim")

        css_classes = cls.get_css_classes(status)
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -27,20 +32,22 @@ class CreateAgentRenderer(BaseToolRenderer):
    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})
        status = tool_data.get("status", "unknown")

        task = args.get("task", "")
        name = args.get("name", "Agent")

        header = f"🤖 [bold #fbbf24]Creating {cls.escape_markup(name)}[/]"
        text = Text()
        text.append("◈ ", style="#a78bfa")
        text.append("spawning ", style="dim")
        text.append(name, style="bold #a78bfa")

        if task:
            task_display = task[:400] + "..." if len(task) > 400 else task
            content_text = f"{header}\n  [dim]{cls.escape_markup(task_display)}[/]"
        else:
            content_text = f"{header}\n  [dim]Spawning agent...[/]"
            text.append("\n  ")
            text.append(task, style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        css_classes = cls.get_css_classes(status)
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -51,19 +58,24 @@ class SendMessageToAgentRenderer(BaseToolRenderer):
    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})
        status = tool_data.get("status", "unknown")

        message = args.get("message", "")
        agent_id = args.get("agent_id", "")

        header = "💬 [bold #fbbf24]Sending message[/]"
        text = Text()
        text.append("→ ", style="#60a5fa")
        if agent_id:
            text.append(f"to {agent_id}", style="dim")
        else:
            text.append("sending message", style="dim")

        if message:
            message_display = message[:400] + "..." if len(message) > 400 else message
            content_text = f"{header}\n  [dim]{cls.escape_markup(message_display)}[/]"
        else:
            content_text = f"{header}\n  [dim]Sending...[/]"
            text.append("\n  ")
            text.append(message, style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        css_classes = cls.get_css_classes(status)
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -79,25 +91,28 @@ class AgentFinishRenderer(BaseToolRenderer):
        findings = args.get("findings", [])
        success = args.get("success", True)

        header = (
            "🏁 [bold #fbbf24]Agent completed[/]" if success else "🏁 [bold #fbbf24]Agent failed[/]"
        )
        text = Text()
        text.append("🏁 ")

        if success:
            text.append("Agent completed", style="bold #fbbf24")
        else:
            text.append("Agent failed", style="bold #fbbf24")

        if result_summary:
            content_parts = [f"{header}\n  [bold]{cls.escape_markup(result_summary)}[/]"]
            text.append("\n  ")
            text.append(result_summary, style="bold")

            if findings and isinstance(findings, list):
                finding_lines = [f"• {finding}" for finding in findings]
                content_parts.append(
                    f"  [dim]{chr(10).join([cls.escape_markup(line) for line in finding_lines])}[/]"
                )

            content_text = "\n".join(content_parts)
                for finding in findings:
                    text.append("\n  • ")
                    text.append(str(finding), style="dim")
        else:
            content_text = f"{header}\n  [dim]Completing task...[/]"
            text.append("\n  ")
            text.append("Completing task...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -108,16 +123,17 @@ class WaitForMessageRenderer(BaseToolRenderer):
    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})
        status = tool_data.get("status", "unknown")

        reason = args.get("reason", "Waiting for messages from other agents or user input")
        reason = args.get("reason", "")

        header = "⏸️ [bold #fbbf24]Waiting for messages[/]"
        text = Text()
        text.append("○ ", style="#6b7280")
        text.append("waiting", style="dim")

        if reason:
            reason_display = reason[:400] + "..." if len(reason) > 400 else reason
            content_text = f"{header}\n  [dim]{cls.escape_markup(reason_display)}[/]"
        else:
            content_text = f"{header}\n  [dim]Agent paused until message received...[/]"
            text.append("\n  ")
            text.append(reason, style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        css_classes = cls.get_css_classes(status)
        return Static(text, classes=css_classes)

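Throughout these renderers, each widget's status is folded into its CSS class list via `get_css_classes`, which is what the `status-running` / `status-completed` selectors in the stylesheet above match against. A minimal standalone sketch of that mapping (class names taken from the diff, `TerminalRenderer` here is illustrative, and the Textual dependency is omitted):

```python
# Sketch of the status-to-CSS-class mapping used by the tool renderers:
# base classes plus a "status-<status>" suffix, joined for the widget.
class ToolRenderer:
    css_classes = ["tool-call"]

    @classmethod
    def get_css_classes(cls, status: str) -> str:
        base_classes = cls.css_classes.copy()
        base_classes.append(f"status-{status}")
        return " ".join(base_classes)


class TerminalRenderer(ToolRenderer):
    # Mirrors the ".terminal-tool" selector in the stylesheet.
    css_classes = ["tool-call", "terminal-tool"]


print(TerminalRenderer.get_css_classes("completed"))  # tool-call terminal-tool status-completed
```

This is why passing the real `status` (rather than hard-coding `"completed"`) lets the running/completed border styles apply correctly.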
@@ -1,13 +1,12 @@
from abc import ABC, abstractmethod
from typing import Any, ClassVar, cast
from typing import Any, ClassVar

from rich.markup import escape as rich_escape
from rich.text import Text
from textual.widgets import Static


class BaseToolRenderer(ABC):
    tool_name: ClassVar[str] = ""

    css_classes: ClassVar[list[str]] = ["tool-call"]

    @classmethod
@@ -16,47 +15,80 @@ class BaseToolRenderer(ABC):
        pass

    @classmethod
    def escape_markup(cls, text: str) -> str:
        return cast("str", rich_escape(text))
    def build_text(cls, tool_data: dict[str, Any]) -> Text:  # noqa: ARG003
        return Text()

    @classmethod
    def format_args(cls, args: dict[str, Any], max_length: int = 500) -> str:
        if not args:
            return ""

        args_parts = []
        for k, v in args.items():
            str_v = str(v)
            if len(str_v) > max_length:
                str_v = str_v[: max_length - 3] + "..."
            args_parts.append(f"  [dim]{k}:[/] {cls.escape_markup(str_v)}")
        return "\n".join(args_parts)
    def create_static(cls, content: Text, status: str) -> Static:
        css_classes = cls.get_css_classes(status)
        return Static(content, classes=css_classes)

    @classmethod
    def format_result(cls, result: Any, max_length: int = 1000) -> str:
        if result is None:
            return ""

        str_result = str(result).strip()
        if not str_result:
            return ""

        if len(str_result) > max_length:
            str_result = str_result[: max_length - 3] + "..."
        return cls.escape_markup(str_result)

    @classmethod
    def get_status_icon(cls, status: str) -> str:
        status_icons = {
            "running": "[#f59e0b]●[/#f59e0b] In progress...",
            "completed": "[#22c55e]✓[/#22c55e] Done",
            "failed": "[#dc2626]✗[/#dc2626] Failed",
            "error": "[#dc2626]✗[/#dc2626] Error",
    def status_icon(cls, status: str) -> tuple[str, str]:
        icons = {
            "running": ("● In progress...", "#f59e0b"),
            "completed": ("✓ Done", "#22c55e"),
            "failed": ("✗ Failed", "#dc2626"),
            "error": ("✗ Error", "#dc2626"),
        }
        return status_icons.get(status, "[dim]○[/dim] Unknown")
        return icons.get(status, ("○ Unknown", "dim"))

    @classmethod
    def get_css_classes(cls, status: str) -> str:
        base_classes = cls.css_classes.copy()
        base_classes.append(f"status-{status}")
        return " ".join(base_classes)

    @classmethod
    def text_with_style(cls, content: str, style: str | None = None) -> Text:
        text = Text()
        text.append(content, style=style)
        return text

    @classmethod
    def text_icon_label(
        cls,
        icon: str,
        label: str,
        icon_style: str | None = None,
        label_style: str | None = None,
    ) -> Text:
        text = Text()
        text.append(icon, style=icon_style)
        text.append(" ")
        text.append(label, style=label_style)
        return text

    @classmethod
    def text_header(
        cls,
        icon: str,
        title: str,
        subtitle: str = "",
        title_style: str = "bold",
        subtitle_style: str = "dim",
    ) -> Text:
        text = Text()
        text.append(icon)
        text.append(" ")
        text.append(title, style=title_style)
        if subtitle:
            text.append(" ")
            text.append(subtitle, style=subtitle_style)
        return text

    @classmethod
    def text_key_value(
        cls,
        key: str,
        value: str,
        key_style: str = "dim",
        value_style: str | None = None,
        indent: int = 2,
) -> Text:
|
||||
text = Text()
|
||||
text.append(" " * indent)
|
||||
text.append(key, style=key_style)
|
||||
text.append(": ")
|
||||
text.append(value, style=value_style)
|
||||
return text
|
||||
|
||||
```python
@@ -3,6 +3,7 @@ from typing import Any, ClassVar

from pygments.lexers import get_lexer_by_name
from pygments.styles import get_style_by_name
from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
@@ -20,104 +21,7 @@ class BrowserRenderer(BaseToolRenderer):
    tool_name: ClassVar[str] = "browser_action"
    css_classes: ClassVar[list[str]] = ["tool-call", "browser-tool"]

    @classmethod
    def _get_token_color(cls, token_type: Any) -> str | None:
        colors = _get_style_colors()
        while token_type:
            if token_type in colors:
                return colors[token_type]
            token_type = token_type.parent
        return None

    @classmethod
    def _highlight_js(cls, code: str) -> str:
        lexer = get_lexer_by_name("javascript")
        result_parts: list[str] = []

        for token_type, token_value in lexer.get_tokens(code):
            if not token_value:
                continue

            escaped_value = cls.escape_markup(token_value)
            color = cls._get_token_color(token_type)

            if color:
                result_parts.append(f"[{color}]{escaped_value}[/]")
            else:
                result_parts.append(escaped_value)

        return "".join(result_parts)

    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})
        status = tool_data.get("status", "unknown")

        action = args.get("action", "unknown")

        content = cls._build_sleek_content(action, args)

        css_classes = cls.get_css_classes(status)
        return Static(content, classes=css_classes)

    @classmethod
    def _build_sleek_content(cls, action: str, args: dict[str, Any]) -> str:
        browser_icon = "🌐"

        url = args.get("url")
        text = args.get("text")
        js_code = args.get("js_code")
        key = args.get("key")
        file_path = args.get("file_path")

        if action in [
            "launch",
            "goto",
            "new_tab",
            "type",
            "execute_js",
            "click",
            "double_click",
            "hover",
            "press_key",
            "save_pdf",
        ]:
            if action == "launch":
                display_url = cls._format_url(url) if url else None
                message = (
                    f"launching {display_url} on browser" if display_url else "launching browser"
                )
            elif action == "goto":
                display_url = cls._format_url(url) if url else None
                message = f"navigating to {display_url}" if display_url else "navigating"
            elif action == "new_tab":
                display_url = cls._format_url(url) if url else None
                message = f"opening tab {display_url}" if display_url else "opening tab"
            elif action == "type":
                display_text = cls._format_text(text) if text else None
                message = f"typing {display_text}" if display_text else "typing"
            elif action == "execute_js":
                display_js = cls._format_js(js_code) if js_code else None
                message = (
                    f"executing javascript\n{display_js}" if display_js else "executing javascript"
                )
            elif action == "press_key":
                display_key = cls.escape_markup(key) if key else None
                message = f"pressing key {display_key}" if display_key else "pressing key"
            elif action == "save_pdf":
                display_path = cls.escape_markup(file_path) if file_path else None
                message = f"saving PDF to {display_path}" if display_path else "saving PDF"
            else:
                action_words = {
                    "click": "clicking",
                    "double_click": "double clicking",
                    "hover": "hovering",
                }
                message = cls.escape_markup(action_words[action])

            return f"{browser_icon} [#06b6d4]{message}[/]"

        simple_actions = {
    SIMPLE_ACTIONS: ClassVar[dict[str, str]] = {
        "back": "going back in browser history",
        "forward": "going forward in browser history",
        "scroll_down": "scrolling down",
@@ -133,24 +37,99 @@ class BrowserRenderer(BaseToolRenderer):
        "close": "closing browser",
    }

        if action in simple_actions:
            return f"{browser_icon} [#06b6d4]{cls.escape_markup(simple_actions[action])}[/]"

        return f"{browser_icon} [#06b6d4]{cls.escape_markup(action)}[/]"
    @classmethod
    def _get_token_color(cls, token_type: Any) -> str | None:
        colors = _get_style_colors()
        while token_type:
            if token_type in colors:
                return colors[token_type]
            token_type = token_type.parent
        return None

    @classmethod
    def _format_url(cls, url: str) -> str:
        if len(url) > 300:
            url = url[:297] + "..."
        return cls.escape_markup(url)
    def _highlight_js(cls, code: str) -> Text:
        lexer = get_lexer_by_name("javascript")
        text = Text()

        for token_type, token_value in lexer.get_tokens(code):
            if not token_value:
                continue
            color = cls._get_token_color(token_type)
            text.append(token_value, style=color)

        return text

    @classmethod
    def _format_text(cls, text: str) -> str:
        if len(text) > 200:
            text = text[:197] + "..."
        return cls.escape_markup(text)
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})
        status = tool_data.get("status", "unknown")

        action = args.get("action", "unknown")
        content = cls._build_content(action, args)

        css_classes = cls.get_css_classes(status)
        return Static(content, classes=css_classes)

    @classmethod
    def _format_js(cls, js_code: str) -> str:
        code_display = js_code[:2000] + "..." if len(js_code) > 2000 else js_code
        return cls._highlight_js(code_display)
    def _build_url_action(cls, text: Text, label: str, url: str | None, suffix: str = "") -> None:
        text.append(label, style="#06b6d4")
        if url:
            text.append(url, style="#06b6d4")
        if suffix:
            text.append(suffix, style="#06b6d4")

    @classmethod
    def _build_content(cls, action: str, args: dict[str, Any]) -> Text:
        text = Text()
        text.append("🌐 ")

        if action in cls.SIMPLE_ACTIONS:
            text.append(cls.SIMPLE_ACTIONS[action], style="#06b6d4")
            return text

        url = args.get("url")

        url_actions = {
            "launch": ("launching ", " on browser" if url else "browser"),
            "goto": ("navigating to ", ""),
            "new_tab": ("opening tab ", ""),
        }
        if action in url_actions:
            label, suffix = url_actions[action]
            if action == "launch" and not url:
                text.append("launching browser", style="#06b6d4")
            else:
                cls._build_url_action(text, label, url, suffix)
            return text

        click_actions = {
            "click": "clicking",
            "double_click": "double clicking",
            "hover": "hovering",
        }
        if action in click_actions:
            text.append(click_actions[action], style="#06b6d4")
            return text

        handlers: dict[str, tuple[str, str | None]] = {
            "type": ("typing ", args.get("text")),
            "press_key": ("pressing key ", args.get("key")),
            "save_pdf": ("saving PDF to ", args.get("file_path")),
        }
        if action in handlers:
            label, value = handlers[action]
            text.append(label, style="#06b6d4")
            if value:
                text.append(str(value), style="#06b6d4")
            return text

        if action == "execute_js":
            text.append("executing javascript", style="#06b6d4")
            js_code = args.get("js_code")
            if js_code:
                text.append("\n")
                text.append_text(cls._highlight_js(js_code))
            return text

        text.append(action, style="#06b6d4")
        return text
```
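The browser-renderer hunk above trades a long `if`/`elif` chain for small dispatch tables. A minimal sketch of that table-driven flow, returning a plain string instead of a Rich `Text` (a hypothetical simplification so it runs standalone; the action names mirror the diff):

```python
# Simple actions map directly to a fixed description.
SIMPLE_ACTIONS = {
    "back": "going back in browser history",
    "forward": "going forward in browser history",
    "scroll_down": "scrolling down",
    "close": "closing browser",
}


def describe_action(action: str, args: dict) -> str:
    if action in SIMPLE_ACTIONS:
        return SIMPLE_ACTIONS[action]

    # Parameterized actions: a label plus an optional argument value.
    handlers = {
        "type": ("typing ", args.get("text")),
        "press_key": ("pressing key ", args.get("key")),
        "save_pdf": ("saving PDF to ", args.get("file_path")),
    }
    if action in handlers:
        label, value = handlers[action]
        return label + str(value) if value else label.strip()

    # Fall back to echoing the raw action name.
    return action
```

Each table lookup replaces one branch of the old chain, which is what shrinks the hunk from ~104 lines to a handful of table definitions.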
```python
@@ -4,6 +4,7 @@ from typing import Any, ClassVar
from pygments.lexers import get_lexer_by_name, get_lexer_for_filename
from pygments.styles import get_style_by_name
from pygments.util import ClassNotFound
from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
@@ -38,23 +39,17 @@ class StrReplaceEditorRenderer(BaseToolRenderer):
        return None

    @classmethod
    def _highlight_code(cls, code: str, path: str) -> str:
    def _highlight_code(cls, code: str, path: str) -> Text:
        lexer = _get_lexer_for_file(path)
        result_parts: list[str] = []
        text = Text()

        for token_type, token_value in lexer.get_tokens(code):
            if not token_value:
                continue

            escaped_value = cls.escape_markup(token_value)
            color = cls._get_token_color(token_type)
            text.append(token_value, style=color)

            if color:
                result_parts.append(f"[{color}]{escaped_value}[/]")
            else:
                result_parts.append(escaped_value)

        return "".join(result_parts)
        return text

    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
@@ -67,48 +62,63 @@ class StrReplaceEditorRenderer(BaseToolRenderer):
        new_str = args.get("new_str", "")
        file_text = args.get("file_text", "")

        if command == "view":
            header = "📖 [bold #10b981]Reading file[/]"
        elif command == "str_replace":
            header = "✏️ [bold #10b981]Editing file[/]"
        elif command == "create":
            header = "📝 [bold #10b981]Creating file[/]"
        elif command == "insert":
            header = "✏️ [bold #10b981]Inserting text[/]"
        elif command == "undo_edit":
            header = "↩️ [bold #10b981]Undoing edit[/]"
        else:
            header = "📄 [bold #10b981]File operation[/]"
        text = Text()

        icons_and_labels = {
            "view": ("📖 ", "Reading file", "#10b981"),
            "str_replace": ("✏️ ", "Editing file", "#10b981"),
            "create": ("📝 ", "Creating file", "#10b981"),
            "insert": ("✏️ ", "Inserting text", "#10b981"),
            "undo_edit": ("↩️ ", "Undoing edit", "#10b981"),
        }

        icon, label, color = icons_and_labels.get(command, ("📄 ", "File operation", "#10b981"))
        text.append(icon)
        text.append(label, style=f"bold {color}")

        if path:
            path_display = path[-60:] if len(path) > 60 else path
            content_parts = [f"{header} [dim]{cls.escape_markup(path_display)}[/]"]
            text.append(" ")
            text.append(path_display, style="dim")

        if command == "str_replace" and (old_str or new_str):
            if old_str:
                old_display = old_str[:1000] + "..." if len(old_str) > 1000 else old_str
                highlighted_old = cls._highlight_code(old_display, path)
                old_lines = highlighted_old.split("\n")
                content_parts.extend(f"[#ef4444]-[/] {line}" for line in old_lines)
            if new_str:
                new_display = new_str[:1000] + "..." if len(new_str) > 1000 else new_str
                highlighted_new = cls._highlight_code(new_display, path)
                new_lines = highlighted_new.split("\n")
                content_parts.extend(f"[#22c55e]+[/] {line}" for line in new_lines)
        elif command == "create" and file_text:
            text_display = file_text[:1500] + "..." if len(file_text) > 1500 else file_text
            highlighted_text = cls._highlight_code(text_display, path)
            content_parts.append(highlighted_text)
        elif command == "insert" and new_str:
            new_display = new_str[:1000] + "..." if len(new_str) > 1000 else new_str
            highlighted_new = cls._highlight_code(new_display, path)
            new_lines = highlighted_new.split("\n")
            content_parts.extend(f"[#22c55e]+[/] {line}" for line in new_lines)
        elif not (result and isinstance(result, dict) and "content" in result) and not path:
            content_parts = [f"{header} [dim]Processing...[/]"]
                highlighted_old = cls._highlight_code(old_str, path)
                for line in highlighted_old.plain.split("\n"):
                    text.append("\n")
                    text.append("-", style="#ef4444")
                    text.append(" ")
                    text.append(line)

            if new_str:
                highlighted_new = cls._highlight_code(new_str, path)
                for line in highlighted_new.plain.split("\n"):
                    text.append("\n")
                    text.append("+", style="#22c55e")
                    text.append(" ")
                    text.append(line)

        elif command == "create" and file_text:
            text.append("\n")
            text.append_text(cls._highlight_code(file_text, path))

        elif command == "insert" and new_str:
            highlighted_new = cls._highlight_code(new_str, path)
            for line in highlighted_new.plain.split("\n"):
                text.append("\n")
                text.append("+", style="#22c55e")
                text.append(" ")
                text.append(line)

        elif isinstance(result, str) and result.strip():
            text.append("\n ")
            text.append(result.strip(), style="dim")
        elif not (result and isinstance(result, dict) and "content" in result) and not path:
            text.append(" ")
            text.append("Processing...", style="dim")

        content_text = "\n".join(content_parts)
        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -119,19 +129,21 @@ class ListFilesRenderer(BaseToolRenderer):
    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})

        path = args.get("path", "")

        header = "📂 [bold #10b981]Listing files[/]"
        text = Text()
        text.append("📂 ")
        text.append("Listing files", style="bold #10b981")
        text.append(" ")

        if path:
            path_display = path[-60:] if len(path) > 60 else path
            content_text = f"{header} [dim]{cls.escape_markup(path_display)}[/]"
            text.append(path_display, style="dim")
        else:
            content_text = f"{header} [dim]Current directory[/]"
            text.append("Current directory", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -142,27 +154,27 @@ class SearchFilesRenderer(BaseToolRenderer):
    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})

        path = args.get("path", "")
        regex = args.get("regex", "")

        header = "🔍 [bold purple]Searching files[/]"
        text = Text()
        text.append("🔍 ")
        text.append("Searching files", style="bold purple")
        text.append(" ")

        if path and regex:
            path_display = path[-30:] if len(path) > 30 else path
            regex_display = regex[:30] if len(regex) > 30 else regex
            content_text = (
                f"{header} [dim]{cls.escape_markup(path_display)} for "
                f"'{cls.escape_markup(regex_display)}'[/]"
            )
            text.append(path, style="dim")
            text.append(" for '", style="dim")
            text.append(regex, style="dim")
            text.append("'", style="dim")
        elif path:
            path_display = path[-60:] if len(path) > 60 else path
            content_text = f"{header} [dim]{cls.escape_markup(path_display)}[/]"
            text.append(path, style="dim")
        elif regex:
            regex_display = regex[:60] if len(regex) > 60 else regex
            content_text = f"{header} [dim]'{cls.escape_markup(regex_display)}'[/]"
            text.append("'", style="dim")
            text.append(regex, style="dim")
            text.append("'", style="dim")
        else:
            content_text = f"{header} [dim]Searching...[/]"
            text.append("Searching...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        return Static(text, classes=css_classes)
```
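In the str_replace hunk above, the new code splits the highlighted `Text` on its `.plain` string and prefixes each line with a styled `-` or `+`. A sketch of that prefixing step on plain strings (the `.plain` attribute of a Rich `Text` yields exactly such a string; this helper name is hypothetical):

```python
# Prefix every line of a code snippet with a diff marker, as the
# str_replace renderer does before styling the markers red/green.
def prefix_lines(code: str, marker: str) -> list[str]:
    return [f"{marker} {line}" for line in code.split("\n")]
```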
```python
@@ -1,11 +1,17 @@
from typing import Any, ClassVar

from rich.padding import Padding
from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
from .registry import register_tool_renderer


FIELD_STYLE = "bold #4ade80"
BG_COLOR = "#141414"


@register_tool_renderer
class FinishScanRenderer(BaseToolRenderer):
    tool_name: ClassVar[str] = "finish_scan"
@@ -15,17 +21,44 @@ class FinishScanRenderer(BaseToolRenderer):
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})

        content = args.get("content", "")
        success = args.get("success", True)
        executive_summary = args.get("executive_summary", "")
        methodology = args.get("methodology", "")
        technical_analysis = args.get("technical_analysis", "")
        recommendations = args.get("recommendations", "")

        header = (
            "🏁 [bold #dc2626]Finishing Scan[/]" if success else "🏁 [bold #dc2626]Scan Failed[/]"
        )
        text = Text()
        text.append("🏁 ")
        text.append("Finishing Scan", style="bold #dc2626")

        if content:
            content_text = f"{header}\n [bold]{cls.escape_markup(content)}[/]"
        else:
            content_text = f"{header}\n [dim]Generating final report...[/]"
        if executive_summary:
            text.append("\n\n")
            text.append("Executive Summary", style=FIELD_STYLE)
            text.append("\n")
            text.append(executive_summary)

        if methodology:
            text.append("\n\n")
            text.append("Methodology", style=FIELD_STYLE)
            text.append("\n")
            text.append(methodology)

        if technical_analysis:
            text.append("\n\n")
            text.append("Technical Analysis", style=FIELD_STYLE)
            text.append("\n")
            text.append(technical_analysis)

        if recommendations:
            text.append("\n\n")
            text.append("Recommendations", style=FIELD_STYLE)
            text.append("\n")
            text.append(recommendations)

        if not (executive_summary or methodology or technical_analysis or recommendations):
            text.append("\n ")
            text.append("Generating final report...", style="dim")

        padded = Padding(text, 2, style=f"on {BG_COLOR}")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        return Static(padded, classes=css_classes)
```
```python
@@ -1,17 +1,12 @@
from typing import Any, ClassVar

from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
from .registry import register_tool_renderer


def _truncate(text: str, length: int = 800) -> str:
    if len(text) <= length:
        return text
    return text[: length - 3] + "..."


@register_tool_renderer
class CreateNoteRenderer(BaseToolRenderer):
    tool_name: ClassVar[str] = "create_note"
@@ -25,22 +20,26 @@ class CreateNoteRenderer(BaseToolRenderer):
        content = args.get("content", "")
        category = args.get("category", "general")

        header = f"📝 [bold #fbbf24]Note[/] [dim]({category})[/]"
        text = Text()
        text.append("📝 ")
        text.append("Note", style="bold #fbbf24")
        text.append(" ")
        text.append(f"({category})", style="dim")

        lines = [header]
        if title:
            title_display = _truncate(title.strip(), 300)
            lines.append(f" {cls.escape_markup(title_display)}")
            text.append("\n ")
            text.append(title.strip())

        if content:
            content_display = _truncate(content.strip(), 800)
            lines.append(f" [dim]{cls.escape_markup(content_display)}[/]")
            text.append("\n ")
            text.append(content.strip(), style="dim")

        if len(lines) == 1:
            lines.append(" [dim]Capturing...[/]")
        if not title and not content:
            text.append("\n ")
            text.append("Capturing...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static("\n".join(lines), classes=css_classes)
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -50,11 +49,12 @@ class DeleteNoteRenderer(BaseToolRenderer):

    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:  # noqa: ARG003
        header = "📝 [bold #94a3b8]Note Removed[/]"
        content_text = header
        text = Text()
        text.append("📝 ")
        text.append("Note Removed", style="bold #94a3b8")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -69,21 +69,24 @@ class UpdateNoteRenderer(BaseToolRenderer):
        title = args.get("title")
        content = args.get("content")

        header = "📝 [bold #fbbf24]Note Updated[/]"
        lines = [header]
        text = Text()
        text.append("📝 ")
        text.append("Note Updated", style="bold #fbbf24")

        if title:
            lines.append(f" {cls.escape_markup(_truncate(title, 300))}")
            text.append("\n ")
            text.append(title)

        if content:
            content_display = _truncate(content.strip(), 800)
            lines.append(f" [dim]{cls.escape_markup(content_display)}[/]")
            text.append("\n ")
            text.append(content.strip(), style="dim")

        if len(lines) == 1:
            lines.append(" [dim]Updating...[/]")
        if not title and not content:
            text.append("\n ")
            text.append("Updating...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static("\n".join(lines), classes=css_classes)
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -95,34 +98,36 @@ class ListNotesRenderer(BaseToolRenderer):
    def render(cls, tool_data: dict[str, Any]) -> Static:
        result = tool_data.get("result")

        header = "📝 [bold #fbbf24]Notes[/]"
        text = Text()
        text.append("📝 ")
        text.append("Notes", style="bold #fbbf24")

        if result and isinstance(result, dict) and result.get("success"):
        if isinstance(result, str) and result.strip():
            text.append("\n ")
            text.append(result.strip(), style="dim")
        elif result and isinstance(result, dict) and result.get("success"):
            count = result.get("total_count", 0)
            notes = result.get("notes", []) or []
            lines = [header]

            if count == 0:
                lines.append(" [dim]No notes[/]")
                text.append("\n ")
                text.append("No notes", style="dim")
            else:
                for note in notes[:5]:
                for note in notes:
                    title = note.get("title", "").strip() or "(untitled)"
                    category = note.get("category", "general")
                    content = note.get("content", "").strip()
                    note_content = note.get("content", "").strip()

                    lines.append(
                        f" - {cls.escape_markup(_truncate(title, 300))} [dim]({category})[/]"
                    )
                    if content:
                        content_preview = _truncate(content, 400)
                        lines.append(f" [dim]{cls.escape_markup(content_preview)}[/]")
                    text.append("\n - ")
                    text.append(title)
                    text.append(f" ({category})", style="dim")

                remaining = max(count - 5, 0)
                if remaining:
                    lines.append(f" [dim]... +{remaining} more[/]")
            content_text = "\n".join(lines)
                    if note_content:
                        text.append("\n ")
                        text.append(note_content, style="dim")
        else:
            content_text = f"{header}\n [dim]Loading...[/]"
            text.append("\n ")
            text.append("Loading...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        return Static(text, classes=css_classes)
```
```python
@@ -1,5 +1,6 @@
from typing import Any, ClassVar

from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
@@ -18,38 +19,42 @@ class ListRequestsRenderer(BaseToolRenderer):

        httpql_filter = args.get("httpql_filter")

        header = "📋 [bold #06b6d4]Listing requests[/]"
        text = Text()
        text.append("📋 ")
        text.append("Listing requests", style="bold #06b6d4")

        if result and isinstance(result, dict) and "requests" in result:
        if isinstance(result, str) and result.strip():
            text.append("\n ")
            text.append(result.strip(), style="dim")
        elif result and isinstance(result, dict) and "requests" in result:
            requests = result["requests"]
            if isinstance(requests, list) and requests:
                request_lines = []
                for req in requests[:3]:
                for req in requests[:25]:
                    if isinstance(req, dict):
                        method = req.get("method", "?")
                        path = req.get("path", "?")
                        response = req.get("response") or {}
                        status = response.get("statusCode", "?")
                        line = f"{method} {path} → {status}"
                        request_lines.append(line)

                if len(requests) > 3:
                    request_lines.append(f"... +{len(requests) - 3} more")

                escaped_lines = [cls.escape_markup(line) for line in request_lines]
                content_text = f"{header}\n [dim]{chr(10).join(escaped_lines)}[/]"
                        text.append("\n ")
                        text.append(f"{method} {path} → {status}", style="dim")
                if len(requests) > 25:
                    text.append("\n ")
                    text.append(f"... +{len(requests) - 25} more", style="dim")
            else:
                content_text = f"{header}\n [dim]No requests found[/]"
                text.append("\n ")
                text.append("No requests found", style="dim")
        elif httpql_filter:
            filter_display = (
                httpql_filter[:300] + "..." if len(httpql_filter) > 300 else httpql_filter
                httpql_filter[:500] + "..." if len(httpql_filter) > 500 else httpql_filter
            )
            content_text = f"{header}\n [dim]{cls.escape_markup(filter_display)}[/]"
            text.append("\n ")
            text.append(filter_display, style="dim")
        else:
            content_text = f"{header}\n [dim]All requests[/]"
            text.append("\n ")
            text.append("All requests", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -64,34 +69,41 @@ class ViewRequestRenderer(BaseToolRenderer):

        part = args.get("part", "request")

        header = f"👀 [bold #06b6d4]Viewing {cls.escape_markup(part)}[/]"
        text = Text()
        text.append("👀 ")
        text.append(f"Viewing {part}", style="bold #06b6d4")

        if result and isinstance(result, dict):
        if isinstance(result, str) and result.strip():
            text.append("\n ")
            text.append(result.strip(), style="dim")
        elif result and isinstance(result, dict):
            if "content" in result:
                content = result["content"]
                content_preview = content[:500] + "..." if len(content) > 500 else content
                content_text = f"{header}\n [dim]{cls.escape_markup(content_preview)}[/]"
                content_preview = content[:2000] + "..." if len(content) > 2000 else content
                text.append("\n ")
                text.append(content_preview, style="dim")
            elif "matches" in result:
                matches = result["matches"]
                if isinstance(matches, list) and matches:
                    match_lines = [
                        match["match"]
                        for match in matches[:3]
                        if isinstance(match, dict) and "match" in match
                    ]
                    if len(matches) > 3:
                        match_lines.append(f"... +{len(matches) - 3} more matches")
                    escaped_lines = [cls.escape_markup(line) for line in match_lines]
                    content_text = f"{header}\n [dim]{chr(10).join(escaped_lines)}[/]"
                    for match in matches[:25]:
                        if isinstance(match, dict) and "match" in match:
                            text.append("\n ")
                            text.append(match["match"], style="dim")
                    if len(matches) > 25:
                        text.append("\n ")
                        text.append(f"... +{len(matches) - 25} more matches", style="dim")
                else:
                    content_text = f"{header}\n [dim]No matches found[/]"
                    text.append("\n ")
                    text.append("No matches found", style="dim")
            else:
                content_text = f"{header}\n [dim]Viewing content...[/]"
                text.append("\n ")
                text.append("Viewing content...", style="dim")
        else:
            content_text = f"{header}\n [dim]Loading...[/]"
            text.append("\n ")
            text.append("Loading...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -107,30 +119,39 @@ class SendRequestRenderer(BaseToolRenderer):
        method = args.get("method", "GET")
        url = args.get("url", "")

        header = f"📤 [bold #06b6d4]Sending {cls.escape_markup(method)}[/]"
        text = Text()
        text.append("📤 ")
        text.append(f"Sending {method}", style="bold #06b6d4")

        if result and isinstance(result, dict):
        if isinstance(result, str) and result.strip():
            text.append("\n ")
            text.append(result.strip(), style="dim")
        elif result and isinstance(result, dict):
            status_code = result.get("status_code")
            response_body = result.get("body", "")

            if status_code:
                response_preview = f"Status: {status_code}"
                text.append("\n ")
                text.append(f"Status: {status_code}", style="dim")
                if response_body:
                    body_preview = (
                        response_body[:300] + "..." if len(response_body) > 300 else response_body
                        response_body[:2000] + "..." if len(response_body) > 2000 else response_body
                    )
                    response_preview += f"\n{body_preview}"
            content_text = f"{header}\n [dim]{cls.escape_markup(response_preview)}[/]"
                    text.append("\n ")
                    text.append(body_preview, style="dim")
            else:
                content_text = f"{header}\n [dim]Response received[/]"
                text.append("\n ")
                text.append("Response received", style="dim")
        elif url:
            url_display = url[:400] + "..." if len(url) > 400 else url
            content_text = f"{header}\n [dim]{cls.escape_markup(url_display)}[/]"
            url_display = url[:500] + "..." if len(url) > 500 else url
            text.append("\n ")
            text.append(url_display, style="dim")
        else:
            content_text = f"{header}\n [dim]Sending...[/]"
            text.append("\n ")
            text.append("Sending...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -145,31 +166,40 @@ class RepeatRequestRenderer(BaseToolRenderer):

        modifications = args.get("modifications", {})

        header = "🔄 [bold #06b6d4]Repeating request[/]"
        text = Text()
        text.append("🔄 ")
        text.append("Repeating request", style="bold #06b6d4")

        if result and isinstance(result, dict):
        if isinstance(result, str) and result.strip():
            text.append("\n ")
            text.append(result.strip(), style="dim")
        elif result and isinstance(result, dict):
            status_code = result.get("status_code")
            response_body = result.get("body", "")
```
|
||||
|
||||
if status_code:
|
||||
response_preview = f"Status: {status_code}"
|
||||
text.append("\n ")
|
||||
text.append(f"Status: {status_code}", style="dim")
|
||||
if response_body:
|
||||
body_preview = (
|
||||
response_body[:300] + "..." if len(response_body) > 300 else response_body
|
||||
response_body[:2000] + "..." if len(response_body) > 2000 else response_body
|
||||
)
|
||||
response_preview += f"\n{body_preview}"
|
||||
content_text = f"{header}\n [dim]{cls.escape_markup(response_preview)}[/]"
|
||||
text.append("\n ")
|
||||
text.append(body_preview, style="dim")
|
||||
else:
|
||||
content_text = f"{header}\n [dim]Response received[/]"
|
||||
text.append("\n ")
|
||||
text.append("Response received", style="dim")
|
||||
elif modifications:
|
||||
mod_text = str(modifications)
|
||||
mod_display = mod_text[:400] + "..." if len(mod_text) > 400 else mod_text
|
||||
content_text = f"{header}\n [dim]{cls.escape_markup(mod_display)}[/]"
|
||||
mod_str = str(modifications)
|
||||
mod_display = mod_str[:500] + "..." if len(mod_str) > 500 else mod_str
|
||||
text.append("\n ")
|
||||
text.append(mod_display, style="dim")
|
||||
else:
|
||||
content_text = f"{header}\n [dim]No modifications[/]"
|
||||
text.append("\n ")
|
||||
text.append("No modifications", style="dim")
|
||||
|
||||
css_classes = cls.get_css_classes("completed")
|
||||
return Static(content_text, classes=css_classes)
|
||||
return Static(text, classes=css_classes)
|
||||
|
||||
|
||||
@register_tool_renderer
|
||||
@@ -179,11 +209,14 @@ class ScopeRulesRenderer(BaseToolRenderer):
|
||||
|
||||
@classmethod
|
||||
def render(cls, tool_data: dict[str, Any]) -> Static: # noqa: ARG003
|
||||
header = "⚙️ [bold #06b6d4]Updating proxy scope[/]"
|
||||
content_text = f"{header}\n [dim]Configuring...[/]"
|
||||
text = Text()
|
||||
text.append("⚙️ ")
|
||||
text.append("Updating proxy scope", style="bold #06b6d4")
|
||||
text.append("\n ")
|
||||
text.append("Configuring...", style="dim")
|
||||
|
||||
css_classes = cls.get_css_classes("completed")
|
||||
return Static(content_text, classes=css_classes)
|
||||
return Static(text, classes=css_classes)
|
||||
|
||||
|
||||
@register_tool_renderer
|
||||
@@ -195,31 +228,34 @@ class ListSitemapRenderer(BaseToolRenderer):
|
||||
def render(cls, tool_data: dict[str, Any]) -> Static:
|
||||
result = tool_data.get("result")
|
||||
|
||||
header = "🗺️ [bold #06b6d4]Listing sitemap[/]"
|
||||
text = Text()
|
||||
text.append("🗺️ ")
|
||||
text.append("Listing sitemap", style="bold #06b6d4")
|
||||
|
||||
if result and isinstance(result, dict) and "entries" in result:
|
||||
if isinstance(result, str) and result.strip():
|
||||
text.append("\n ")
|
||||
text.append(result.strip(), style="dim")
|
||||
elif result and isinstance(result, dict) and "entries" in result:
|
||||
entries = result["entries"]
|
||||
if isinstance(entries, list) and entries:
|
||||
entry_lines = []
|
||||
for entry in entries[:4]:
|
||||
for entry in entries[:30]:
|
||||
if isinstance(entry, dict):
|
||||
label = entry.get("label", "?")
|
||||
kind = entry.get("kind", "?")
|
||||
line = f"{kind}: {label}"
|
||||
entry_lines.append(line)
|
||||
|
||||
if len(entries) > 4:
|
||||
entry_lines.append(f"... +{len(entries) - 4} more")
|
||||
|
||||
escaped_lines = [cls.escape_markup(line) for line in entry_lines]
|
||||
content_text = f"{header}\n [dim]{chr(10).join(escaped_lines)}[/]"
|
||||
text.append("\n ")
|
||||
text.append(f"{kind}: {label}", style="dim")
|
||||
if len(entries) > 30:
|
||||
text.append("\n ")
|
||||
text.append(f"... +{len(entries) - 30} more entries", style="dim")
|
||||
else:
|
||||
content_text = f"{header}\n [dim]No entries found[/]"
|
||||
text.append("\n ")
|
||||
text.append("No entries found", style="dim")
|
||||
else:
|
||||
content_text = f"{header}\n [dim]Loading...[/]"
|
||||
text.append("\n ")
|
||||
text.append("Loading...", style="dim")
|
||||
|
||||
css_classes = cls.get_css_classes("completed")
|
||||
return Static(content_text, classes=css_classes)
|
||||
return Static(text, classes=css_classes)
|
||||
|
||||
|
||||
@register_tool_renderer
|
||||
@@ -231,25 +267,30 @@ class ViewSitemapEntryRenderer(BaseToolRenderer):
|
||||
def render(cls, tool_data: dict[str, Any]) -> Static:
|
||||
result = tool_data.get("result")
|
||||
|
||||
header = "📍 [bold #06b6d4]Viewing sitemap entry[/]"
|
||||
text = Text()
|
||||
text.append("📍 ")
|
||||
text.append("Viewing sitemap entry", style="bold #06b6d4")
|
||||
|
||||
if result and isinstance(result, dict):
|
||||
if "entry" in result:
|
||||
if isinstance(result, str) and result.strip():
|
||||
text.append("\n ")
|
||||
text.append(result.strip(), style="dim")
|
||||
elif result and isinstance(result, dict) and "entry" in result:
|
||||
entry = result["entry"]
|
||||
if isinstance(entry, dict):
|
||||
label = entry.get("label", "")
|
||||
kind = entry.get("kind", "")
|
||||
if label and kind:
|
||||
entry_info = f"{kind}: {label}"
|
||||
content_text = f"{header}\n [dim]{cls.escape_markup(entry_info)}[/]"
|
||||
text.append("\n ")
|
||||
text.append(f"{kind}: {label}", style="dim")
|
||||
else:
|
||||
content_text = f"{header}\n [dim]Entry details loaded[/]"
|
||||
text.append("\n ")
|
||||
text.append("Entry details loaded", style="dim")
|
||||
else:
|
||||
content_text = f"{header}\n [dim]Entry details loaded[/]"
|
||||
text.append("\n ")
|
||||
text.append("Entry details loaded", style="dim")
|
||||
else:
|
||||
content_text = f"{header}\n [dim]Loading entry...[/]"
|
||||
else:
|
||||
content_text = f"{header}\n [dim]Loading...[/]"
|
||||
text.append("\n ")
|
||||
text.append("Loading...", style="dim")
|
||||
|
||||
css_classes = cls.get_css_classes("completed")
|
||||
return Static(content_text, classes=css_classes)
|
||||
return Static(text, classes=css_classes)
|
||||
|
||||
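The recurring change in these hunks is replacing Rich console-markup strings (which require `escape_markup` on every interpolated value) with a `rich.text.Text` built via styled `append` calls. A minimal sketch of that pattern, assuming `rich` is installed; `render_matches` and its `limit` parameter are illustrative names, not part of the diff:

```python
from rich.text import Text


def render_matches(matches: list[dict], limit: int = 25) -> Text:
    """Build a styled block listing match strings, truncating past `limit`."""
    text = Text()
    text.append("Proxy matches", style="bold #06b6d4")
    for match in matches[:limit]:
        if isinstance(match, dict) and "match" in match:
            # Plain append carries its own style object, so bracket
            # characters in the value need no escaping, unlike the
            # f"[dim]{value}[/]" markup-string approach.
            text.append("\n    ")
            text.append(match["match"], style="dim")
    if len(matches) > limit:
        text.append("\n    ")
        text.append(f"... +{len(matches) - limit} more matches", style="dim")
    return text
```

Because each `append` attaches styling out-of-band, values such as `[dim]` or `[/]` inside match text render literally instead of being parsed as markup, which is why the new code can drop the `escape_markup` calls.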
```diff
@@ -1,14 +1,24 @@
import re
from functools import cache
from typing import Any, ClassVar

from pygments.lexers import PythonLexer
from pygments.styles import get_style_by_name
from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
from .registry import register_tool_renderer


MAX_OUTPUT_LINES = 50
MAX_LINE_LENGTH = 200

STRIP_PATTERNS = [
    r"\.\.\. \[(stdout|stderr|result|output|error) truncated at \d+k? chars\]",
]


@cache
def _get_style_colors() -> dict[Any, str]:
    style = get_style_by_name("native")
@@ -30,43 +40,117 @@ class PythonRenderer(BaseToolRenderer):
        return None

    @classmethod
    def _highlight_python(cls, code: str) -> str:
    def _highlight_python(cls, code: str) -> Text:
        lexer = PythonLexer()
        result_parts: list[str] = []
        text = Text()

        for token_type, token_value in lexer.get_tokens(code):
            if not token_value:
                continue

            escaped_value = cls.escape_markup(token_value)
            color = cls._get_token_color(token_type)
            text.append(token_value, style=color)

            if color:
                result_parts.append(f"[{color}]{escaped_value}[/]")
        return text

    @classmethod
    def _clean_output(cls, output: str) -> str:
        cleaned = output
        for pattern in STRIP_PATTERNS:
            cleaned = re.sub(pattern, "", cleaned)
        return cleaned.strip()

    @classmethod
    def _truncate_line(cls, line: str) -> str:
        if len(line) > MAX_LINE_LENGTH:
            return line[: MAX_LINE_LENGTH - 3] + "..."
        return line

    @classmethod
    def _format_output(cls, output: str) -> Text:
        text = Text()
        lines = output.splitlines()
        total_lines = len(lines)

        head_count = MAX_OUTPUT_LINES // 2
        tail_count = MAX_OUTPUT_LINES - head_count - 1

        if total_lines <= MAX_OUTPUT_LINES:
            display_lines = lines
            truncated = False
            hidden_count = 0
        else:
            result_parts.append(escaped_value)
            display_lines = lines[:head_count]
            truncated = True
            hidden_count = total_lines - head_count - tail_count

        return "".join(result_parts)
        for i, line in enumerate(display_lines):
            truncated_line = cls._truncate_line(line)
            text.append(" ")
            text.append(truncated_line, style="dim")
            if i < len(display_lines) - 1 or truncated:
                text.append("\n")

        if truncated:
            text.append(f" ... {hidden_count} lines truncated ...", style="dim italic")
            text.append("\n")
            tail_lines = lines[-tail_count:]
            for i, line in enumerate(tail_lines):
                truncated_line = cls._truncate_line(line)
                text.append(" ")
                text.append(truncated_line, style="dim")
                if i < len(tail_lines) - 1:
                    text.append("\n")

        return text

    @classmethod
    def _append_output(cls, text: Text, result: dict[str, Any] | str) -> None:
        if isinstance(result, str):
            if result.strip():
                text.append("\n")
                text.append_text(cls._format_output(result))
            return

        stdout = result.get("stdout", "")
        stderr = result.get("stderr", "")

        stdout = cls._clean_output(stdout) if stdout else ""
        stderr = cls._clean_output(stderr) if stderr else ""

        if stdout:
            text.append("\n")
            formatted_output = cls._format_output(stdout)
            text.append_text(formatted_output)

        if stderr:
            text.append("\n")
            text.append(" stderr: ", style="bold #ef4444")
            formatted_stderr = cls._format_output(stderr)
            text.append_text(formatted_stderr)

    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})
        status = tool_data.get("status", "unknown")
        result = tool_data.get("result")

        action = args.get("action", "")
        code = args.get("code", "")

        header = "</> [bold #3b82f6]Python[/]"
        text = Text()
        text.append("</> ", style="dim")

        if code and action in ["new_session", "execute"]:
            code_display = code[:2000] + "..." if len(code) > 2000 else code
            highlighted_code = cls._highlight_python(code_display)
            content_text = f"{header}\n{highlighted_code}"
            text.append_text(cls._highlight_python(code))
        elif action == "close":
            content_text = f"{header}\n [dim]Closing session...[/]"
            text.append("Closing session...", style="dim")
        elif action == "list_sessions":
            content_text = f"{header}\n [dim]Listing sessions...[/]"
            text.append("Listing sessions...", style="dim")
        else:
            content_text = f"{header}\n [dim]Running...[/]"
            text.append("Running...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)
        if result and isinstance(result, dict | str):
            cls._append_output(text, result)

        css_classes = cls.get_css_classes(status)
        return Static(text, classes=css_classes)
```
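The new `_format_output` keeps the head and tail of long output around a truncation marker instead of cutting it off. The splitting arithmetic can be sketched standalone; `head_tail` is an illustrative name, while the two constants mirror the ones added in the hunk above:

```python
MAX_OUTPUT_LINES = 50
MAX_LINE_LENGTH = 200


def truncate_line(line: str) -> str:
    """Cap a single line at MAX_LINE_LENGTH characters, ellipsis included."""
    if len(line) > MAX_LINE_LENGTH:
        return line[: MAX_LINE_LENGTH - 3] + "..."
    return line


def head_tail(lines: list[str]) -> list[str]:
    """Keep the first and last lines of long output around a truncation marker."""
    if len(lines) <= MAX_OUTPUT_LINES:
        return lines
    head = MAX_OUTPUT_LINES // 2        # first 25 lines survive
    tail = MAX_OUTPUT_LINES - head - 1  # last 24 lines; one slot left for the marker
    hidden = len(lines) - head - tail
    marker = f"... {hidden} lines truncated ..."
    return (
        [truncate_line(line) for line in lines[:head]]
        + [marker]
        + [truncate_line(line) for line in lines[-tail:]]
    )
```

Reserving one display slot for the marker keeps the rendered block at exactly `MAX_OUTPUT_LINES` rows no matter how long the raw output is.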
```diff
@@ -1,5 +1,6 @@
from typing import Any, ClassVar

from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
@@ -47,26 +48,32 @@ def render_tool_widget(tool_data: dict[str, Any]) -> Static:


def _render_default_tool_widget(tool_data: dict[str, Any]) -> Static:
    tool_name = BaseToolRenderer.escape_markup(tool_data.get("tool_name", "Unknown Tool"))
    tool_name = tool_data.get("tool_name", "Unknown Tool")
    args = tool_data.get("args", {})
    status = tool_data.get("status", "unknown")
    result = tool_data.get("result")

    status_text = BaseToolRenderer.get_status_icon(status)
    text = Text()

    header = f"→ Using tool [bold blue]{BaseToolRenderer.escape_markup(tool_name)}[/]"
    content_parts = [header]
    text.append("→ Using tool ", style="dim")
    text.append(tool_name, style="bold blue")
    text.append("\n")

    args_str = BaseToolRenderer.format_args(args)
    if args_str:
        content_parts.append(args_str)
    for k, v in list(args.items()):
        str_v = str(v)
        text.append(" ")
        text.append(k, style="dim")
        text.append(": ")
        text.append(str_v)
        text.append("\n")

    if status in ["completed", "failed", "error"] and result is not None:
        result_str = BaseToolRenderer.format_result(result)
        if result_str:
            content_parts.append(f"[bold]Result:[/] {result_str}")
        result_str = str(result)
        text.append("Result: ", style="bold")
        text.append(result_str)
    else:
        content_parts.append(status_text)
        icon, color = BaseToolRenderer.status_icon(status)
        text.append(icon, style=color)

    css_classes = BaseToolRenderer.get_css_classes(status)
    return Static("\n".join(content_parts), classes=css_classes)
    return Static(text, classes=css_classes)
```
```diff
@@ -1,53 +1,221 @@
from functools import cache
from typing import Any, ClassVar

from pygments.lexers import PythonLexer
from pygments.styles import get_style_by_name
from rich.padding import Padding
from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
from .registry import register_tool_renderer


@cache
def _get_style_colors() -> dict[Any, str]:
    style = get_style_by_name("native")
    return {token: f"#{style_def['color']}" for token, style_def in style if style_def["color"]}


FIELD_STYLE = "bold #4ade80"
BG_COLOR = "#141414"


@register_tool_renderer
class CreateVulnerabilityReportRenderer(BaseToolRenderer):
    tool_name: ClassVar[str] = "create_vulnerability_report"
    css_classes: ClassVar[list[str]] = ["tool-call", "reporting-tool"]

    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})

        title = args.get("title", "")
        severity = args.get("severity", "")
        content = args.get("content", "")

        header = "🐞 [bold #ea580c]Vulnerability Report[/]"

        if title:
            content_parts = [f"{header}\n [bold]{cls.escape_markup(title)}[/]"]

            if severity:
                severity_color = cls._get_severity_color(severity.lower())
                content_parts.append(
                    f" [dim]Severity: [{severity_color}]"
                    f"{cls.escape_markup(severity.upper())}[/{severity_color}][/]"
                )

            if content:
                content_parts.append(f" [dim]{cls.escape_markup(content)}[/]")

            content_text = "\n".join(content_parts)
        else:
            content_text = f"{header}\n [dim]Creating report...[/]"

        css_classes = cls.get_css_classes("completed")
        return Static(content_text, classes=css_classes)

    @classmethod
    def _get_severity_color(cls, severity: str) -> str:
        severity_colors = {
    SEVERITY_COLORS: ClassVar[dict[str, str]] = {
        "critical": "#dc2626",
        "high": "#ea580c",
        "medium": "#d97706",
        "low": "#65a30d",
        "info": "#0284c7",
    }
        return severity_colors.get(severity, "#6b7280")

    @classmethod
    def _get_token_color(cls, token_type: Any) -> str | None:
        colors = _get_style_colors()
        while token_type:
            if token_type in colors:
                return colors[token_type]
            token_type = token_type.parent
        return None

    @classmethod
    def _highlight_python(cls, code: str) -> Text:
        lexer = PythonLexer()
        text = Text()

        for token_type, token_value in lexer.get_tokens(code):
            if not token_value:
                continue
            color = cls._get_token_color(token_type)
            text.append(token_value, style=color)

        return text

    @classmethod
    def _get_cvss_color(cls, cvss_score: float) -> str:
        if cvss_score >= 9.0:
            return "#dc2626"
        if cvss_score >= 7.0:
            return "#ea580c"
        if cvss_score >= 4.0:
            return "#d97706"
        if cvss_score >= 0.1:
            return "#65a30d"
        return "#6b7280"

    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:  # noqa: PLR0912, PLR0915
        args = tool_data.get("args", {})
        result = tool_data.get("result", {})

        title = args.get("title", "")
        description = args.get("description", "")
        impact = args.get("impact", "")
        target = args.get("target", "")
        technical_analysis = args.get("technical_analysis", "")
        poc_description = args.get("poc_description", "")
        poc_script_code = args.get("poc_script_code", "")
        remediation_steps = args.get("remediation_steps", "")

        attack_vector = args.get("attack_vector", "")
        attack_complexity = args.get("attack_complexity", "")
        privileges_required = args.get("privileges_required", "")
        user_interaction = args.get("user_interaction", "")
        scope = args.get("scope", "")
        confidentiality = args.get("confidentiality", "")
        integrity = args.get("integrity", "")
        availability = args.get("availability", "")

        endpoint = args.get("endpoint", "")
        method = args.get("method", "")
        cve = args.get("cve", "")

        severity = ""
        cvss_score = None
        if isinstance(result, dict):
            severity = result.get("severity", "")
            cvss_score = result.get("cvss_score")

        text = Text()
        text.append("🐞 ")
        text.append("Vulnerability Report", style="bold #ea580c")

        if title:
            text.append("\n\n")
            text.append("Title: ", style=FIELD_STYLE)
            text.append(title)

        if severity:
            text.append("\n\n")
            text.append("Severity: ", style=FIELD_STYLE)
            severity_color = cls.SEVERITY_COLORS.get(severity.lower(), "#6b7280")
            text.append(severity.upper(), style=f"bold {severity_color}")

        if cvss_score is not None:
            text.append("\n\n")
            text.append("CVSS Score: ", style=FIELD_STYLE)
            cvss_color = cls._get_cvss_color(cvss_score)
            text.append(str(cvss_score), style=f"bold {cvss_color}")

        if target:
            text.append("\n\n")
            text.append("Target: ", style=FIELD_STYLE)
            text.append(target)

        if endpoint:
            text.append("\n\n")
            text.append("Endpoint: ", style=FIELD_STYLE)
            text.append(endpoint)

        if method:
            text.append("\n\n")
            text.append("Method: ", style=FIELD_STYLE)
            text.append(method)

        if cve:
            text.append("\n\n")
            text.append("CVE: ", style=FIELD_STYLE)
            text.append(cve)

        if any(
            [
                attack_vector,
                attack_complexity,
                privileges_required,
                user_interaction,
                scope,
                confidentiality,
                integrity,
                availability,
            ]
        ):
            text.append("\n\n")
            cvss_parts = []
            if attack_vector:
                cvss_parts.append(f"AV:{attack_vector}")
            if attack_complexity:
                cvss_parts.append(f"AC:{attack_complexity}")
            if privileges_required:
                cvss_parts.append(f"PR:{privileges_required}")
            if user_interaction:
                cvss_parts.append(f"UI:{user_interaction}")
            if scope:
                cvss_parts.append(f"S:{scope}")
            if confidentiality:
                cvss_parts.append(f"C:{confidentiality}")
            if integrity:
                cvss_parts.append(f"I:{integrity}")
            if availability:
                cvss_parts.append(f"A:{availability}")
            text.append("CVSS Vector: ", style=FIELD_STYLE)
            text.append("/".join(cvss_parts), style="dim")

        if description:
            text.append("\n\n")
            text.append("Description", style=FIELD_STYLE)
            text.append("\n")
            text.append(description)

        if impact:
            text.append("\n\n")
            text.append("Impact", style=FIELD_STYLE)
            text.append("\n")
            text.append(impact)

        if technical_analysis:
            text.append("\n\n")
            text.append("Technical Analysis", style=FIELD_STYLE)
            text.append("\n")
            text.append(technical_analysis)

        if poc_description:
            text.append("\n\n")
            text.append("PoC Description", style=FIELD_STYLE)
            text.append("\n")
            text.append(poc_description)

        if poc_script_code:
            text.append("\n\n")
            text.append("PoC Code", style=FIELD_STYLE)
            text.append("\n")
            text.append_text(cls._highlight_python(poc_script_code))

        if remediation_steps:
            text.append("\n\n")
            text.append("Remediation", style=FIELD_STYLE)
            text.append("\n")
            text.append(remediation_steps)

        if not title:
            text.append("\n ")
            text.append("Creating report...", style="dim")

        padded = Padding(text, 2, style=f"on {BG_COLOR}")

        css_classes = cls.get_css_classes("completed")
        return Static(padded, classes=css_classes)
```
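`_get_cvss_color` maps the numeric score onto the same palette as `SEVERITY_COLORS`, using the standard CVSS v3 severity bands (critical ≥ 9.0, high ≥ 7.0, medium ≥ 4.0, low ≥ 0.1). The banding logic from the hunk above as a standalone sketch (`cvss_color` is an illustrative name):

```python
def cvss_color(score: float) -> str:
    """Map a CVSS score to a severity color (hex), following CVSS v3 bands."""
    if score >= 9.0:
        return "#dc2626"  # critical
    if score >= 7.0:
        return "#ea580c"  # high
    if score >= 4.0:
        return "#d97706"  # medium
    if score >= 0.1:
        return "#65a30d"  # low
    return "#6b7280"      # none / informational
```

Ordering the checks from highest band down means each threshold only needs a lower bound, and a score of exactly 0.0 falls through to the neutral color.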
```diff
@@ -1,5 +1,6 @@
from typing import Any, ClassVar

from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
@@ -15,29 +16,28 @@ class ScanStartInfoRenderer(BaseToolRenderer):
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})
        status = tool_data.get("status", "unknown")

        targets = args.get("targets", [])

        text = Text()
        text.append("🚀 Starting penetration test")

        if len(targets) == 1:
            target_display = cls._build_single_target_display(targets[0])
            content = f"🚀 Starting penetration test on {target_display}"
            text.append(" on ")
            text.append(cls._get_target_display(targets[0]))
        elif len(targets) > 1:
            content = f"🚀 Starting penetration test on {len(targets)} targets"
            text.append(f" on {len(targets)} targets")
            for target_info in targets:
                target_display = cls._build_single_target_display(target_info)
                content += f"\n • {target_display}"
        else:
            content = "🚀 Starting penetration test"
                text.append("\n • ")
                text.append(cls._get_target_display(target_info))

        css_classes = cls.get_css_classes(status)
        return Static(content, classes=css_classes)
        return Static(text, classes=css_classes)

    @classmethod
    def _build_single_target_display(cls, target_info: dict[str, Any]) -> str:
    def _get_target_display(cls, target_info: dict[str, Any]) -> str:
        original = target_info.get("original")
        if original:
            return cls.escape_markup(str(original))

            return str(original)
        return "unknown target"


@@ -51,14 +51,17 @@ class SubagentStartInfoRenderer(BaseToolRenderer):
        args = tool_data.get("args", {})
        status = tool_data.get("status", "unknown")

        name = args.get("name", "Unknown Agent")
        task = args.get("task", "")
        name = str(args.get("name", "Unknown Agent"))
        task = str(args.get("task", ""))

        text = Text()
        text.append("◈ ", style="#a78bfa")
        text.append("subagent ", style="dim")
        text.append(name, style="bold #a78bfa")

        name = cls.escape_markup(str(name))
        content = f"🤖 Spawned subagent {name}"
        if task:
            task = cls.escape_markup(str(task))
            content += f"\n Task: {task}"
            text.append("\n ")
            text.append(task, style="dim")

        css_classes = cls.get_css_classes(status)
        return Static(content, classes=css_classes)
        return Static(text, classes=css_classes)
```
```diff
@@ -1,14 +1,33 @@
import re
from functools import cache
from typing import Any, ClassVar

from pygments.lexers import get_lexer_by_name
from pygments.styles import get_style_by_name
from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
from .registry import register_tool_renderer


MAX_OUTPUT_LINES = 50
MAX_LINE_LENGTH = 200

STRIP_PATTERNS = [
    (
        r"\n?\[Command still running after [\d.]+s - showing output so far\.?"
        r"\s*(?:Use C-c to interrupt if needed\.)?\]"
    ),
    r"^\[Below is the output of the previous command\.\]\n?",
    r"^No command is currently running\. Cannot send input\.$",
    (
        r"^A command is already running\. Use is_input=true to send input to it, "
        r"or interrupt it first \(e\.g\., with C-c\)\.$"
    ),
]


@cache
def _get_style_colors() -> dict[Any, str]:
    style = get_style_by_name("native")
@@ -20,65 +39,7 @@ class TerminalRenderer(BaseToolRenderer):
    tool_name: ClassVar[str] = "terminal_execute"
    css_classes: ClassVar[list[str]] = ["tool-call", "terminal-tool"]

    @classmethod
    def _get_token_color(cls, token_type: Any) -> str | None:
        colors = _get_style_colors()
        while token_type:
            if token_type in colors:
                return colors[token_type]
            token_type = token_type.parent
        return None

    @classmethod
    def _highlight_bash(cls, code: str) -> str:
        lexer = get_lexer_by_name("bash")
        result_parts: list[str] = []

        for token_type, token_value in lexer.get_tokens(code):
            if not token_value:
                continue

            escaped_value = cls.escape_markup(token_value)
            color = cls._get_token_color(token_type)

            if color:
                result_parts.append(f"[{color}]{escaped_value}[/]")
            else:
                result_parts.append(escaped_value)

        return "".join(result_parts)

    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})
        status = tool_data.get("status", "unknown")
        result = tool_data.get("result", {})

        command = args.get("command", "")
        is_input = args.get("is_input", False)
        terminal_id = args.get("terminal_id", "default")
        timeout = args.get("timeout")

        content = cls._build_sleek_content(command, is_input, terminal_id, timeout, result)

        css_classes = cls.get_css_classes(status)
        return Static(content, classes=css_classes)

    @classmethod
    def _build_sleek_content(
        cls,
        command: str,
        is_input: bool,
        terminal_id: str,  # noqa: ARG003
        timeout: float | None,  # noqa: ARG003
        result: dict[str, Any],  # noqa: ARG003
    ) -> str:
        terminal_icon = ">_"

        if not command.strip():
            return f"{terminal_icon} [dim]getting logs...[/]"

        control_sequences = {
    CONTROL_SEQUENCES: ClassVar[set[str]] = {
        "C-c",
        "C-d",
        "C-z",
@@ -106,7 +67,7 @@ class TerminalRenderer(BaseToolRenderer):
        "^t",
        "^y",
    }
        special_keys = {
    SPECIAL_KEYS: ClassVar[set[str]] = {
        "Enter",
        "Escape",
        "Space",
@@ -141,26 +102,210 @@ class TerminalRenderer(BaseToolRenderer):
        "F12",
    }

    @classmethod
    def _get_token_color(cls, token_type: Any) -> str | None:
        colors = _get_style_colors()
        while token_type:
            if token_type in colors:
                return colors[token_type]
            token_type = token_type.parent
        return None

    @classmethod
    def _highlight_bash(cls, code: str) -> Text:
        lexer = get_lexer_by_name("bash")
        text = Text()

        for token_type, token_value in lexer.get_tokens(code):
            if not token_value:
                continue
            color = cls._get_token_color(token_type)
            text.append(token_value, style=color)

        return text

    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})
        status = tool_data.get("status", "unknown")
        result = tool_data.get("result")

        command = args.get("command", "")
        is_input = args.get("is_input", False)

        content = cls._build_content(command, is_input, status, result)

        css_classes = cls.get_css_classes(status)
        return Static(content, classes=css_classes)

    @classmethod
    def _build_content(
        cls, command: str, is_input: bool, status: str, result: dict[str, Any] | str | None
    ) -> Text:
        text = Text()
        terminal_icon = ">_"

        if not command.strip():
            text.append(terminal_icon, style="dim")
            text.append(" ")
            text.append("getting logs...", style="dim")
            if result:
                cls._append_output(text, result, status, command)
            return text

        is_special = (
            command in control_sequences
            or command in special_keys
            command in cls.CONTROL_SEQUENCES
            or command in cls.SPECIAL_KEYS
            or command.startswith(("M-", "S-", "C-S-", "C-M-", "S-M-"))
        )

        text.append(terminal_icon, style="dim")
        text.append(" ")

        if is_special:
            return f"{terminal_icon} [#ef4444]{cls.escape_markup(command)}[/]"
            text.append(command, style="#ef4444")
        elif is_input:
            text.append(">>>", style="#3b82f6")
            text.append(" ")
            text.append_text(cls._format_command(command))
        else:
            text.append("$", style="#22c55e")
            text.append(" ")
            text.append_text(cls._format_command(command))

        if is_input:
            formatted_command = cls._format_command_display(command)
            return f"{terminal_icon} [#3b82f6]>>>[/] {formatted_command}"
        if result:
            cls._append_output(text, result, status, command)

        formatted_command = cls._format_command_display(command)
        return f"{terminal_icon} [#22c55e]$[/] {formatted_command}"
        return text

    @classmethod
    def _format_command_display(cls, command: str) -> str:
        if not command:
            return ""
    def _clean_output(cls, output: str, command: str = "") -> str:
        cleaned = output

        cmd_display = command[:2000] + "..." if len(command) > 2000 else command
        return cls._highlight_bash(cmd_display)
        for pattern in STRIP_PATTERNS:
            cleaned = re.sub(pattern, "", cleaned, flags=re.MULTILINE)

        if cleaned.strip():
            lines = cleaned.splitlines()
            filtered_lines: list[str] = []
            for line in lines:
                if not filtered_lines and not line.strip():
                    continue
                if re.match(r"^\[STRIX_\d+\]\$\s*", line):
                    continue
                if command and line.strip() == command.strip():
                    continue
                if command and re.match(r"^[\$#>]\s*" + re.escape(command.strip()) + r"\s*$", line):
                    continue
                filtered_lines.append(line)

            while filtered_lines and re.match(r"^\[STRIX_\d+\]\$\s*", filtered_lines[-1]):
                filtered_lines.pop()

            cleaned = "\n".join(filtered_lines)

        return cleaned.strip()

    @classmethod
    def _append_output(
        cls, text: Text, result: dict[str, Any] | str, tool_status: str, command: str = ""
    ) -> None:
        if isinstance(result, str):
            if result.strip():
                text.append("\n")
                text.append_text(cls._format_output(result))
            return

        raw_output = result.get("content", "")
```
|
||||
output = cls._clean_output(raw_output, command)
|
||||
error = result.get("error")
|
||||
exit_code = result.get("exit_code")
|
||||
result_status = result.get("status", "")
|
||||
|
||||
if error and not cls._is_status_message(error):
|
||||
text.append("\n")
|
||||
text.append(" error: ", style="bold #ef4444")
|
||||
text.append(cls._truncate_line(error), style="#ef4444")
|
||||
return
|
||||
|
||||
if result_status == "running" or tool_status == "running":
|
||||
if output and output.strip():
|
||||
text.append("\n")
|
||||
formatted_output = cls._format_output(output)
|
||||
text.append_text(formatted_output)
|
||||
return
|
||||
|
||||
if not output or not output.strip():
|
||||
if exit_code is not None and exit_code != 0:
|
||||
text.append("\n")
|
||||
text.append(f" exit {exit_code}", style="dim #ef4444")
|
||||
return
|
||||
|
||||
text.append("\n")
|
||||
formatted_output = cls._format_output(output)
|
||||
text.append_text(formatted_output)
|
||||
|
||||
if exit_code is not None and exit_code != 0:
|
||||
text.append("\n")
|
||||
text.append(f" exit {exit_code}", style="dim #ef4444")
|
||||
|
||||
@classmethod
|
||||
def _is_status_message(cls, message: str) -> bool:
|
||||
status_patterns = [
|
||||
r"No command is currently running",
|
||||
r"A command is already running",
|
||||
r"Cannot send input",
|
||||
r"Use is_input=true",
|
||||
r"Use C-c to interrupt",
|
||||
r"showing output so far",
|
||||
]
|
||||
return any(re.search(pattern, message) for pattern in status_patterns)
|
||||
|
||||
@classmethod
|
||||
def _format_output(cls, output: str) -> Text:
|
||||
text = Text()
|
||||
lines = output.splitlines()
|
||||
total_lines = len(lines)
|
||||
|
||||
head_count = MAX_OUTPUT_LINES // 2
|
||||
tail_count = MAX_OUTPUT_LINES - head_count - 1
|
||||
|
||||
if total_lines <= MAX_OUTPUT_LINES:
|
||||
display_lines = lines
|
||||
truncated = False
|
||||
hidden_count = 0
|
||||
else:
|
||||
display_lines = lines[:head_count]
|
||||
truncated = True
|
||||
hidden_count = total_lines - head_count - tail_count
|
||||
|
||||
for i, line in enumerate(display_lines):
|
||||
truncated_line = cls._truncate_line(line)
|
||||
text.append(" ")
|
||||
text.append(truncated_line, style="dim")
|
||||
if i < len(display_lines) - 1 or truncated:
|
||||
text.append("\n")
|
||||
|
||||
if truncated:
|
||||
text.append(f" ... {hidden_count} lines truncated ...", style="dim italic")
|
||||
text.append("\n")
|
||||
tail_lines = lines[-tail_count:]
|
||||
for i, line in enumerate(tail_lines):
|
||||
truncated_line = cls._truncate_line(line)
|
||||
text.append(" ")
|
||||
text.append(truncated_line, style="dim")
|
||||
if i < len(tail_lines) - 1:
|
||||
text.append("\n")
|
||||
|
||||
return text
|
||||
|
||||
@classmethod
|
||||
def _truncate_line(cls, line: str) -> str:
|
||||
clean_line = re.sub(r"\x1b\[[0-9;]*m", "", line)
|
||||
if len(clean_line) > MAX_LINE_LENGTH:
|
||||
return line[: MAX_LINE_LENGTH - 3] + "..."
|
||||
return line
|
||||
|
||||
@classmethod
|
||||
def _format_command(cls, command: str) -> Text:
|
||||
return cls._highlight_bash(command)
|
||||
|
||||
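The head/tail elision in `_format_output` can be exercised in isolation. A minimal standalone sketch (illustrative names; `MAX_OUTPUT_LINES` is pinned locally here rather than taken from Strix's module constant):

```python
# Sketch of _format_output's head/tail truncation: keep the first half,
# a "truncated" notice, and the tail of an overly long output.
MAX_OUTPUT_LINES = 6  # assumption for the sketch; the real constant differs


def truncate_lines(lines: list[str]) -> list[str]:
    """Return lines unchanged if short, else head + notice + tail."""
    if len(lines) <= MAX_OUTPUT_LINES:
        return lines
    head = MAX_OUTPUT_LINES // 2
    tail = MAX_OUTPUT_LINES - head - 1
    hidden = len(lines) - head - tail
    return lines[:head] + [f"... {hidden} lines truncated ..."] + lines[-tail:]


print(truncate_lines([str(i) for i in range(10)]))
# → ['0', '1', '2', '... 5 lines truncated ...', '8', '9']
```

Note the notice line itself occupies one display slot, which is why `tail_count` subtracts one.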
@@ -1,5 +1,6 @@
from typing import Any, ClassVar

from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
@@ -14,16 +15,17 @@ class ThinkRenderer(BaseToolRenderer):
    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        args = tool_data.get("args", {})

        thought = args.get("thought", "")

        text = Text()
        text.append("🧠 ")
        text.append("Thinking", style="bold #a855f7")
        text.append("\n ")

        if thought:
            text.append(thought, style="italic dim")
        else:
            text.append("Thinking...", style="italic dim")

        css_classes = cls.get_css_classes("completed")
        return Static(text, classes=css_classes)

@@ -1,57 +1,42 @@
from typing import Any, ClassVar

from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
from .registry import register_tool_renderer


STATUS_MARKERS: dict[str, str] = {
    "pending": "[ ]",
    "in_progress": "[~]",
    "done": "[•]",
}


def _format_todo_lines(text: Text, result: dict[str, Any]) -> None:
    todos = result.get("todos")
    if not isinstance(todos, list) or not todos:
        text.append("\n ")
        text.append("No todos", style="dim")
        return

    for todo in todos:
        status = todo.get("status", "pending")
        marker = STATUS_MARKERS.get(status, STATUS_MARKERS["pending"])

        title = todo.get("title", "").strip() or "(untitled)"

        text.append("\n ")
        text.append(marker)
        text.append(" ")

        if status == "done":
            text.append(title, style="dim strike")
        elif status == "in_progress":
            text.append(title, style="italic")
        else:
            text.append(title)


@register_tool_renderer
@@ -62,21 +47,27 @@ class CreateTodoRenderer(BaseToolRenderer):
    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        result = tool_data.get("result")

        text = Text()
        text.append("📋 ")
        text.append("Todo", style="bold #a78bfa")

        if isinstance(result, str) and result.strip():
            text.append("\n ")
            text.append(result.strip(), style="dim")
        elif result and isinstance(result, dict):
            if result.get("success"):
                _format_todo_lines(text, result)
            else:
                error = result.get("error", "Failed to create todo")
                text.append("\n ")
                text.append(error, style="#ef4444")
        else:
            text.append("\n ")
            text.append("Creating...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -87,21 +78,27 @@ class ListTodosRenderer(BaseToolRenderer):
    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        result = tool_data.get("result")

        text = Text()
        text.append("📋 ")
        text.append("Todos", style="bold #a78bfa")

        if isinstance(result, str) and result.strip():
            text.append("\n ")
            text.append(result.strip(), style="dim")
        elif result and isinstance(result, dict):
            if result.get("success"):
                _format_todo_lines(text, result)
            else:
                error = result.get("error", "Unable to list todos")
                text.append("\n ")
                text.append(error, style="#ef4444")
        else:
            text.append("\n ")
            text.append("Loading...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -112,21 +109,27 @@ class UpdateTodoRenderer(BaseToolRenderer):
    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        result = tool_data.get("result")

        text = Text()
        text.append("📋 ")
        text.append("Todo Updated", style="bold #a78bfa")

        if isinstance(result, str) and result.strip():
            text.append("\n ")
            text.append(result.strip(), style="dim")
        elif result and isinstance(result, dict):
            if result.get("success"):
                _format_todo_lines(text, result)
            else:
                error = result.get("error", "Failed to update todo")
                text.append("\n ")
                text.append(error, style="#ef4444")
        else:
            text.append("\n ")
            text.append("Updating...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -137,21 +140,27 @@ class MarkTodoDoneRenderer(BaseToolRenderer):
    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        result = tool_data.get("result")

        text = Text()
        text.append("📋 ")
        text.append("Todo Completed", style="bold #a78bfa")

        if isinstance(result, str) and result.strip():
            text.append("\n ")
            text.append(result.strip(), style="dim")
        elif result and isinstance(result, dict):
            if result.get("success"):
                _format_todo_lines(text, result)
            else:
                error = result.get("error", "Failed to mark todo done")
                text.append("\n ")
                text.append(error, style="#ef4444")
        else:
            text.append("\n ")
            text.append("Marking done...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -162,21 +171,27 @@ class MarkTodoPendingRenderer(BaseToolRenderer):
    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        result = tool_data.get("result")

        text = Text()
        text.append("📋 ")
        text.append("Todo Reopened", style="bold #f59e0b")

        if isinstance(result, str) and result.strip():
            text.append("\n ")
            text.append(result.strip(), style="dim")
        elif result and isinstance(result, dict):
            if result.get("success"):
                _format_todo_lines(text, result)
            else:
                error = result.get("error", "Failed to reopen todo")
                text.append("\n ")
                text.append(error, style="#ef4444")
        else:
            text.append("\n ")
            text.append("Reopening...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(text, classes=css_classes)


@register_tool_renderer
@@ -187,18 +202,24 @@ class DeleteTodoRenderer(BaseToolRenderer):
    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        result = tool_data.get("result")

        text = Text()
        text.append("📋 ")
        text.append("Todo Removed", style="bold #94a3b8")

        if isinstance(result, str) and result.strip():
            text.append("\n ")
            text.append(result.strip(), style="dim")
        elif result and isinstance(result, dict):
            if result.get("success"):
                _format_todo_lines(text, result)
            else:
                error = result.get("error", "Failed to remove todo")
                text.append("\n ")
                text.append(error, style="#ef4444")
        else:
            text.append("\n ")
            text.append("Removing...", style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(text, classes=css_classes)

@@ -1,5 +1,6 @@
from typing import Any, ClassVar

from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
@@ -12,32 +13,38 @@ class UserMessageRenderer(BaseToolRenderer):
    css_classes: ClassVar[list[str]] = ["chat-message", "user-message"]

    @classmethod
    def render(cls, tool_data: dict[str, Any]) -> Static:
        content = tool_data.get("content", "")

        if not content:
            return Static(Text(), classes=" ".join(cls.css_classes))

        if len(content) > 300:
            content = content[:297] + "..."
        styled_text = cls._format_user_message(content)

        return Static(styled_text, classes=" ".join(cls.css_classes))

    @classmethod
    def render_simple(cls, content: str) -> Text:
        if not content:
            return Text()

        if len(content) > 300:
            content = content[:297] + "..."
        return cls._format_user_message(content)

    @classmethod
    def _format_user_message(cls, content: str) -> Text:
        text = Text()

        text.append("▍", style="#3b82f6")
        text.append(" ")
        text.append("You:", style="bold")
        text.append("\n")

        lines = content.split("\n")
        for i, line in enumerate(lines):
            if i > 0:
                text.append("\n")
            text.append("▍", style="#3b82f6")
            text.append(" ")
            text.append(line)

        return text

@@ -1,5 +1,6 @@
from typing import Any, ClassVar

from rich.text import Text
from textual.widgets import Static

from .base_renderer import BaseToolRenderer
@@ -16,13 +17,13 @@ class WebSearchRenderer(BaseToolRenderer):
        args = tool_data.get("args", {})
        query = args.get("query", "")

        text = Text()
        text.append("🌐 ")
        text.append("Searching the web...", style="bold #60a5fa")

        if query:
            text.append("\n ")
            text.append(query, style="dim")

        css_classes = cls.get_css_classes("completed")
        return Static(text, classes=css_classes)

File diff suppressed because it is too large
@@ -38,6 +38,165 @@ def get_severity_color(severity: str) -> str:
    return severity_colors.get(severity, "#6b7280")


def get_cvss_color(cvss_score: float) -> str:
    if cvss_score >= 9.0:
        return "#dc2626"
    if cvss_score >= 7.0:
        return "#ea580c"
    if cvss_score >= 4.0:
        return "#d97706"
    if cvss_score >= 0.1:
        return "#65a30d"
    return "#6b7280"
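These bands track the CVSS v3.x qualitative severity scale (9.0+ critical, 7.0–8.9 high, 4.0–6.9 medium, 0.1–3.9 low, 0.0 none). A standalone copy of the logic for a quick sanity check (duplicated here for illustration, not imported from Strix):

```python
def get_cvss_color(cvss_score: float) -> str:
    # CVSS v3.x bands: critical, high, medium, low, none/informational.
    if cvss_score >= 9.0:
        return "#dc2626"
    if cvss_score >= 7.0:
        return "#ea580c"
    if cvss_score >= 4.0:
        return "#d97706"
    if cvss_score >= 0.1:
        return "#65a30d"
    return "#6b7280"


print(get_cvss_color(9.8), get_cvss_color(5.0), get_cvss_color(0.0))
# → #dc2626 #d97706 #6b7280
```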


def format_vulnerability_report(report: dict[str, Any]) -> Text:  # noqa: PLR0912, PLR0915
    """Format a vulnerability report for CLI display with all rich fields."""
    field_style = "bold #4ade80"

    text = Text()

    title = report.get("title", "")
    if title:
        text.append("Vulnerability Report", style="bold #ea580c")
        text.append("\n\n")
        text.append("Title: ", style=field_style)
        text.append(title)

    severity = report.get("severity", "")
    if severity:
        text.append("\n\n")
        text.append("Severity: ", style=field_style)
        severity_color = get_severity_color(severity.lower())
        text.append(severity.upper(), style=f"bold {severity_color}")

    cvss = report.get("cvss")
    if cvss is not None:
        text.append("\n\n")
        text.append("CVSS Score: ", style=field_style)
        cvss_color = get_cvss_color(cvss)
        text.append(f"{cvss:.1f}", style=f"bold {cvss_color}")

    target = report.get("target")
    if target:
        text.append("\n\n")
        text.append("Target: ", style=field_style)
        text.append(target)

    endpoint = report.get("endpoint")
    if endpoint:
        text.append("\n\n")
        text.append("Endpoint: ", style=field_style)
        text.append(endpoint)

    method = report.get("method")
    if method:
        text.append("\n\n")
        text.append("Method: ", style=field_style)
        text.append(method)

    cve = report.get("cve")
    if cve:
        text.append("\n\n")
        text.append("CVE: ", style=field_style)
        text.append(cve)

    cvss_breakdown = report.get("cvss_breakdown", {})
    if cvss_breakdown:
        text.append("\n\n")
        cvss_parts = []
        if cvss_breakdown.get("attack_vector"):
            cvss_parts.append(f"AV:{cvss_breakdown['attack_vector']}")
        if cvss_breakdown.get("attack_complexity"):
            cvss_parts.append(f"AC:{cvss_breakdown['attack_complexity']}")
        if cvss_breakdown.get("privileges_required"):
            cvss_parts.append(f"PR:{cvss_breakdown['privileges_required']}")
        if cvss_breakdown.get("user_interaction"):
            cvss_parts.append(f"UI:{cvss_breakdown['user_interaction']}")
        if cvss_breakdown.get("scope"):
            cvss_parts.append(f"S:{cvss_breakdown['scope']}")
        if cvss_breakdown.get("confidentiality"):
            cvss_parts.append(f"C:{cvss_breakdown['confidentiality']}")
        if cvss_breakdown.get("integrity"):
            cvss_parts.append(f"I:{cvss_breakdown['integrity']}")
        if cvss_breakdown.get("availability"):
            cvss_parts.append(f"A:{cvss_breakdown['availability']}")
        if cvss_parts:
            text.append("CVSS Vector: ", style=field_style)
            text.append("/".join(cvss_parts), style="dim")

    description = report.get("description")
    if description:
        text.append("\n\n")
        text.append("Description", style=field_style)
        text.append("\n")
        text.append(description)

    impact = report.get("impact")
    if impact:
        text.append("\n\n")
        text.append("Impact", style=field_style)
        text.append("\n")
        text.append(impact)

    technical_analysis = report.get("technical_analysis")
    if technical_analysis:
        text.append("\n\n")
        text.append("Technical Analysis", style=field_style)
        text.append("\n")
        text.append(technical_analysis)

    poc_description = report.get("poc_description")
    if poc_description:
        text.append("\n\n")
        text.append("PoC Description", style=field_style)
        text.append("\n")
        text.append(poc_description)

    poc_script_code = report.get("poc_script_code")
    if poc_script_code:
        text.append("\n\n")
        text.append("PoC Code", style=field_style)
        text.append("\n")
        text.append(poc_script_code, style="dim")

    code_file = report.get("code_file")
    if code_file:
        text.append("\n\n")
        text.append("Code File: ", style=field_style)
        text.append(code_file)

    code_before = report.get("code_before")
    if code_before:
        text.append("\n\n")
        text.append("Code Before", style=field_style)
        text.append("\n")
        text.append(code_before, style="dim")

    code_after = report.get("code_after")
    if code_after:
        text.append("\n\n")
        text.append("Code After", style=field_style)
        text.append("\n")
        text.append(code_after, style="dim")

    code_diff = report.get("code_diff")
    if code_diff:
        text.append("\n\n")
        text.append("Code Diff", style=field_style)
        text.append("\n")
        text.append(code_diff, style="dim")

    remediation_steps = report.get("remediation_steps")
    if remediation_steps:
        text.append("\n\n")
        text.append("Remediation", style=field_style)
        text.append("\n")
        text.append(remediation_steps)

    return text


def _build_vulnerability_stats(stats_text: Text, tracer: Any) -> None:
    """Build vulnerability section of stats text."""
    vuln_count = len(tracer.vulnerability_reports)
@@ -134,6 +293,12 @@ def build_live_stats_text(tracer: Any, agent_config: dict[str, Any] | None = Non
    if not tracer:
        return stats_text

    if agent_config:
        llm_config = agent_config["llm_config"]
        model = getattr(llm_config, "model_name", "Unknown")
        stats_text.append(f"🧠 Model: {model}")
        stats_text.append("\n")

    vuln_count = len(tracer.vulnerability_reports)
    tool_count = tracer.get_real_tool_count()
    agent_count = len(tracer.agents)
@@ -165,12 +330,6 @@ def build_live_stats_text(tracer: Any, agent_config: dict[str, Any] | None = Non

    stats_text.append("\n")

    stats_text.append("🤖 Agents: ", style="bold white")
    stats_text.append(str(agent_count), style="dim white")
    stats_text.append(" • ", style="dim white")
@@ -202,6 +361,31 @@ def build_live_stats_text(tracer: Any, agent_config: dict[str, Any] | None = Non
    return stats_text


def build_tui_stats_text(tracer: Any, agent_config: dict[str, Any] | None = None) -> Text:
    stats_text = Text()
    if not tracer:
        return stats_text

    if agent_config:
        llm_config = agent_config["llm_config"]
        model = getattr(llm_config, "model_name", "Unknown")
        stats_text.append(model, style="dim")

    llm_stats = tracer.get_total_llm_stats()
    total_stats = llm_stats["total"]

    total_tokens = total_stats["input_tokens"] + total_stats["output_tokens"]
    if total_tokens > 0:
        stats_text.append("\n")
        stats_text.append(f"{format_token_count(total_tokens)} tokens", style="dim")

    if total_stats["cost"] > 0:
        stats_text.append("\n")
        stats_text.append(f"${total_stats['cost']:.2f} spent", style="dim")

    return stats_text


# Name generation utilities


@@ -404,6 +588,47 @@ def collect_local_sources(targets_info: list[dict[str, Any]]) -> list[dict[str,
    return local_sources


def _is_localhost_host(host: str) -> bool:
    host_lower = host.lower().strip("[]")

    if host_lower in ("localhost", "0.0.0.0", "::1"):  # nosec B104
        return True

    try:
        ip = ipaddress.ip_address(host_lower)
        if isinstance(ip, ipaddress.IPv4Address):
            return ip.is_loopback  # 127.0.0.0/8
        if isinstance(ip, ipaddress.IPv6Address):
            return ip.is_loopback  # ::1
    except ValueError:
        pass

    return False
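The helper above accepts hostnames, bracketed IPv6 literals, and any loopback address, not just `127.0.0.1`. A self-contained sketch of the same logic (mirroring, not importing, Strix's `_is_localhost_host`):

```python
import ipaddress


def is_localhost_host(host: str) -> bool:
    # Strip IPv6 URL brackets, check well-known names, then fall back to
    # the loopback test, which covers all of 127.0.0.0/8 and ::1.
    host_lower = host.lower().strip("[]")
    if host_lower in ("localhost", "0.0.0.0", "::1"):
        return True
    try:
        return ipaddress.ip_address(host_lower).is_loopback
    except ValueError:
        return False  # not an IP literal, e.g. a domain name


print([is_localhost_host(h) for h in ("127.0.0.1", "[::1]", "example.com", "127.8.8.8")])
# → [True, True, False, True]
```

Note that any address in `127.0.0.0/8` counts, which is why `127.8.8.8` is rewritten to the host gateway just like `127.0.0.1`.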


def rewrite_localhost_targets(targets_info: list[dict[str, Any]], host_gateway: str) -> None:
    from yarl import URL  # type: ignore[import-not-found]

    for target_info in targets_info:
        target_type = target_info.get("type")
        details = target_info.get("details", {})

        if target_type == "web_application":
            target_url = details.get("target_url", "")
            try:
                url = URL(target_url)
            except (ValueError, TypeError):
                continue

            if url.host and _is_localhost_host(url.host):
                details["target_url"] = str(url.with_host(host_gateway))

        elif target_type == "ip_address":
            target_ip = details.get("target_ip", "")
            if target_ip and _is_localhost_host(target_ip):
                details["target_ip"] = host_gateway


# Repository utilities
def clone_repository(repo_url: str, run_name: str, dest_name: str | None = None) -> str:
    console = Console()
@@ -494,9 +719,10 @@ def check_docker_connection() -> Any:
    error_text.append("DOCKER NOT AVAILABLE", style="bold red")
    error_text.append("\n\n", style="white")
    error_text.append("Cannot connect to Docker daemon.\n", style="white")
    error_text.append(
        "Please ensure Docker Desktop is installed and running, and try running strix again.\n",
        style="white",
    )

    panel = Panel(
        error_text,

@@ -1,3 +1,6 @@
import logging
import warnings

import litellm

from .config import LLMConfig
@@ -11,3 +14,6 @@ __all__ = [
]

litellm._logging._disable_debugging()
logging.getLogger("asyncio").setLevel(logging.CRITICAL)
logging.getLogger("asyncio").propagate = False
warnings.filterwarnings("ignore", category=RuntimeWarning, module="asyncio")

@@ -1,4 +1,4 @@
from strix.config import Config


class LLMConfig:
@@ -6,18 +6,18 @@ class LLMConfig:
        self,
        model_name: str | None = None,
        enable_prompt_caching: bool = True,
        prompt_modules: list[str] | None = None,
        skills: list[str] | None = None,
        timeout: int | None = None,
        scan_mode: str = "deep",
    ):
        self.model_name = model_name or Config.get("strix_llm")

        if not self.model_name:
            raise ValueError("STRIX_LLM environment variable must be set and not empty")

        self.enable_prompt_caching = enable_prompt_caching
        self.prompt_modules = prompt_modules or []
        self.skills = skills or []

        self.timeout = timeout or int(Config.get("llm_timeout") or "300")

        self.scan_mode = scan_mode if scan_mode in ["quick", "standard", "deep"] else "deep"

strix/llm/dedupe.py (new file, 218 lines)
import json
import logging
import re
from typing import Any

import litellm

from strix.config import Config


logger = logging.getLogger(__name__)

DEDUPE_SYSTEM_PROMPT = """You are an expert vulnerability report deduplication judge.
Your task is to determine if a candidate vulnerability report describes the SAME vulnerability
as any existing report.

CRITICAL DEDUPLICATION RULES:

1. SAME VULNERABILITY means:
- Same root cause (e.g., "missing input validation" not just "SQL injection")
- Same affected component/endpoint/file (exact match or clear overlap)
- Same exploitation method or attack vector
- Would be fixed by the same code change/patch

2. NOT DUPLICATES if:
- Different endpoints even with same vulnerability type (e.g., SQLi in /login vs /search)
- Different parameters in same endpoint (e.g., XSS in 'name' vs 'comment' field)
- Different root causes (e.g., stored XSS vs reflected XSS in same field)
- Different severity levels due to different impact
- One is authenticated, other is unauthenticated

3. ARE DUPLICATES even if:
- Titles are worded differently
- Descriptions have different level of detail
- PoC uses different payloads but exploits same issue
- One report is more thorough than another
- Minor variations in technical analysis

COMPARISON GUIDELINES:
- Focus on the technical root cause, not surface-level similarities
- Same vulnerability type (SQLi, XSS) doesn't mean duplicate - location matters
- Consider the fix: would fixing one also fix the other?
- When uncertain, lean towards NOT duplicate

FIELDS TO ANALYZE:
- title, description: General vulnerability info
- target, endpoint, method: Exact location of vulnerability
- technical_analysis: Root cause details
- poc_description: How it's exploited
- impact: What damage it can cause

YOU MUST RESPOND WITH EXACTLY THIS XML FORMAT AND NOTHING ELSE:

<dedupe_result>
<is_duplicate>true</is_duplicate>
<duplicate_id>vuln-0001</duplicate_id>
<confidence>0.95</confidence>
<reason>Both reports describe SQL injection in /api/login via the username parameter</reason>
</dedupe_result>

OR if not a duplicate:

<dedupe_result>
<is_duplicate>false</is_duplicate>
<duplicate_id></duplicate_id>
<confidence>0.90</confidence>
<reason>Different endpoints: candidate is /api/search, existing is /api/login</reason>
</dedupe_result>

RULES:
- is_duplicate MUST be exactly "true" or "false" (lowercase)
- duplicate_id MUST be the exact ID from existing reports or empty if not duplicate
- confidence MUST be a decimal (your confidence level in the decision)
- reason MUST be a specific explanation mentioning endpoint/parameter/root cause
- DO NOT include any text outside the <dedupe_result> tags"""


def _prepare_report_for_comparison(report: dict[str, Any]) -> dict[str, Any]:
    relevant_fields = [
        "id",
        "title",
        "description",
        "impact",
        "target",
        "technical_analysis",
        "poc_description",
        "endpoint",
        "method",
    ]

    cleaned = {}
    for field in relevant_fields:
        if report.get(field):
            value = report[field]
            if isinstance(value, str) and len(value) > 8000:
                value = value[:8000] + "...[truncated]"
            cleaned[field] = value

    return cleaned
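The field-whitelisting and truncation behavior above can be checked in isolation. This is a standalone copy of the same logic with a smaller field list for illustration:

```python
from typing import Any


def prepare_report(report: dict[str, Any]) -> dict[str, Any]:
    # Subset of the real field list, for illustration.
    relevant_fields = ["id", "title", "description"]
    cleaned = {}
    for field in relevant_fields:
        if report.get(field):
            value = report[field]
            if isinstance(value, str) and len(value) > 8000:
                # Cap very long fields before sending them to the LLM.
                value = value[:8000] + "...[truncated]"
            cleaned[field] = value
    return cleaned
```

Fields outside the whitelist (and falsy fields) are dropped entirely, which keeps the comparison payload small.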


def _extract_xml_field(content: str, field: str) -> str:
    pattern = rf"<{field}>(.*?)</{field}>"
    match = re.search(pattern, content, re.DOTALL | re.IGNORECASE)
    if match:
        return match.group(1).strip()
    return ""
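The extraction helper above is easy to exercise standalone; note the `DOTALL` and `IGNORECASE` flags, which make it tolerant of multi-line values and mixed-case tags:

```python
import re


def extract_xml_field(content: str, field: str) -> str:
    # Non-greedy match so the first closing tag ends the field.
    pattern = rf"<{field}>(.*?)</{field}>"
    match = re.search(pattern, content, re.DOTALL | re.IGNORECASE)
    if match:
        return match.group(1).strip()
    return ""
```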


def _parse_dedupe_response(content: str) -> dict[str, Any]:
    result_match = re.search(
        r"<dedupe_result>(.*?)</dedupe_result>", content, re.DOTALL | re.IGNORECASE
    )

    if not result_match:
        logger.warning(f"No <dedupe_result> block found in response: {content[:500]}")
        raise ValueError("No <dedupe_result> block found in response")

    result_content = result_match.group(1)

    is_duplicate_str = _extract_xml_field(result_content, "is_duplicate")
    duplicate_id = _extract_xml_field(result_content, "duplicate_id")
    confidence_str = _extract_xml_field(result_content, "confidence")
    reason = _extract_xml_field(result_content, "reason")

    is_duplicate = is_duplicate_str.lower() == "true"

    try:
        confidence = float(confidence_str) if confidence_str else 0.0
    except ValueError:
        confidence = 0.0

    return {
        "is_duplicate": is_duplicate,
        "duplicate_id": duplicate_id[:64] if duplicate_id else "",
        "confidence": confidence,
        "reason": reason[:500] if reason else "",
    }
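The parsing path above, fed a sample reply in the prompt's mandated format, can be sketched as a self-contained function (same regexes and clamping as the diff, condensed for illustration):

```python
import re


def _field(content: str, field: str) -> str:
    m = re.search(rf"<{field}>(.*?)</{field}>", content, re.DOTALL | re.IGNORECASE)
    return m.group(1).strip() if m else ""


def parse_dedupe(content: str) -> dict:
    m = re.search(r"<dedupe_result>(.*?)</dedupe_result>", content, re.DOTALL | re.IGNORECASE)
    if not m:
        raise ValueError("No <dedupe_result> block found in response")
    body = m.group(1)
    try:
        confidence = float(_field(body, "confidence") or 0.0)
    except ValueError:
        confidence = 0.0
    return {
        "is_duplicate": _field(body, "is_duplicate").lower() == "true",
        "duplicate_id": _field(body, "duplicate_id")[:64],  # clamp to 64 chars
        "confidence": confidence,
        "reason": _field(body, "reason")[:500],  # clamp to 500 chars
    }


sample = """<dedupe_result>
<is_duplicate>true</is_duplicate>
<duplicate_id>vuln-0001</duplicate_id>
<confidence>0.95</confidence>
<reason>Same SQLi in /api/login username parameter</reason>
</dedupe_result>"""
```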


def check_duplicate(
    candidate: dict[str, Any], existing_reports: list[dict[str, Any]]
) -> dict[str, Any]:
    if not existing_reports:
        return {
            "is_duplicate": False,
            "duplicate_id": "",
            "confidence": 1.0,
            "reason": "No existing reports to compare against",
        }

    try:
        candidate_cleaned = _prepare_report_for_comparison(candidate)
        existing_cleaned = [_prepare_report_for_comparison(r) for r in existing_reports]

        comparison_data = {"candidate": candidate_cleaned, "existing_reports": existing_cleaned}

        model_name = Config.get("strix_llm")
        api_key = Config.get("llm_api_key")
        api_base = (
            Config.get("llm_api_base")
            or Config.get("openai_api_base")
            or Config.get("litellm_base_url")
            or Config.get("ollama_api_base")
        )

        messages = [
            {"role": "system", "content": DEDUPE_SYSTEM_PROMPT},
            {
                "role": "user",
                "content": (
                    f"Compare this candidate vulnerability against existing reports:\n\n"
                    f"{json.dumps(comparison_data, indent=2)}\n\n"
                    f"Respond with ONLY the <dedupe_result> XML block."
                ),
            },
        ]

        completion_kwargs: dict[str, Any] = {
            "model": model_name,
            "messages": messages,
            "timeout": 120,
            "temperature": 0,
        }
        if api_key:
            completion_kwargs["api_key"] = api_key
        if api_base:
            completion_kwargs["api_base"] = api_base

        response = litellm.completion(**completion_kwargs)

        content = response.choices[0].message.content
        if not content:
            return {
                "is_duplicate": False,
                "duplicate_id": "",
                "confidence": 0.0,
                "reason": "Empty response from LLM",
            }

        result = _parse_dedupe_response(content)

        logger.info(
            f"Deduplication check: is_duplicate={result['is_duplicate']}, "
            f"confidence={result['confidence']}, reason={result['reason'][:100]}"
        )

    except Exception as e:
        logger.exception("Error during vulnerability deduplication check")
        return {
            "is_duplicate": False,
            "duplicate_id": "",
            "confidence": 0.0,
            "reason": f"Deduplication check failed: {e}",
            "error": str(e),
        }
    else:
        return result
strix/llm/llm.py (603 lines)
@@ -1,41 +1,29 @@
import logging
import os
import asyncio
from collections.abc import AsyncIterator
from dataclasses import dataclass
from enum import Enum
from fnmatch import fnmatch
from pathlib import Path
from typing import Any

import litellm
from jinja2 import (
    Environment,
    FileSystemLoader,
    select_autoescape,
)
from litellm import ModelResponse, completion_cost
from jinja2 import Environment, FileSystemLoader, select_autoescape
from litellm import acompletion, completion_cost, stream_chunk_builder, supports_reasoning
from litellm.utils import supports_prompt_caching, supports_vision

from strix.config import Config
from strix.llm.config import LLMConfig
from strix.llm.memory_compressor import MemoryCompressor
from strix.llm.request_queue import get_global_queue
from strix.llm.utils import _truncate_to_first_function, parse_tool_invocations
from strix.prompts import load_prompt_modules
from strix.llm.utils import (
    _truncate_to_first_function,
    fix_incomplete_tool_call,
    parse_tool_invocations,
)
from strix.skills import load_skills
from strix.tools import get_tools_prompt
from strix.utils.resource_paths import get_strix_resource_path


logger = logging.getLogger(__name__)

litellm.drop_params = True
litellm.modify_params = True

_LLM_API_KEY = os.getenv("LLM_API_KEY")
_LLM_API_BASE = (
    os.getenv("LLM_API_BASE")
    or os.getenv("OPENAI_API_BASE")
    or os.getenv("LITELLM_BASE_URL")
    or os.getenv("OLLAMA_API_BASE")
)


class LLMRequestFailedError(Exception):
    def __init__(self, message: str, details: str | None = None):
@@ -44,70 +32,11 @@ class LLMRequestFailedError(Exception):
        self.details = details


SUPPORTS_STOP_WORDS_FALSE_PATTERNS: list[str] = [
    "o1*",
    "grok-4-0709",
    "grok-code-fast-1",
    "deepseek-r1-0528*",
]

REASONING_EFFORT_PATTERNS: list[str] = [
    "o1-2024-12-17",
    "o1",
    "o3",
    "o3-2025-04-16",
    "o3-mini-2025-01-31",
    "o3-mini",
    "o4-mini",
    "o4-mini-2025-04-16",
    "gemini-2.5-flash",
    "gemini-2.5-pro",
    "gpt-5*",
    "deepseek-r1-0528*",
    "claude-sonnet-4-5*",
    "claude-haiku-4-5*",
]


def normalize_model_name(model: str) -> str:
    raw = (model or "").strip().lower()
    if "/" in raw:
        name = raw.split("/")[-1]
        if ":" in name:
            name = name.split(":", 1)[0]
    else:
        name = raw
    if name.endswith("-gguf"):
        name = name[: -len("-gguf")]
    return name


def model_matches(model: str, patterns: list[str]) -> bool:
    raw = (model or "").strip().lower()
    name = normalize_model_name(model)
    for pat in patterns:
        pat_l = pat.lower()
        if "/" in pat_l:
            if fnmatch(raw, pat_l):
                return True
        elif fnmatch(name, pat_l):
            return True
    return False
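The normalization and glob-matching helpers above are self-contained and can be exercised directly. This standalone copy shows the behavior: provider prefixes and `:tag` suffixes are stripped before matching, and patterns containing `/` match the raw string instead:

```python
from fnmatch import fnmatch


def normalize_model_name(model: str) -> str:
    raw = (model or "").strip().lower()
    if "/" in raw:
        name = raw.split("/")[-1]  # drop provider prefix, e.g. "ollama/"
        if ":" in name:
            name = name.split(":", 1)[0]  # drop tag suffix, e.g. ":8b"
    else:
        name = raw
    if name.endswith("-gguf"):
        name = name[: -len("-gguf")]
    return name


def model_matches(model: str, patterns: list[str]) -> bool:
    raw = (model or "").strip().lower()
    name = normalize_model_name(model)
    for pat in patterns:
        pat_l = pat.lower()
        if "/" in pat_l:
            # Patterns with a slash are matched against the full provider/model string.
            if fnmatch(raw, pat_l):
                return True
        elif fnmatch(name, pat_l):
            return True
    return False
```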


class StepRole(str, Enum):
    AGENT = "agent"
    USER = "user"
    SYSTEM = "system"


@dataclass
class LLMResponse:
    content: str
    tool_invocations: list[dict[str, Any]] | None = None
    scan_id: str | None = None
    step_number: int = 1
    role: StepRole = StepRole.AGENT
    thinking_blocks: list[dict[str, Any]] | None = None


@dataclass
@@ -115,69 +44,63 @@ class RequestStats:
    input_tokens: int = 0
    output_tokens: int = 0
    cached_tokens: int = 0
    cache_creation_tokens: int = 0
    cost: float = 0.0
    requests: int = 0
    failed_requests: int = 0

    def to_dict(self) -> dict[str, int | float]:
        return {
            "input_tokens": self.input_tokens,
            "output_tokens": self.output_tokens,
            "cached_tokens": self.cached_tokens,
            "cache_creation_tokens": self.cache_creation_tokens,
            "cost": round(self.cost, 4),
            "requests": self.requests,
            "failed_requests": self.failed_requests,
        }


class LLM:
    def __init__(
        self, config: LLMConfig, agent_name: str | None = None, agent_id: str | None = None
    ):
    def __init__(self, config: LLMConfig, agent_name: str | None = None):
        self.config = config
        self.agent_name = agent_name
        self.agent_id = agent_id
        self.agent_id: str | None = None
        self._total_stats = RequestStats()
        self._last_request_stats = RequestStats()
        self.memory_compressor = MemoryCompressor(model_name=config.model_name)
        self.system_prompt = self._load_system_prompt(agent_name)

        self.memory_compressor = MemoryCompressor(
            model_name=self.config.model_name,
            timeout=self.config.timeout,
        )
        reasoning = Config.get("strix_reasoning_effort")
        if reasoning:
            self._reasoning_effort = reasoning
        elif config.scan_mode == "quick":
            self._reasoning_effort = "medium"
        else:
            self._reasoning_effort = "high"

        if agent_name:
            prompt_dir = Path(__file__).parent.parent / "agents" / agent_name
            prompts_dir = Path(__file__).parent.parent / "prompts"
    def _load_system_prompt(self, agent_name: str | None) -> str:
        if not agent_name:
            return ""

            loader = FileSystemLoader([prompt_dir, prompts_dir])
            self.jinja_env = Environment(
                loader=loader,
        try:
            prompt_dir = get_strix_resource_path("agents", agent_name)
            skills_dir = get_strix_resource_path("skills")
            env = Environment(
                loader=FileSystemLoader([prompt_dir, skills_dir]),
                autoescape=select_autoescape(enabled_extensions=(), default_for_string=False),
            )

            try:
                modules_to_load = list(self.config.prompt_modules or [])
                modules_to_load.append(f"scan_modes/{self.config.scan_mode}")
            skills_to_load = [
                *list(self.config.skills or []),
                f"scan_modes/{self.config.scan_mode}",
            ]
            skill_content = load_skills(skills_to_load, env)
            env.globals["get_skill"] = lambda name: skill_content.get(name, "")

                prompt_module_content = load_prompt_modules(modules_to_load, self.jinja_env)

                def get_module(name: str) -> str:
                    return prompt_module_content.get(name, "")

                self.jinja_env.globals["get_module"] = get_module

                self.system_prompt = self.jinja_env.get_template("system_prompt.jinja").render(
            result = env.get_template("system_prompt.jinja").render(
                get_tools_prompt=get_tools_prompt,
                loaded_module_names=list(prompt_module_content.keys()),
                **prompt_module_content,
                loaded_skill_names=list(skill_content.keys()),
                **skill_content,
            )
            except (FileNotFoundError, OSError, ValueError) as e:
                logger.warning(f"Failed to load system prompt for {agent_name}: {e}")
                self.system_prompt = "You are a helpful AI assistant."
            else:
                self.system_prompt = "You are a helpful AI assistant."
            return str(result)
        except Exception:  # noqa: BLE001
            return ""

    def set_agent_identity(self, agent_name: str | None, agent_id: str | None) -> None:
        if agent_name:
@@ -185,335 +108,211 @@ class LLM:
        if agent_id:
            self.agent_id = agent_id

    def _build_identity_message(self) -> dict[str, Any] | None:
        if not (self.agent_name and str(self.agent_name).strip()):
            return None
        identity_name = self.agent_name
        identity_id = self.agent_id
        content = (
            "\n\n"
            "<agent_identity>\n"
            "<meta>Internal metadata: do not echo or reference; "
            "not part of history or tool calls.</meta>\n"
            "<note>You are now assuming the role of this agent. "
            "Act strictly as this agent and maintain self-identity for this step. "
            "Now go answer the next needed step!</note>\n"
            f"<agent_name>{identity_name}</agent_name>\n"
            f"<agent_id>{identity_id}</agent_id>\n"
            "</agent_identity>\n\n"
        )
        return {"role": "user", "content": content}
    async def generate(
        self, conversation_history: list[dict[str, Any]]
    ) -> AsyncIterator[LLMResponse]:
        messages = self._prepare_messages(conversation_history)
        max_retries = int(Config.get("strix_llm_max_retries") or "5")

    def _add_cache_control_to_content(
        self, content: str | list[dict[str, Any]]
    ) -> str | list[dict[str, Any]]:
        if isinstance(content, str):
            return [{"type": "text", "text": content, "cache_control": {"type": "ephemeral"}}]
        if isinstance(content, list) and content:
            last_item = content[-1]
            if isinstance(last_item, dict) and last_item.get("type") == "text":
                return content[:-1] + [{**last_item, "cache_control": {"type": "ephemeral"}}]
            return content
        for attempt in range(max_retries + 1):
            try:
                async for response in self._stream(messages):
                    yield response
                return  # noqa: TRY300
            except Exception as e:  # noqa: BLE001
                if attempt >= max_retries or not self._should_retry(e):
                    self._raise_error(e)
                wait = min(10, 2 * (2**attempt))
                await asyncio.sleep(wait)
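The retry loop in the new `generate` produces an exponential backoff capped at 10 seconds. A one-line sketch of the delay schedule:

```python
def backoff_schedule(max_retries: int) -> list[int]:
    # Delay before each retry attempt: 2, 4, 8, ... seconds, capped at 10.
    return [min(10, 2 * (2**attempt)) for attempt in range(max_retries)]
```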

    def _is_anthropic_model(self) -> bool:
        if not self.config.model_name:
            return False
        model_lower = self.config.model_name.lower()
        return any(provider in model_lower for provider in ["anthropic/", "claude"])
    async def _stream(self, messages: list[dict[str, Any]]) -> AsyncIterator[LLMResponse]:
        accumulated = ""
        chunks: list[Any] = []

    def _calculate_cache_interval(self, total_messages: int) -> int:
        if total_messages <= 1:
            return 10
        self._total_stats.requests += 1
        response = await acompletion(**self._build_completion_args(messages), stream=True)

        max_cached_messages = 3
        non_system_messages = total_messages - 1

        interval = 10
        while non_system_messages // interval > max_cached_messages:
            interval += 10

        return interval
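The removed `_calculate_cache_interval` picked the smallest multiple of 10 such that at most 3 non-system messages carry a cache breakpoint. A standalone copy of that computation:

```python
def cache_interval(total_messages: int) -> int:
    if total_messages <= 1:
        return 10
    max_cached_messages = 3
    non_system_messages = total_messages - 1

    # Widen the stride until no more than 3 messages would be marked.
    interval = 10
    while non_system_messages // interval > max_cached_messages:
        interval += 10
    return interval
```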

    def _prepare_cached_messages(self, messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
        if (
            not self.config.enable_prompt_caching
            or not supports_prompt_caching(self.config.model_name)
            or not messages
        ):
            return messages

        if not self._is_anthropic_model():
            return messages

        cached_messages = list(messages)

        if cached_messages and cached_messages[0].get("role") == "system":
            system_message = cached_messages[0].copy()
            system_message["content"] = self._add_cache_control_to_content(
                system_message["content"]
            )
            cached_messages[0] = system_message

        total_messages = len(cached_messages)
        if total_messages > 1:
            interval = self._calculate_cache_interval(total_messages)

            cached_count = 0
            for i in range(interval, total_messages, interval):
                if cached_count >= 3:
        async for chunk in response:
            chunks.append(chunk)
            delta = self._get_chunk_content(chunk)
            if delta:
                accumulated += delta
                if "</function>" in accumulated:
                    accumulated = accumulated[
                        : accumulated.find("</function>") + len("</function>")
                    ]
                    yield LLMResponse(content=accumulated)
                    break
                yield LLMResponse(content=accumulated)

                if i < len(cached_messages):
                    message = cached_messages[i].copy()
                    message["content"] = self._add_cache_control_to_content(message["content"])
                    cached_messages[i] = message
                    cached_count += 1
        if chunks:
            self._update_usage_stats(stream_chunk_builder(chunks))

        return cached_messages
        accumulated = fix_incomplete_tool_call(_truncate_to_first_function(accumulated))
        yield LLMResponse(
            content=accumulated,
            tool_invocations=parse_tool_invocations(accumulated),
            thinking_blocks=self._extract_thinking(chunks),
        )

    async def generate(  # noqa: PLR0912, PLR0915
        self,
        conversation_history: list[dict[str, Any]],
        scan_id: str | None = None,
        step_number: int = 1,
    ) -> LLMResponse:
    def _prepare_messages(self, conversation_history: list[dict[str, Any]]) -> list[dict[str, Any]]:
        messages = [{"role": "system", "content": self.system_prompt}]

        identity_message = self._build_identity_message()
        if identity_message:
            messages.append(identity_message)

        compressed_history = list(self.memory_compressor.compress_history(conversation_history))

        conversation_history.clear()
        conversation_history.extend(compressed_history)
        messages.extend(compressed_history)

        cached_messages = self._prepare_cached_messages(messages)

        try:
            response = await self._make_request(cached_messages)
            self._update_usage_stats(response)

            content = ""
            if (
                response.choices
                and hasattr(response.choices[0], "message")
                and response.choices[0].message
            ):
                content = getattr(response.choices[0].message, "content", "") or ""

            content = _truncate_to_first_function(content)

            if "</function>" in content:
                function_end_index = content.find("</function>") + len("</function>")
                content = content[:function_end_index]

            tool_invocations = parse_tool_invocations(content)

            return LLMResponse(
                scan_id=scan_id,
                step_number=step_number,
                role=StepRole.AGENT,
                content=content,
                tool_invocations=tool_invocations if tool_invocations else None,
            )

        except litellm.RateLimitError as e:
            raise LLMRequestFailedError("LLM request failed: Rate limit exceeded", str(e)) from e
        except litellm.AuthenticationError as e:
            raise LLMRequestFailedError("LLM request failed: Invalid API key", str(e)) from e
        except litellm.NotFoundError as e:
            raise LLMRequestFailedError("LLM request failed: Model not found", str(e)) from e
        except litellm.ContextWindowExceededError as e:
            raise LLMRequestFailedError("LLM request failed: Context too long", str(e)) from e
        except litellm.ContentPolicyViolationError as e:
            raise LLMRequestFailedError(
                "LLM request failed: Content policy violation", str(e)
            ) from e
        except litellm.ServiceUnavailableError as e:
            raise LLMRequestFailedError("LLM request failed: Service unavailable", str(e)) from e
        except litellm.Timeout as e:
            raise LLMRequestFailedError("LLM request failed: Request timed out", str(e)) from e
        except litellm.UnprocessableEntityError as e:
            raise LLMRequestFailedError("LLM request failed: Unprocessable entity", str(e)) from e
        except litellm.InternalServerError as e:
            raise LLMRequestFailedError("LLM request failed: Internal server error", str(e)) from e
        except litellm.APIConnectionError as e:
            raise LLMRequestFailedError("LLM request failed: Connection error", str(e)) from e
        except litellm.UnsupportedParamsError as e:
            raise LLMRequestFailedError("LLM request failed: Unsupported parameters", str(e)) from e
        except litellm.BudgetExceededError as e:
            raise LLMRequestFailedError("LLM request failed: Budget exceeded", str(e)) from e
        except litellm.APIResponseValidationError as e:
            raise LLMRequestFailedError(
                "LLM request failed: Response validation error", str(e)
            ) from e
        except litellm.JSONSchemaValidationError as e:
            raise LLMRequestFailedError(
                "LLM request failed: JSON schema validation error", str(e)
            ) from e
        except litellm.InvalidRequestError as e:
            raise LLMRequestFailedError("LLM request failed: Invalid request", str(e)) from e
        except litellm.BadRequestError as e:
            raise LLMRequestFailedError("LLM request failed: Bad request", str(e)) from e
        except litellm.APIError as e:
            raise LLMRequestFailedError("LLM request failed: API error", str(e)) from e
        except litellm.OpenAIError as e:
            raise LLMRequestFailedError("LLM request failed: OpenAI error", str(e)) from e
        except Exception as e:
            raise LLMRequestFailedError(f"LLM request failed: {type(e).__name__}", str(e)) from e

    @property
    def usage_stats(self) -> dict[str, dict[str, int | float]]:
        return {
            "total": self._total_stats.to_dict(),
            "last_request": self._last_request_stats.to_dict(),
        }

    def get_cache_config(self) -> dict[str, bool]:
        return {
            "enabled": self.config.enable_prompt_caching,
            "supported": supports_prompt_caching(self.config.model_name),
        }

    def _should_include_stop_param(self) -> bool:
        if not self.config.model_name:
            return True

        return not model_matches(self.config.model_name, SUPPORTS_STOP_WORDS_FALSE_PATTERNS)

    def _should_include_reasoning_effort(self) -> bool:
        if not self.config.model_name:
            return False

        return model_matches(self.config.model_name, REASONING_EFFORT_PATTERNS)

    def _model_supports_vision(self) -> bool:
        if not self.config.model_name:
            return False
        try:
            return bool(supports_vision(model=self.config.model_name))
        except Exception:  # noqa: BLE001
            return False

    def _filter_images_from_messages(self, messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
        filtered_messages = []
        for msg in messages:
            content = msg.get("content")
            updated_msg = msg
            if isinstance(content, list):
                filtered_content = []
                for item in content:
                    if isinstance(item, dict):
                        if item.get("type") == "image_url":
                            filtered_content.append(
        if self.agent_name:
            messages.append(
                {
                    "type": "text",
                    "text": "[Screenshot removed - model does not support "
                    "vision. Use view_source or execute_js instead.]",
                    "role": "user",
                    "content": (
                        f"\n\n<agent_identity>\n"
                        f"<meta>Internal metadata: do not echo or reference.</meta>\n"
                        f"<agent_name>{self.agent_name}</agent_name>\n"
                        f"<agent_id>{self.agent_id}</agent_id>\n"
                        f"</agent_identity>\n\n"
                    ),
                }
            )
                        else:
                            filtered_content.append(item)
                    else:
                        filtered_content.append(item)
                if filtered_content:
                    text_parts = [
                        item.get("text", "") if isinstance(item, dict) else str(item)
                        for item in filtered_content
                    ]
                    all_text = all(
                        isinstance(item, dict) and item.get("type") == "text"
                        for item in filtered_content
                    )
                    if all_text:
                        updated_msg = {**msg, "content": "\n".join(text_parts)}
                    else:
                        updated_msg = {**msg, "content": filtered_content}
                else:
                    updated_msg = {**msg, "content": ""}
            filtered_messages.append(updated_msg)
        return filtered_messages

    async def _make_request(
        self,
        messages: list[dict[str, Any]],
    ) -> ModelResponse:
        if not self._model_supports_vision():
            messages = self._filter_images_from_messages(messages)
        compressed = list(self.memory_compressor.compress_history(conversation_history))
        conversation_history.clear()
        conversation_history.extend(compressed)
        messages.extend(compressed)

        completion_args: dict[str, Any] = {
        if self._is_anthropic() and self.config.enable_prompt_caching:
            messages = self._add_cache_control(messages)

        return messages

    def _build_completion_args(self, messages: list[dict[str, Any]]) -> dict[str, Any]:
        if not self._supports_vision():
            messages = self._strip_images(messages)

        args: dict[str, Any] = {
            "model": self.config.model_name,
            "messages": messages,
            "timeout": self.config.timeout,
            "stream_options": {"include_usage": True},
        }

        if _LLM_API_KEY:
            completion_args["api_key"] = _LLM_API_KEY
        if _LLM_API_BASE:
            completion_args["api_base"] = _LLM_API_BASE
        if api_key := Config.get("llm_api_key"):
            args["api_key"] = api_key
        if api_base := (
            Config.get("llm_api_base")
            or Config.get("openai_api_base")
            or Config.get("litellm_base_url")
            or Config.get("ollama_api_base")
        ):
            args["api_base"] = api_base
        if self._supports_reasoning():
            args["reasoning_effort"] = self._reasoning_effort

        if self._should_include_stop_param():
            completion_args["stop"] = ["</function>"]
        return args

        if self._should_include_reasoning_effort():
            completion_args["reasoning_effort"] = "high"
    def _get_chunk_content(self, chunk: Any) -> str:
        if chunk.choices and hasattr(chunk.choices[0], "delta"):
            return getattr(chunk.choices[0].delta, "content", "") or ""
        return ""

        queue = get_global_queue()
        response = await queue.make_request(completion_args)
    def _extract_thinking(self, chunks: list[Any]) -> list[dict[str, Any]] | None:
        if not chunks or not self._supports_reasoning():
            return None
        try:
            resp = stream_chunk_builder(chunks)
            if resp.choices and hasattr(resp.choices[0].message, "thinking_blocks"):
                blocks: list[dict[str, Any]] = resp.choices[0].message.thinking_blocks
                return blocks
        except Exception:  # noqa: BLE001, S110 # nosec B110
            pass
        return None

        self._total_stats.requests += 1
        self._last_request_stats = RequestStats(requests=1)

        return response

    def _update_usage_stats(self, response: ModelResponse) -> None:
    def _update_usage_stats(self, response: Any) -> None:
        try:
            if hasattr(response, "usage") and response.usage:
                input_tokens = getattr(response.usage, "prompt_tokens", 0)
                output_tokens = getattr(response.usage, "completion_tokens", 0)

                cached_tokens = 0
                cache_creation_tokens = 0

                if hasattr(response.usage, "prompt_tokens_details"):
                    prompt_details = response.usage.prompt_tokens_details
                    if hasattr(prompt_details, "cached_tokens"):
                        cached_tokens = prompt_details.cached_tokens or 0

                if hasattr(response.usage, "cache_creation_input_tokens"):
                    cache_creation_tokens = response.usage.cache_creation_input_tokens or 0

            else:
                input_tokens = 0
                output_tokens = 0
                cached_tokens = 0
                cache_creation_tokens = 0

            try:
                cost = completion_cost(response) or 0.0
            except Exception as e:  # noqa: BLE001
                logger.warning(f"Failed to calculate cost: {e}")
            except Exception:  # noqa: BLE001
                cost = 0.0

            self._total_stats.input_tokens += input_tokens
            self._total_stats.output_tokens += output_tokens
            self._total_stats.cached_tokens += cached_tokens
            self._total_stats.cache_creation_tokens += cache_creation_tokens
            self._total_stats.cost += cost

            self._last_request_stats.input_tokens = input_tokens
            self._last_request_stats.output_tokens = output_tokens
            self._last_request_stats.cached_tokens = cached_tokens
            self._last_request_stats.cache_creation_tokens = cache_creation_tokens
            self._last_request_stats.cost = cost
        except Exception:  # noqa: BLE001, S110 # nosec B110
            pass

            if cached_tokens > 0:
                logger.info(f"Cache hit: {cached_tokens} cached tokens, {input_tokens} new tokens")
            if cache_creation_tokens > 0:
                logger.info(f"Cache creation: {cache_creation_tokens} tokens written to cache")
    def _should_retry(self, e: Exception) -> bool:
        code = getattr(e, "status_code", None) or getattr(
            getattr(e, "response", None), "status_code", None
        )
        return code is None or litellm._should_retry(code)

            logger.info(f"Usage stats: {self.usage_stats}")
        except Exception as e:  # noqa: BLE001
            logger.warning(f"Failed to update usage stats: {e}")
    def _raise_error(self, e: Exception) -> None:
        from strix.telemetry import posthog

        posthog.error("llm_error", type(e).__name__)
        raise LLMRequestFailedError(f"LLM request failed: {type(e).__name__}", str(e)) from e

    def _is_anthropic(self) -> bool:
        if not self.config.model_name:
            return False
        return any(p in self.config.model_name.lower() for p in ["anthropic/", "claude"])

    def _supports_vision(self) -> bool:
        try:
            return bool(supports_vision(model=self.config.model_name))
        except Exception:  # noqa: BLE001
            return False

    def _supports_reasoning(self) -> bool:
        try:
            return bool(supports_reasoning(model=self.config.model_name))
        except Exception:  # noqa: BLE001
            return False

    def _strip_images(self, messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
        result = []
        for msg in messages:
            content = msg.get("content")
            if isinstance(content, list):
                text_parts = []
                for item in content:
                    if isinstance(item, dict) and item.get("type") == "text":
                        text_parts.append(item.get("text", ""))
                    elif isinstance(item, dict) and item.get("type") == "image_url":
                        text_parts.append("[Image removed - model doesn't support vision]")
                result.append({**msg, "content": "\n".join(text_parts)})
            else:
                result.append(msg)
        return result
|
||||
|
||||
def _add_cache_control(self, messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
|
||||
if not messages or not supports_prompt_caching(self.config.model_name):
|
||||
return messages
|
||||
|
||||
result = list(messages)
|
||||
|
||||
if result[0].get("role") == "system":
|
||||
content = result[0]["content"]
|
||||
result[0] = {
|
||||
**result[0],
|
||||
"content": [
|
||||
{"type": "text", "text": content, "cache_control": {"type": "ephemeral"}}
|
||||
]
|
||||
if isinstance(content, str)
|
||||
else content,
|
||||
}
|
||||
return result
|
||||
|
||||
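The `_add_cache_control` rewrite can be exercised standalone. This is a minimal sketch that inlines the same message transformation while dropping the `supports_prompt_caching` gate, which depends on the configured model:

```python
from typing import Any


def add_cache_control(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
    # Same rewrite as _add_cache_control, minus the model-capability gate:
    # wrap a string system prompt in a single text block marked ephemeral.
    if not messages:
        return messages
    result = list(messages)
    if result[0].get("role") == "system":
        content = result[0]["content"]
        result[0] = {
            **result[0],
            "content": [
                {"type": "text", "text": content, "cache_control": {"type": "ephemeral"}}
            ]
            if isinstance(content, str)
            else content,
        }
    return result


msgs = [{"role": "system", "content": "You are Strix."}, {"role": "user", "content": "hi"}]
out = add_cache_control(msgs)
print(out[0]["content"][0]["cache_control"])  # {'type': 'ephemeral'}
```

Note that only the first message is rewritten and the input list is left untouched, since `list(messages)` copies the outer list before the element is replaced.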
@@ -1,9 +1,10 @@
 import logging
 import os
 from typing import Any

 import litellm

+from strix.config import Config
+

 logger = logging.getLogger(__name__)
@@ -85,7 +86,7 @@ def _extract_message_text(msg: dict[str, Any]) -> str:
 def _summarize_messages(
     messages: list[dict[str, Any]],
     model: str,
-    timeout: int = 600,
+    timeout: int = 30,
 ) -> dict[str, Any]:
     if not messages:
         empty_summary = "<context_summary message_count='0'>{text}</context_summary>"
@@ -147,11 +148,11 @@ class MemoryCompressor:
         self,
         max_images: int = 3,
         model_name: str | None = None,
-        timeout: int = 600,
+        timeout: int | None = None,
     ):
         self.max_images = max_images
-        self.model_name = model_name or os.getenv("STRIX_LLM", "openai/gpt-5")
-        self.timeout = timeout
+        self.model_name = model_name or Config.get("strix_llm")
+        self.timeout = timeout or int(Config.get("strix_memory_compressor_timeout") or "30")

         if not self.model_name:
             raise ValueError("STRIX_LLM environment variable must be set and not empty")
@@ -1,87 +0,0 @@
import asyncio
import logging
import os
import threading
import time
from typing import Any

import litellm
from litellm import ModelResponse, completion
from tenacity import retry, retry_if_exception, stop_after_attempt, wait_exponential


logger = logging.getLogger(__name__)


def should_retry_exception(exception: Exception) -> bool:
    status_code = None

    if hasattr(exception, "status_code"):
        status_code = exception.status_code
    elif hasattr(exception, "response") and hasattr(exception.response, "status_code"):
        status_code = exception.response.status_code

    if status_code is not None:
        return bool(litellm._should_retry(status_code))
    return True


class LLMRequestQueue:
    def __init__(self, max_concurrent: int = 1, delay_between_requests: float = 4.0):
        rate_limit_delay = os.getenv("LLM_RATE_LIMIT_DELAY")
        if rate_limit_delay:
            delay_between_requests = float(rate_limit_delay)

        rate_limit_concurrent = os.getenv("LLM_RATE_LIMIT_CONCURRENT")
        if rate_limit_concurrent:
            max_concurrent = int(rate_limit_concurrent)

        self.max_concurrent = max_concurrent
        self.delay_between_requests = delay_between_requests
        self._semaphore = threading.BoundedSemaphore(max_concurrent)
        self._last_request_time = 0.0
        self._lock = threading.Lock()

    async def make_request(self, completion_args: dict[str, Any]) -> ModelResponse:
        try:
            while not self._semaphore.acquire(timeout=0.2):
                await asyncio.sleep(0.1)

            with self._lock:
                now = time.time()
                time_since_last = now - self._last_request_time
                sleep_needed = max(0, self.delay_between_requests - time_since_last)
                self._last_request_time = now + sleep_needed

            if sleep_needed > 0:
                await asyncio.sleep(sleep_needed)

            return await self._reliable_request(completion_args)
        finally:
            self._semaphore.release()

    @retry(  # type: ignore[misc]
        stop=stop_after_attempt(3),
        wait=wait_exponential(multiplier=8, min=8, max=64),
        retry=retry_if_exception(should_retry_exception),
        reraise=True,
    )
    async def _reliable_request(self, completion_args: dict[str, Any]) -> ModelResponse:
        response = completion(**completion_args, stream=False)
        if isinstance(response, ModelResponse):
            return response
        self._raise_unexpected_response()
        raise RuntimeError("Unreachable code")

    def _raise_unexpected_response(self) -> None:
        raise RuntimeError("Unexpected response type")


_global_queue: LLMRequestQueue | None = None


def get_global_queue() -> LLMRequestQueue:
    global _global_queue  # noqa: PLW0603
    if _global_queue is None:
        _global_queue = LLMRequestQueue()
    return _global_queue
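The removed queue's pacing logic (compute how long to sleep so requests stay `delay_between_requests` apart, and reserve the next slot before sleeping) can be sketched without the LLM plumbing; this single-threaded sketch drops the lock and semaphore:

```python
import time


class Throttle:
    # Minimal re-creation of LLMRequestQueue's spacing logic: each caller
    # reserves the next send slot, then sleeps until that slot arrives.
    def __init__(self, delay_between_requests: float = 0.05):
        self.delay_between_requests = delay_between_requests
        self._last_request_time = 0.0

    def wait_turn(self) -> float:
        now = time.time()
        sleep_needed = max(0.0, self.delay_between_requests - (now - self._last_request_time))
        self._last_request_time = now + sleep_needed
        if sleep_needed > 0:
            time.sleep(sleep_needed)
        return sleep_needed


throttle = Throttle(delay_between_requests=0.05)
waits = [throttle.wait_turn() for _ in range(3)]
print(waits[0])  # 0.0 (the first request goes through immediately)
```

Reserving the slot (`_last_request_time = now + sleep_needed`) before sleeping is what keeps concurrent callers in the real queue from all computing the same gap.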
@@ -18,7 +18,7 @@ def _truncate_to_first_function(content: str) -> str:


 def parse_tool_invocations(content: str) -> list[dict[str, Any]] | None:
-    content = _fix_stopword(content)
+    content = fix_incomplete_tool_call(content)

     tool_invocations: list[dict[str, Any]] = []

@@ -46,12 +46,15 @@ def parse_tool_invocations(content: str) -> list[dict[str, Any]] | None:
     return tool_invocations if tool_invocations else None


-def _fix_stopword(content: str) -> str:
-    if "<function=" in content and content.count("<function=") == 1:
-        if content.endswith("</"):
-            content = content.rstrip() + "function>"
-        elif not content.rstrip().endswith("</function>"):
-            content = content + "\n</function>"
+def fix_incomplete_tool_call(content: str) -> str:
+    """Fix incomplete tool calls by adding missing </function> tag."""
+    if (
+        "<function=" in content
+        and content.count("<function=") == 1
+        and "</function>" not in content
+    ):
+        content = content.rstrip()
+        content = content + "function>" if content.endswith("</") else content + "\n</function>"
     return content


@@ -70,11 +73,17 @@ def clean_content(content: str) -> str:
     if not content:
         return ""

-    content = _fix_stopword(content)
+    content = fix_incomplete_tool_call(content)

     tool_pattern = r"<function=[^>]+>.*?</function>"
     cleaned = re.sub(tool_pattern, "", content, flags=re.DOTALL)

+    incomplete_tool_pattern = r"<function=[^>]+>.*$"
+    cleaned = re.sub(incomplete_tool_pattern, "", cleaned, flags=re.DOTALL)
+
+    partial_tag_pattern = r"<f(?:u(?:n(?:c(?:t(?:i(?:o(?:n(?:=(?:[^>]*)?)?)?)?)?)?)?)?)?$"
+    cleaned = re.sub(partial_tag_pattern, "", cleaned)
+
     hidden_xml_patterns = [
         r"<inter_agent_message>.*?</inter_agent_message>",
         r"<agent_completion_report>.*?</agent_completion_report>",
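The new `fix_incomplete_tool_call` helper repairs a single truncated tool call. Its two branches behave like this (the function body is copied from the diff above; the `<function=run>` payloads are illustrative):

```python
def fix_incomplete_tool_call(content: str) -> str:
    """Fix incomplete tool calls by adding missing </function> tag."""
    # Only repair when exactly one call is present and its closing tag
    # never arrived; already-closed content passes through unchanged.
    if (
        "<function=" in content
        and content.count("<function=") == 1
        and "</function>" not in content
    ):
        content = content.rstrip()
        content = content + "function>" if content.endswith("</") else content + "\n</function>"
    return content


print(fix_incomplete_tool_call("<function=run><arg>ls</arg></"))
# <function=run><arg>ls</arg></function>
print(fix_incomplete_tool_call("<function=run>x</function>"))
# <function=run>x</function>
```

The `</`-suffix branch covers a model that stopped mid-closing-tag; the other branch appends a full `</function>` on a new line.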
@@ -1,64 +0,0 @@
# 📚 Strix Prompt Modules

## 🎯 Overview

Prompt modules are specialized knowledge packages that enhance Strix agents with deep expertise in specific vulnerability types, technologies, and testing methodologies. Each module provides advanced techniques, practical examples, and validation methods that go beyond baseline security knowledge.

---

## 🏗️ Architecture

### How Prompts Work

When an agent is created, it can load up to 5 specialized prompt modules relevant to the specific subtask and context at hand:

```python
# Agent creation with specialized modules
create_agent(
    task="Test authentication mechanisms in API",
    name="Auth Specialist",
    prompt_modules="authentication_jwt,business_logic"
)
```

The modules are dynamically injected into the agent's system prompt, allowing it to operate with deep expertise tailored to the specific vulnerability types or technologies required for the task at hand.

---

## 📁 Module Categories

| Category | Purpose |
|----------|---------|
| **`/vulnerabilities`** | Advanced testing techniques for core vulnerability classes like authentication bypasses, business logic flaws, and race conditions |
| **`/frameworks`** | Specific testing methods for popular frameworks, e.g. Django, Express, FastAPI, and Next.js |
| **`/technologies`** | Specialized techniques for third-party services such as Supabase, Firebase, Auth0, and payment gateways |
| **`/protocols`** | Protocol-specific testing patterns for GraphQL, WebSocket, OAuth, and other communication standards |
| **`/cloud`** | Cloud provider security testing for AWS, Azure, GCP, and Kubernetes environments |
| **`/reconnaissance`** | Advanced information gathering and enumeration techniques for comprehensive attack surface mapping |
| **`/custom`** | Community-contributed modules for specialized or industry-specific testing scenarios |

---

## 🎨 Creating New Modules

### What Should a Module Contain?

A good prompt module is a structured knowledge package that typically includes:

- **Advanced techniques** - Non-obvious methods specific to the task and domain
- **Practical examples** - Working payloads, commands, or test cases with variations
- **Validation methods** - How to confirm findings and avoid false positives
- **Context-specific insights** - Environment and version nuances, configuration-dependent behavior, and edge cases

Modules use XML-style tags for structure and focus on deep, specialized knowledge that significantly enhances agent capabilities for that specific context.

---

## 🤝 Contributing

Community contributions are more than welcome — contribute new modules via [pull requests](https://github.com/usestrix/strix/pulls) or [GitHub issues](https://github.com/usestrix/strix/issues) to help expand the collection and improve extensibility for Strix agents.

---

> [!NOTE]
> **Work in Progress** - We're actively expanding the prompt module collection with specialized techniques and new categories.
@@ -1,109 +0,0 @@
from pathlib import Path

from jinja2 import Environment


def get_available_prompt_modules() -> dict[str, list[str]]:
    modules_dir = Path(__file__).parent
    available_modules = {}

    for category_dir in modules_dir.iterdir():
        if category_dir.is_dir() and not category_dir.name.startswith("__"):
            category_name = category_dir.name
            modules = []

            for file_path in category_dir.glob("*.jinja"):
                module_name = file_path.stem
                modules.append(module_name)

            if modules:
                available_modules[category_name] = sorted(modules)

    return available_modules


def get_all_module_names() -> set[str]:
    all_modules = set()
    for category_modules in get_available_prompt_modules().values():
        all_modules.update(category_modules)
    return all_modules


def validate_module_names(module_names: list[str]) -> dict[str, list[str]]:
    available_modules = get_all_module_names()
    valid_modules = []
    invalid_modules = []

    for module_name in module_names:
        if module_name in available_modules:
            valid_modules.append(module_name)
        else:
            invalid_modules.append(module_name)

    return {"valid": valid_modules, "invalid": invalid_modules}


def generate_modules_description() -> str:
    available_modules = get_available_prompt_modules()

    if not available_modules:
        return "No prompt modules available"

    all_module_names = get_all_module_names()

    if not all_module_names:
        return "No prompt modules available"

    sorted_modules = sorted(all_module_names)
    modules_str = ", ".join(sorted_modules)

    description = (
        f"List of prompt modules to load for this agent (max 5). Available modules: {modules_str}. "
    )

    example_modules = sorted_modules[:2]
    if example_modules:
        example = f"Example: {', '.join(example_modules)} for specialized agent"
        description += example

    return description


def load_prompt_modules(module_names: list[str], jinja_env: Environment) -> dict[str, str]:
    import logging

    logger = logging.getLogger(__name__)
    module_content = {}
    prompts_dir = Path(__file__).parent

    available_modules = get_available_prompt_modules()

    for module_name in module_names:
        try:
            module_path = None

            if "/" in module_name:
                module_path = f"{module_name}.jinja"
            else:
                for category, modules in available_modules.items():
                    if module_name in modules:
                        module_path = f"{category}/{module_name}.jinja"
                        break

            if not module_path:
                root_candidate = f"{module_name}.jinja"
                if (prompts_dir / root_candidate).exists():
                    module_path = root_candidate

            if module_path and (prompts_dir / module_path).exists():
                template = jinja_env.get_template(module_path)
                var_name = module_name.split("/")[-1]
                module_content[var_name] = template.render()
                logger.info(f"Loaded prompt module: {module_name} -> {var_name}")
            else:
                logger.warning(f"Prompt module not found: {module_name}")

        except (FileNotFoundError, OSError, ValueError) as e:
            logger.warning(f"Failed to load prompt module {module_name}: {e}")

    return module_content
@@ -1,10 +1,19 @@
-import os
+from strix.config import Config

 from .runtime import AbstractRuntime


+class SandboxInitializationError(Exception):
+    """Raised when sandbox initialization fails (e.g., Docker issues)."""
+
+    def __init__(self, message: str, details: str | None = None):
+        super().__init__(message)
+        self.message = message
+        self.details = details
+
+
 def get_runtime() -> AbstractRuntime:
-    runtime_backend = os.getenv("STRIX_RUNTIME_BACKEND", "docker")
+    runtime_backend = Config.get("strix_runtime_backend")

     if runtime_backend == "docker":
         from .docker_runtime import DockerRuntime
@@ -16,4 +25,4 @@ def get_runtime() -> AbstractRuntime:
     )


-__all__ = ["AbstractRuntime", "get_runtime"]
+__all__ = ["AbstractRuntime", "SandboxInitializationError", "get_runtime"]
@@ -4,27 +4,49 @@ import os
 import secrets
 import socket
 import time
+from concurrent.futures import ThreadPoolExecutor
+from concurrent.futures import TimeoutError as FuturesTimeoutError
 from pathlib import Path
-from typing import cast
+from typing import Any, cast

 import docker
 from docker.errors import DockerException, ImageNotFound, NotFound
 from docker.models.containers import Container
+from requests.exceptions import ConnectionError as RequestsConnectionError
+from requests.exceptions import Timeout as RequestsTimeout
+
+from strix.config import Config

 from . import SandboxInitializationError
 from .runtime import AbstractRuntime, SandboxInfo


-STRIX_IMAGE = os.getenv("STRIX_IMAGE", "ghcr.io/usestrix/strix-sandbox:0.1.10")
 HOST_GATEWAY_HOSTNAME = "host.docker.internal"
+DOCKER_TIMEOUT = 60  # seconds
+TOOL_SERVER_HEALTH_REQUEST_TIMEOUT = 5  # seconds per health check request
+TOOL_SERVER_HEALTH_RETRIES = 10  # number of retries for health check
 logger = logging.getLogger(__name__)


 class DockerRuntime(AbstractRuntime):
     def __init__(self) -> None:
         try:
-            self.client = docker.from_env()
-        except DockerException as e:
+            self.client = docker.from_env(timeout=DOCKER_TIMEOUT)
+        except (DockerException, RequestsConnectionError, RequestsTimeout) as e:
             logger.exception("Failed to connect to Docker daemon")
-            raise RuntimeError("Docker is not available or not configured correctly.") from e
+            if isinstance(e, RequestsConnectionError | RequestsTimeout):
+                raise SandboxInitializationError(
+                    "Docker daemon unresponsive",
+                    f"Connection timed out after {DOCKER_TIMEOUT} seconds. "
+                    "Please ensure Docker Desktop is installed and running, "
+                    "and try running strix again.",
+                ) from e
+            raise SandboxInitializationError(
+                "Docker is not available",
+                "Docker is not available or not configured correctly. "
+                "Please ensure Docker Desktop is installed and running, "
+                "and try running strix again.",
+            ) from e

         self._scan_container: Container | None = None
         self._tool_server_port: int | None = None
@@ -38,6 +60,23 @@ class DockerRuntime(AbstractRuntime):
         s.bind(("", 0))
         return cast("int", s.getsockname()[1])

+    def _exec_run_with_timeout(
+        self, container: Container, cmd: str, timeout: int = DOCKER_TIMEOUT, **kwargs: Any
+    ) -> Any:
+        with ThreadPoolExecutor(max_workers=1) as executor:
+            future = executor.submit(container.exec_run, cmd, **kwargs)
+            try:
+                return future.result(timeout=timeout)
+            except FuturesTimeoutError:
+                logger.exception(f"exec_run timed out after {timeout}s: {cmd[:100]}...")
+                raise SandboxInitializationError(
+                    "Container command timed out",
+                    f"Command timed out after {timeout} seconds. "
+                    "Docker may be overloaded or unresponsive. "
+                    "Please ensure Docker Desktop is installed and running, "
+                    "and try running strix again.",
+                ) from None
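`_exec_run_with_timeout` wraps the blocking `exec_run` call in a single-worker executor so the wait on its result can be bounded. The same pattern works for any blocking call; `slow_call` here is a stand-in, and this sketch returns `None` where the real code raises `SandboxInitializationError`:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FuturesTimeoutError


def run_with_timeout(fn, timeout: float, *args, **kwargs):
    # Same shape as _exec_run_with_timeout: submit the blocking call to a
    # one-worker pool and bound the wait on its future.
    with ThreadPoolExecutor(max_workers=1) as executor:
        future = executor.submit(fn, *args, **kwargs)
        try:
            return future.result(timeout=timeout)
        except FuturesTimeoutError:
            return None  # the real code raises SandboxInitializationError here


def slow_call(seconds: float) -> str:
    time.sleep(seconds)
    return "done"


print(run_with_timeout(slow_call, 1.0, 0.01))  # done
print(run_with_timeout(slow_call, 0.05, 0.3))  # None (timed out)
```

One caveat of this pattern: leaving the `with` block still waits for the abandoned task to finish (executor shutdown joins its worker), so the underlying `exec_run` keeps running to completion even after the caller has timed out.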
     def _get_scan_id(self, agent_id: str) -> str:
         try:
             from strix.telemetry.tracer import get_global_tracer
@@ -80,10 +119,13 @@ class DockerRuntime(AbstractRuntime):
     def _create_container_with_retry(self, scan_id: str, max_retries: int = 3) -> Container:
         last_exception = None
         container_name = f"strix-scan-{scan_id}"
+        image_name = Config.get("strix_image")
+        if not image_name:
+            raise ValueError("STRIX_IMAGE must be configured")

         for attempt in range(max_retries):
             try:
-                self._verify_image_available(STRIX_IMAGE)
+                self._verify_image_available(image_name)

                 try:
                     existing_container = self.client.containers.get(container_name)
@@ -105,7 +147,7 @@ class DockerRuntime(AbstractRuntime):
                 self._tool_server_token = tool_server_token

                 container = self.client.containers.run(
-                    STRIX_IMAGE,
+                    image_name,
                     command="sleep infinity",
                     detach=True,
                     name=container_name,
@@ -121,7 +163,9 @@ class DockerRuntime(AbstractRuntime):
                         "CAIDO_PORT": str(caido_port),
                         "TOOL_SERVER_PORT": str(tool_server_port),
                         "TOOL_SERVER_TOKEN": tool_server_token,
+                        "HOST_GATEWAY": HOST_GATEWAY_HOSTNAME,
                     },
+                    extra_hosts=self._get_extra_hosts(),
                     tty=True,
                 )

@@ -131,7 +175,7 @@ class DockerRuntime(AbstractRuntime):
                 self._initialize_container(
                     container, caido_port, tool_server_port, tool_server_token
                 )
-            except DockerException as e:
+            except (DockerException, RequestsConnectionError, RequestsTimeout) as e:
                 last_exception = e
                 if attempt == max_retries - 1:
                     logger.exception(f"Failed to create container after {max_retries} attempts")
@@ -147,8 +191,19 @@ class DockerRuntime(AbstractRuntime):
             else:
                 return container

-        raise RuntimeError(
-            f"Failed to create Docker container after {max_retries} attempts: {last_exception}"
-        )
+        if isinstance(last_exception, RequestsConnectionError | RequestsTimeout):
+            raise SandboxInitializationError(
+                "Failed to create sandbox container",
+                f"Docker daemon unresponsive after {max_retries} attempts "
+                f"(timed out after {DOCKER_TIMEOUT}s). "
+                "Please ensure Docker Desktop is installed and running, "
+                "and try running strix again.",
+            ) from last_exception
+        raise SandboxInitializationError(
+            "Failed to create sandbox container",
+            f"Container creation failed after {max_retries} attempts: {last_exception}. "
+            "Please ensure Docker Desktop is installed and running, "
+            "and try running strix again.",
+        ) from last_exception

     def _get_or_create_scan_container(self, scan_id: str) -> Container:  # noqa: PLR0912
@@ -193,7 +248,7 @@ class DockerRuntime(AbstractRuntime):

             except NotFound:
                 pass
-            except DockerException as e:
+            except (DockerException, RequestsConnectionError, RequestsTimeout) as e:
                 logger.warning(f"Failed to get container by name {container_name}: {e}")
             else:
                 return container
@@ -217,7 +272,7 @@ class DockerRuntime(AbstractRuntime):

                 logger.info(f"Found existing container by label for scan {scan_id}")
                 return container
-            except DockerException as e:
+            except (DockerException, RequestsConnectionError, RequestsTimeout) as e:
                 logger.warning("Failed to find existing container by label for scan %s: %s", scan_id, e)

         logger.info("Creating new Docker container for scan %s", scan_id)
@@ -227,15 +282,18 @@ class DockerRuntime(AbstractRuntime):
         self, container: Container, caido_port: int, tool_server_port: int, tool_server_token: str
     ) -> None:
         logger.info("Initializing Caido proxy on port %s", caido_port)
-        result = container.exec_run(
+        self._exec_run_with_timeout(
+            container,
             f"bash -c 'export CAIDO_PORT={caido_port} && /usr/local/bin/docker-entrypoint.sh true'",
             detach=False,
         )

         time.sleep(5)

-        result = container.exec_run(
-            "bash -c 'source /etc/profile.d/proxy.sh && echo $CAIDO_API_TOKEN'", user="pentester"
-        )
+        result = self._exec_run_with_timeout(
+            container,
+            "bash -c 'source /etc/profile.d/proxy.sh && echo $CAIDO_API_TOKEN'",
+            user="pentester",
+        )
         caido_token = result.output.decode().strip() if result.exit_code == 0 else ""

@@ -248,7 +306,57 @@ class DockerRuntime(AbstractRuntime):
             user="pentester",
         )

-        time.sleep(5)
+        time.sleep(2)
+
+        host = self._resolve_docker_host()
+        health_url = f"http://{host}:{tool_server_port}/health"
+        self._wait_for_tool_server_health(health_url)
+
+    def _wait_for_tool_server_health(
+        self,
+        health_url: str,
+        max_retries: int = TOOL_SERVER_HEALTH_RETRIES,
+        request_timeout: int = TOOL_SERVER_HEALTH_REQUEST_TIMEOUT,
+    ) -> None:
+        import httpx
+
+        logger.info(f"Waiting for tool server health at {health_url}")
+
+        for attempt in range(max_retries):
+            try:
+                with httpx.Client(trust_env=False, timeout=request_timeout) as client:
+                    response = client.get(health_url)
+                    response.raise_for_status()
+                    health_data = response.json()
+
+                    if health_data.get("status") == "healthy":
+                        logger.info(
+                            f"Tool server is healthy after {attempt + 1} attempt(s): {health_data}"
+                        )
+                        return
+
+                    logger.warning(f"Tool server returned unexpected status: {health_data}")
+
+            except httpx.ConnectError:
+                logger.debug(
+                    f"Tool server not ready (attempt {attempt + 1}/{max_retries}): "
+                    f"Connection refused"
+                )
+            except httpx.TimeoutException:
+                logger.debug(
+                    f"Tool server not ready (attempt {attempt + 1}/{max_retries}): "
+                    f"Request timed out"
+                )
+            except (httpx.RequestError, httpx.HTTPStatusError) as e:
+                logger.debug(f"Tool server not ready (attempt {attempt + 1}/{max_retries}): {e}")
+
+            sleep_time = min(2**attempt * 0.5, 5)
+            time.sleep(sleep_time)
+
+        raise SandboxInitializationError(
+            "Tool server failed to start",
+            "Please ensure Docker Desktop is installed and running, and try running strix again.",
+        )
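`_wait_for_tool_server_health` retries with capped exponential backoff: `min(2**attempt * 0.5, 5)` seconds between attempts. The schedule that formula produces over the ten configured retries:

```python
TOOL_SERVER_HEALTH_RETRIES = 10  # same constant as in docker_runtime.py

# Backoff between health-check attempts: doubles from 0.5s and is capped
# at 5s, so even a server that never comes up fails in well under a minute.
schedule = [min(2**attempt * 0.5, 5) for attempt in range(TOOL_SERVER_HEALTH_RETRIES)]
print(schedule)       # [0.5, 1.0, 2.0, 4.0, 5, 5, 5, 5, 5, 5]
print(sum(schedule))  # 37.5 seconds of total sleep in the worst case
```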
     def _copy_local_directory_to_container(
         self, container: Container, local_path: str, target_name: str | None = None
@@ -381,6 +489,9 @@ class DockerRuntime(AbstractRuntime):

         return "127.0.0.1"

+    def _get_extra_hosts(self) -> dict[str, str]:
+        return {HOST_GATEWAY_HOSTNAME: "host-gateway"}
+
     async def destroy_sandbox(self, container_id: str) -> None:
         logger.info("Destroying scan container %s", container_id)
         try:
64  strix/skills/README.md  Normal file
@@ -0,0 +1,64 @@
# 📚 Strix Skills

## 🎯 Overview

Skills are specialized knowledge packages that enhance Strix agents with deep expertise in specific vulnerability types, technologies, and testing methodologies. Each skill provides advanced techniques, practical examples, and validation methods that go beyond baseline security knowledge.

---

## 🏗️ Architecture

### How Skills Work

When an agent is created, it can load up to 5 specialized skills relevant to the specific subtask and context at hand:

```python
# Agent creation with specialized skills
create_agent(
    task="Test authentication mechanisms in API",
    name="Auth Specialist",
    skills="authentication_jwt,business_logic"
)
```

The skills are dynamically injected into the agent's system prompt, allowing it to operate with deep expertise tailored to the specific vulnerability types or technologies required for the task at hand.

---

## 📁 Skill Categories

| Category | Purpose |
|----------|---------|
| **`/vulnerabilities`** | Advanced testing techniques for core vulnerability classes like authentication bypasses, business logic flaws, and race conditions |
| **`/frameworks`** | Specific testing methods for popular frameworks, e.g. Django, Express, FastAPI, and Next.js |
| **`/technologies`** | Specialized techniques for third-party services such as Supabase, Firebase, Auth0, and payment gateways |
| **`/protocols`** | Protocol-specific testing patterns for GraphQL, WebSocket, OAuth, and other communication standards |
| **`/cloud`** | Cloud provider security testing for AWS, Azure, GCP, and Kubernetes environments |
| **`/reconnaissance`** | Advanced information gathering and enumeration techniques for comprehensive attack surface mapping |
| **`/custom`** | Community-contributed skills for specialized or industry-specific testing scenarios |

---

## 🎨 Creating New Skills

### What Should a Skill Contain?

A good skill is a structured knowledge package that typically includes:

- **Advanced techniques** - Non-obvious methods specific to the task and domain
- **Practical examples** - Working payloads, commands, or test cases with variations
- **Validation methods** - How to confirm findings and avoid false positives
- **Context-specific insights** - Environment and version nuances, configuration-dependent behavior, and edge cases

Skills use XML-style tags for structure and focus on deep, specialized knowledge that significantly enhances agent capabilities for that specific context.

---

## 🤝 Contributing

Community contributions are more than welcome — contribute new skills via [pull requests](https://github.com/usestrix/strix/pulls) or [GitHub issues](https://github.com/usestrix/strix/issues) to help expand the collection and improve extensibility for Strix agents.

---

> [!NOTE]
> **Work in Progress** - We're actively expanding the skills collection with specialized techniques and new categories.
110  strix/skills/__init__.py  Normal file
@@ -0,0 +1,110 @@
from jinja2 import Environment

from strix.utils.resource_paths import get_strix_resource_path


def get_available_skills() -> dict[str, list[str]]:
    skills_dir = get_strix_resource_path("skills")
    available_skills: dict[str, list[str]] = {}

    if not skills_dir.exists():
        return available_skills

    for category_dir in skills_dir.iterdir():
        if category_dir.is_dir() and not category_dir.name.startswith("__"):
            category_name = category_dir.name
            skills = []

            for file_path in category_dir.glob("*.jinja"):
                skill_name = file_path.stem
                skills.append(skill_name)

            if skills:
                available_skills[category_name] = sorted(skills)

    return available_skills


def get_all_skill_names() -> set[str]:
    all_skills = set()
    for category_skills in get_available_skills().values():
        all_skills.update(category_skills)
    return all_skills


def validate_skill_names(skill_names: list[str]) -> dict[str, list[str]]:
    available_skills = get_all_skill_names()
    valid_skills = []
    invalid_skills = []

    for skill_name in skill_names:
        if skill_name in available_skills:
            valid_skills.append(skill_name)
        else:
            invalid_skills.append(skill_name)

    return {"valid": valid_skills, "invalid": invalid_skills}


def generate_skills_description() -> str:
    available_skills = get_available_skills()

    if not available_skills:
        return "No skills available"

    all_skill_names = get_all_skill_names()

    if not all_skill_names:
        return "No skills available"

    sorted_skills = sorted(all_skill_names)
    skills_str = ", ".join(sorted_skills)

    description = f"List of skills to load for this agent (max 5). Available skills: {skills_str}. "

    example_skills = sorted_skills[:2]
    if example_skills:
        example = f"Example: {', '.join(example_skills)} for specialized agent"
        description += example

    return description


def load_skills(skill_names: list[str], jinja_env: Environment) -> dict[str, str]:
    import logging

    logger = logging.getLogger(__name__)
    skill_content = {}
    skills_dir = get_strix_resource_path("skills")

    available_skills = get_available_skills()

    for skill_name in skill_names:
        try:
            skill_path = None

            if "/" in skill_name:
                skill_path = f"{skill_name}.jinja"
            else:
                for category, skills in available_skills.items():
                    if skill_name in skills:
                        skill_path = f"{category}/{skill_name}.jinja"
                        break

            if not skill_path:
                root_candidate = f"{skill_name}.jinja"
                if (skills_dir / root_candidate).exists():
                    skill_path = root_candidate

            if skill_path and (skills_dir / skill_path).exists():
                template = jinja_env.get_template(skill_path)
                var_name = skill_name.split("/")[-1]
                skill_content[var_name] = template.render()
                logger.info(f"Loaded skill: {skill_name} -> {var_name}")
            else:
                logger.warning(f"Skill not found: {skill_name}")

        except (FileNotFoundError, OSError, ValueError) as e:
            logger.warning(f"Failed to load skill {skill_name}: {e}")

    return skill_content
@@ -31,6 +31,18 @@
</high_value_targets>

<advanced_techniques>
<route_enumeration>
- __BUILD_MANIFEST.sortedPages: Execute `console.log(__BUILD_MANIFEST.sortedPages.join('\n'))` in the browser console to instantly reveal all registered routes (Pages Router and static App Router paths compiled at build time)
- __NEXT_DATA__: Inspect `<script id="__NEXT_DATA__">` for server-side props, pageProps, buildId, and dynamic route params on the current page; reveals data flow and prop structure
- Source maps exposure: Check `/_next/static/` for exposed .map files revealing full route structure, server action IDs, API endpoints, and internal function names
- Client bundle mining: Search main-*.js and page chunks for route definitions; grep for 'pathname:', 'href:', '__next_route__', 'serverActions', and API endpoint strings
- Static chunk enumeration: Probe `/_next/static/chunks/pages/` and `/_next/static/chunks/app/` for build artifacts; filenames map directly to routes (e.g., `admin.js` → `/admin`)
- Build manifest fetch: GET `/_next/static/<buildId>/_buildManifest.js` and `/_next/static/<buildId>/_ssgManifest.js` for complete route and static generation metadata
- Sitemap/robots leakage: Check `/sitemap.xml`, `/robots.txt`, and `/sitemap-*.xml` for unintended exposure of admin/internal/preview paths
- Server action discovery: Inspect the Network tab for POST requests with a `Next-Action` header; extract action IDs from response streams and client hydration data
- Environment variable leakage: Execute `Object.keys(process.env).filter(k => k.startsWith('NEXT_PUBLIC_'))` in the console to list public env vars; grep bundles for 'API_KEY', 'SECRET', 'TOKEN', 'PASSWORD' to find accidentally leaked credentials
</route_enumeration>

<middleware_bypass>
- Test for CVE-class middleware bypass via `x-middleware-subrequest` crafting and `x-nextjs-data` probing. Look for 307 + `x-middleware-rewrite`/`x-nextjs-redirect` headers and attempt bypass on protected routes.
- Attempt direct route access on Node vs Edge runtimes; confirm protection parity.
@@ -80,6 +92,14 @@
- Identify `dangerouslySetInnerHTML`, Markdown renderers, and user-controlled href/src attributes. Validate CSP/Trusted Types coverage for SSR/CSR/hydration.
- Attack hydration boundaries: server vs client render mismatches can enable gadget-based XSS.
</client_and_dom>

<data_fetching_over_exposure>
- getServerSideProps/getStaticProps leakage: Execute `JSON.parse(document.getElementById('__NEXT_DATA__').textContent).props.pageProps` in the console to inspect all server-fetched data; look for sensitive fields (emails, tokens, internal IDs, full user objects) passed to the client but not rendered in the UI
- Over-fetched database queries: Check if pageProps include entire user records, relations, or admin-only fields when only the username is displayed; common when using ORM select-all patterns
- API response pass-through: Verify if API responses are sanitized before being passed to props; developers often forward entire responses including metadata, cursors, or debug info
- Environment-dependent data: Test if staging/dev accidentally exposes more fields in props than production due to inconsistent serialization logic
- Nested object inspection: Drill into nested props objects; look for `_metadata`, `_internal`, `__typename` (GraphQL), or framework-added fields containing sensitive context
</data_fetching_over_exposure>
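The over-exposure check above is mechanical enough to script: walk `pageProps` and flag any key that looks sensitive. A sketch over a synthetic `__NEXT_DATA__` blob; the marker list is an illustrative starting point, not exhaustive:

```python
import json

SENSITIVE_MARKERS = ("token", "secret", "password", "email", "api_key")


def find_sensitive_props(next_data: str) -> list[str]:
    # Return dotted paths inside props.pageProps whose key looks sensitive.
    props = json.loads(next_data).get("props", {}).get("pageProps", {})
    hits: list[str] = []

    def walk(node, path):
        if isinstance(node, dict):
            for key, value in node.items():
                dotted = f"{path}.{key}" if path else key
                if any(marker in key.lower() for marker in SENSITIVE_MARKERS):
                    hits.append(dotted)
                walk(value, dotted)
        elif isinstance(node, list):
            for i, item in enumerate(node):
                walk(item, f"{path}[{i}]")

    walk(props, "")
    return hits


blob = '{"props": {"pageProps": {"user": {"name": "a", "email": "a@x.io", "apiToken": "t"}}}}'
print(find_sensitive_props(blob))
```

Each hit would then be checked against what the UI actually renders, per the list above.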
</advanced_techniques>

<bypass_techniques>
@@ -87,6 +107,8 @@
- Method override/tunneling: `_method`, `X-HTTP-Method-Override`, GET on endpoints unexpectedly accepting writes.
- Case/param aliasing and query duplication affecting middleware vs handler parsing.
- Cache key confusion at CDN/proxy (lack of Vary on auth cookies/headers) to leak personalized SSR/ISR content.
- API route path normalization: Test `/api/users` vs `/api/users/` vs `/api//users` vs `/api/./users`; middleware may normalize differently than route handlers, allowing protection bypass. Try double slashes, trailing slashes, and dot segments.
- Parameter pollution: Send duplicate query params (`?id=1&id=2`) or array notation (`?filter[]=a&filter[]=b`) to exploit parsing differences between middleware (which may check the first value) and the handler (which may use the last or an array).
</bypass_techniques>
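The path-normalization bullet above enumerates a fixed set of variants per route; generating them is a one-liner each. A sketch that produces the four variants named in the list:

```python
def path_variants(path: str) -> list[str]:
    # Variants that middleware and route handlers may normalize differently.
    head, _, tail = path.rpartition("/")
    return [
        path,                        # canonical form
        path + "/",                  # trailing slash
        path.replace("/", "//", 1),  # doubled leading slash
        f"{head}/./{tail}",          # dot segment before the last component
    ]


print(path_variants("/api/users"))
```

Against a live target, each variant would be requested with and without auth and the status codes compared; a 403/200 split across variants indicates a normalization bypass.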

<special_contexts>
@@ -107,6 +129,10 @@
3. Demonstrate server action invocation outside the UI with insufficient authorization checks.
4. Show middleware bypass (where applicable) with explicit headers and the resulting protected content.
5. Include runtime parity checks (Edge vs Node) proving inconsistent enforcement.
6. For route enumeration: verify discovered routes return 200/403 (deployed) not 404 (build artifacts); test with authenticated vs unauthenticated requests.
7. For leaked credentials: test API keys with minimal read-only calls; filter placeholders (YOUR_API_KEY, demo-token); confirm keys match provider patterns (sk_live_*, pk_prod_*).
8. For __NEXT_DATA__ over-exposure: test cross-user (User A's props should not contain User B's PII); verify exposed fields are not in the DOM; validate token validity with API calls.
9. For path normalization bypasses: show differential responses (403 vs 200 for path variants); redirects (307/308) don't count; only direct access bypasses matter.
</validation>
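The credential-validation step above (item 7) combines a placeholder denylist with provider-prefix patterns. A sketch of that filter; the placeholder set and the `sk_live_`/`pk_prod_` shapes come from the checklist itself, and the minimum-length assumption after the prefix is hypothetical:

```python
import re

PLACEHOLDERS = {"YOUR_API_KEY", "demo-token", "changeme"}
# Prefix shapes taken from the checklist above; the length bound is an assumption.
PROVIDER_PATTERNS = [
    re.compile(r"^sk_live_[A-Za-z0-9]{8,}$"),
    re.compile(r"^pk_prod_[A-Za-z0-9]{8,}$"),
]


def looks_like_real_key(candidate: str) -> bool:
    if candidate in PLACEHOLDERS:
        return False
    return any(p.match(candidate) for p in PROVIDER_PATTERNS)


print(looks_like_real_key("YOUR_API_KEY"))        # False
print(looks_like_real_key("sk_live_a1b2c3d4e5"))  # True
```

Candidates that pass the filter would still need the minimal read-only API call the checklist requires before being reported.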

<pro_tips>
38  strix/telemetry/README.md  Normal file
@@ -0,0 +1,38 @@
### Overview

To help make Strix better for everyone, we collect anonymized data that helps us understand how to improve our AI security agent for our users, guide the addition of new features, and fix common errors and bugs. This feedback loop is crucial for improving Strix's capabilities and user experience.

We use [PostHog](https://posthog.com), an open-source analytics platform, for data collection and analysis. Our telemetry implementation is fully transparent - you can review the [source code](https://github.com/usestrix/strix/blob/main/strix/telemetry/posthog.py) to see exactly what we track.

### Telemetry Policy

Privacy is our priority. All collected data is anonymized by default. Each session gets a random UUID that is not persisted or tied to you. Your code, scan targets, vulnerability details, and findings always remain private and are never collected.

### What We Track

We collect only very **basic** usage data, including:

**Session Errors:** Duration and error types (not messages or stack traces)\
**System Context:** OS type, architecture, Strix version\
**Scan Context:** Scan mode (quick/standard/deep), scan type (whitebox/blackbox)\
**Model Usage:** Which LLM model is being used (not prompts or responses)\
**Aggregate Metrics:** Vulnerability counts by severity, agent/tool counts, token usage and cost estimates

For complete transparency, you can inspect our [telemetry implementation](https://github.com/usestrix/strix/blob/main/strix/telemetry/posthog.py) to see the exact events we track.

### What We **Never** Collect

- IP addresses, usernames, or any identifying information
- Scan targets, file paths, target URLs, or domains
- Vulnerability details, descriptions, or code
- LLM requests and responses

### How to Opt Out

Telemetry in Strix is entirely **optional**:

```bash
export STRIX_TELEMETRY=0
```

You can set this environment variable before running Strix to disable **all** telemetry.
```diff
@@ -1,4 +1,10 @@
+from . import posthog
 from .tracer import Tracer, get_global_tracer, set_global_tracer
 
 
-__all__ = ["Tracer", "get_global_tracer", "set_global_tracer"]
+__all__ = [
+    "Tracer",
+    "get_global_tracer",
+    "posthog",
+    "set_global_tracer",
+]
```
137  strix/telemetry/posthog.py  Normal file
@@ -0,0 +1,137 @@
```python
import json
import platform
import sys
import urllib.request
from pathlib import Path
from typing import TYPE_CHECKING, Any
from uuid import uuid4

from strix.config import Config


if TYPE_CHECKING:
    from strix.telemetry.tracer import Tracer

_POSTHOG_PUBLIC_API_KEY = "phc_7rO3XRuNT5sgSKAl6HDIrWdSGh1COzxw0vxVIAR6vVZ"
_POSTHOG_HOST = "https://us.i.posthog.com"

_SESSION_ID = uuid4().hex[:16]


def _is_enabled() -> bool:
    return (Config.get("strix_telemetry") or "1").lower() not in ("0", "false", "no", "off")


def _is_first_run() -> bool:
    marker = Path.home() / ".strix" / ".seen"
    if marker.exists():
        return False
    try:
        marker.parent.mkdir(parents=True, exist_ok=True)
        marker.touch()
    except Exception:  # noqa: BLE001, S110
        pass  # nosec B110
    return True


def _get_version() -> str:
    try:
        from importlib.metadata import version

        return version("strix-agent")
    except Exception:  # noqa: BLE001
        return "unknown"


def _send(event: str, properties: dict[str, Any]) -> None:
    if not _is_enabled():
        return
    try:
        payload = {
            "api_key": _POSTHOG_PUBLIC_API_KEY,
            "event": event,
            "distinct_id": _SESSION_ID,
            "properties": properties,
        }
        req = urllib.request.Request(  # noqa: S310
            f"{_POSTHOG_HOST}/capture/",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=10):  # noqa: S310 # nosec B310
            pass
    except Exception:  # noqa: BLE001, S110
        pass  # nosec B110


def _base_props() -> dict[str, Any]:
    return {
        "os": platform.system().lower(),
        "arch": platform.machine(),
        "python": f"{sys.version_info.major}.{sys.version_info.minor}",
        "strix_version": _get_version(),
    }


def start(
    model: str | None,
    scan_mode: str | None,
    is_whitebox: bool,
    interactive: bool,
    has_instructions: bool,
) -> None:
    _send(
        "scan_started",
        {
            **_base_props(),
            "model": model or "unknown",
            "scan_mode": scan_mode or "unknown",
            "scan_type": "whitebox" if is_whitebox else "blackbox",
            "interactive": interactive,
            "has_instructions": has_instructions,
            "first_run": _is_first_run(),
        },
    )


def finding(severity: str) -> None:
    _send(
        "finding_reported",
        {
            **_base_props(),
            "severity": severity.lower(),
        },
    )


def end(tracer: "Tracer", exit_reason: str = "completed") -> None:
    vulnerabilities_counts = {"critical": 0, "high": 0, "medium": 0, "low": 0, "info": 0}
    for v in tracer.vulnerability_reports:
        sev = v.get("severity", "info").lower()
        if sev in vulnerabilities_counts:
            vulnerabilities_counts[sev] += 1

    llm = tracer.get_total_llm_stats()
    total = llm.get("total", {})

    _send(
        "scan_ended",
        {
            **_base_props(),
            "exit_reason": exit_reason,
            "duration_seconds": round(tracer._calculate_duration()),
            "vulnerabilities_total": len(tracer.vulnerability_reports),
            **{f"vulnerabilities_{k}": v for k, v in vulnerabilities_counts.items()},
            "agent_count": len(tracer.agents),
            "tool_count": tracer.get_real_tool_count(),
            "llm_tokens": llm.get("total_tokens", 0),
            "llm_cost": total.get("cost", 0.0),
        },
    )


def error(error_type: str, error_msg: str | None = None) -> None:
    props = {**_base_props(), "error_type": error_type}
    if error_msg:
        props["error_msg"] = error_msg
    _send("error", props)
```
```diff
@@ -4,6 +4,8 @@ from pathlib import Path
 from typing import TYPE_CHECKING, Any, Optional
 from uuid import uuid4
 
+from strix.telemetry import posthog
+
 
 if TYPE_CHECKING:
     from collections.abc import Callable
```
```diff
@@ -33,6 +35,8 @@ class Tracer:
         self.agents: dict[str, dict[str, Any]] = {}
         self.tool_executions: dict[int, dict[str, Any]] = {}
         self.chat_messages: list[dict[str, Any]] = []
+        self.streaming_content: dict[str, str] = {}
+        self.interrupted_content: dict[str, str] = {}
 
         self.vulnerability_reports: list[dict[str, Any]] = []
         self.final_scan_result: str | None = None
```
```diff
@@ -52,7 +56,7 @@ class Tracer:
         self._next_message_id = 1
         self._saved_vuln_ids: set[str] = set()
 
-        self.vulnerability_found_callback: Callable[[str, str, str, str], None] | None = None
+        self.vulnerability_found_callback: Callable[[dict[str, Any]], None] | None = None
 
     def set_run_name(self, run_name: str) -> None:
         self.run_name = run_name
```
```diff
@@ -69,48 +73,118 @@ class Tracer:
 
         return self._run_dir
 
-    def add_vulnerability_report(
+    def add_vulnerability_report(  # noqa: PLR0912
        self,
         title: str,
         content: str,
         severity: str,
+        description: str | None = None,
+        impact: str | None = None,
+        target: str | None = None,
+        technical_analysis: str | None = None,
+        poc_description: str | None = None,
+        poc_script_code: str | None = None,
+        remediation_steps: str | None = None,
+        cvss: float | None = None,
+        cvss_breakdown: dict[str, str] | None = None,
+        endpoint: str | None = None,
+        method: str | None = None,
+        cve: str | None = None,
+        code_file: str | None = None,
+        code_before: str | None = None,
+        code_after: str | None = None,
+        code_diff: str | None = None,
     ) -> str:
         report_id = f"vuln-{len(self.vulnerability_reports) + 1:04d}"
 
-        report = {
+        report: dict[str, Any] = {
             "id": report_id,
             "title": title.strip(),
             "content": content.strip(),
             "severity": severity.lower().strip(),
             "timestamp": datetime.now(UTC).strftime("%Y-%m-%d %H:%M:%S UTC"),
         }
 
+        if description:
+            report["description"] = description.strip()
+        if impact:
+            report["impact"] = impact.strip()
+        if target:
+            report["target"] = target.strip()
+        if technical_analysis:
+            report["technical_analysis"] = technical_analysis.strip()
+        if poc_description:
+            report["poc_description"] = poc_description.strip()
+        if poc_script_code:
+            report["poc_script_code"] = poc_script_code.strip()
+        if remediation_steps:
+            report["remediation_steps"] = remediation_steps.strip()
+        if cvss is not None:
+            report["cvss"] = cvss
+        if cvss_breakdown:
+            report["cvss_breakdown"] = cvss_breakdown
+        if endpoint:
+            report["endpoint"] = endpoint.strip()
+        if method:
+            report["method"] = method.strip()
+        if cve:
+            report["cve"] = cve.strip()
+        if code_file:
+            report["code_file"] = code_file.strip()
+        if code_before:
+            report["code_before"] = code_before.strip()
+        if code_after:
+            report["code_after"] = code_after.strip()
+        if code_diff:
+            report["code_diff"] = code_diff.strip()
+
         self.vulnerability_reports.append(report)
         logger.info(f"Added vulnerability report: {report_id} - {title}")
+        posthog.finding(severity)
 
         if self.vulnerability_found_callback:
-            self.vulnerability_found_callback(
-                report_id, title.strip(), content.strip(), severity.lower().strip()
-            )
+            self.vulnerability_found_callback(report)
 
         self.save_run_data()
         return report_id
 
-    def set_final_scan_result(
-        self,
-        content: str,
-        success: bool = True,
-    ) -> None:
-        self.final_scan_result = content.strip()
+    def get_existing_vulnerabilities(self) -> list[dict[str, Any]]:
+        return list(self.vulnerability_reports)
+
+    def update_scan_final_fields(
+        self,
+        executive_summary: str,
+        methodology: str,
+        technical_analysis: str,
+        recommendations: str,
+    ) -> None:
         self.scan_results = {
             "scan_completed": True,
-            "content": content,
-            "success": success,
+            "executive_summary": executive_summary.strip(),
+            "methodology": methodology.strip(),
+            "technical_analysis": technical_analysis.strip(),
+            "recommendations": recommendations.strip(),
+            "success": True,
         }
 
-        logger.info(f"Set final scan result: success={success}")
+        self.final_scan_result = f"""# Executive Summary
+
+{executive_summary.strip()}
+
+# Methodology
+
+{methodology.strip()}
+
+# Technical Analysis
+
+{technical_analysis.strip()}
+
+# Recommendations
+
+{recommendations.strip()}
+"""
+
+        logger.info("Updated scan final fields")
         self.save_run_data(mark_complete=True)
+        posthog.end(self, exit_reason="finished_by_tool")
 
     def log_agent_creation(
         self, agent_id: str, name: str, task: str, parent_id: str | None = None
```
```diff
@@ -202,7 +276,7 @@ class Tracer:
         )
         self.get_run_dir()
 
-    def save_run_data(self, mark_complete: bool = False) -> None:
+    def save_run_data(self, mark_complete: bool = False) -> None:  # noqa: PLR0912, PLR0915
         try:
             run_dir = self.get_run_dir()
             if mark_complete:
```
```diff
@@ -230,24 +304,71 @@ class Tracer:
                 if report["id"] not in self._saved_vuln_ids
             ]
 
-            for report in new_reports:
-                vuln_file = vuln_dir / f"{report['id']}.md"
-                with vuln_file.open("w", encoding="utf-8") as f:
-                    f.write(f"# {report['title']}\n\n")
-                    f.write(f"**ID:** {report['id']}\n")
-                    f.write(f"**Severity:** {report['severity'].upper()}\n")
-                    f.write(f"**Found:** {report['timestamp']}\n\n")
-                    f.write("## Description\n\n")
-                    f.write(f"{report['content']}\n")
-                self._saved_vuln_ids.add(report["id"])
-
             if self.vulnerability_reports:
                 severity_order = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}
                 sorted_reports = sorted(
                     self.vulnerability_reports,
                     key=lambda x: (severity_order.get(x["severity"], 5), x["timestamp"]),
                 )
 
+            for report in new_reports:
+                vuln_file = vuln_dir / f"{report['id']}.md"
+                with vuln_file.open("w", encoding="utf-8") as f:
+                    f.write(f"# {report.get('title', 'Untitled Vulnerability')}\n\n")
+                    f.write(f"**ID:** {report.get('id', 'unknown')}\n")
+                    f.write(f"**Severity:** {report.get('severity', 'unknown').upper()}\n")
+                    f.write(f"**Found:** {report.get('timestamp', 'unknown')}\n")
+
+                    metadata_fields: list[tuple[str, Any]] = [
+                        ("Target", report.get("target")),
+                        ("Endpoint", report.get("endpoint")),
+                        ("Method", report.get("method")),
+                        ("CVE", report.get("cve")),
+                    ]
+                    cvss_score = report.get("cvss")
+                    if cvss_score is not None:
+                        metadata_fields.append(("CVSS", cvss_score))
+
+                    for label, value in metadata_fields:
+                        if value:
+                            f.write(f"**{label}:** {value}\n")
+
+                    f.write("\n## Description\n\n")
+                    desc = report.get("description") or "No description provided."
+                    f.write(f"{desc}\n\n")
+
+                    if report.get("impact"):
+                        f.write("## Impact\n\n")
+                        f.write(f"{report['impact']}\n\n")
+
+                    if report.get("technical_analysis"):
+                        f.write("## Technical Analysis\n\n")
+                        f.write(f"{report['technical_analysis']}\n\n")
+
+                    if report.get("poc_description") or report.get("poc_script_code"):
+                        f.write("## Proof of Concept\n\n")
+                        if report.get("poc_description"):
+                            f.write(f"{report['poc_description']}\n\n")
+                        if report.get("poc_script_code"):
+                            f.write("```\n")
+                            f.write(f"{report['poc_script_code']}\n")
+                            f.write("```\n\n")
+
+                    if report.get("code_file") or report.get("code_diff"):
+                        f.write("## Code Analysis\n\n")
+                        if report.get("code_file"):
+                            f.write(f"**File:** {report['code_file']}\n\n")
+                        if report.get("code_diff"):
+                            f.write("**Changes:**\n")
+                            f.write("```diff\n")
+                            f.write(f"{report['code_diff']}\n")
+                            f.write("```\n\n")
+
+                    if report.get("remediation_steps"):
+                        f.write("## Remediation\n\n")
+                        f.write(f"{report['remediation_steps']}\n\n")
+
+                self._saved_vuln_ids.add(report["id"])
+
             vuln_csv_file = run_dir / "vulnerabilities.csv"
             with vuln_csv_file.open("w", encoding="utf-8", newline="") as f:
                 import csv
```
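The severity-first ordering used in the hunk above can be shown on its own: known severities rank by the `severity_order` map, unknown ones sink to the bottom via the default rank of 5, and ties break on timestamp.

```python
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}

reports = [
    {"severity": "low", "timestamp": "2024-01-02"},
    {"severity": "critical", "timestamp": "2024-01-03"},
    {"severity": "critical", "timestamp": "2024-01-01"},
    {"severity": "bogus", "timestamp": "2024-01-01"},  # unknown severity sorts last
]

ordered = sorted(reports, key=lambda r: (SEVERITY_ORDER.get(r["severity"], 5), r["timestamp"]))
print([r["severity"] for r in ordered])  # ['critical', 'critical', 'low', 'bogus']
```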
```diff
@@ -291,14 +412,14 @@ class Tracer:
     def get_agent_tools(self, agent_id: str) -> list[dict[str, Any]]:
         return [
             exec_data
-            for exec_data in self.tool_executions.values()
+            for exec_data in list(self.tool_executions.values())
             if exec_data.get("agent_id") == agent_id
         ]
 
     def get_real_tool_count(self) -> int:
         return sum(
             1
-            for exec_data in self.tool_executions.values()
+            for exec_data in list(self.tool_executions.values())
             if exec_data.get("tool_name") not in ["scan_start_info", "subagent_start_info"]
         )
```
```diff
@@ -309,10 +430,8 @@ class Tracer:
             "input_tokens": 0,
             "output_tokens": 0,
-            "cached_tokens": 0,
-            "cache_creation_tokens": 0,
             "cost": 0.0,
             "requests": 0,
             "failed_requests": 0,
         }
 
         for agent_instance in _agent_instances.values():
```
```diff
@@ -321,10 +440,8 @@ class Tracer:
                 total_stats["input_tokens"] += agent_stats.input_tokens
                 total_stats["output_tokens"] += agent_stats.output_tokens
-                total_stats["cached_tokens"] += agent_stats.cached_tokens
-                total_stats["cache_creation_tokens"] += agent_stats.cache_creation_tokens
                 total_stats["cost"] += agent_stats.cost
                 total_stats["requests"] += agent_stats.requests
                 total_stats["failed_requests"] += agent_stats.failed_requests
 
         total_stats["cost"] = round(total_stats["cost"], 4)
 
```
```diff
@@ -333,5 +450,28 @@ class Tracer:
             "total_tokens": total_stats["input_tokens"] + total_stats["output_tokens"],
         }
 
+    def update_streaming_content(self, agent_id: str, content: str) -> None:
+        self.streaming_content[agent_id] = content
+
+    def clear_streaming_content(self, agent_id: str) -> None:
+        self.streaming_content.pop(agent_id, None)
+
+    def get_streaming_content(self, agent_id: str) -> str | None:
+        return self.streaming_content.get(agent_id)
+
+    def finalize_streaming_as_interrupted(self, agent_id: str) -> str | None:
+        content = self.streaming_content.pop(agent_id, None)
+        if content and content.strip():
+            self.interrupted_content[agent_id] = content
+            self.log_chat_message(
+                content=content,
+                role="assistant",
+                agent_id=agent_id,
+                metadata={"interrupted": True},
+            )
+            return content
+
+        return self.interrupted_content.pop(agent_id, None)
+
     def cleanup(self) -> None:
         self.save_run_data(mark_complete=True)
```
```diff
@@ -1,5 +1,7 @@
 import os
 
+from strix.config import Config
+
 from .executor import (
     execute_tool,
     execute_tool_invocation,
```
```diff
@@ -22,9 +24,9 @@ from .registry import (
 
 SANDBOX_MODE = os.getenv("STRIX_SANDBOX_MODE", "false").lower() == "true"
 
-HAS_PERPLEXITY_API = bool(os.getenv("PERPLEXITY_API_KEY"))
+HAS_PERPLEXITY_API = bool(Config.get("perplexity_api_key"))
 
-DISABLE_BROWSER = os.getenv("STRIX_DISABLE_BROWSER", "false").lower() == "true"
+DISABLE_BROWSER = (Config.get("strix_disable_browser") or "false").lower() == "true"
 
 if not SANDBOX_MODE:
     from .agents_graph import *  # noqa: F403
 
```
```diff
@@ -190,36 +190,35 @@ def create_agent(
     task: str,
     name: str,
     inherit_context: bool = True,
-    prompt_modules: str | None = None,
+    skills: str | None = None,
 ) -> dict[str, Any]:
     try:
         parent_id = agent_state.agent_id
 
-        module_list = []
-        if prompt_modules:
-            module_list = [m.strip() for m in prompt_modules.split(",") if m.strip()]
+        skill_list = []
+        if skills:
+            skill_list = [s.strip() for s in skills.split(",") if s.strip()]
 
-        if len(module_list) > 5:
+        if len(skill_list) > 5:
             return {
                 "success": False,
                 "error": (
-                    "Cannot specify more than 5 prompt modules for an agent "
-                    "(use comma-separated format)"
+                    "Cannot specify more than 5 skills for an agent (use comma-separated format)"
                 ),
                 "agent_id": None,
             }
 
-        if module_list:
-            from strix.prompts import get_all_module_names, validate_module_names
+        if skill_list:
+            from strix.skills import get_all_skill_names, validate_skill_names
 
-            validation = validate_module_names(module_list)
+            validation = validate_skill_names(skill_list)
             if validation["invalid"]:
-                available_modules = list(get_all_module_names())
+                available_skills = list(get_all_skill_names())
                 return {
                     "success": False,
                     "error": (
-                        f"Invalid prompt modules: {validation['invalid']}. "
-                        f"Available modules: {', '.join(available_modules)}"
+                        f"Invalid skills: {validation['invalid']}. "
+                        f"Available skills: {', '.join(available_skills)}"
                     ),
                     "agent_id": None,
                 }
```
```diff
@@ -240,7 +239,7 @@ def create_agent(
         if hasattr(parent_agent.llm_config, "scan_mode"):
             scan_mode = parent_agent.llm_config.scan_mode
 
-        llm_config = LLMConfig(prompt_modules=module_list, timeout=timeout, scan_mode=scan_mode)
+        llm_config = LLMConfig(skills=skill_list, timeout=timeout, scan_mode=scan_mode)
 
         agent_config = {
             "llm_config": llm_config,
```
```diff
@@ -79,8 +79,8 @@ Only create a new agent if no existing agent is handling the specific task.</des
 <parameter name="inherit_context" type="boolean" required="false">
 <description>Whether the new agent should inherit parent's conversation history and context</description>
 </parameter>
-<parameter name="prompt_modules" type="string" required="false">
-<description>Comma-separated list of prompt modules to use for the agent (MAXIMUM 5 modules allowed). Most agents should have at least one module in order to be useful. Agents should be highly specialized - use 1-3 related modules; up to 5 for complex contexts. {{DYNAMIC_MODULES_DESCRIPTION}}</description>
+<parameter name="skills" type="string" required="false">
+<description>Comma-separated list of skills to use for the agent (MAXIMUM 5 skills allowed). Most agents should have at least one skill in order to be useful. Agents should be highly specialized - use 1-3 related skills; up to 5 for complex contexts. {{DYNAMIC_SKILLS_DESCRIPTION}}</description>
 </parameter>
 </parameters>
 <returns type="Dict[str, Any]">
@@ -92,30 +92,30 @@ Only create a new agent if no existing agent is handling the specific task.</des
 <parameter=task>Validate and exploit the suspected SQL injection vulnerability found in
 the login form. Confirm exploitability and document proof of concept.</parameter>
 <parameter=name>SQLi Validator</parameter>
-<parameter=prompt_modules>sql_injection</parameter>
+<parameter=skills>sql_injection</parameter>
 </function>
 
 <function=create_agent>
 <parameter=task>Test authentication mechanisms, JWT implementation, and session management
 for security vulnerabilities and bypass techniques.</parameter>
 <parameter=name>Auth Specialist</parameter>
-<parameter=prompt_modules>authentication_jwt, business_logic</parameter>
+<parameter=skills>authentication_jwt, business_logic</parameter>
 </function>
 
-# Example of single-module specialization (most focused)
+# Example of single-skill specialization (most focused)
 <function=create_agent>
 <parameter=task>Perform comprehensive XSS testing including reflected, stored, and DOM-based
 variants across all identified input points.</parameter>
 <parameter=name>XSS Specialist</parameter>
-<parameter=prompt_modules>xss</parameter>
+<parameter=skills>xss</parameter>
 </function>
 
-# Example of up to 5 related modules (borderline acceptable)
+# Example of up to 5 related skills (borderline acceptable)
 <function=create_agent>
 <parameter=task>Test for server-side vulnerabilities including SSRF, XXE, and potential
 RCE vectors in file upload and XML processing endpoints.</parameter>
 <parameter=name>Server-Side Attack Specialist</parameter>
-<parameter=prompt_modules>ssrf, xxe, rce</parameter>
+<parameter=skills>ssrf, xxe, rce</parameter>
 </function>
 </examples>
 </tool>
```
```diff
@@ -4,6 +4,8 @@ from typing import Any
 
 import httpx
 
+from strix.config import Config
+
 
 if os.getenv("STRIX_SANDBOX_MODE", "false").lower() == "false":
     from strix.runtime import get_runtime
```
```diff
@@ -12,13 +14,14 @@ from .argument_parser import convert_arguments
 from .registry import (
     get_tool_by_name,
     get_tool_names,
+    get_tool_param_schema,
     needs_agent_state,
     should_execute_in_sandbox,
 )
 
 
-SANDBOX_EXECUTION_TIMEOUT = float(os.getenv("STRIX_SANDBOX_EXECUTION_TIMEOUT", "500"))
-SANDBOX_CONNECT_TIMEOUT = float(os.getenv("STRIX_SANDBOX_CONNECT_TIMEOUT", "10"))
+SANDBOX_EXECUTION_TIMEOUT = float(Config.get("strix_sandbox_execution_timeout") or "120")
+SANDBOX_CONNECT_TIMEOUT = float(Config.get("strix_sandbox_connect_timeout") or "10")
 
 
 async def execute_tool(tool_name: str, agent_state: Any | None = None, **kwargs: Any) -> Any:
```
@@ -108,14 +111,51 @@ async def _execute_tool_locally(tool_name: str, agent_state: Any | None, **kwarg
|
||||
|
||||
def validate_tool_availability(tool_name: str | None) -> tuple[bool, str]:
|
||||
if tool_name is None:
|
||||
return False, "Tool name is missing"
|
||||
available = ", ".join(sorted(get_tool_names()))
|
||||
return False, f"Tool name is missing. Available tools: {available}"
|
||||
|
||||
if tool_name not in get_tool_names():
|
||||
return False, f"Tool '{tool_name}' is not available"
|
||||
available = ", ".join(sorted(get_tool_names()))
|
||||
return False, f"Tool '{tool_name}' is not available. Available tools: {available}"
|
||||
|
||||
return True, ""
|
||||
|
||||
|
||||
def _validate_tool_arguments(tool_name: str, kwargs: dict[str, Any]) -> str | None:
|
||||
param_schema = get_tool_param_schema(tool_name)
|
||||
if not param_schema or not param_schema.get("has_params"):
|
||||
return None
|
||||
|
||||
allowed_params: set[str] = param_schema.get("params", set())
|
||||
required_params: set[str] = param_schema.get("required", set())
|
||||
optional_params = allowed_params - required_params
|
||||
|
||||
schema_hint = _format_schema_hint(tool_name, required_params, optional_params)
|
||||
|
||||
unknown_params = set(kwargs.keys()) - allowed_params
|
||||
if unknown_params:
|
||||
unknown_list = ", ".join(sorted(unknown_params))
|
||||
return f"Tool '{tool_name}' received unknown parameter(s): {unknown_list}\n{schema_hint}"
|
||||
|
||||
missing_required = [
|
||||
param for param in required_params if param not in kwargs or kwargs.get(param) in (None, "")
|
||||
]
|
||||
if missing_required:
|
||||
missing_list = ", ".join(sorted(missing_required))
|
||||
return f"Tool '{tool_name}' missing required parameter(s): {missing_list}\n{schema_hint}"
|
||||
|
||||
return None
|
||||
|
||||
|
||||
def _format_schema_hint(tool_name: str, required: set[str], optional: set[str]) -> str:
|
||||
parts = [f"Valid parameters for '{tool_name}':"]
|
||||
if required:
|
||||
parts.append(f" Required: {', '.join(sorted(required))}")
|
||||
if optional:
|
||||
parts.append(f" Optional: {', '.join(sorted(optional))}")
|
||||
return "\n".join(parts)
|
||||
|
||||
|
||||
async def execute_tool_with_validation(
|
||||
tool_name: str | None, agent_state: Any | None = None, **kwargs: Any
|
||||
) -> Any:
|
||||
@@ -125,6 +165,10 @@ async def execute_tool_with_validation(
|
||||
|
||||
assert tool_name is not None
|
||||
|
||||
arg_error = _validate_tool_arguments(tool_name, kwargs)
|
||||
if arg_error:
|
||||
return f"Error: {arg_error}"
|
||||
|
||||
try:
|
||||
result = await execute_tool(tool_name, agent_state, **kwargs)
|
||||
except Exception as e: # noqa: BLE001
|
||||
|
||||
@@ -4,49 +4,40 @@ from strix.tools.registry import register_tool


def _validate_root_agent(agent_state: Any) -> dict[str, Any] | None:
    if (
        agent_state is not None
        and hasattr(agent_state, "parent_id")
        and agent_state.parent_id is not None
    ):
    if agent_state and hasattr(agent_state, "parent_id") and agent_state.parent_id is not None:
        return {
            "success": False,
            "message": (
                "This tool can only be used by the root/main agent. "
                "Subagents must use agent_finish instead."
            ),
            "error": "finish_scan_wrong_agent",
            "message": "This tool can only be used by the root/main agent",
            "suggestion": "If you are a subagent, use agent_finish from agents_graph tool instead",
        }
    return None


def _validate_content(content: str) -> dict[str, Any] | None:
    if not content or not content.strip():
        return {"success": False, "message": "Content cannot be empty"}
    return None


def _check_active_agents(agent_state: Any = None) -> dict[str, Any] | None:
    try:
        from strix.tools.agents_graph.agents_graph_actions import _agent_graph

        current_agent_id = None
        if agent_state and hasattr(agent_state, "agent_id"):
        if agent_state and agent_state.agent_id:
            current_agent_id = agent_state.agent_id
        else:
            return None

        running_agents = []
        active_agents = []
        stopping_agents = []

        for agent_id, node in _agent_graph.get("nodes", {}).items():
        for agent_id, node in _agent_graph["nodes"].items():
            if agent_id == current_agent_id:
                continue

            status = node.get("status", "")
            status = node.get("status", "unknown")
            if status == "running":
                running_agents.append(
                active_agents.append(
                    {
                        "id": agent_id,
                        "name": node.get("name", "Unknown"),
                        "task": node.get("task", "No task description"),
                        "task": node.get("task", "Unknown task")[:300],
                        "status": status,
                    }
                )
            elif status == "stopping":
@@ -54,121 +45,105 @@ def _check_active_agents(agent_state: Any = None) -> dict[str, Any] | None:
                    {
                        "id": agent_id,
                        "name": node.get("name", "Unknown"),
                        "task": node.get("task", "Unknown task")[:300],
                        "status": status,
                    }
                )

        if running_agents or stopping_agents:
            message_parts = ["Cannot finish scan while other agents are still active:"]
        if active_agents or stopping_agents:
            response: dict[str, Any] = {
                "success": False,
                "error": "agents_still_active",
                "message": "Cannot finish scan: agents are still active",
            }

            if running_agents:
                message_parts.append("\n\nRunning agents:")
                message_parts.extend(
                    [
                        f" - {agent['name']} ({agent['id']}): {agent['task']}"
                        for agent in running_agents
                    ]
                )
            if active_agents:
                response["active_agents"] = active_agents

            if stopping_agents:
                message_parts.append("\n\nStopping agents:")
                message_parts.extend(
                    [f" - {agent['name']} ({agent['id']})" for agent in stopping_agents]
                )
                response["stopping_agents"] = stopping_agents

            message_parts.extend(
                [
                    "\n\nSuggested actions:",
                    "1. Use wait_for_message to wait for all agents to complete",
                    "2. Send messages to agents asking them to finish if urgent",
                    "3. Use view_agent_graph to monitor agent status",
            response["suggestions"] = [
                "Use wait_for_message to wait for all agents to complete",
                "Use send_message_to_agent if you need agents to complete immediately",
                "Check agent_status to see current agent states",
                ]
            )

            return {
                "success": False,
                "message": "\n".join(message_parts),
                "active_agents": {
                    "running": len(running_agents),
                    "stopping": len(stopping_agents),
                    "details": {
                        "running": running_agents,
                        "stopping": stopping_agents,
                    },
                },
            }
            response["total_active"] = len(active_agents) + len(stopping_agents)

            return response

    except ImportError:
        pass
    except Exception:
        import logging

        logging.warning("Could not check agent graph status - agents_graph module unavailable")
        logging.exception("Error checking active agents")

    return None


def _finalize_with_tracer(content: str, success: bool) -> dict[str, Any]:
    try:
        from strix.telemetry.tracer import get_global_tracer

        tracer = get_global_tracer()
        if tracer:
            tracer.set_final_scan_result(
                content=content.strip(),
                success=success,
            )

            return {
                "success": True,
                "scan_completed": True,
                "message": "Scan completed successfully"
                if success
                else "Scan completed with errors",
                "vulnerabilities_found": len(tracer.vulnerability_reports),
            }

        import logging

        logging.warning("Global tracer not available - final scan result not stored")

        return {  # noqa: TRY300
            "success": True,
            "scan_completed": True,
            "message": "Scan completed successfully (not persisted)"
            if success
            else "Scan completed with errors (not persisted)",
            "warning": "Final result could not be persisted - tracer unavailable",
        }

    except ImportError:
        return {
            "success": True,
            "scan_completed": True,
            "message": "Scan completed successfully (not persisted)"
            if success
            else "Scan completed with errors (not persisted)",
            "warning": "Final result could not be persisted - tracer module unavailable",
        }


@register_tool(sandbox_execution=False)
def finish_scan(
    content: str,
    success: bool = True,
    executive_summary: str,
    methodology: str,
    technical_analysis: str,
    recommendations: str,
    agent_state: Any = None,
) -> dict[str, Any]:
    try:
        validation_error = _validate_root_agent(agent_state)
        if validation_error:
            return validation_error

        validation_error = _validate_content(content)
        if validation_error:
            return validation_error

        active_agents_error = _check_active_agents(agent_state)
        if active_agents_error:
            return active_agents_error

        return _finalize_with_tracer(content, success)
    validation_errors = []

    except (ValueError, TypeError, KeyError) as e:
    if not executive_summary or not executive_summary.strip():
        validation_errors.append("Executive summary cannot be empty")
    if not methodology or not methodology.strip():
        validation_errors.append("Methodology cannot be empty")
    if not technical_analysis or not technical_analysis.strip():
        validation_errors.append("Technical analysis cannot be empty")
    if not recommendations or not recommendations.strip():
        validation_errors.append("Recommendations cannot be empty")

    if validation_errors:
        return {"success": False, "message": "Validation failed", "errors": validation_errors}

    try:
        from strix.telemetry.tracer import get_global_tracer

        tracer = get_global_tracer()
        if tracer:
            tracer.update_scan_final_fields(
                executive_summary=executive_summary.strip(),
                methodology=methodology.strip(),
                technical_analysis=technical_analysis.strip(),
                recommendations=recommendations.strip(),
            )

            vulnerability_count = len(tracer.vulnerability_reports)

            return {
                "success": True,
                "scan_completed": True,
                "message": "Scan completed successfully",
                "vulnerabilities_found": vulnerability_count,
            }

        import logging

        logging.warning("Current tracer not available - scan results not stored")

    except (ImportError, AttributeError) as e:
        return {"success": False, "message": f"Failed to complete scan: {e!s}"}
    else:
        return {
            "success": True,
            "scan_completed": True,
            "message": "Scan completed (not persisted)",
            "warning": "Results could not be persisted - tracer unavailable",
        }

@@ -1,6 +1,6 @@
<tools>
<tool name="finish_scan">
<description>Complete the main security scan and generate final report.
<description>Complete the security scan by providing the final assessment fields as full penetration test report.

IMPORTANT: This tool can ONLY be used by the root/main agent.
Subagents must use agent_finish from agents_graph tool instead.
@@ -8,11 +8,20 @@ Subagents must use agent_finish from agents_graph tool instead.
IMPORTANT: This tool will NOT allow finishing if any agents are still running or stopping.
You must wait for all agents to complete before using this tool.

This tool MUST be called at the very end of the security assessment to:
- Verify all agents have completed their tasks
- Generate the final comprehensive scan report
- Mark the entire scan as completed
- Stop the agent from running
This tool directly updates the scan report data:
- executive_summary
- methodology
- technical_analysis
- recommendations

All fields are REQUIRED and map directly to the final report.

This must be the last tool called in the scan. It will:
1. Verify you are the root agent
2. Check all subagents have completed
3. Update the scan with your provided fields
4. Mark the scan as completed
5. Stop agent execution

Use this tool when:
- You are the main/root agent conducting the security assessment
@@ -23,23 +32,39 @@ Use this tool when:

IMPORTANT: Calling this tool multiple times will OVERWRITE any previous scan report.
Make sure you include ALL findings and details in a single comprehensive report.

If agents are still running, this tool will:
If agents are still running, the tool will:
- Show you which agents are still active
- Suggest using wait_for_message to wait for completion
- Suggest messaging agents if immediate completion is needed

Put ALL details in the content - methodology, tools used, vulnerability counts, key findings, recommendations,
compliance notes, risk assessments, next steps, etc. Be comprehensive and include everything relevant.</description>
NOTE: Make sure the vulnerabilities found were reported with create_vulnerability_report tool, otherwise they will not be tracked and you will not be rewarded.
But make sure to not report the same vulnerability multiple times.

Professional, customer-facing penetration test report rules (PDF-ready):
- Do NOT include internal or system details: never mention local/absolute paths (e.g., "/workspace"), internal tools, agents, orchestrators, sandboxes, models, system prompts/instructions, connection/tooling issues, or tester environment details.
- Tone and style: formal, objective, third-person, concise. No internal checklists or engineering runbooks. Content must read as a polished client deliverable.
- Structure across fields should align to standard pentest reports:
  - Executive summary: business impact, risk posture, notable criticals, remediation theme.
  - Methodology: industry-standard methods (e.g., OWASP, OSSTMM, NIST), scope, constraints—no internal execution notes.
  - Technical analysis: consolidated findings overview referencing created vulnerability reports; avoid raw logs.
  - Recommendations: prioritized, actionable, aligned to risk and best practices.
</description>
<parameters>
<parameter name="content" type="string" required="true">
<description>Complete scan report including executive summary, methodology, findings, vulnerability details, recommendations, compliance notes, risk assessment, and conclusions. Include everything relevant to the assessment.</description>
<parameter name="executive_summary" type="string" required="true">
<description>High-level summary for executives: key findings, overall security posture, critical risks, business impact</description>
</parameter>
<parameter name="success" type="boolean" required="false">
<description>Whether the scan completed successfully without critical errors</description>
<parameter name="methodology" type="string" required="true">
<description>Testing methodology: approach, tools used, scope, techniques employed</description>
</parameter>
<parameter name="technical_analysis" type="string" required="true">
<description>Detailed technical findings and security assessment results over the scan</description>
</parameter>
<parameter name="recommendations" type="string" required="true">
<description>Actionable security recommendations and remediation priorities</description>
</parameter>
</parameters>
<returns type="Dict[str, Any]">
<description>Response containing success status and completion message. If agents are still running, returns details about active agents and suggested actions.</description>
<description>Response containing success status, vulnerability count, and completion message. If agents are still running, returns details about active agents and suggested actions.</description>
</returns>
</tool>
</tools>

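The reworked schema replaces the single `content` parameter with four report fields. A call in the `<function=...>` example style used elsewhere in these schemas might look like this (illustrative values only; not taken from the diff):

```xml
<function=finish_scan>
<parameter=executive_summary>Two critical and one medium-severity issue were identified; overall risk remains high until the injection flaws are remediated.</parameter>
<parameter=methodology>Black-box web application assessment aligned with OWASP testing guidance, covering authentication, input handling, and session management.</parameter>
<parameter=technical_analysis>Findings are consolidated from the individual vulnerability reports created during the scan; see each report for full technical detail.</parameter>
<parameter=recommendations>Prioritize parameterized queries for the injection findings, then harden session cookie flags and add rate limiting.</parameter>
</function>
```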
@@ -55,6 +55,7 @@
- Print statements and stdout are captured
- Variables persist between executions in the same session
- Imports, function definitions, etc. persist in the session
- IMPORTANT (multiline): Put real line breaks in <parameter=code>. Do NOT emit literal "\n" sequences.
- IPython magic commands are fully supported (%pip, %time, %whos, %%writefile, etc.)
- Line magics (%) and cell magics (%%) work as expected
6. CLOSE: Terminates the session completely and frees memory
@@ -73,6 +74,14 @@
print("Security analysis session started")</parameter>
</function>

<function=python_action>
<parameter=action>execute</parameter>
<parameter=code>import requests
url = "https://example.com"
resp = requests.get(url, timeout=10)
print(resp.status_code)</parameter>
</function>

# Analyze security data in the default session
<function=python_action>
<parameter=action>execute</parameter>

@@ -7,9 +7,14 @@ from inspect import signature
from pathlib import Path
from typing import Any

import defusedxml.ElementTree as DefusedET

from strix.utils.resource_paths import get_strix_resource_path


tools: list[dict[str, Any]] = []
_tools_by_name: dict[str, Callable[..., Any]] = {}
_tool_param_schemas: dict[str, dict[str, Any]] = {}
logger = logging.getLogger(__name__)


@@ -23,17 +28,17 @@ class ImplementedInClientSideOnlyError(Exception):


def _process_dynamic_content(content: str) -> str:
    if "{{DYNAMIC_MODULES_DESCRIPTION}}" in content:
    if "{{DYNAMIC_SKILLS_DESCRIPTION}}" in content:
        try:
            from strix.prompts import generate_modules_description
            from strix.skills import generate_skills_description

            modules_description = generate_modules_description()
            content = content.replace("{{DYNAMIC_MODULES_DESCRIPTION}}", modules_description)
            skills_description = generate_skills_description()
            content = content.replace("{{DYNAMIC_SKILLS_DESCRIPTION}}", skills_description)
        except ImportError:
            logger.warning("Could not import prompts utilities for dynamic schema generation")
            logger.warning("Could not import skills utilities for dynamic schema generation")
            content = content.replace(
                "{{DYNAMIC_MODULES_DESCRIPTION}}",
                "List of prompt modules to load for this agent (max 5). Module discovery failed.",
                "{{DYNAMIC_SKILLS_DESCRIPTION}}",
                "List of skills to load for this agent (max 5). Skill discovery failed.",
            )

    return content
@@ -82,6 +87,34 @@ def _load_xml_schema(path: Path) -> Any:
    return tools_dict


def _parse_param_schema(tool_xml: str) -> dict[str, Any]:
    params: set[str] = set()
    required: set[str] = set()

    params_start = tool_xml.find("<parameters>")
    params_end = tool_xml.find("</parameters>")

    if params_start == -1 or params_end == -1:
        return {"params": set(), "required": set(), "has_params": False}

    params_section = tool_xml[params_start : params_end + len("</parameters>")]

    try:
        root = DefusedET.fromstring(params_section)
    except DefusedET.ParseError:
        return {"params": set(), "required": set(), "has_params": False}

    for param in root.findall(".//parameter"):
        name = param.attrib.get("name")
        if not name:
            continue
        params.add(name)
        if param.attrib.get("required", "false").lower() == "true":
            required.add(name)

    return {"params": params, "required": required, "has_params": bool(params or required)}


def _get_module_name(func: Callable[..., Any]) -> str:
    module = inspect.getmodule(func)
    if not module:
@@ -95,6 +128,27 @@ def _get_module_name(func: Callable[..., Any]) -> str:
    return "unknown"


def _get_schema_path(func: Callable[..., Any]) -> Path | None:
    module = inspect.getmodule(func)
    if not module or not module.__name__:
        return None

    module_name = module.__name__

    if ".tools." not in module_name:
        return None

    parts = module_name.split(".tools.")[-1].split(".")
    if len(parts) < 2:
        return None

    folder = parts[0]
    file_stem = parts[1]
    schema_file = f"{file_stem}_schema.xml"

    return get_strix_resource_path("tools", folder, schema_file)


def register_tool(
    func: Callable[..., Any] | None = None, *, sandbox_execution: bool = True
) -> Callable[..., Any]:
@@ -109,11 +163,8 @@ def register_tool(
    sandbox_mode = os.getenv("STRIX_SANDBOX_MODE", "false").lower() == "true"
    if not sandbox_mode:
        try:
            module_path = Path(inspect.getfile(f))
            schema_file_name = f"{module_path.stem}_schema.xml"
            schema_path = module_path.parent / schema_file_name

            xml_tools = _load_xml_schema(schema_path)
            schema_path = _get_schema_path(f)
            xml_tools = _load_xml_schema(schema_path) if schema_path else None

            if xml_tools is not None and f.__name__ in xml_tools:
                func_dict["xml_schema"] = xml_tools[f.__name__]
@@ -131,6 +182,11 @@ def register_tool(
        "</tool>"
    )

    if not sandbox_mode:
        xml_schema = func_dict.get("xml_schema")
        param_schema = _parse_param_schema(xml_schema if isinstance(xml_schema, str) else "")
        _tool_param_schemas[str(func_dict["name"])] = param_schema

    tools.append(func_dict)
    _tools_by_name[str(func_dict["name"])] = f

@@ -153,6 +209,10 @@ def get_tool_names() -> list[str]:
    return list(_tools_by_name.keys())


def get_tool_param_schema(name: str) -> dict[str, Any] | None:
    return _tool_param_schemas.get(name)


def needs_agent_state(tool_name: str) -> bool:
    tool_func = get_tool_by_name(tool_name)
    if not tool_func:
@@ -194,3 +254,4 @@ def get_tools_prompt() -> str:
def clear_registry() -> None:
    tools.clear()
    _tools_by_name.clear()
    _tool_param_schemas.clear()

@@ -3,61 +3,248 @@ from typing import Any
from strix.tools.registry import register_tool


def calculate_cvss_and_severity(
    attack_vector: str,
    attack_complexity: str,
    privileges_required: str,
    user_interaction: str,
    scope: str,
    confidentiality: str,
    integrity: str,
    availability: str,
) -> tuple[float, str, str]:
    try:
        from cvss import CVSS3

        vector = (
            f"CVSS:3.1/AV:{attack_vector}/AC:{attack_complexity}/"
            f"PR:{privileges_required}/UI:{user_interaction}/S:{scope}/"
            f"C:{confidentiality}/I:{integrity}/A:{availability}"
        )

        c = CVSS3(vector)
        scores = c.scores()
        severities = c.severities()

        base_score = scores[0]
        base_severity = severities[0]

        severity = base_severity.lower()

    except Exception:
        import logging

        logging.exception("Failed to calculate CVSS")
        return 7.5, "high", ""
    else:
        return base_score, severity, vector


def _validate_required_fields(**kwargs: str | None) -> list[str]:
    validation_errors: list[str] = []

    required_fields = {
        "title": "Title cannot be empty",
        "description": "Description cannot be empty",
        "impact": "Impact cannot be empty",
        "target": "Target cannot be empty",
        "technical_analysis": "Technical analysis cannot be empty",
        "poc_description": "PoC description cannot be empty",
        "poc_script_code": "PoC script/code is REQUIRED - provide the actual exploit/payload",
        "remediation_steps": "Remediation steps cannot be empty",
    }

    for field_name, error_msg in required_fields.items():
        value = kwargs.get(field_name)
        if not value or not str(value).strip():
            validation_errors.append(error_msg)

    return validation_errors


def _validate_cvss_parameters(**kwargs: str) -> list[str]:
    validation_errors: list[str] = []

    cvss_validations = {
        "attack_vector": ["N", "A", "L", "P"],
        "attack_complexity": ["L", "H"],
        "privileges_required": ["N", "L", "H"],
        "user_interaction": ["N", "R"],
        "scope": ["U", "C"],
        "confidentiality": ["N", "L", "H"],
        "integrity": ["N", "L", "H"],
        "availability": ["N", "L", "H"],
    }

    for param_name, valid_values in cvss_validations.items():
        value = kwargs.get(param_name)
        if value not in valid_values:
            validation_errors.append(
                f"Invalid {param_name}: {value}. Must be one of: {valid_values}"
            )

    return validation_errors


@register_tool(sandbox_execution=False)
def create_vulnerability_report(
    title: str,
    content: str,
    severity: str,
    description: str,
    impact: str,
    target: str,
    technical_analysis: str,
    poc_description: str,
    poc_script_code: str,
    remediation_steps: str,
    # CVSS Breakdown Components
    attack_vector: str,
    attack_complexity: str,
    privileges_required: str,
    user_interaction: str,
    scope: str,
    confidentiality: str,
    integrity: str,
    availability: str,
    # Optional fields
    endpoint: str | None = None,
    method: str | None = None,
    cve: str | None = None,
    code_file: str | None = None,
    code_before: str | None = None,
    code_after: str | None = None,
    code_diff: str | None = None,
) -> dict[str, Any]:
    validation_error = None
    if not title or not title.strip():
        validation_error = "Title cannot be empty"
    elif not content or not content.strip():
        validation_error = "Content cannot be empty"
    elif not severity or not severity.strip():
        validation_error = "Severity cannot be empty"
    else:
        valid_severities = ["critical", "high", "medium", "low", "info"]
        if severity.lower() not in valid_severities:
            validation_error = (
                f"Invalid severity '{severity}'. Must be one of: {', '.join(valid_severities)}"
    validation_errors = _validate_required_fields(
        title=title,
        description=description,
        impact=impact,
        target=target,
        technical_analysis=technical_analysis,
        poc_description=poc_description,
        poc_script_code=poc_script_code,
        remediation_steps=remediation_steps,
    )

    if validation_error:
        return {"success": False, "message": validation_error}
    validation_errors.extend(
        _validate_cvss_parameters(
            attack_vector=attack_vector,
            attack_complexity=attack_complexity,
            privileges_required=privileges_required,
            user_interaction=user_interaction,
            scope=scope,
            confidentiality=confidentiality,
            integrity=integrity,
            availability=availability,
        )
    )

    if validation_errors:
        return {"success": False, "message": "Validation failed", "errors": validation_errors}

    cvss_score, severity, cvss_vector = calculate_cvss_and_severity(
        attack_vector,
        attack_complexity,
        privileges_required,
        user_interaction,
        scope,
        confidentiality,
        integrity,
        availability,
    )

    try:
        from strix.telemetry.tracer import get_global_tracer

        tracer = get_global_tracer()
        if tracer:
            from strix.llm.dedupe import check_duplicate

            existing_reports = tracer.get_existing_vulnerabilities()

            candidate = {
                "title": title,
                "description": description,
                "impact": impact,
                "target": target,
                "technical_analysis": technical_analysis,
                "poc_description": poc_description,
                "poc_script_code": poc_script_code,
                "endpoint": endpoint,
                "method": method,
            }

            dedupe_result = check_duplicate(candidate, existing_reports)

            if dedupe_result.get("is_duplicate"):
                duplicate_id = dedupe_result.get("duplicate_id", "")

                duplicate_title = ""
                for report in existing_reports:
                    if report.get("id") == duplicate_id:
                        duplicate_title = report.get("title", "Unknown")
                        break

                return {
                    "success": False,
                    "message": (
                        f"Potential duplicate of '{duplicate_title}' "
                        f"(id={duplicate_id[:8]}...). Do not re-report the same vulnerability."
                    ),
                    "duplicate_of": duplicate_id,
                    "duplicate_title": duplicate_title,
                    "confidence": dedupe_result.get("confidence", 0.0),
                    "reason": dedupe_result.get("reason", ""),
                }

            cvss_breakdown = {
                "attack_vector": attack_vector,
                "attack_complexity": attack_complexity,
                "privileges_required": privileges_required,
                "user_interaction": user_interaction,
                "scope": scope,
                "confidentiality": confidentiality,
                "integrity": integrity,
                "availability": availability,
            }

            report_id = tracer.add_vulnerability_report(
                title=title,
                content=content,
                description=description,
                severity=severity,
                impact=impact,
                target=target,
                technical_analysis=technical_analysis,
                poc_description=poc_description,
                poc_script_code=poc_script_code,
                remediation_steps=remediation_steps,
                cvss=cvss_score,
                cvss_breakdown=cvss_breakdown,
                endpoint=endpoint,
                method=method,
                cve=cve,
                code_file=code_file,
                code_before=code_before,
                code_after=code_after,
                code_diff=code_diff,
            )

            return {
                "success": True,
                "message": f"Vulnerability report '{title}' created successfully",
                "report_id": report_id,
                "severity": severity.lower(),
                "severity": severity,
                "cvss_score": cvss_score,
            }

        import logging

        logging.warning("Global tracer not available - vulnerability report not stored")
        logging.warning("Current tracer not available - vulnerability report not stored")

        return {  # noqa: TRY300
            "success": True,
            "message": f"Vulnerability report '{title}' created successfully (not persisted)",
            "warning": "Report could not be persisted - tracer unavailable",
        }

    except ImportError:
    except (ImportError, AttributeError) as e:
        return {"success": False, "message": f"Failed to create vulnerability report: {e!s}"}
    else:
        return {
            "success": True,
            "message": f"Vulnerability report '{title}' created successfully (not persisted)",
            "warning": "Report could not be persisted - tracer module unavailable",
            "message": f"Vulnerability report '{title}' created (not persisted)",
            "warning": "Report could not be persisted - tracer unavailable",
        }
    except (ValueError, TypeError) as e:
        return {"success": False, "message": f"Failed to create vulnerability report: {e!s}"}

@@ -2,8 +2,9 @@
 <tool name="create_vulnerability_report">
 <description>Create a vulnerability report for a discovered security issue.

-Use this tool to document a specific verified security vulnerability.
-Put ALL details in the content field - affected URLs, parameters, proof of concept, remediation steps, CVE references, CVSS scores, technical details, impact assessment, etc.
+IMPORTANT: This tool includes automatic LLM-based deduplication. Reports that describe the same vulnerability (same root cause on the same asset) as an existing report will be rejected.
+
+Use this tool to document a specific fully verified security vulnerability.

 DO NOT USE:
 - For general security observations without specific vulnerabilities
@@ -11,20 +12,124 @@ DO NOT USE:
 - When you don't have a proof of concept, or still not 100% sure if it's a vulnerability
-- For tracking multiple vulnerabilities (create separate reports)
+- For reporting multiple vulnerabilities at once. Use a separate create_vulnerability_report for each vulnerability.
+- To re-report a vulnerability that was already reported (even with different details)
+
+White-box requirement (when you have access to the code): You MUST include code_file, code_before, code_after, and code_diff. These must contain the actual code (before/after) and a complete, apply-able unified diff.
+
+DEDUPLICATION: If this tool returns with success=false and mentions a duplicate, DO NOT attempt to re-submit. The vulnerability has already been reported. Move on to testing other areas.
+
+Professional, customer-facing report rules (PDF-ready):
+- Do NOT include internal or system details: never mention local or absolute paths (e.g., "/workspace"), internal tools, agents, orchestrators, sandboxes, models, system prompts/instructions, connection issues, internal errors/logs/stack traces, or tester machine environment details.
+- Tone and style: formal, objective, third-person, vendor-neutral, concise. No runbooks, checklists, or engineering notes. Avoid headings like "QUICK", "Approach", or "Techniques" that read like internal guidance.
+- Use a standard penetration testing report structure per finding:
+  1) Overview
+  2) Severity and CVSS (vector only)
+  3) Affected asset(s)
+  4) Technical details
+  5) Proof of concept (repro steps plus code)
+  6) Impact
+  7) Remediation
+  8) Evidence (optional request/response excerpts, etc.) in the technical analysis field.
+- Numbered steps are allowed ONLY within the proof of concept. Elsewhere, use clear, concise paragraphs suitable for customer-facing reports.
+- Language must be precise and non-vague; avoid hedging.
 </description>
 <parameters>
 <parameter name="title" type="string" required="true">
-<description>Clear, concise title of the vulnerability</description>
+<description>Clear, specific title (e.g., "SQL Injection in /api/users Login Parameter"). But not too long. Don't mention CVE number in the title.</description>
 </parameter>
-<parameter name="content" type="string" required="true">
-<description>Complete vulnerability details including affected URLs, technical details, impact, proof of concept, remediation steps, and any relevant references. Be comprehensive and include everything relevant.</description>
+<parameter name="description" type="string" required="true">
+<description>Comprehensive description of the vulnerability and how it was discovered</description>
 </parameter>
-<parameter name="severity" type="string" required="true">
-<description>Severity level: critical, high, medium, low, or info</description>
+<parameter name="impact" type="string" required="true">
+<description>Impact assessment: what attacker can do, business risk, data at risk</description>
 </parameter>
+<parameter name="target" type="string" required="true">
+<description>Affected target: URL, domain, or Git repository</description>
+</parameter>
+<parameter name="technical_analysis" type="string" required="true">
+<description>Technical explanation of the vulnerability mechanism and root cause</description>
+</parameter>
+<parameter name="poc_description" type="string" required="true">
+<description>Step-by-step instructions to reproduce the vulnerability</description>
+</parameter>
+<parameter name="poc_script_code" type="string" required="true">
+<description>Actual proof of concept code, exploit, payload, or script that demonstrates the vulnerability. Python code.</description>
+</parameter>
+<parameter name="remediation_steps" type="string" required="true">
+<description>Specific, actionable steps to fix the vulnerability</description>
+</parameter>
+<parameter name="attack_vector" type="string" required="true">
+<description>CVSS Attack Vector - How the vulnerability is exploited:
+N = Network (remotely exploitable)
+A = Adjacent (same network segment)
+L = Local (local access required)
+P = Physical (physical access required)</description>
+</parameter>
+<parameter name="attack_complexity" type="string" required="true">
+<description>CVSS Attack Complexity - Conditions beyond attacker's control:
+L = Low (no special conditions)
+H = High (special conditions must exist)</description>
+</parameter>
+<parameter name="privileges_required" type="string" required="true">
+<description>CVSS Privileges Required - Level of privileges needed:
+N = None (no privileges needed)
+L = Low (basic user privileges)
+H = High (admin privileges)</description>
+</parameter>
+<parameter name="user_interaction" type="string" required="true">
+<description>CVSS User Interaction - Does exploit require user action:
+N = None (no user interaction needed)
+R = Required (user must perform some action)</description>
+</parameter>
+<parameter name="scope" type="string" required="true">
+<description>CVSS Scope - Can the vulnerability affect resources beyond its security scope:
+U = Unchanged (only affects the vulnerable component)
+C = Changed (affects resources beyond vulnerable component)</description>
+</parameter>
+<parameter name="confidentiality" type="string" required="true">
+<description>CVSS Confidentiality Impact - Impact to confidentiality:
+N = None (no impact)
+L = Low (some information disclosure)
+H = High (all information disclosed)</description>
+</parameter>
+<parameter name="integrity" type="string" required="true">
+<description>CVSS Integrity Impact - Impact to integrity:
+N = None (no impact)
+L = Low (data can be modified but scope is limited)
+H = High (total loss of integrity)</description>
+</parameter>
+<parameter name="availability" type="string" required="true">
+<description>CVSS Availability Impact - Impact to availability:
+N = None (no impact)
+L = Low (reduced performance or interruptions)
+H = High (total loss of availability)</description>
+</parameter>
+<parameter name="endpoint" type="string" required="false">
+<description>API endpoint(s) or URL path(s) (e.g., "/api/login") - for web vulnerabilities, or Git repository path(s) - for code vulnerabilities</description>
+</parameter>
+<parameter name="method" type="string" required="false">
+<description>HTTP method(s) (GET, POST, etc.) - for web vulnerabilities.</description>
+</parameter>
+<parameter name="cve" type="string" required="false">
+<description>CVE identifier (e.g., "CVE-2024-1234"). Make sure it's a valid CVE. Use web search or vulnerability databases to make sure it's a valid CVE number.</description>
+</parameter>
+<parameter name="code_file" type="string" required="false">
+<description>MANDATORY for white-box testing: exact affected source file path(s).</description>
+</parameter>
+<parameter name="code_before" type="string" required="false">
+<description>MANDATORY for white-box testing: actual vulnerable code snippet(s) copied verbatim from the repository.</description>
+</parameter>
+<parameter name="code_after" type="string" required="false">
+<description>MANDATORY for white-box testing: corrected code snippet(s) exactly as they should appear after the fix.</description>
+</parameter>
+<parameter name="code_diff" type="string" required="false">
+<description>MANDATORY for white-box testing: unified diff showing the code changes. Must be a complete, apply-able unified diff (git format) covering all affected files, with proper file headers, line numbers, and sufficient context.</description>
+</parameter>
 </parameters>
 <returns type="Dict[str, Any]">
-<description>Response containing success status and message</description>
+<description>Response containing:
+- On success: success=true, message, report_id, severity, cvss_score
+- On duplicate detection: success=false, message (with duplicate info), duplicate_of (ID), duplicate_title, confidence (0-1), reason (why it's a duplicate)</description>
 </returns>
 </tool>
 </tools>
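The duplicate-detection contract in the returns section above gives the caller everything needed to back off. A minimal sketch of how an agent might branch on that payload (the field values here are invented for illustration; only the field names come from the tool's documented return shape):

```python
# Hypothetical rejection payload, matching the documented duplicate response shape.
response = {
    "success": False,
    "message": "Potential duplicate of 'SQL Injection in /api/users' (id=a1b2c3d4...).",
    "duplicate_of": "a1b2c3d4-0000-0000-0000-000000000000",
    "duplicate_title": "SQL Injection in /api/users",
    "confidence": 0.92,
    "reason": "Same root cause on the same endpoint.",
}

if not response["success"] and "duplicate_of" in response:
    # Per the tool description: do not re-submit; move on to other test areas.
    action = f"skip (duplicate of {response['duplicate_title']})"
else:
    action = "persisted"

print(action)
```

Branching on the presence of `duplicate_of` rather than parsing `message` keeps the caller robust if the human-readable text changes.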

@@ -95,6 +95,12 @@
 <parameter=command>ls -la</parameter>
 </function>

+<function=terminal_execute>
+<parameter=command>cd /workspace
+pwd
+ls -la</parameter>
+</function>
+
 # Run a command with custom timeout
 <function=terminal_execute>
 <parameter=command>npm install</parameter>
0   strix/utils/__init__.py   Normal file
13  strix/utils/resource_paths.py   Normal file
@@ -0,0 +1,13 @@
+import sys
+from pathlib import Path
+
+
+def get_strix_resource_path(*parts: str) -> Path:
+    frozen_base = getattr(sys, "_MEIPASS", None)
+    if frozen_base:
+        base = Path(frozen_base) / "strix"
+        if base.exists():
+            return base.joinpath(*parts)
+
+    base = Path(__file__).resolve().parent.parent
+    return base.joinpath(*parts)
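The new helper handles two deployment modes: a PyInstaller one-file build, where the bundle is unpacked to a temporary directory exposed as `sys._MEIPASS`, and a normal source checkout, where resources live under the package directory. A standalone sketch of the same logic (inlined so it runs without the strix package installed; the resource names are made up):

```python
import sys
from pathlib import Path


def get_strix_resource_path(*parts: str) -> Path:
    # In a PyInstaller one-file build, sys._MEIPASS points at the unpack dir.
    frozen_base = getattr(sys, "_MEIPASS", None)
    if frozen_base:
        base = Path(frozen_base) / "strix"
        if base.exists():
            return base.joinpath(*parts)

    # Source checkout: resolve relative to the package root.
    base = Path(__file__).resolve().parent.parent
    return base.joinpath(*parts)


# Outside a frozen build this resolves under the package root:
p = get_strix_resource_path("prompts", "system.jinja")
print(p.parts[-2:])  # ('prompts', 'system.jinja')
```

Using `getattr(sys, "_MEIPASS", None)` rather than a direct attribute access keeps the helper safe in unfrozen interpreters, where the attribute does not exist.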