32 Commits

Author SHA1 Message Date
Ahmed Allam
9825fb46ec chore: Bump version for 0.4.0 release 2025-11-25 20:18:44 +04:00
Alexander De Battista Kvamme
c0e547928e Real-time display panel for agent stats (#134)
Co-authored-by: Ahmed Allam <ahmed39652003@gmail.com>
2025-11-25 12:06:20 +00:00
Trusthoodies
78d0148d58 Add open redirect, subdomain takeover, and info disclosure prompt modules (#132)
Co-authored-by: Ahmed Allam <ahmed39652003@gmail.com>
2025-11-25 10:32:55 +00:00
dependabot[bot]
eebb76de3b chore(deps): bump pypdf from 6.1.3 to 6.4.0
Bumps [pypdf](https://github.com/py-pdf/pypdf) from 6.1.3 to 6.4.0.
- [Release notes](https://github.com/py-pdf/pypdf/releases)
- [Changelog](https://github.com/py-pdf/pypdf/blob/main/CHANGELOG.md)
- [Commits](https://github.com/py-pdf/pypdf/compare/6.1.3...6.4.0)

---
updated-dependencies:
- dependency-name: pypdf
  dependency-version: 6.4.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-25 12:44:38 +04:00
Ahmed Allam
2ae1b3ddd1 Update README 2025-11-23 22:29:44 +04:00
Ahmed Allam
a11cd09a93 feat: support file-based instructions for detailed test configuration 2025-11-23 00:46:37 +04:00
Ahmed Allam
68ebdb2b6d feat: enhance run name generation to include target information 2025-11-22 22:54:07 +04:00
Ahmed Allam
5befb32318 feat: implement incremental pentest data persistence 2025-11-22 22:54:07 +04:00
cyberseall
86e6ed49bb feat(llm): make LLM request queue rate limits configurable and more conservative
Co-authored-by: Ahmed Allam <ahmed39652003@gmail.com>
2025-11-22 17:07:43 +00:00
Ahmed Allam
0c811845f1 docs: update README 2025-11-21 23:07:11 +04:00
Ahmed Allam
383d53c7a9 feat(agent): implement agent identity guidline and improve system prompt 2025-11-15 16:21:05 +04:00
Ahmed Allam
478bf5d4d3 refactor(llm): remove unused temperature parameter from LLMConfig 2025-11-15 12:44:40 +04:00
Ahmed Allam
d1f7741965 feat(llm): enhance model features handling with pattern matching 2025-11-15 12:43:43 +04:00
Ahmed Allam
821929cd3e fix(agent): increase waiting time threshold from 120 to 600 seconds 2025-11-15 12:39:46 +04:00
Ahmed Allam
5de16d2953 chore: Bump LiteLLM version 2025-11-15 12:37:22 +04:00
Ahmed Allam
6a2a62c121 chore: Fix formatting in README.md 2025-11-14 16:07:54 +00:00
Ahmed Allam
426dd27454 chore: Minor readme tweaks. Bump version for 0.3.4 release 2025-11-14 20:02:48 +04:00
Mark Percival
cedc65409e fix: link 2025-11-14 20:02:48 +04:00
Mark Percival
72d5a73386 Chore: Update README 2025-11-14 20:02:48 +04:00
Ahmed Allam
dab69af033 fix(runtime): correct DOCKER_HOST parsing for sandbox URL 2025-11-14 02:41:00 +04:00
Ahmed Allam
6abb53dc02 feat: support scanning IP addresses 2025-11-14 01:38:58 +04:00
Ahmed Allam
f1d2961779 Update README 2025-11-12 19:29:01 +04:00
purpl3horse
2b7a8e3ee7 Update README.md
Instruction argument was written in plural in the readme (a typo)
2025-11-12 19:03:27 +04:00
Ahmed Allam
3e7466a533 chore: Bump version for 0.3.3 release 2025-11-12 18:58:03 +04:00
Ahmed Allam
1abfb360e4 feat: add configurable timeout for LLM requests 2025-11-12 18:58:03 +04:00
Ahmed Allam
795ed02955 docs: update README with recommended models 2025-11-12 15:01:15 +04:00
Alexei Macheret Artur
2cb0c31897 chore(deps): bump starlette from 0.46.2 to 0.49.1 (#75)
Bumps [starlette](https://github.com/Kludex/starlette) from 0.46.2 to 0.49.1.
- [Release notes](https://github.com/Kludex/starlette/releases)
- [Changelog](https://github.com/Kludex/starlette/blob/main/docs/release-notes.md)
- [Commits](https://github.com/Kludex/starlette/compare/0.46.2...0.49.1)

---
updated-dependencies:
- dependency-name: starlette
  dependency-version: 0.49.1
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-10 14:19:18 +04:00
m4ki3lf0
1c8780cf81 Update Readme
Co-authored-by: m4ki3lf0 <m4ki3lf0@git.com>
Co-authored-by: Ahmed Allam <ahmed39652003@gmail.com>
2025-11-10 09:49:37 +00:00
Ahmed Allam
b6d9d941cf Update README 2025-11-08 15:07:53 +04:00
Ahmed Allam
edd628bbc1 Chore: fix discord link in readme 2025-11-07 18:03:47 +04:00
Ahmed Allam
d76c7c55b2 Fix: update litellm dependency version 2025-11-05 12:40:44 +02:00
Ahmed Allam
b5ddba3867 docs: Update README 2025-11-05 01:21:48 +02:00
27 changed files with 1373 additions and 286 deletions

.github/logo.png vendored (new binary file, 3.7 KiB)
.gitignore vendored (1 line changed)

@@ -79,6 +79,7 @@ logs/
tensorboard/
# Agent execution traces
strix_runs/
agent_runs/
# Misc


@@ -101,7 +101,7 @@ We welcome feature ideas! Please:
## 🤝 Community
- **Discord**: [Join our community](https://discord.gg/J48Fzuh7)
- **Discord**: [Join our community](https://discord.gg/YjKFvEZSdZ)
- **Issues**: [GitHub Issues](https://github.com/usestrix/strix/issues)
## ✨ Recognition
@@ -113,4 +113,4 @@ We value all contributions! Contributors will be:
---
**Questions?** Reach out on [Discord](https://discord.gg/J48Fzuh7) or create an issue. We're here to help!
**Questions?** Reach out on [Discord](https://discord.gg/YjKFvEZSdZ) or create an issue. We're here to help!

README.md (199 lines changed)

@@ -1,82 +1,120 @@
<div align="center">
<p align="center">
<a href="https://usestrix.com/">
<img src=".github/logo.png" width="150" alt="Strix Logo">
</a>
</p>
# Strix
<h1 align="center">Strix</h1>
### Open-source AI hackers for your apps
[![Strix](https://img.shields.io/badge/Strix-usestrix.com-1a1a1a.svg)](https://usestrix.com)
[![Apache 2.0](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE)
[![Discord](https://img.shields.io/badge/Discord-join-5865F2?logo=discord&logoColor=white)](https://discord.gg/J48Fzuh7)
[![PyPI Downloads](https://static.pepy.tech/personalized-badge/strix-agent?period=total&units=INTERNATIONAL_SYSTEM&left_color=GRAY&right_color=BLACK&left_text=Downloads)](https://pepy.tech/projects/strix-agent)
[![GitHub stars](https://img.shields.io/github/stars/usestrix/strix.svg?style=social&label=Star)](https://github.com/usestrix/strix)
</div>
<h2 align="center">Open-source AI Hackers to secure your Apps</h2>
<div align="center">
<img src=".github/screenshot.png" alt="Strix Demo" width="800" style="border-radius: 16px; box-shadow: 0 20px 40px rgba(0, 0, 0, 0.3), 0 0 0 1px rgba(255, 255, 255, 0.1), inset 0 1px 0 rgba(255, 255, 255, 0.2); transform: perspective(1000px) rotateX(2deg); transition: transform 0.3s ease;">
[![Python](https://img.shields.io/pypi/pyversions/strix-agent?color=3776AB)](https://pypi.org/project/strix-agent/)
[![PyPI](https://img.shields.io/pypi/v/strix-agent?color=10b981)](https://pypi.org/project/strix-agent/)
[![PyPI Downloads](https://static.pepy.tech/personalized-badge/strix-agent?period=total&units=INTERNATIONAL_SYSTEM&left_color=GREY&right_color=RED&left_text=Downloads)](https://pepy.tech/projects/strix-agent)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE)
[![GitHub Stars](https://img.shields.io/github/stars/usestrix/strix)](https://github.com/usestrix/strix)
[![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?&logo=discord&logoColor=white)](https://discord.gg/YjKFvEZSdZ)
[![Website](https://img.shields.io/badge/Website-usestrix.com-2d3748.svg)](https://usestrix.com)
<a href="https://trendshift.io/repositories/15362" target="_blank"><img src="https://trendshift.io/api/badge/repositories/15362" alt="usestrix%2Fstrix | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</div>
<br>
<div align="center">
<img src=".github/screenshot.png" alt="Strix Demo" width="800" style="border-radius: 16px;">
</div>
<br>
> [!TIP]
> **New!** Strix now integrates seamlessly with GitHub Actions and CI/CD pipelines. Automatically scan for vulnerabilities on every pull request and block insecure code before it reaches production!
---
## 🦉 Strix Overview
Strix are autonomous AI agents that act just like real hackers - they run your code dynamically, find vulnerabilities, and validate them through actual exploitation. Built for developers and security teams who need fast, accurate security testing without the overhead of manual pentesting or the false positives of static analysis tools.
Strix are autonomous AI agents that act just like real hackers - they run your code dynamically, find vulnerabilities, and validate them through actual proof-of-concepts. Built for developers and security teams who need fast, accurate security testing without the overhead of manual pentesting or the false positives of static analysis tools.
- **Full hacker toolkit** out of the box
- **Teams of agents** that collaborate and scale
- **Real validation** via exploitation and PoC, not false positives
- **Developer-first** CLI with actionable reports
- **Autofix & reporting** to accelerate remediation
**Key Capabilities:**
- 🔧 **Full hacker toolkit** out of the box
- 🤝 **Teams of agents** that collaborate and scale
- **Real validation** with PoCs, not false positives
- 💻 **Developer-first** CLI with actionable reports
- 🔄 **Autofix & reporting** to accelerate remediation
## 🎯 Use Cases
- **Application Security Testing** - Detect and validate critical vulnerabilities in your applications
- **Rapid Penetration Testing** - Get penetration tests done in hours, not weeks, with compliance reports
- **Bug Bounty Automation** - Automate bug bounty research and generate PoCs for faster reporting
- **CI/CD Integration** - Run tests in CI/CD to block vulnerabilities before reaching production
---
### 🎯 Use Cases
## 🚀 Quick Start
- Detect and validate critical vulnerabilities in your applications.
- Get penetration tests done in hours, not weeks, with compliance reports.
- Automate bug bounty research and generate PoCs for faster reporting.
- Run tests in CI/CD to block vulnerabilities before reaching production.
---
### 🚀 Quick Start
Prerequisites:
**Prerequisites:**
- Docker (running)
- Python 3.12+
- An LLM provider key (or a local LLM)
- An LLM provider key (e.g. [get OpenAI API key](https://platform.openai.com/api-keys) or use a local LLM)
### Installation & First Scan
```bash
# Install
# Install Strix
pipx install strix-agent
# Configure AI provider
# Configure your AI provider
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="your-api-key"
# Run security assessment
# Run your first security assessment
strix --target ./app-directory
```
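The prerequisites above can be sanity-checked before the first run. A minimal sketch (illustrative only, not part of the Strix CLI; the version test mirrors the Python 3.12+ requirement):

```shell
# Pre-flight check for the prerequisites listed above (illustrative sketch).
command -v docker >/dev/null 2>&1 && echo "docker: found" || echo "docker: missing"
python3 -c 'import sys; print("python: ok" if sys.version_info >= (3, 12) else "python: too old")'
```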
First run pulls the sandbox Docker image. Results are saved under `agent_runs/<run-name>`.
> [!NOTE]
> First run automatically pulls the sandbox Docker image. Results are saved to `strix_runs/<run-name>`
### ☁️ Cloud Hosted
## ☁️ Run Strix in Cloud
Want to skip the setup? Try our cloud-hosted version: **[usestrix.com](https://usestrix.com)**
Want to skip the local setup, API keys, and unpredictable LLM costs? Run the hosted cloud version of Strix at **[app.usestrix.com](https://app.usestrix.com)**.
Launch a scan in just a few minutes—no setup or configuration required—and you'll get:
- **A full pentest report** with validated findings and clear remediation steps
- **Shareable dashboards** your team can use to track fixes over time
- **CI/CD and GitHub integrations** to block risky changes before production
- **Continuous monitoring** so new vulnerabilities are caught quickly
[**Run your first pentest now →**](https://app.usestrix.com)
---
## ✨ Features
### 🛠️ Agentic Security Tools
- **🔌 Full HTTP Proxy** - Full request/response manipulation and analysis
- **🌐 Browser Automation** - Multi-tab browser for testing XSS, CSRF, and auth flows
- **💻 Terminal Environments** - Interactive shells for command execution and testing
- **🐍 Python Runtime** - Custom exploit development and validation
- **🔍 Reconnaissance** - Automated OSINT and attack surface mapping
- **📁 Code Analysis** - Static and dynamic analysis capabilities
- **📝 Knowledge Management** - Structured findings and attack documentation
Strix agents come equipped with a comprehensive security testing toolkit:
- **Full HTTP Proxy** - Full request/response manipulation and analysis
- **Browser Automation** - Multi-tab browser for testing XSS, CSRF, and auth flows
- **Terminal Environments** - Interactive shells for command execution and testing
- **Python Runtime** - Custom exploit development and validation
- **Reconnaissance** - Automated OSINT and attack surface mapping
- **Code Analysis** - Static and dynamic analysis capabilities
- **Knowledge Management** - Structured findings and attack documentation
### 🎯 Comprehensive Vulnerability Detection
Strix can identify and validate a wide range of security vulnerabilities:
- **Access Control** - IDOR, privilege escalation, auth bypass
- **Injection Attacks** - SQL, NoSQL, command injection
- **Server-Side** - SSRF, XXE, deserialization flaws
@@ -87,55 +125,48 @@ Want to skip the setup? Try our cloud-hosted version: **[usestrix.com](https://u
### 🕸️ Graph of Agents
Advanced multi-agent orchestration for comprehensive security testing:
- **Distributed Workflows** - Specialized agents for different attacks and assets
- **Scalable Testing** - Parallel execution for fast comprehensive coverage
- **Dynamic Coordination** - Agents collaborate and share discoveries
---
## 💻 Usage Examples
### Basic Usage
```bash
# Local codebase analysis
# Scan a local codebase
strix --target ./app-directory
# Repository security review
# Security review of a GitHub repository
strix --target https://github.com/org/repo
# Web application assessment
# Black-box web application assessment
strix --target https://your-app.com
# Multi-target white-box testing (source code + deployed app)
strix -t https://github.com/org/app -t https://your-app.com
# Test multiple environments simultaneously
strix -t https://dev.your-app.com -t https://staging.your-app.com -t https://prod.your-app.com
# Focused testing with instructions
strix --target api.your-app.com --instruction "Prioritize authentication and authorization testing"
# Testing with credentials
strix --target https://your-app.com --instruction "Test with credentials: testuser/testpass. Focus on privilege escalation and access control bypasses."
```
### ⚙️ Configuration
### Advanced Testing Scenarios
```bash
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="your-api-key"
# Grey-box authenticated testing
strix --target https://your-app.com --instruction "Perform authenticated testing using credentials: user:pass"
# Optional
export LLM_API_BASE="your-api-base-url" # if using a local model, e.g. Ollama, LMStudio
export PERPLEXITY_API_KEY="your-api-key" # for search capabilities
# Multi-target testing (source code + deployed app)
strix -t https://github.com/org/app -t https://your-app.com
# Focused testing with custom instructions
strix --target api.your-app.com --instruction "Focus on business logic flaws and IDOR vulnerabilities"
```
[📚 View supported AI models](https://docs.litellm.ai/docs/providers)
### 🤖 Headless Mode
Run Strix programmatically without the interactive UI using the `-n/--non-interactive` flag—perfect for servers and automated jobs. The CLI prints real-time vulnerability findings and the final report before exiting, and exits with a non-zero code when vulnerabilities are found.
```bash
strix -n --target https://your-app.com --instruction "Focus on authentication and authorization vulnerabilities"
strix -n --target https://your-app.com
```
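Because headless mode signals findings through its exit status, that status can gate an automated job. A minimal sketch, assuming the non-zero-on-findings behavior described above (`run_scan` is an illustrative helper, not part of Strix; `true`/`false` stand in for a clean and a failing scan):

```shell
# Translate a scan command's exit status into a CI verdict.
# run_scan is an illustrative wrapper, not part of Strix.
run_scan() {
  if "$@"; then
    echo "scan clean"
  else
    echo "vulnerabilities found, blocking merge"
    return 1
  fi
}

# Real usage would be: run_scan strix -n --target https://your-app.com
run_scan true                   # a scan that found nothing
run_scan false || exit_code=$?  # a scan that found vulnerabilities
echo "scan exit code: ${exit_code:-0}"
```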
### 🔄 CI/CD (GitHub Actions)
@@ -165,26 +196,18 @@ jobs:
run: strix -n -t ./
```
## 🏆 Enterprise Platform
### ⚙️ Configuration
Our managed platform provides:
```bash
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="your-api-key"
- **📈 Executive Dashboards**
- **🧠 Custom Fine-Tuned Models**
- **⚙️ CI/CD Integration**
- **🔍 Large-Scale Scanning**
- **🔌 Third-Party Integrations**
- **🎯 Enterprise Support**
# Optional
export LLM_API_BASE="your-api-base-url" # if using a local model, e.g. Ollama, LMStudio
export PERPLEXITY_API_KEY="your-api-key" # for search capabilities
```
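For a fully local setup, the optional `LLM_API_BASE` pairs with a local provider prefix. A hedged sketch: the model name and port below are illustrative Ollama defaults, not values taken from this repository; check the LiteLLM provider list for your provider's exact prefix.

```shell
# Illustrative local-model configuration via Ollama (assumed defaults).
export STRIX_LLM="ollama/llama3"             # provider/model prefix per LiteLLM conventions
export LLM_API_BASE="http://localhost:11434" # Ollama's default local endpoint
```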
[**Get Enterprise Demo →**](https://usestrix.com)
## 🔒 Security Architecture
- **Container Isolation** - All testing in sandboxed Docker environments
- **Local Processing** - Testing runs locally, no data sent to external services
> [!WARNING]
> Only test systems you own or have permission to test. You are responsible for using Strix ethically and legally.
[OpenAI's GPT-5](https://openai.com/api/) (`openai/gpt-5`) and [Anthropic's Claude Sonnet 4.5](https://claude.com/platform/api) (`anthropic/claude-sonnet-4-5`) are the recommended models for best results with Strix. We also support many [other options](https://docs.litellm.ai/docs/providers), including cloud and local models, though their performance and reliability may vary.
## 🤝 Contributing
@@ -197,18 +220,22 @@ See our [Contributing Guide](CONTRIBUTING.md) for details on:
- Submitting pull requests
- Code style guidelines
### Prompt Modules Collection
Help expand our collection of specialized prompt modules for AI agents:
- Advanced testing techniques for vulnerabilities, frameworks, and technologies
- See [Prompt Modules Documentation](strix/prompts/README.md) for guidelines
- Submit via [pull requests](https://github.com/usestrix/strix/pulls) or [issues](https://github.com/usestrix/strix/issues)
## 👥 Join Our Community
Have questions? Found a bug? Want to contribute? **[Join our Discord!](https://discord.gg/YjKFvEZSdZ)**
## 🌟 Support the Project
**Love Strix?** Give us a ⭐ on GitHub!
## 👥 Join Our Community
Have questions? Found a bug? Want to contribute? **[Join our Discord!](https://discord.gg/J48Fzuh7)**
> [!WARNING]
> Only test apps you own or have permission to test. You are responsible for using Strix ethically and legally.
</div>

poetry.lock generated (193 lines changed)

@@ -148,6 +148,18 @@ files = [
{file = "alabaster-1.0.0.tar.gz", hash = "sha256:c00dca57bca26fa62a6d7d0a9fcce65f3e026e9bfe33e9c538fd3fbb2144fd9e"},
]
[[package]]
name = "annotated-doc"
version = "0.0.3"
description = "Document parameters, class attributes, return types, and variables inline, with Annotated."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "annotated_doc-0.0.3-py3-none-any.whl", hash = "sha256:348ec6664a76f1fd3be81f43dffbee4c7e8ce931ba71ec67cc7f4ade7fbbb580"},
{file = "annotated_doc-0.0.3.tar.gz", hash = "sha256:e18370014c70187422c33e945053ff4c286f453a984eba84d0dbfa0c935adeda"},
]
[[package]]
name = "annotated-types"
version = "0.7.0"
@@ -1237,24 +1249,26 @@ tests = ["asttokens (>=2.1.0)", "coverage", "coverage-enable-subprocess", "ipyth
[[package]]
name = "fastapi"
version = "0.115.14"
version = "0.121.0"
description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "fastapi-0.115.14-py3-none-any.whl", hash = "sha256:6c0c8bf9420bd58f565e585036d971872472b4f7d3f6c73b698e10cffdefb3ca"},
{file = "fastapi-0.115.14.tar.gz", hash = "sha256:b1de15cdc1c499a4da47914db35d0e4ef8f1ce62b624e94e0e5824421df99739"},
{file = "fastapi-0.121.0-py3-none-any.whl", hash = "sha256:8bdf1b15a55f4e4b0d6201033da9109ea15632cb76cf156e7b8b4019f2172106"},
{file = "fastapi-0.121.0.tar.gz", hash = "sha256:06663356a0b1ee93e875bbf05a31fb22314f5bed455afaaad2b2dad7f26e98fa"},
]
[package.dependencies]
annotated-doc = ">=0.0.2"
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<2.0.0 || >2.0.0,<2.0.1 || >2.0.1,<2.1.0 || >2.1.0,<3.0.0"
starlette = ">=0.40.0,<0.47.0"
starlette = ">=0.40.0,<0.50.0"
typing-extensions = ">=4.8.0"
[package.extras]
all = ["email-validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.5)", "httpx (>=0.23.0)", "itsdangerous (>=1.1.0)", "jinja2 (>=3.1.5)", "orjson (>=3.2.1)", "pydantic-extra-types (>=2.0.0)", "pydantic-settings (>=2.0.0)", "python-multipart (>=0.0.18)", "pyyaml (>=5.3.1)", "ujson (>=4.0.1,!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0)", "uvicorn[standard] (>=0.12.0)"]
standard = ["email-validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.5)", "httpx (>=0.23.0)", "jinja2 (>=3.1.5)", "python-multipart (>=0.0.18)", "uvicorn[standard] (>=0.12.0)"]
all = ["email-validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.8)", "httpx (>=0.23.0,<1.0.0)", "itsdangerous (>=1.1.0)", "jinja2 (>=3.1.5)", "orjson (>=3.2.1)", "pydantic-extra-types (>=2.0.0)", "pydantic-settings (>=2.0.0)", "python-multipart (>=0.0.18)", "pyyaml (>=5.3.1)", "ujson (>=4.0.1,!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0)", "uvicorn[standard] (>=0.12.0)"]
standard = ["email-validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.8)", "httpx (>=0.23.0,<1.0.0)", "jinja2 (>=3.1.5)", "python-multipart (>=0.0.18)", "uvicorn[standard] (>=0.12.0)"]
standard-no-fastapi-cloud-cli = ["email-validator (>=2.0.0)", "fastapi-cli[standard-no-fastapi-cloud-cli] (>=0.0.8)", "httpx (>=0.23.0,<1.0.0)", "jinja2 (>=3.1.5)", "python-multipart (>=0.0.18)", "uvicorn[standard] (>=0.12.0)"]
[[package]]
name = "fastapi-sso"
@@ -1274,6 +1288,94 @@ httpx = ">=0.23.0"
oauthlib = ">=3.1.0"
pydantic = {version = ">=1.8.0", extras = ["email"]}
[[package]]
name = "fastuuid"
version = "0.14.0"
description = "Python bindings to Rust's UUID library."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "fastuuid-0.14.0-cp310-cp310-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:6e6243d40f6c793c3e2ee14c13769e341b90be5ef0c23c82fa6515a96145181a"},
{file = "fastuuid-0.14.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:13ec4f2c3b04271f62be2e1ce7e95ad2dd1cf97e94503a3760db739afbd48f00"},
{file = "fastuuid-0.14.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b2fdd48b5e4236df145a149d7125badb28e0a383372add3fbaac9a6b7a394470"},
{file = "fastuuid-0.14.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f74631b8322d2780ebcf2d2d75d58045c3e9378625ec51865fe0b5620800c39d"},
{file = "fastuuid-0.14.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:83cffc144dc93eb604b87b179837f2ce2af44871a7b323f2bfed40e8acb40ba8"},
{file = "fastuuid-0.14.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1a771f135ab4523eb786e95493803942a5d1fc1610915f131b363f55af53b219"},
{file = "fastuuid-0.14.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4edc56b877d960b4eda2c4232f953a61490c3134da94f3c28af129fb9c62a4f6"},
{file = "fastuuid-0.14.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:bcc96ee819c282e7c09b2eed2b9bd13084e3b749fdb2faf58c318d498df2efbe"},
{file = "fastuuid-0.14.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:7a3c0bca61eacc1843ea97b288d6789fbad7400d16db24e36a66c28c268cfe3d"},
{file = "fastuuid-0.14.0-cp310-cp310-win32.whl", hash = "sha256:7f2f3efade4937fae4e77efae1af571902263de7b78a0aee1a1653795a093b2a"},
{file = "fastuuid-0.14.0-cp310-cp310-win_amd64.whl", hash = "sha256:ae64ba730d179f439b0736208b4c279b8bc9c089b102aec23f86512ea458c8a4"},
{file = "fastuuid-0.14.0-cp311-cp311-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:73946cb950c8caf65127d4e9a325e2b6be0442a224fd51ba3b6ac44e1912ce34"},
{file = "fastuuid-0.14.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:12ac85024637586a5b69645e7ed986f7535106ed3013640a393a03e461740cb7"},
{file = "fastuuid-0.14.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:05a8dde1f395e0c9b4be515b7a521403d1e8349443e7641761af07c7ad1624b1"},
{file = "fastuuid-0.14.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:09378a05020e3e4883dfdab438926f31fea15fd17604908f3d39cbeb22a0b4dc"},
{file = "fastuuid-0.14.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bbb0c4b15d66b435d2538f3827f05e44e2baafcc003dd7d8472dc67807ab8fd8"},
{file = "fastuuid-0.14.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:cd5a7f648d4365b41dbf0e38fe8da4884e57bed4e77c83598e076ac0c93995e7"},
{file = "fastuuid-0.14.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:c0a94245afae4d7af8c43b3159d5e3934c53f47140be0be624b96acd672ceb73"},
{file = "fastuuid-0.14.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:2b29e23c97e77c3a9514d70ce343571e469098ac7f5a269320a0f0b3e193ab36"},
{file = "fastuuid-0.14.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1e690d48f923c253f28151b3a6b4e335f2b06bf669c68a02665bc150b7839e94"},
{file = "fastuuid-0.14.0-cp311-cp311-win32.whl", hash = "sha256:a6f46790d59ab38c6aa0e35c681c0484b50dc0acf9e2679c005d61e019313c24"},
{file = "fastuuid-0.14.0-cp311-cp311-win_amd64.whl", hash = "sha256:e150eab56c95dc9e3fefc234a0eedb342fac433dacc273cd4d150a5b0871e1fa"},
{file = "fastuuid-0.14.0-cp312-cp312-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:77e94728324b63660ebf8adb27055e92d2e4611645bf12ed9d88d30486471d0a"},
{file = "fastuuid-0.14.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:caa1f14d2102cb8d353096bc6ef6c13b2c81f347e6ab9d6fbd48b9dea41c153d"},
{file = "fastuuid-0.14.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:d23ef06f9e67163be38cece704170486715b177f6baae338110983f99a72c070"},
{file = "fastuuid-0.14.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0c9ec605ace243b6dbe3bd27ebdd5d33b00d8d1d3f580b39fdd15cd96fd71796"},
{file = "fastuuid-0.14.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:808527f2407f58a76c916d6aa15d58692a4a019fdf8d4c32ac7ff303b7d7af09"},
{file = "fastuuid-0.14.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2fb3c0d7fef6674bbeacdd6dbd386924a7b60b26de849266d1ff6602937675c8"},
{file = "fastuuid-0.14.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:ab3f5d36e4393e628a4df337c2c039069344db5f4b9d2a3c9cea48284f1dd741"},
{file = "fastuuid-0.14.0-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:b9a0ca4f03b7e0b01425281ffd44e99d360e15c895f1907ca105854ed85e2057"},
{file = "fastuuid-0.14.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:3acdf655684cc09e60fb7e4cf524e8f42ea760031945aa8086c7eae2eeeabeb8"},
{file = "fastuuid-0.14.0-cp312-cp312-win32.whl", hash = "sha256:9579618be6280700ae36ac42c3efd157049fe4dd40ca49b021280481c78c3176"},
{file = "fastuuid-0.14.0-cp312-cp312-win_amd64.whl", hash = "sha256:d9e4332dc4ba054434a9594cbfaf7823b57993d7d8e7267831c3e059857cf397"},
{file = "fastuuid-0.14.0-cp313-cp313-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:77a09cb7427e7af74c594e409f7731a0cf887221de2f698e1ca0ebf0f3139021"},
{file = "fastuuid-0.14.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:9bd57289daf7b153bfa3e8013446aa144ce5e8c825e9e366d455155ede5ea2dc"},
{file = "fastuuid-0.14.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:ac60fc860cdf3c3f327374db87ab8e064c86566ca8c49d2e30df15eda1b0c2d5"},
{file = "fastuuid-0.14.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ab32f74bd56565b186f036e33129da77db8be09178cd2f5206a5d4035fb2a23f"},
{file = "fastuuid-0.14.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:33e678459cf4addaedd9936bbb038e35b3f6b2061330fd8f2f6a1d80414c0f87"},
{file = "fastuuid-0.14.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1e3cc56742f76cd25ecb98e4b82a25f978ccffba02e4bdce8aba857b6d85d87b"},
{file = "fastuuid-0.14.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:cb9a030f609194b679e1660f7e32733b7a0f332d519c5d5a6a0a580991290022"},
{file = "fastuuid-0.14.0-cp313-cp313-musllinux_1_1_i686.whl", hash = "sha256:09098762aad4f8da3a888eb9ae01c84430c907a297b97166b8abc07b640f2995"},
{file = "fastuuid-0.14.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:1383fff584fa249b16329a059c68ad45d030d5a4b70fb7c73a08d98fd53bcdab"},
{file = "fastuuid-0.14.0-cp313-cp313-win32.whl", hash = "sha256:a0809f8cc5731c066c909047f9a314d5f536c871a7a22e815cc4967c110ac9ad"},
{file = "fastuuid-0.14.0-cp313-cp313-win_amd64.whl", hash = "sha256:0df14e92e7ad3276327631c9e7cec09e32572ce82089c55cb1bb8df71cf394ed"},
{file = "fastuuid-0.14.0-cp314-cp314-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:b852a870a61cfc26c884af205d502881a2e59cc07076b60ab4a951cc0c94d1ad"},
{file = "fastuuid-0.14.0-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:c7502d6f54cd08024c3ea9b3514e2d6f190feb2f46e6dbcd3747882264bb5f7b"},
{file = "fastuuid-0.14.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1ca61b592120cf314cfd66e662a5b54a578c5a15b26305e1b8b618a6f22df714"},
{file = "fastuuid-0.14.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aa75b6657ec129d0abded3bec745e6f7ab642e6dba3a5272a68247e85f5f316f"},
{file = "fastuuid-0.14.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a8a0dfea3972200f72d4c7df02c8ac70bad1bb4c58d7e0ec1e6f341679073a7f"},
{file = "fastuuid-0.14.0-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1bf539a7a95f35b419f9ad105d5a8a35036df35fdafae48fb2fd2e5f318f0d75"},
{file = "fastuuid-0.14.0-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:9a133bf9cc78fdbd1179cb58a59ad0100aa32d8675508150f3658814aeefeaa4"},
{file = "fastuuid-0.14.0-cp314-cp314-musllinux_1_1_i686.whl", hash = "sha256:f54d5b36c56a2d5e1a31e73b950b28a0d83eb0c37b91d10408875a5a29494bad"},
{file = "fastuuid-0.14.0-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:ec27778c6ca3393ef662e2762dba8af13f4ec1aaa32d08d77f71f2a70ae9feb8"},
{file = "fastuuid-0.14.0-cp314-cp314-win32.whl", hash = "sha256:e23fc6a83f112de4be0cc1990e5b127c27663ae43f866353166f87df58e73d06"},
{file = "fastuuid-0.14.0-cp314-cp314-win_amd64.whl", hash = "sha256:df61342889d0f5e7a32f7284e55ef95103f2110fee433c2ae7c2c0956d76ac8a"},
{file = "fastuuid-0.14.0-cp38-cp38-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:47c821f2dfe95909ead0085d4cb18d5149bca704a2b03e03fb3f81a5202d8cea"},
{file = "fastuuid-0.14.0-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:3964bab460c528692c70ab6b2e469dd7a7b152fbe8c18616c58d34c93a6cf8d4"},
{file = "fastuuid-0.14.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:c501561e025b7aea3508719c5801c360c711d5218fc4ad5d77bf1c37c1a75779"},
{file = "fastuuid-0.14.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2dce5d0756f046fa792a40763f36accd7e466525c5710d2195a038f93ff96346"},
{file = "fastuuid-0.14.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:193ca10ff553cf3cc461572da83b5780fc0e3eea28659c16f89ae5202f3958d4"},
{file = "fastuuid-0.14.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0737606764b29785566f968bd8005eace73d3666bd0862f33a760796e26d1ede"},
{file = "fastuuid-0.14.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:e0976c0dff7e222513d206e06341503f07423aceb1db0b83ff6851c008ceee06"},
{file = "fastuuid-0.14.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:6fbc49a86173e7f074b1a9ec8cf12ca0d54d8070a85a06ebf0e76c309b84f0d0"},
{file = "fastuuid-0.14.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:de01280eabcd82f7542828ecd67ebf1551d37203ecdfd7ab1f2e534edb78d505"},
{file = "fastuuid-0.14.0-cp38-cp38-win32.whl", hash = "sha256:af5967c666b7d6a377098849b07f83462c4fedbafcf8eb8bc8ff05dcbe8aa209"},
{file = "fastuuid-0.14.0-cp38-cp38-win_amd64.whl", hash = "sha256:c3091e63acf42f56a6f74dc65cfdb6f99bfc79b5913c8a9ac498eb7ca09770a8"},
{file = "fastuuid-0.14.0-cp39-cp39-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:2ec3d94e13712a133137b2805073b65ecef4a47217d5bac15d8ac62376cefdb4"},
{file = "fastuuid-0.14.0-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:139d7ff12bb400b4a0c76be64c28cbe2e2edf60b09826cbfd85f33ed3d0bbe8b"},
{file = "fastuuid-0.14.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:d55b7e96531216fc4f071909e33e35e5bfa47962ae67d9e84b00a04d6e8b7173"},
{file = "fastuuid-0.14.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c0eb25f0fd935e376ac4334927a59e7c823b36062080e2e13acbaf2af15db836"},
{file = "fastuuid-0.14.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:089c18018fdbdda88a6dafd7d139f8703a1e7c799618e33ea25eb52503d28a11"},
{file = "fastuuid-0.14.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2fc37479517d4d70c08696960fad85494a8a7a0af4e93e9a00af04d74c59f9e3"},
{file = "fastuuid-0.14.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:73657c9f778aba530bc96a943d30e1a7c80edb8278df77894fe9457540df4f85"},
{file = "fastuuid-0.14.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:d31f8c257046b5617fc6af9c69be066d2412bdef1edaa4bdf6a214cf57806105"},
{file = "fastuuid-0.14.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5816d41f81782b209843e52fdef757a361b448d782452d96abedc53d545da722"},
{file = "fastuuid-0.14.0-cp39-cp39-win32.whl", hash = "sha256:448aa6833f7a84bfe37dd47e33df83250f404d591eb83527fa2cac8d1e57d7f3"},
{file = "fastuuid-0.14.0-cp39-cp39-win_amd64.whl", hash = "sha256:84b0779c5abbdec2a9511d5ffbfcd2e53079bf889824b32be170c0d8ef5fc74c"},
{file = "fastuuid-0.14.0.tar.gz", hash = "sha256:178947fc2f995b38497a74172adee64fdeb8b7ec18f2a5934d037641ba265d26"},
]
[[package]]
name = "filelock"
version = "3.19.1"
@@ -1631,6 +1733,8 @@ files = [
{file = "greenlet-3.2.4-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c2ca18a03a8cfb5b25bc1cbe20f3d9a4c80d8c3b13ba3df49ac3961af0b1018d"},
{file = "greenlet-3.2.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9fe0a28a7b952a21e2c062cd5756d34354117796c6d9215a87f55e38d15402c5"},
{file = "greenlet-3.2.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8854167e06950ca75b898b104b63cc646573aa5fef1353d4508ecdd1ee76254f"},
{file = "greenlet-3.2.4-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:f47617f698838ba98f4ff4189aef02e7343952df3a615f847bb575c3feb177a7"},
{file = "greenlet-3.2.4-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:af41be48a4f60429d5cad9d22175217805098a9ef7c40bfef44f7669fb9d74d8"},
{file = "greenlet-3.2.4-cp310-cp310-win_amd64.whl", hash = "sha256:73f49b5368b5359d04e18d15828eecc1806033db5233397748f4ca813ff1056c"},
{file = "greenlet-3.2.4-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:96378df1de302bc38e99c3a9aa311967b7dc80ced1dcc6f171e99842987882a2"},
{file = "greenlet-3.2.4-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1ee8fae0519a337f2329cb78bd7a8e128ec0f881073d43f023c7b8d4831d5246"},
@@ -1640,6 +1744,8 @@ files = [
{file = "greenlet-3.2.4-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2523e5246274f54fdadbce8494458a2ebdcdbc7b802318466ac5606d3cded1f8"},
{file = "greenlet-3.2.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:1987de92fec508535687fb807a5cea1560f6196285a4cde35c100b8cd632cc52"},
{file = "greenlet-3.2.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:55e9c5affaa6775e2c6b67659f3a71684de4c549b3dd9afca3bc773533d284fa"},
{file = "greenlet-3.2.4-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:c9c6de1940a7d828635fbd254d69db79e54619f165ee7ce32fda763a9cb6a58c"},
{file = "greenlet-3.2.4-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:03c5136e7be905045160b1b9fdca93dd6727b180feeafda6818e6496434ed8c5"},
{file = "greenlet-3.2.4-cp311-cp311-win_amd64.whl", hash = "sha256:9c40adce87eaa9ddb593ccb0fa6a07caf34015a29bf8d344811665b573138db9"},
{file = "greenlet-3.2.4-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:3b67ca49f54cede0186854a008109d6ee71f66bd57bb36abd6d0a0267b540cdd"},
{file = "greenlet-3.2.4-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ddf9164e7a5b08e9d22511526865780a576f19ddd00d62f8a665949327fde8bb"},
@@ -1649,6 +1755,8 @@ files = [
{file = "greenlet-3.2.4-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3b3812d8d0c9579967815af437d96623f45c0f2ae5f04e366de62a12d83a8fb0"},
{file = "greenlet-3.2.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:abbf57b5a870d30c4675928c37278493044d7c14378350b3aa5d484fa65575f0"},
{file = "greenlet-3.2.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:20fb936b4652b6e307b8f347665e2c615540d4b42b3b4c8a321d8286da7e520f"},
{file = "greenlet-3.2.4-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:ee7a6ec486883397d70eec05059353b8e83eca9168b9f3f9a361971e77e0bcd0"},
{file = "greenlet-3.2.4-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:326d234cbf337c9c3def0676412eb7040a35a768efc92504b947b3e9cfc7543d"},
{file = "greenlet-3.2.4-cp312-cp312-win_amd64.whl", hash = "sha256:a7d4e128405eea3814a12cc2605e0e6aedb4035bf32697f72deca74de4105e02"},
{file = "greenlet-3.2.4-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:1a921e542453fe531144e91e1feedf12e07351b1cf6c9e8a3325ea600a715a31"},
{file = "greenlet-3.2.4-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:cd3c8e693bff0fff6ba55f140bf390fa92c994083f838fece0f63be121334945"},
@@ -1658,6 +1766,8 @@ files = [
{file = "greenlet-3.2.4-cp313-cp313-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:23768528f2911bcd7e475210822ffb5254ed10d71f4028387e5a99b4c6699671"},
{file = "greenlet-3.2.4-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:00fadb3fedccc447f517ee0d3fd8fe49eae949e1cd0f6a611818f4f6fb7dc83b"},
{file = "greenlet-3.2.4-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:d25c5091190f2dc0eaa3f950252122edbbadbb682aa7b1ef2f8af0f8c0afefae"},
{file = "greenlet-3.2.4-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:6e343822feb58ac4d0a1211bd9399de2b3a04963ddeec21530fc426cc121f19b"},
{file = "greenlet-3.2.4-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:ca7f6f1f2649b89ce02f6f229d7c19f680a6238af656f61e0115b24857917929"},
{file = "greenlet-3.2.4-cp313-cp313-win_amd64.whl", hash = "sha256:554b03b6e73aaabec3745364d6239e9e012d64c68ccd0b8430c64ccc14939a8b"},
{file = "greenlet-3.2.4-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:49a30d5fda2507ae77be16479bdb62a660fa51b1eb4928b524975b3bde77b3c0"},
{file = "greenlet-3.2.4-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:299fd615cd8fc86267b47597123e3f43ad79c9d8a22bebdce535e53550763e2f"},
@@ -1665,6 +1775,8 @@ files = [
{file = "greenlet-3.2.4-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b4a1870c51720687af7fa3e7cda6d08d801dae660f75a76f3845b642b4da6ee1"},
{file = "greenlet-3.2.4-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:061dc4cf2c34852b052a8620d40f36324554bc192be474b9e9770e8c042fd735"},
{file = "greenlet-3.2.4-cp314-cp314-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:44358b9bf66c8576a9f57a590d5f5d6e72fa4228b763d0e43fee6d3b06d3a337"},
{file = "greenlet-3.2.4-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:2917bdf657f5859fbf3386b12d68ede4cf1f04c90c3a6bc1f013dd68a22e2269"},
{file = "greenlet-3.2.4-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:015d48959d4add5d6c9f6c5210ee3803a830dce46356e3bc326d6776bde54681"},
{file = "greenlet-3.2.4-cp314-cp314-win_amd64.whl", hash = "sha256:e37ab26028f12dbb0ff65f29a8d3d44a765c61e729647bf2ddfbbed621726f01"},
{file = "greenlet-3.2.4-cp39-cp39-macosx_11_0_universal2.whl", hash = "sha256:b6a7c19cf0d2742d0809a4c05975db036fdff50cd294a93632d6a310bf9ac02c"},
{file = "greenlet-3.2.4-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:27890167f55d2387576d1f41d9487ef171849ea0359ce1510ca6e06c8bece11d"},
@@ -1674,6 +1786,8 @@ files = [
{file = "greenlet-3.2.4-cp39-cp39-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c9913f1a30e4526f432991f89ae263459b1c64d1608c0d22a5c79c287b3c70df"},
{file = "greenlet-3.2.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b90654e092f928f110e0007f572007c9727b5265f7632c2fa7415b4689351594"},
{file = "greenlet-3.2.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:81701fd84f26330f0d5f4944d4e92e61afe6319dcd9775e39396e39d7c3e5f98"},
{file = "greenlet-3.2.4-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:28a3c6b7cd72a96f61b0e4b2a36f681025b60ae4779cc73c1535eb5f29560b10"},
{file = "greenlet-3.2.4-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:52206cd642670b0b320a1fd1cbfd95bca0e043179c1d8a045f2c6109dfe973be"},
{file = "greenlet-3.2.4-cp39-cp39-win32.whl", hash = "sha256:65458b409c1ed459ea899e939f0e1cdb14f58dbc803f2f93c5eab5694d32671b"},
{file = "greenlet-3.2.4-cp39-cp39-win_amd64.whl", hash = "sha256:d2e685ade4dafd447ede19c31277a224a239a0a1a4eca4e6390efedf20260cfb"},
{file = "greenlet-3.2.4.tar.gz", hash = "sha256:0dca0d95ff849f9a364385f36ab49f50065d76964944638be9691e1832e9f86d"},
@@ -2373,14 +2487,14 @@ test = ["coverage", "pytest", "pytest-cov"]
[[package]]
name = "litellm"
version = "1.75.8"
version = "1.79.3"
description = "Library to easily interface with LLM API providers"
optional = false
python-versions = "!=2.7.*,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,!=3.7.*,>=3.8"
groups = ["main"]
files = [
{file = "litellm-1.75.8-py3-none-any.whl", hash = "sha256:0bf004488df8506381ec6e35e1486e2870e8d578a7c3f2427cd497558ce07a2e"},
{file = "litellm-1.75.8.tar.gz", hash = "sha256:92061bd263ff8c33c8fff70ba92cd046adb7ea041a605826a915d108742fe59e"},
{file = "litellm-1.79.3-py3-none-any.whl", hash = "sha256:16314049d109e5cadb2abdccaf2e07ea03d2caa3a9b3f54f34b5b825092b4eeb"},
{file = "litellm-1.79.3.tar.gz", hash = "sha256:4da4716f8da3e1b77838262c36d3016146860933e0489171658a9d4a3fd59b1b"},
]
[package.dependencies]
@@ -2391,16 +2505,17 @@ azure-storage-blob = {version = ">=12.25.1,<13.0.0", optional = true, markers =
backoff = {version = "*", optional = true, markers = "extra == \"proxy\""}
boto3 = {version = "1.36.0", optional = true, markers = "extra == \"proxy\""}
click = "*"
cryptography = {version = ">=43.0.1,<44.0.0", optional = true, markers = "extra == \"proxy\""}
fastapi = {version = ">=0.115.5,<0.116.0", optional = true, markers = "extra == \"proxy\""}
cryptography = {version = "*", optional = true, markers = "extra == \"proxy\""}
fastapi = {version = ">=0.120.1", optional = true, markers = "extra == \"proxy\""}
fastapi-sso = {version = ">=0.16.0,<0.17.0", optional = true, markers = "extra == \"proxy\""}
fastuuid = ">=0.13.0"
gunicorn = {version = ">=23.0.0,<24.0.0", optional = true, markers = "extra == \"proxy\""}
httpx = ">=0.23.0"
importlib-metadata = ">=6.8.0"
jinja2 = ">=3.1.2,<4.0.0"
jsonschema = ">=4.22.0,<5.0.0"
litellm-enterprise = {version = "0.1.19", optional = true, markers = "extra == \"proxy\""}
litellm-proxy-extras = {version = "0.2.17", optional = true, markers = "extra == \"proxy\""}
litellm-enterprise = {version = "0.1.20", optional = true, markers = "extra == \"proxy\""}
litellm-proxy-extras = {version = "0.4.3", optional = true, markers = "extra == \"proxy\""}
mcp = {version = ">=1.10.0,<2.0.0", optional = true, markers = "python_version >= \"3.10\" and extra == \"proxy\""}
openai = ">=1.99.5"
orjson = {version = ">=3.9.7,<4.0.0", optional = true, markers = "extra == \"proxy\""}
@@ -2413,6 +2528,7 @@ python-multipart = {version = ">=0.0.18,<0.0.19", optional = true, markers = "ex
pyyaml = {version = ">=6.0.1,<7.0.0", optional = true, markers = "extra == \"proxy\""}
rich = {version = "13.7.1", optional = true, markers = "extra == \"proxy\""}
rq = {version = "*", optional = true, markers = "extra == \"proxy\""}
soundfile = {version = ">=0.12.1,<0.13.0", optional = true, markers = "extra == \"proxy\""}
tiktoken = ">=0.7.0"
tokenizers = "*"
uvicorn = {version = ">=0.29.0,<0.30.0", optional = true, markers = "extra == \"proxy\""}
@@ -2423,30 +2539,32 @@ websockets = {version = ">=13.1.0,<14.0.0", optional = true, markers = "extra ==
caching = ["diskcache (>=5.6.1,<6.0.0)"]
extra-proxy = ["azure-identity (>=1.15.0,<2.0.0)", "azure-keyvault-secrets (>=4.8.0,<5.0.0)", "google-cloud-iam (>=2.19.1,<3.0.0)", "google-cloud-kms (>=2.21.3,<3.0.0)", "prisma (==0.11.0)", "redisvl (>=0.4.1,<0.5.0) ; python_version >= \"3.9\" and python_version < \"3.14\"", "resend (>=0.8.0,<0.9.0)"]
mlflow = ["mlflow (>3.1.4) ; python_version >= \"3.10\""]
proxy = ["PyJWT (>=2.8.0,<3.0.0)", "apscheduler (>=3.10.4,<4.0.0)", "azure-identity (>=1.15.0,<2.0.0)", "azure-storage-blob (>=12.25.1,<13.0.0)", "backoff", "boto3 (==1.36.0)", "cryptography (>=43.0.1,<44.0.0)", "fastapi (>=0.115.5,<0.116.0)", "fastapi-sso (>=0.16.0,<0.17.0)", "gunicorn (>=23.0.0,<24.0.0)", "litellm-enterprise (==0.1.19)", "litellm-proxy-extras (==0.2.17)", "mcp (>=1.10.0,<2.0.0) ; python_version >= \"3.10\"", "orjson (>=3.9.7,<4.0.0)", "polars (>=1.31.0,<2.0.0) ; python_version >= \"3.10\"", "pynacl (>=1.5.0,<2.0.0)", "python-multipart (>=0.0.18,<0.0.19)", "pyyaml (>=6.0.1,<7.0.0)", "rich (==13.7.1)", "rq", "uvicorn (>=0.29.0,<0.30.0)", "uvloop (>=0.21.0,<0.22.0) ; sys_platform != \"win32\"", "websockets (>=13.1.0,<14.0.0)"]
proxy = ["PyJWT (>=2.8.0,<3.0.0)", "apscheduler (>=3.10.4,<4.0.0)", "azure-identity (>=1.15.0,<2.0.0)", "azure-storage-blob (>=12.25.1,<13.0.0)", "backoff", "boto3 (==1.36.0)", "cryptography", "fastapi (>=0.120.1)", "fastapi-sso (>=0.16.0,<0.17.0)", "gunicorn (>=23.0.0,<24.0.0)", "litellm-enterprise (==0.1.20)", "litellm-proxy-extras (==0.4.3)", "mcp (>=1.10.0,<2.0.0) ; python_version >= \"3.10\"", "orjson (>=3.9.7,<4.0.0)", "polars (>=1.31.0,<2.0.0) ; python_version >= \"3.10\"", "pynacl (>=1.5.0,<2.0.0)", "python-multipart (>=0.0.18,<0.0.19)", "pyyaml (>=6.0.1,<7.0.0)", "rich (==13.7.1)", "rq", "soundfile (>=0.12.1,<0.13.0)", "uvicorn (>=0.29.0,<0.30.0)", "uvloop (>=0.21.0,<0.22.0) ; sys_platform != \"win32\"", "websockets (>=13.1.0,<14.0.0)"]
semantic-router = ["semantic-router ; python_version >= \"3.9\""]
utils = ["numpydoc"]
[[package]]
name = "litellm-enterprise"
version = "0.1.19"
version = "0.1.20"
description = "Package for LiteLLM Enterprise features"
optional = false
python-versions = "!=2.7.*,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,!=3.7.*,>=3.8"
groups = ["main"]
files = [
{file = "litellm_enterprise-0.1.19.tar.gz", hash = "sha256:a70794a9c66f069f6eb73b283639f783ac4138ec2684058a696e8d6210cdc4fa"},
{file = "litellm_enterprise-0.1.20-py3-none-any.whl", hash = "sha256:744a79956a8cd7748ef4c3f40d5a564c61519834e706beafbc0b931162773ae8"},
{file = "litellm_enterprise-0.1.20.tar.gz", hash = "sha256:f6b8dd75b53bd835c68caf6402a8bae744a150db7bb6b0e617178c6056ac6c01"},
]
[[package]]
name = "litellm-proxy-extras"
version = "0.2.17"
version = "0.4.3"
description = "Additional files for the LiteLLM Proxy. Reduces the size of the main litellm package."
optional = false
python-versions = "!=2.7.*,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,!=3.7.*,>=3.8"
groups = ["main"]
files = [
{file = "litellm_proxy_extras-0.2.17.tar.gz", hash = "sha256:96428ba537d440a40a7db85e615284a4fd89bada24a7fc8737ef0189932cb1ed"},
{file = "litellm_proxy_extras-0.4.3-py3-none-any.whl", hash = "sha256:e7ab09aa78d04d02dc48975620defa36784e1a0baa6d04a078b98b5717fcae24"},
{file = "litellm_proxy_extras-0.4.3.tar.gz", hash = "sha256:420400d0db186319695526f6765d3d481206fe025b70bc74a1ce895a7d720bee"},
]
[[package]]
@@ -4327,14 +4445,14 @@ diagrams = ["jinja2", "railroad-diagrams"]
[[package]]
name = "pypdf"
version = "6.1.3"
version = "6.4.0"
description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pypdf-6.1.3-py3-none-any.whl", hash = "sha256:eb049195e46f014fc155f566fa20e09d70d4646a9891164ac25fa0cbcfcdbcb5"},
{file = "pypdf-6.1.3.tar.gz", hash = "sha256:8d420d1e79dc1743f31a57707cabb6dcd5b17e8b9a302af64b30202c5700ab9d"},
{file = "pypdf-6.4.0-py3-none-any.whl", hash = "sha256:55ab9837ed97fd7fcc5c131d52fcc2223bc5c6b8a1488bbf7c0e27f1f0023a79"},
{file = "pypdf-6.4.0.tar.gz", hash = "sha256:4769d471f8ddc3341193ecc5d6560fa44cf8cd0abfabf21af4e195cc0c224072"},
]
[package.extras]
@@ -5252,6 +5370,30 @@ files = [
{file = "snowballstemmer-3.0.1.tar.gz", hash = "sha256:6d5eeeec8e9f84d4d56b847692bacf79bc2c8e90c7f80ca4444ff8b6f2e52895"},
]
[[package]]
name = "soundfile"
version = "0.12.1"
description = "An audio library based on libsndfile, CFFI and NumPy"
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "soundfile-0.12.1-py2.py3-none-any.whl", hash = "sha256:828a79c2e75abab5359f780c81dccd4953c45a2c4cd4f05ba3e233ddf984b882"},
{file = "soundfile-0.12.1-py2.py3-none-macosx_10_9_x86_64.whl", hash = "sha256:d922be1563ce17a69582a352a86f28ed8c9f6a8bc951df63476ffc310c064bfa"},
{file = "soundfile-0.12.1-py2.py3-none-macosx_11_0_arm64.whl", hash = "sha256:bceaab5c4febb11ea0554566784bcf4bc2e3977b53946dda2b12804b4fe524a8"},
{file = "soundfile-0.12.1-py2.py3-none-manylinux_2_17_x86_64.whl", hash = "sha256:2dc3685bed7187c072a46ab4ffddd38cef7de9ae5eb05c03df2ad569cf4dacbc"},
{file = "soundfile-0.12.1-py2.py3-none-manylinux_2_31_x86_64.whl", hash = "sha256:074247b771a181859d2bc1f98b5ebf6d5153d2c397b86ee9e29ba602a8dfe2a6"},
{file = "soundfile-0.12.1-py2.py3-none-win32.whl", hash = "sha256:59dfd88c79b48f441bbf6994142a19ab1de3b9bb7c12863402c2bc621e49091a"},
{file = "soundfile-0.12.1-py2.py3-none-win_amd64.whl", hash = "sha256:0d86924c00b62552b650ddd28af426e3ff2d4dc2e9047dae5b3d8452e0a49a77"},
{file = "soundfile-0.12.1.tar.gz", hash = "sha256:e8e1017b2cf1dda767aef19d2fd9ee5ebe07e050d430f77a0a7c66ba08b8cdae"},
]
[package.dependencies]
cffi = ">=1.0"
[package.extras]
numpy = ["numpy"]
[[package]]
name = "soupsieve"
version = "2.7"
@@ -5501,18 +5643,19 @@ files = [
[[package]]
name = "starlette"
version = "0.46.2"
version = "0.49.1"
description = "The little ASGI library that shines."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "starlette-0.46.2-py3-none-any.whl", hash = "sha256:595633ce89f8ffa71a015caed34a5b2dc1c0cdb3f0f1fbd1e69339cf2abeec35"},
{file = "starlette-0.46.2.tar.gz", hash = "sha256:7f7361f34eed179294600af672f565727419830b54b7b084efe44bb82d2fccd5"},
{file = "starlette-0.49.1-py3-none-any.whl", hash = "sha256:d92ce9f07e4a3caa3ac13a79523bd18e3bc0042bb8ff2d759a8e7dd0e1859875"},
{file = "starlette-0.49.1.tar.gz", hash = "sha256:481a43b71e24ed8c43b11ea02f5353d77840e01480881b8cb5a26b8cae64a8cb"},
]
[package.dependencies]
anyio = ">=3.6.2,<5"
typing-extensions = {version = ">=4.10.0", markers = "python_version < \"3.13\""}
[package.extras]
full = ["httpx (>=0.27.0,<0.29.0)", "itsdangerous", "jinja2", "python-multipart (>=0.0.18)", "pyyaml"]
@@ -6328,4 +6471,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = "^3.12"
content-hash = "7a392483dd752ed2c96b084afc2f7471a31e8e73f6bbb377a40a67ec275faf3a"
content-hash = "58d6ced9acbcc0c1118f4daf5cab60f88b33f2ec884400f5df5f535e1e455449"


@@ -1,6 +1,6 @@
[tool.poetry]
name = "strix-agent"
version = "0.3.1"
version = "0.4.0"
description = "Open-source AI Hackers for your apps"
authors = ["Strix <hi@usestrix.com>"]
readme = "README.md"
@@ -45,7 +45,7 @@ strix = "strix.interface.main:main"
python = "^3.12"
fastapi = "*"
uvicorn = "*"
litellm = { version = "~1.75.8", extras = ["proxy"] }
litellm = { version = "~1.79.1", extras = ["proxy"] }
openai = ">=1.99.5,<1.100.0"
tenacity = "^9.0.0"
numpydoc = "^1.8.0"


@@ -18,13 +18,14 @@ class StrixAgent(BaseAgent):
super().__init__(config)
async def execute_scan(self, scan_config: dict[str, Any]) -> dict[str, Any]:
async def execute_scan(self, scan_config: dict[str, Any]) -> dict[str, Any]: # noqa: PLR0912
user_instructions = scan_config.get("user_instructions", "")
targets = scan_config.get("targets", [])
repositories = []
local_code = []
urls = []
ip_addresses = []
for target in targets:
target_type = target["type"]
@@ -53,6 +54,8 @@ class StrixAgent(BaseAgent):
elif target_type == "web_application":
urls.append(details["target_url"])
elif target_type == "ip_address":
ip_addresses.append(details["target_ip"])
task_parts = []
@@ -74,6 +77,10 @@ class StrixAgent(BaseAgent):
task_parts.append("\n\nURLs:")
task_parts.extend(f"- {url}" for url in urls)
if ip_addresses:
task_parts.append("\n\nIP Addresses:")
task_parts.extend(f"- {ip}" for ip in ip_addresses)
task_description = " ".join(task_parts)
if user_instructions:


@@ -18,12 +18,16 @@ CLI OUTPUT:
INTER-AGENT MESSAGES:
- NEVER echo inter_agent_message or agent_completion_report XML content that is sent to you in your output.
- Process these internally without displaying the XML
- NEVER echo agent_identity XML blocks; treat them as internal metadata for identity only. Do not include them in outputs or tool calls.
- Minimize inter-agent messaging: only message when essential for coordination or assistance; avoid routine status updates; batch non-urgent information; prefer parent/child completion flows and shared artifacts over messaging
AUTONOMOUS BEHAVIOR:
- Work autonomously by default
- You should NOT ask for user input or confirmation - you should always proceed with your task autonomously.
- Minimize user messaging: avoid redundancy and repetition; consolidate updates into a single concise message
- NEVER send an empty or blank message. If you have no content to output or need to wait (for user input, subagent results, or any other reason), you MUST call the wait_for_message tool (or another appropriate tool) instead of emitting an empty response.
- If there is nothing left to execute and no user query to answer: do NOT send filler/repetitive text — either call wait_for_message or finish your work (subagents: agent_finish; root: finish_scan)
- While the agent loop is running, almost every output MUST be a tool call. Do NOT send plain text messages; act via tools. If idle, use wait_for_message; when done, use agent_finish (subagents) or finish_scan (root)
</communication_rules>
<execution_guidelines>
@@ -102,7 +106,6 @@ OPERATIONAL PRINCIPLES:
- Choose appropriate tools for each context
- Chain vulnerabilities for maximum impact
- Consider business logic and context in exploitation
- **OVERUSE THE THINK TOOL** - Use it CONSTANTLY. Every 1-2 messages MINIMUM, and after each tool call!
- NEVER skip think tool - it's your most important tool for reasoning and success
- WORK RELENTLESSLY - Don't stop until you've found something significant
- Try multiple approaches simultaneously - don't wait for one to fail
@@ -210,10 +213,9 @@ SIMPLE WORKFLOW RULES:
4. **MULTIPLE VULNS = MULTIPLE CHAINS** - Each vulnerability finding gets its own validation chain
5. **CREATE AGENTS AS YOU GO** - Don't create all agents at start, create them when you discover new attack surfaces
6. **ONE JOB PER AGENT** - Each agent has ONE specific task only
7. **VIEW THE AGENT GRAPH BEFORE ACTING** - Always call view_agent_graph before creating or messaging agents to avoid duplicates and to target correctly
8. **SCALE AGENT COUNT TO SCOPE** - Number of agents should correlate with target size and difficulty; avoid both agent sprawl and under-staffing
9. **CHILDREN ARE MEANINGFUL SUBTASKS** - Child agents must be focused subtasks that directly support their parent's task; do NOT create unrelated children
10. **UNIQUENESS** - Do not create two agents with the same task; ensure clear, non-overlapping responsibilities for every agent
7. **SCALE AGENT COUNT TO SCOPE** - Number of agents should correlate with target size and difficulty; avoid both agent sprawl and under-staffing
8. **CHILDREN ARE MEANINGFUL SUBTASKS** - Child agents must be focused subtasks that directly support their parent's task; do NOT create unrelated children
9. **UNIQUENESS** - Do not create two agents with the same task; ensure clear, non-overlapping responsibilities for every agent
WHEN TO CREATE NEW AGENTS:
@@ -304,10 +306,25 @@ Tool calls use XML format:
</function>
CRITICAL RULES:
0. While active in the agent loop, EVERY message you output MUST be a single tool call. Do not send plain text-only responses.
1. One tool call per message
2. Tool call must be last in message
3. End response after </function> tag. It's your stop word. Do not continue after it.
5. Thinking is NOT optional - it's required for reasoning and success
4. Use ONLY the exact XML format shown above. NEVER use JSON/YAML/INI or any other syntax for tools or parameters.
5. Tool names must match exactly the tool "name" defined (no module prefixes, dots, or variants).
- Correct: <function=think> ... </function>
- Incorrect: <thinking_tools.think> ... </function>
- Incorrect: <think> ... </think>
- Incorrect: {"think": {...}}
6. Parameters must use <parameter=param_name>value</parameter> exactly. Do NOT pass parameters as JSON or key:value lines. Do NOT add quotes/braces around values.
7. Do NOT wrap tool calls in markdown/code fences or add any text before or after the tool block.
Example (agent creation tool):
<function=create_agent>
<parameter=task>Perform targeted XSS testing on the search endpoint</parameter>
<parameter=name>XSS Discovery Agent</parameter>
<parameter=prompt_modules>xss</parameter>
</function>
SPRAYING EXECUTION NOTE:
- When performing large payload sprays or fuzzing, encapsulate the entire spraying loop inside a single python or terminal tool call (e.g., a Python script using asyncio/aiohttp). Do not issue one tool call per payload.
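The note above can be illustrated with a minimal sketch of such a single-call spray script. The payload list, concurrency limit, and the stubbed `send` coroutine are all placeholders — in practice `send` would issue a real aiohttp request against the target endpoint:

```python
import asyncio

PAYLOADS = [f"payload-{i}" for i in range(20)]  # hypothetical payload list
CONCURRENCY = 5  # bound simultaneous requests to avoid overwhelming the target

async def send(payload: str) -> tuple[str, int]:
    # Stand-in for a real aiohttp request, e.g.:
    #   async with session.get(url, params={"q": payload}) as resp:
    #       return payload, resp.status
    await asyncio.sleep(0)  # simulate I/O
    return payload, 200

async def spray() -> list[tuple[str, int]]:
    sem = asyncio.Semaphore(CONCURRENCY)

    async def bounded(payload: str) -> tuple[str, int]:
        async with sem:
            return await send(payload)

    # The entire loop runs inside one script: one tool call, many payloads.
    return await asyncio.gather(*(bounded(p) for p in PAYLOADS))

results = asyncio.run(spray())
```

The point is that the loop over payloads lives inside the script, so the agent issues a single tool call regardless of how many payloads are sprayed.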
@@ -359,6 +376,7 @@ SPECIALIZED TOOLS:
PROXY & INTERCEPTION:
- Caido CLI - Modern web proxy (already running). Used with proxy tool or with python tool (functions already imported).
- NOTE: If you are seeing proxy errors when sending requests, it usually means you are not sending requests to a correct url/host/port.
- Ignore Caido proxy-generated 50x HTML error pages; these are proxy issues (which can occur when requesting the wrong host, due to SSL/TLS problems, etc.).
PROGRAMMING:
- Python 3, Poetry, Go, Node.js/npm


@@ -1,4 +1,5 @@
import asyncio
import contextlib
import logging
from pathlib import Path
from typing import TYPE_CHECKING, Any, Optional
@@ -75,6 +76,8 @@ class BaseAgent(metaclass=AgentMeta):
max_iterations=self.max_iterations,
)
with contextlib.suppress(Exception):
self.llm.set_agent_identity(self.agent_name, self.state.agent_id)
self._current_task: asyncio.Task[Any] | None = None
from strix.telemetry.tracer import get_global_tracer


@@ -123,7 +123,7 @@ class AgentState(BaseModel):
return False
elapsed = (datetime.now(UTC) - self.waiting_start_time).total_seconds()
return elapsed > 120
return elapsed > 600
def has_empty_last_messages(self, count: int = 3) -> bool:
if len(self.messages) < count:


@@ -33,18 +33,32 @@ Screen {
background: transparent;
}
#sidebar {
width: 25%;
background: transparent;
margin-left: 1;
}
#agents_tree {
width: 20%;
height: 1fr;
background: transparent;
border: round #262626;
border-title-color: #a8a29e;
border-title-style: bold;
margin-left: 1;
padding: 1;
margin-bottom: 0;
}
#stats_display {
height: auto;
max-height: 15;
background: transparent;
padding: 0;
margin: 0;
}
#chat_area_container {
width: 80%;
width: 75%;
background: transparent;
}


@@ -1,9 +1,12 @@
import atexit
import signal
import sys
import threading
import time
from typing import Any
from rich.console import Console
from rich.live import Live
from rich.panel import Panel
from rich.text import Text
@@ -11,7 +14,7 @@ from strix.agents.StrixAgent import StrixAgent
from strix.llm.config import LLMConfig
from strix.telemetry.tracer import Tracer, set_global_tracer
from .utils import get_severity_color
from .utils import build_final_stats_text, build_live_stats_text, get_severity_color
async def run_cli(args: Any) -> None: # noqa: PLR0915
@@ -36,7 +39,7 @@ async def run_cli(args: Any) -> None: # noqa: PLR0915
results_text = Text()
results_text.append("📊 Results will be saved to: ", style="bold cyan")
results_text.append(f"agent_runs/{args.run_name}", style="bold white")
results_text.append(f"strix_runs/{args.run_name}", style="bold white")
note_text = Text()
note_text.append("\n\n", style="dim")
@@ -130,24 +133,80 @@ async def run_cli(args: Any) -> None: # noqa: PLR0915
set_global_tracer(tracer)
def create_live_status() -> Panel:
status_text = Text()
status_text.append("🦉 ", style="bold white")
status_text.append("Running penetration test...", style="bold #22c55e")
status_text.append("\n\n")
stats_text = build_live_stats_text(tracer)
if stats_text:
status_text.append(stats_text)
return Panel(
status_text,
title="[bold #22c55e]🔍 Live Penetration Test Status",
title_align="center",
border_style="#22c55e",
padding=(1, 2),
)
try:
console.print()
with console.status("[bold cyan]Running penetration test...", spinner="dots") as status:
agent = StrixAgent(agent_config)
result = await agent.execute_scan(scan_config)
status.stop()
if isinstance(result, dict) and not result.get("success", True):
error_msg = result.get("error", "Unknown error")
console.print()
console.print(f"[bold red]❌ Penetration test failed:[/] {error_msg}")
console.print()
sys.exit(1)
with Live(
create_live_status(), console=console, refresh_per_second=2, transient=False
) as live:
stop_updates = threading.Event()
def update_status() -> None:
while not stop_updates.is_set():
try:
live.update(create_live_status())
time.sleep(2)
except Exception: # noqa: BLE001
break
update_thread = threading.Thread(target=update_status, daemon=True)
update_thread.start()
try:
agent = StrixAgent(agent_config)
result = await agent.execute_scan(scan_config)
if isinstance(result, dict) and not result.get("success", True):
error_msg = result.get("error", "Unknown error")
console.print()
console.print(f"[bold red]❌ Penetration test failed:[/] {error_msg}")
console.print()
sys.exit(1)
finally:
stop_updates.set()
update_thread.join(timeout=1)
except Exception as e:
console.print(f"[bold red]Error during penetration test:[/] {e}")
raise
console.print()
final_stats_text = Text()
final_stats_text.append("📊 ", style="bold cyan")
final_stats_text.append("PENETRATION TEST COMPLETED", style="bold green")
final_stats_text.append("\n\n")
stats_text = build_final_stats_text(tracer)
if stats_text:
final_stats_text.append(stats_text)
final_stats_panel = Panel(
final_stats_text,
title="[bold green]✅ Final Statistics",
title_align="center",
border_style="green",
padding=(1, 2),
)
console.print(final_stats_panel)
if tracer.final_scan_result:
console.print()


@@ -21,8 +21,7 @@ from strix.interface.cli import run_cli
from strix.interface.tui import run_tui
from strix.interface.utils import (
assign_workspace_subdirs,
build_llm_stats_text,
build_stats_text,
build_final_stats_text,
check_docker_connection,
clone_repository,
collect_local_sources,
@@ -208,9 +207,12 @@ async def warm_up_llm() -> None:
{"role": "user", "content": "Reply with just 'OK'."},
]
llm_timeout = int(os.getenv("LLM_TIMEOUT", "600"))
response = litellm.completion(
model=model_name,
messages=test_messages,
timeout=llm_timeout,
)
validate_llm_response(response)
@@ -257,12 +259,19 @@ Examples:
# Domain penetration test
strix --target example.com
# IP address penetration test
strix --target 192.168.1.42
# Multiple targets (e.g., white-box testing with source and deployed app)
strix --target https://github.com/user/repo --target https://example.com
strix --target ./my-project --target https://staging.example.com --target https://prod.example.com
# Custom instructions
# Custom instructions (inline)
strix --target example.com --instruction "Focus on authentication vulnerabilities"
# Custom instructions (from file)
strix --target example.com --instruction ./instructions.txt
strix --target https://app.com --instruction /path/to/detailed_instructions.md
""",
)
@@ -272,7 +281,7 @@ Examples:
type=str,
required=True,
action="append",
help="Target to test (URL, repository, local directory path, or domain name). "
help="Target to test (URL, repository, local directory path, domain name, or IP address). "
"Can be specified multiple times for multi-target scans.",
)
parser.add_argument(
@@ -283,7 +292,9 @@ Examples:
"testing approaches (e.g., 'Perform thorough authentication testing'), "
"test credentials (e.g., 'Use the following credentials to access the app: "
"admin:password123'), "
"or areas of interest (e.g., 'Check login API endpoint for security issues')",
"or areas of interest (e.g., 'Check login API endpoint for security issues'). "
"You can also provide a path to a file containing detailed instructions "
"(e.g., '--instruction ./instructions.txt').",
)
parser.add_argument(
@@ -304,6 +315,17 @@ Examples:
args = parser.parse_args()
if args.instruction:
instruction_path = Path(args.instruction)
if instruction_path.exists() and instruction_path.is_file():
try:
with instruction_path.open(encoding="utf-8") as f:
args.instruction = f.read().strip()
if not args.instruction:
parser.error(f"Instruction file '{instruction_path}' is empty")
except Exception as e: # noqa: BLE001
parser.error(f"Failed to read instruction file '{instruction_path}': {e}")
args.targets_info = []
for target in args.target:
try:
@@ -347,8 +369,7 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) ->
completion_text.append("", style="dim white")
completion_text.append("Penetration test interrupted by user", style="white")
-    stats_text = build_stats_text(tracer)
-    llm_stats_text = build_llm_stats_text(tracer)
+    stats_text = build_final_stats_text(tracer)
target_text = Text()
if len(args.targets_info) == 1:
@@ -368,9 +389,6 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) ->
if stats_text.plain:
panel_parts.extend(["\n", stats_text])
-    if llm_stats_text.plain:
-        panel_parts.extend(["\n", llm_stats_text])
if scan_completed or has_vulnerabilities:
results_text = Text()
results_text.append("📊 Results Saved To: ", style="bold cyan")
@@ -453,7 +471,7 @@ def main() -> None:
asyncio.run(warm_up_llm())
if not args.run_name:
-        args.run_name = generate_run_name()
+        args.run_name = generate_run_name(args.targets_info)
for target_info in args.targets_info:
if target_info["type"] == "repository":
@@ -469,7 +487,7 @@ def main() -> None:
else:
asyncio.run(run_tui(args))
-    results_path = Path("agent_runs") / args.run_name
+    results_path = Path("strix_runs") / args.run_name
display_completion_message(args, results_path)
if args.non_interactive:


@@ -31,6 +31,7 @@ from textual.widgets import Button, Label, Static, TextArea, Tree
from textual.widgets.tree import TreeNode
from strix.agents.StrixAgent import StrixAgent
from strix.interface.utils import build_live_stats_text
from strix.llm.config import LLMConfig
from strix.telemetry.tracer import Tracer, set_global_tracer
@@ -393,8 +394,12 @@ class StrixTUIApp(App): # type: ignore[misc]
agents_tree.guide_depth = 3
agents_tree.guide_style = "dashed"
+        stats_display = Static("", id="stats_display")
+        sidebar = Vertical(agents_tree, stats_display, id="sidebar")
         content_container.mount(chat_area_container)
-        content_container.mount(agents_tree)
+        content_container.mount(sidebar)
chat_area_container.mount(chat_history)
chat_area_container.mount(agent_status_display)
@@ -481,6 +486,8 @@ class StrixTUIApp(App): # type: ignore[misc]
self._update_agent_status_display()
self._update_stats_display()
def _update_agent_node(self, agent_id: str, agent_data: dict[str, Any]) -> bool:
if agent_id not in self.agent_nodes:
return False
@@ -658,6 +665,33 @@ class StrixTUIApp(App): # type: ignore[misc]
except (KeyError, Exception):
self._safe_widget_operation(status_display.add_class, "hidden")
def _update_stats_display(self) -> None:
try:
stats_display = self.query_one("#stats_display", Static)
except (ValueError, Exception):
return
if not self._is_widget_safe(stats_display):
return
stats_content = Text()
stats_text = build_live_stats_text(self.tracer)
if stats_text:
stats_content.append(stats_text)
from rich.panel import Panel
stats_panel = Panel(
stats_content,
title="📊 Live Stats",
title_align="left",
border_style="#22c55e",
padding=(0, 1),
)
self._safe_widget_operation(stats_display.update, stats_panel)
def _get_agent_verb(self, agent_id: str) -> str:
if agent_id not in self._agent_verbs:
self._agent_verbs[agent_id] = random.choice(self._action_verbs) # nosec B311 # noqa: S311


@@ -1,3 +1,4 @@
import ipaddress
import re
import secrets
import shutil
@@ -37,14 +38,9 @@ def get_severity_color(severity: str) -> str:
return severity_colors.get(severity, "#6b7280")
-def build_stats_text(tracer: Any) -> Text:
-    stats_text = Text()
-    if not tracer:
-        return stats_text
+def _build_vulnerability_stats(stats_text: Text, tracer: Any) -> None:
+    """Build vulnerability section of stats text."""
vuln_count = len(tracer.vulnerability_reports)
-    tool_count = tracer.get_real_tool_count()
-    agent_count = len(tracer.agents)
if vuln_count > 0:
severity_counts = {"critical": 0, "high": 0, "medium": 0, "low": 0, "info": 0}
@@ -80,68 +76,188 @@ def build_stats_text(tracer: Any) -> Text:
stats_text.append(" (No exploitable vulnerabilities detected)", style="dim green")
stats_text.append("\n")
def _build_llm_stats(stats_text: Text, total_stats: dict[str, Any]) -> None:
"""Build LLM usage section of stats text."""
if total_stats["requests"] > 0:
stats_text.append("\n")
stats_text.append("📥 Input Tokens: ", style="bold cyan")
stats_text.append(format_token_count(total_stats["input_tokens"]), style="bold white")
if total_stats["cached_tokens"] > 0:
stats_text.append("", style="dim white")
stats_text.append("⚡ Cached Tokens: ", style="bold green")
stats_text.append(format_token_count(total_stats["cached_tokens"]), style="bold white")
stats_text.append("", style="dim white")
stats_text.append("📤 Output Tokens: ", style="bold cyan")
stats_text.append(format_token_count(total_stats["output_tokens"]), style="bold white")
if total_stats["cost"] > 0:
stats_text.append("", style="dim white")
stats_text.append("💰 Total Cost: ", style="bold cyan")
stats_text.append(f"${total_stats['cost']:.4f}", style="bold yellow")
else:
stats_text.append("\n")
stats_text.append("💰 Total Cost: ", style="bold cyan")
stats_text.append("$0.0000 ", style="bold yellow")
stats_text.append("", style="bold white")
stats_text.append("📊 Tokens: ", style="bold cyan")
stats_text.append("0", style="bold white")
def build_final_stats_text(tracer: Any) -> Text:
"""Build stats text for final output with detailed messages and LLM usage."""
stats_text = Text()
if not tracer:
return stats_text
_build_vulnerability_stats(stats_text, tracer)
tool_count = tracer.get_real_tool_count()
agent_count = len(tracer.agents)
stats_text.append("🤖 Agents Used: ", style="bold cyan")
stats_text.append(str(agent_count), style="bold white")
stats_text.append("", style="dim white")
stats_text.append("🛠️ Tools Called: ", style="bold cyan")
stats_text.append(str(tool_count), style="bold white")
llm_stats = tracer.get_total_llm_stats()
_build_llm_stats(stats_text, llm_stats["total"])
return stats_text
-def build_llm_stats_text(tracer: Any) -> Text:
-    llm_stats_text = Text()
+def build_live_stats_text(tracer: Any) -> Text:
+    stats_text = Text()
if not tracer:
-        return llm_stats_text
+        return stats_text
vuln_count = len(tracer.vulnerability_reports)
tool_count = tracer.get_real_tool_count()
agent_count = len(tracer.agents)
stats_text.append("🔍 Vulnerabilities: ", style="bold white")
stats_text.append(f"{vuln_count}", style="dim white")
stats_text.append("\n")
if vuln_count > 0:
severity_counts = {"critical": 0, "high": 0, "medium": 0, "low": 0, "info": 0}
for report in tracer.vulnerability_reports:
severity = report.get("severity", "").lower()
if severity in severity_counts:
severity_counts[severity] += 1
severity_parts = []
for severity in ["critical", "high", "medium", "low", "info"]:
count = severity_counts[severity]
if count > 0:
severity_color = get_severity_color(severity)
severity_text = Text()
severity_text.append(f"{severity.upper()}: ", style=severity_color)
severity_text.append(str(count), style=f"bold {severity_color}")
severity_parts.append(severity_text)
for i, part in enumerate(severity_parts):
stats_text.append(part)
if i < len(severity_parts) - 1:
stats_text.append(" | ", style="dim white")
stats_text.append("\n")
stats_text.append("🤖 Agents: ", style="bold white")
stats_text.append(str(agent_count), style="dim white")
stats_text.append("", style="dim white")
stats_text.append("🛠️ Tools: ", style="bold white")
stats_text.append(str(tool_count), style="dim white")
llm_stats = tracer.get_total_llm_stats()
total_stats = llm_stats["total"]
if total_stats["requests"] > 0:
llm_stats_text.append("📥 Input Tokens: ", style="bold cyan")
llm_stats_text.append(format_token_count(total_stats["input_tokens"]), style="bold white")
stats_text.append("\n")
if total_stats["cached_tokens"] > 0:
llm_stats_text.append("", style="dim white")
llm_stats_text.append("⚡ Cached: ", style="bold green")
llm_stats_text.append(
format_token_count(total_stats["cached_tokens"]), style="bold green"
)
stats_text.append("📥 Input: ", style="bold white")
stats_text.append(format_token_count(total_stats["input_tokens"]), style="dim white")
llm_stats_text.append("", style="dim white")
llm_stats_text.append("📤 Output Tokens: ", style="bold cyan")
llm_stats_text.append(format_token_count(total_stats["output_tokens"]), style="bold white")
stats_text.append("", style="dim white")
stats_text.append(" ", style="bold white")
stats_text.append("Cached: ", style="bold white")
stats_text.append(format_token_count(total_stats["cached_tokens"]), style="dim white")
if total_stats["cost"] > 0:
llm_stats_text.append("", style="dim white")
llm_stats_text.append("💰 Total Cost: $", style="bold cyan")
llm_stats_text.append(f"{total_stats['cost']:.4f}", style="bold yellow")
stats_text.append("\n")
return llm_stats_text
stats_text.append("📤 Output: ", style="bold white")
stats_text.append(format_token_count(total_stats["output_tokens"]), style="dim white")
stats_text.append("", style="dim white")
stats_text.append("💰 Cost: ", style="bold white")
stats_text.append(f"${total_stats['cost']:.4f}", style="dim white")
return stats_text
# Name generation utilities
-def generate_run_name() -> str:
-    # fmt: off
-    adjectives = [
-        "stealthy", "sneaky", "crafty", "elite", "phantom", "shadow", "silent",
-        "rogue", "covert", "ninja", "ghost", "cyber", "digital", "binary",
-        "encrypted", "obfuscated", "masked", "cloaked", "invisible", "anonymous"
-    ]
-    nouns = [
-        "exploit", "payload", "backdoor", "rootkit", "keylogger", "botnet", "trojan",
-        "worm", "virus", "packet", "buffer", "shell", "daemon", "spider", "crawler",
-        "scanner", "sniffer", "honeypot", "firewall", "breach"
-    ]
-    # fmt: on
-    adj = secrets.choice(adjectives)
-    noun = secrets.choice(nouns)
-    number = secrets.randbelow(900) + 100
-    return f"{adj}-{noun}-{number}"
def _slugify_for_run_name(text: str, max_length: int = 32) -> str:
text = text.lower().strip()
text = re.sub(r"[^a-z0-9]+", "-", text)
text = text.strip("-")
if len(text) > max_length:
text = text[:max_length].rstrip("-")
return text or "pentest"
def _derive_target_label_for_run_name(targets_info: list[dict[str, Any]] | None) -> str: # noqa: PLR0911
if not targets_info:
return "pentest"
first = targets_info[0]
target_type = first.get("type")
details = first.get("details", {}) or {}
original = first.get("original", "") or ""
if target_type == "web_application":
url = details.get("target_url", original)
try:
parsed = urlparse(url)
return str(parsed.netloc or parsed.path or url)
except Exception: # noqa: BLE001
return str(url)
if target_type == "repository":
repo = details.get("target_repo", original)
parsed = urlparse(repo)
path = parsed.path or repo
name = path.rstrip("/").split("/")[-1] or path
if name.endswith(".git"):
name = name[:-4]
return str(name)
if target_type == "local_code":
path_str = details.get("target_path", original)
try:
return str(Path(path_str).name or path_str)
except Exception: # noqa: BLE001
return str(path_str)
if target_type == "ip_address":
return str(details.get("target_ip", original) or original)
return str(original or "pentest")
def generate_run_name(targets_info: list[dict[str, Any]] | None = None) -> str:
base_label = _derive_target_label_for_run_name(targets_info)
slug = _slugify_for_run_name(base_label)
random_suffix = secrets.token_hex(2)
return f"{slug}_{random_suffix}"
# Target processing utilities
-def infer_target_type(target: str) -> tuple[str, dict[str, str]]:
+def infer_target_type(target: str) -> tuple[str, dict[str, str]]:  # noqa: PLR0911
if not target or not isinstance(target, str):
raise ValueError("Target must be a non-empty string")
@@ -167,6 +283,13 @@ def infer_target_type(target: str) -> tuple[str, dict[str, str]]:
return "repository", {"target_repo": target}
return "web_application", {"target_url": target}
try:
ip_obj = ipaddress.ip_address(target)
except ValueError:
pass
else:
return "ip_address", {"target_ip": str(ip_obj)}
path = Path(target).expanduser()
try:
if path.exists():
@@ -191,7 +314,8 @@ def infer_target_type(target: str) -> tuple[str, dict[str, str]]:
"- A valid URL (http:// or https://)\n"
"- A Git repository URL (https://github.com/... or git@github.com:...)\n"
"- A local directory path\n"
-        "- A domain name (e.g., example.com)"
+        "- A domain name (e.g., example.com)\n"
+        "- An IP address (e.g., 192.168.1.10)"
)
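The new IP branch relies on `ipaddress.ip_address` raising `ValueError` for anything that is not a literal v4/v6 address. A minimal standalone sketch of that classification step (the `classify` helper is illustrative, not part of the codebase):

```python
import ipaddress

def classify(target: str) -> str:
    # Mirrors the try/except/else added to infer_target_type: only a literal
    # IP address parses; domains, URLs, and paths raise ValueError.
    try:
        ip_obj = ipaddress.ip_address(target)
    except ValueError:
        return "not an IP"
    return f"ip_address ({ip_obj.version})"

print(classify("192.168.1.42"))  # → ip_address (4)
print(classify("::1"))           # → ip_address (6)
print(classify("example.com"))   # → not an IP
```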


@@ -5,15 +5,16 @@ class LLMConfig:
def __init__(
self,
model_name: str | None = None,
-        temperature: float = 0,
enable_prompt_caching: bool = True,
prompt_modules: list[str] | None = None,
timeout: int | None = None,
):
self.model_name = model_name or os.getenv("STRIX_LLM", "openai/gpt-5")
if not self.model_name:
raise ValueError("STRIX_LLM environment variable must be set and not empty")
-        self.temperature = max(0.0, min(1.0, temperature))
self.enable_prompt_caching = enable_prompt_caching
self.prompt_modules = prompt_modules or []
self.timeout = timeout or int(os.getenv("LLM_TIMEOUT", "600"))


@@ -2,6 +2,7 @@ import logging
import os
from dataclasses import dataclass
from enum import Enum
from fnmatch import fnmatch
from pathlib import Path
from typing import Any
@@ -45,27 +46,14 @@ class LLMRequestFailedError(Exception):
self.details = details
-MODELS_WITHOUT_STOP_WORDS = [
-    "gpt-5",
-    "gpt-5-mini",
-    "gpt-5-nano",
-    "o1-mini",
-    "o1-preview",
-    "o1",
-    "o1-2024-12-17",
-    "o3",
-    "o3-2025-04-16",
-    "o3-mini-2025-01-31",
-    "o3-mini",
-    "o4-mini",
-    "o4-mini-2025-04-16",
+SUPPORTS_STOP_WORDS_FALSE_PATTERNS: list[str] = [
+    "o1*",
"grok-4-0709",
"grok-code-fast-1",
"deepseek-r1-0528*",
]
-REASONING_EFFORT_SUPPORTED_MODELS = [
-    "gpt-5",
-    "gpt-5-mini",
-    "gpt-5-nano",
+REASONING_EFFORT_PATTERNS: list[str] = [
"o1-2024-12-17",
"o1",
"o3",
@@ -76,9 +64,39 @@ REASONING_EFFORT_SUPPORTED_MODELS = [
"o4-mini-2025-04-16",
"gemini-2.5-flash",
"gemini-2.5-pro",
"gpt-5*",
"deepseek-r1-0528*",
"claude-sonnet-4-5*",
"claude-haiku-4-5*",
]
def normalize_model_name(model: str) -> str:
raw = (model or "").strip().lower()
if "/" in raw:
name = raw.split("/")[-1]
if ":" in name:
name = name.split(":", 1)[0]
else:
name = raw
if name.endswith("-gguf"):
name = name[: -len("-gguf")]
return name
def model_matches(model: str, patterns: list[str]) -> bool:
raw = (model or "").strip().lower()
name = normalize_model_name(model)
for pat in patterns:
pat_l = pat.lower()
if "/" in pat_l:
if fnmatch(raw, pat_l):
return True
elif fnmatch(name, pat_l):
return True
return False
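As a standalone sketch of the matching pipeline above (`normalize_model_name` reproduced from the hunk; the model strings and patterns are illustrative):

```python
from fnmatch import fnmatch

def normalize_model_name(model: str) -> str:
    # Strip the provider prefix ("openai/..."), a variant suffix (":free"),
    # and a trailing "-gguf" so patterns match the bare model name.
    raw = (model or "").strip().lower()
    if "/" in raw:
        name = raw.split("/")[-1]
        if ":" in name:
            name = name.split(":", 1)[0]
    else:
        name = raw
    if name.endswith("-gguf"):
        name = name[: -len("-gguf")]
    return name

print(normalize_model_name("openrouter/deepseek-r1-0528:free"))      # → deepseek-r1-0528
print(fnmatch(normalize_model_name("azure/gpt-5-mini"), "gpt-5*"))   # → True
```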
class StepRole(str, Enum):
AGENT = "agent"
USER = "user"
@@ -117,13 +135,19 @@ class RequestStats:
class LLM:
-    def __init__(self, config: LLMConfig, agent_name: str | None = None):
+    def __init__(
+        self, config: LLMConfig, agent_name: str | None = None, agent_id: str | None = None
+    ):
self.config = config
self.agent_name = agent_name
self.agent_id = agent_id
self._total_stats = RequestStats()
self._last_request_stats = RequestStats()
-        self.memory_compressor = MemoryCompressor()
+        self.memory_compressor = MemoryCompressor(
+            model_name=self.config.model_name,
+            timeout=self.config.timeout,
+        )
if agent_name:
prompt_dir = Path(__file__).parent.parent / "agents" / agent_name
@@ -156,6 +180,31 @@ class LLM:
else:
self.system_prompt = "You are a helpful AI assistant."
def set_agent_identity(self, agent_name: str | None, agent_id: str | None) -> None:
if agent_name:
self.agent_name = agent_name
if agent_id:
self.agent_id = agent_id
def _build_identity_message(self) -> dict[str, Any] | None:
if not (self.agent_name and str(self.agent_name).strip()):
return None
identity_name = self.agent_name
identity_id = self.agent_id
content = (
"\n\n"
"<agent_identity>\n"
"<meta>Internal metadata: do not echo or reference; "
"not part of history or tool calls.</meta>\n"
"<note>You are now assuming the role of this agent. "
"Act strictly as this agent and maintain self-identity for this step. "
"Now go answer the next needed step!</note>\n"
f"<agent_name>{identity_name}</agent_name>\n"
f"<agent_id>{identity_id}</agent_id>\n"
"</agent_identity>\n\n"
)
return {"role": "user", "content": content}
def _add_cache_control_to_content(
self, content: str | list[dict[str, Any]]
) -> str | list[dict[str, Any]]:
@@ -231,6 +280,10 @@ class LLM:
) -> LLMResponse:
messages = [{"role": "system", "content": self.system_prompt}]
identity_message = self._build_identity_message()
if identity_message:
messages.append(identity_message)
compressed_history = list(self.memory_compressor.compress_history(conversation_history))
conversation_history.clear()
@@ -329,27 +382,13 @@ class LLM:
if not self.config.model_name:
return True
-        actual_model_name = self.config.model_name.split("/")[-1].lower()
-        model_name_lower = self.config.model_name.lower()
-        return not any(
-            actual_model_name == unsupported_model.lower()
-            or model_name_lower == unsupported_model.lower()
-            for unsupported_model in MODELS_WITHOUT_STOP_WORDS
-        )
+        return not model_matches(self.config.model_name, SUPPORTS_STOP_WORDS_FALSE_PATTERNS)
def _should_include_reasoning_effort(self) -> bool:
if not self.config.model_name:
return False
-        actual_model_name = self.config.model_name.split("/")[-1].lower()
-        model_name_lower = self.config.model_name.lower()
-        return any(
-            actual_model_name == supported_model.lower()
-            or model_name_lower == supported_model.lower()
-            for supported_model in REASONING_EFFORT_SUPPORTED_MODELS
-        )
+        return model_matches(self.config.model_name, REASONING_EFFORT_PATTERNS)
async def _make_request(
self,
@@ -358,8 +397,7 @@ class LLM:
completion_args: dict[str, Any] = {
"model": self.config.model_name,
"messages": messages,
-            "temperature": self.config.temperature,
-            "timeout": 180,
+            "timeout": self.config.timeout,
}
if self._should_include_stop_param():


@@ -85,6 +85,7 @@ def _extract_message_text(msg: dict[str, Any]) -> str:
def _summarize_messages(
messages: list[dict[str, Any]],
model: str,
timeout: int = 600,
) -> dict[str, Any]:
if not messages:
empty_summary = "<context_summary message_count='0'>{text}</context_summary>"
@@ -106,7 +107,7 @@ def _summarize_messages(
completion_args = {
"model": model,
"messages": [{"role": "user", "content": prompt}],
-        "timeout": 180,
+        "timeout": timeout,
}
response = litellm.completion(**completion_args)
@@ -146,9 +147,11 @@ class MemoryCompressor:
self,
max_images: int = 3,
model_name: str | None = None,
timeout: int = 600,
):
self.max_images = max_images
self.model_name = model_name or os.getenv("STRIX_LLM", "openai/gpt-5")
self.timeout = timeout
if not self.model_name:
raise ValueError("STRIX_LLM environment variable must be set and not empty")
@@ -202,7 +205,7 @@ class MemoryCompressor:
chunk_size = 10
for i in range(0, len(old_msgs), chunk_size):
chunk = old_msgs[i : i + chunk_size]
-            summary = _summarize_messages(chunk, model_name)
+            summary = _summarize_messages(chunk, model_name, self.timeout)
if summary:
compressed.append(summary)


@@ -1,5 +1,6 @@
import asyncio
import logging
import os
import threading
import time
from typing import Any
@@ -26,7 +27,15 @@ def should_retry_exception(exception: Exception) -> bool:
class LLMRequestQueue:
-    def __init__(self, max_concurrent: int = 6, delay_between_requests: float = 1.0):
+    def __init__(self, max_concurrent: int = 6, delay_between_requests: float = 5.0):
rate_limit_delay = os.getenv("LLM_RATE_LIMIT_DELAY")
if rate_limit_delay:
delay_between_requests = float(rate_limit_delay)
rate_limit_concurrent = os.getenv("LLM_RATE_LIMIT_CONCURRENT")
if rate_limit_concurrent:
max_concurrent = int(rate_limit_concurrent)
self.max_concurrent = max_concurrent
self.delay_between_requests = delay_between_requests
self._semaphore = threading.BoundedSemaphore(max_concurrent)
@@ -52,8 +61,8 @@ class LLMRequestQueue:
self._semaphore.release()
@retry( # type: ignore[misc]
-        stop=stop_after_attempt(5),
-        wait=wait_exponential(multiplier=2, min=1, max=30),
+        stop=stop_after_attempt(7),
+        wait=wait_exponential(multiplier=6, min=12, max=150),
retry=retry_if_exception(should_retry_exception),
reraise=True,
)
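The env-var override pattern this hunk introduces can be sketched standalone. The variable names come from the diff; `resolve_rate_limits` is an illustrative helper name, not the actual constructor:

```python
import os

def resolve_rate_limits(max_concurrent: int = 6, delay: float = 5.0) -> tuple[int, float]:
    # Environment variables, when set, override the conservative built-in defaults.
    raw_delay = os.getenv("LLM_RATE_LIMIT_DELAY")
    if raw_delay:
        delay = float(raw_delay)
    raw_concurrent = os.getenv("LLM_RATE_LIMIT_CONCURRENT")
    if raw_concurrent:
        max_concurrent = int(raw_concurrent)
    return max_concurrent, delay

os.environ["LLM_RATE_LIMIT_DELAY"] = "10"
os.environ["LLM_RATE_LIMIT_CONCURRENT"] = "2"
print(resolve_rate_limits())  # → (2, 10.0)
```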


@@ -28,7 +28,7 @@ AGENT TYPES YOU CAN CREATE:
COORDINATION GUIDELINES:
- Ensure clear task boundaries and success criteria
- Terminate redundant agents when objectives overlap
-- Use message passing for agent communication
+- Use message passing only when essential (requests/answers or critical handoffs); avoid routine status messages and prefer batched updates
</agent_management>
<final_responsibilities>


@@ -0,0 +1,222 @@
<information_disclosure_vulnerability_guide>
<title>INFORMATION DISCLOSURE</title>
<critical>Information leaks accelerate exploitation by revealing code, configuration, identifiers, and trust boundaries. Treat every response byte, artifact, and header as potential intelligence. Minimize, normalize, and scope disclosure across all channels.</critical>
<scope>
- Errors and exception pages: stack traces, file paths, SQL, framework versions
- Debug/dev tooling reachable in prod: debuggers, profilers, feature flags
- DVCS/build artifacts and temp/backup files: .git, .svn, .hg, .bak, .swp, archives
- Configuration and secrets: .env, phpinfo, appsettings.json, Docker/K8s manifests
- API schemas and introspection: OpenAPI/Swagger, GraphQL introspection, gRPC reflection
- Client bundles and source maps: webpack/Vite maps, embedded env, __NEXT_DATA__, static JSON
- Headers and response metadata: Server/X-Powered-By, tracing, ETag, Accept-Ranges, Server-Timing
- Storage/export surfaces: public buckets, signed URLs, export/download endpoints
- Observability/admin: /metrics, /actuator, /health, tracing UIs (Jaeger, Zipkin), Kibana, Admin UIs
- Directory listings and indexing: autoindex, sitemap/robots revealing hidden routes
- Cross-origin signals: CORS misconfig, Referrer-Policy leakage, Expose-Headers
- File/document metadata: EXIF, PDF/Office properties
</scope>
<methodology>
1. Build a channel map: Web, API, GraphQL, WebSocket, gRPC, mobile, background jobs, exports, CDN.
2. Establish a diff harness: compare owner vs non-owner vs anonymous across transports; normalize on status/body length/ETag/headers.
3. Trigger controlled failures: send malformed types, boundary values, missing params, and alternate content-types to elicit error detail and stack traces.
4. Enumerate artifacts: DVCS folders, backups, config endpoints, source maps, client bundles, API docs, observability routes.
5. Correlate disclosures to impact: versions→CVE, paths→LFI/RCE, keys→cloud access, schemas→auth bypass, IDs→IDOR.
</methodology>
<surfaces>
<errors_and_exceptions>
- SQL/ORM errors: reveal table/column names, DBMS, query fragments
- Stack traces: absolute paths, class/method names, framework versions, developer emails
- Template engine probes: {% raw %}{{7*7}}, ${7*7}{% endraw %} identify templating stack and code paths
- JSON/XML parsers: type mismatches and coercion logs leak internal model names
</errors_and_exceptions>
<debug_and_env_modes>
- Debug pages and flags: Django DEBUG, Laravel Telescope, Rails error pages, Flask/Werkzeug debugger, ASP.NET customErrors Off
- Profiler endpoints: /debug/pprof, /actuator, /_profiler, custom /debug APIs
- Feature/config toggles exposed in JS or headers; admin/staff banners in HTML
</debug_and_env_modes>
<dvcs_and_backups>
- DVCS: /.git/ (HEAD, config, index, objects), .svn/entries, .hg/store → reconstruct source and secrets
- Backups/temp: .bak/.old/~/.swp/.swo/.tmp/.orig, db dumps, zipped deployments under /backup/, /old/, /archive/
- Build artifacts: dist artifacts containing .map, env prints, internal URLs
</dvcs_and_backups>
<configs_and_secrets>
- Classic: web.config, appsettings.json, settings.py, config.php, phpinfo.php
- Containers/cloud: Dockerfile, docker-compose.yml, Kubernetes manifests, service account tokens, cloud credentials files
- Credentials and connection strings; internal hosts and ports; JWT secrets
</configs_and_secrets>
<api_schemas_and_introspection>
- OpenAPI/Swagger: /swagger, /api-docs, /openapi.json — enumerate hidden/privileged operations
- GraphQL: introspection enabled; field suggestions; error disclosure via invalid fields; persisted queries catalogs
- gRPC: server reflection exposing services/messages; proto download via reflection
</api_schemas_and_introspection>
<client_bundles_and_maps>
- Source maps (.map) reveal original sources, comments, and internal logic
- Client env leakage: NEXT_PUBLIC_/VITE_/REACT_APP_ variables; runtime config; embedded secrets accidentally shipped
- Next.js data: __NEXT_DATA__ and pre-fetched JSON under /_next/data can include internal IDs, flags, or PII
- Static JSON/CSV feeds used by the UI that bypass server-side auth filtering
</client_bundles_and_maps>
<headers_and_response_metadata>
- Fingerprinting: Server, X-Powered-By, X-AspNet-Version
- Tracing: X-Request-Id, traceparent, Server-Timing, debug headers
- Caching oracles: ETag/If-None-Match, Last-Modified/If-Modified-Since, Accept-Ranges/Range (partial content reveals)
- Content sniffing and MIME metadata that implies backend components
</headers_and_response_metadata>
<storage_and_exports>
- Public object storage: S3/GCS/Azure blobs with world-readable ACLs or guessable keys
- Signed URLs: long-lived, weakly scoped, re-usable across tenants; metadata leaks in headers
- Export/report endpoints returning foreign data sets or unfiltered fields
</storage_and_exports>
<observability_and_admin>
- Metrics: Prometheus /metrics exposing internal hostnames, process args, SQL, credentials by mistake
- Health/config: /actuator/health, /actuator/env, Spring Boot info endpoints
- Tracing UIs and dashboards: Jaeger/Zipkin/Kibana/Grafana exposed without auth
</observability_and_admin>
<directory_and_indexing>
- Autoindex on /uploads/, /files/, /logs/, /tmp/, /assets/
- Robots/sitemap reveal hidden paths, admin panels, export feeds
</directory_and_indexing>
<cross_origin_signals>
- Referrer leakage: missing/referrer policy leading to path/query/token leaks to third parties
- CORS: overly permissive Access-Control-Allow-Origin/Expose-Headers revealing data cross-origin; preflight error shapes
</cross_origin_signals>
<file_metadata>
- EXIF, PDF/Office properties: authors, paths, software versions, timestamps, embedded objects
</file_metadata>
</surfaces>
<advanced_techniques>
<differential_oracles>
- Compare owner vs non-owner vs anonymous for the same resource and track: status, length, ETag, Last-Modified, Cache-Control
- HEAD vs GET: header-only differences can confirm existence or type without content
- Conditional requests: 304 vs 200 behaviors leak existence/state; binary search content size via Range requests
</differential_oracles>
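The Range-based size oracle above is an ordinary binary search over a boolean probe. In this sketch, `exists_at` stands in for issuing a request with `Range: bytes=<mid>-` and treating 206 Partial Content as "offset is inside the object":

```python
def probe_size(exists_at, hi: int = 1_000_000) -> int:
    # Binary-search the object's size: exists_at(mid) is True while the
    # byte offset mid still falls inside the object (i.e. size > mid).
    lo = 0
    while lo < hi:
        mid = (lo + hi) // 2
        if exists_at(mid):
            lo = mid + 1
        else:
            hi = mid
    return lo

# Simulate an object of 4242 bytes: any offset below the size yields 206.
size = 4242
print(probe_size(lambda off: off < size))  # → 4242
```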
<cdn_and_cache_keys>
- Identity-agnostic caches: CDN/proxy keys missing Authorization/tenant headers → cross-user cached responses
- Vary misconfiguration: user-agent/language vary without auth vary leaks alternate content
- 206 partial content + stale caches leak object fragments
</cdn_and_cache_keys>
<cross_channel_mirroring>
- Inconsistent hardening between REST, GraphQL, WebSocket, and gRPC; one channel leaks schema or fields hidden in others
- SSR vs CSR: server-rendered pages omit fields while JSON API includes them; compare responses
</cross_channel_mirroring>
<introspection_and_reflection>
- GraphQL: disabled introspection still leaks via errors, fragment suggestions, and client bundles containing schema
- gRPC reflection: list services/messages and infer internal resource names and flows
</introspection_and_reflection>
<cloud_specific>
- S3/GCS/Azure: anonymous listing disabled but object reads allowed; metadata headers leak owner/project identifiers
- Pre-signed URLs: audience not bound; observe key scope and lifetime in URL params
</cloud_specific>
</advanced_techniques>
<usefulness_assessment>
- Actionable signals:
- Secrets/keys/tokens that grant new access (DB creds, cloud keys, JWT signing/refresh, signed URL secrets)
- Versions with a reachable, unpatched CVE on an exposed path
- Cross-tenant identifiers/data or per-user fields that differ by principal
- File paths, service hosts, or internal URLs that enable LFI/SSRF/RCE pivots
- Cache/CDN differentials (Vary/ETag/Range) that expose other users' content
- Schema/introspection revealing hidden operations or fields that return sensitive data
- Likely benign or intended:
- Public docs or non-sensitive metadata explicitly documented as public
- Generic server names without precise versions or exploit path
- Redacted/sanitized fields with stable length/ETag across principals
- Per-user data visible only to the owner and consistent with privacy policy
</usefulness_assessment>
<triage_rubric>
- Critical: Credentials/keys; signed URL secrets; config dumps; unrestricted admin/observability panels
- High: Versions with reachable CVEs; cross-tenant data; caches serving cross-user content; schema enabling auth bypass
- Medium: Internal paths/hosts enabling LFI/SSRF pivots; source maps revealing hidden endpoints/IDs
- Low: Generic headers, marketing versions, intended documentation without exploit path
- Guidance: Always attempt a minimal, reversible proof for Critical/High; if no safe chain exists, document precise blocker and downgrade
</triage_rubric>
<escalation_playbook>
- If DVCS/backups/configs → extract secrets; test least-privileged read; rotate after coordinated disclosure
- If versions → map to CVE; verify exposure; execute minimal PoC under strict scope
- If schema/introspection → call hidden/privileged fields with non-owner tokens; confirm auth gaps
- If source maps/client JSON → mine endpoints/IDs/flags; pivot to IDOR/listing; validate filtering
- If cache/CDN keys → demonstrate cross-user cache leak via Vary/ETag/Range; escalate to broken access control
- If paths/hosts → target LFI/SSRF with harmless reads (e.g., /etc/hostname, metadata headers); avoid destructive actions
- If observability/admin → enumerate read-only info first; prove data scope breach; avoid write/exec operations
</escalation_playbook>
<exploitation_chains>
<credential_extraction>
- DVCS/config dumps exposing secrets (DB, SMTP, JWT, cloud)
- Keys → cloud control plane access; rotate and verify scope
</credential_extraction>
<version_to_cve>
1. Derive precise component versions from headers/errors/bundles.
2. Map to known CVEs and confirm reachability.
3. Execute minimal proof targeting disclosed component.
</version_to_cve>
<path_disclosure_to_lfi>
1. Paths from stack traces/templates reveal filesystem layout.
2. Use LFI/traversal to fetch config/keys.
3. Prove controlled access without altering state.
</path_disclosure_to_lfi>
<schema_to_auth_bypass>
1. Schema reveals hidden fields/endpoints.
2. Attempt requests with those fields; confirm missing authorization or field filtering.
</schema_to_auth_bypass>
</exploitation_chains>
<validation>
1. Provide raw evidence (headers/body/artifact) and explain exact data revealed.
2. Determine intent: cross-check docs/UX; classify per triage rubric (Critical/High/Medium/Low).
3. Attempt minimal, reversible exploitation or present a concrete step-by-step chain (what to try next and why).
4. Show reproducibility and minimal request set; include cross-channel confirmation where applicable.
5. Bound scope (user, tenant, environment) and data sensitivity classification.
</validation>
<false_positives>
- Intentional public docs or non-sensitive metadata with no exploit path
- Generic errors with no actionable details
- Redacted fields that do not change differential oracles (length/ETag stable)
- Version banners with no exposed vulnerable surface and no chain
- Owner-visible-only details that do not cross identity/tenant boundaries
</false_positives>
<impact>
- Accelerated exploitation of RCE/LFI/SSRF via precise versions and paths
- Credential/secret exposure leading to persistent external compromise
- Cross-tenant data disclosure through exports, caches, or mis-scoped signed URLs
- Privacy/regulatory violations and business intelligence leakage
</impact>
<pro_tips>
1. Start with artifacts (DVCS, backups, maps) before payloads; artifacts yield the fastest wins.
2. Normalize responses and diff by digest to reduce noise when comparing roles.
3. Hunt source maps and client data JSON; they often carry internal IDs and flags.
4. Probe caches/CDNs for identity-unaware keys; verify Vary includes Authorization/tenant.
5. Treat introspection and reflection as configuration findings across GraphQL/gRPC; validate per environment.
6. Mine observability endpoints last; they are noisy but high-yield in misconfigured setups.
7. Chain quickly to a concrete risk and stop—proof should be minimal and reversible.
</pro_tips>
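Pro tip 2 (normalize, then diff by digest) can be sketched as follows. The volatile-field patterns are assumptions to be tuned per target; the idea is that two roles seeing identical digests after normalization saw the same substantive content:

```python
import hashlib
import re

def normalized_digest(body: str) -> str:
    """Hash a response body after blanking volatile fields (timestamps,
    request IDs, CSRF tokens) so role-to-role comparisons are stable."""
    body = re.sub(r'"(timestamp|request_id|csrf_token)"\s*:\s*"[^"]*"', r'"\1":"-"', body)
    body = re.sub(r"\d{4}-\d{2}-\d{2}T[\d:.]+Z?", "<ts>", body)
    return hashlib.sha256(body.encode()).hexdigest()

admin = '{"data": "secret", "timestamp": "2025-11-25T10:00:00Z"}'
user  = '{"data": "secret", "timestamp": "2025-11-25T10:00:05Z"}'
# Equal digests despite differing timestamps -> no role-based difference.
print(normalized_digest(admin) == normalized_digest(user))
```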
<remember>Information disclosure is an amplifier. Convert leaks into precise, minimal exploits or clear architectural risks.</remember>
</information_disclosure_vulnerability_guide>


@@ -0,0 +1,177 @@
<open_redirect_vulnerability_guide>
<title>OPEN REDIRECT</title>
<critical>Open redirects enable phishing, OAuth/OIDC code and token theft, and allowlist bypass in server-side fetchers that follow redirects. Treat every redirect target as untrusted: canonicalize and enforce exact allowlists per scheme, host, and path.</critical>
<scope>
- Server-driven redirects (HTTP 3xx Location) and client-driven redirects (window.location, meta refresh, SPA routers)
- OAuth/OIDC/SAML flows using redirect_uri, post_logout_redirect_uri, RelayState, returnTo/continue/next
- Multi-hop chains where only the first hop is validated
- Allowlist/canonicalization bypasses across URL parsers and reverse proxies
</scope>
<methodology>
1. Inventory all redirect surfaces: login/logout, password reset, SSO/OAuth flows, payment gateways, email links, invite/verification, unsubscribe, language/locale switches, /out or /r redirectors.
2. Build a test matrix of scheme×host×path variants and encoding/unicode forms. Compare server-side validation vs browser navigation results.
3. Exercise multi-hop: trusted-domain → redirector → external. Verify if validation applies pre- or post-redirect.
4. Prove impact: credential phishing, OAuth code interception, internal egress (if a server fetcher follows redirects).
</methodology>
<discovery_techniques>
<injection_points>
- Params: redirect, url, next, return_to, returnUrl, continue, goto, target, callback, out, dest, back, to, r, u
- OAuth/OIDC/SAML: redirect_uri, post_logout_redirect_uri, RelayState, state (if used to compute final destination)
- SPA: router.push/replace, location.assign/href, meta refresh, window.open
- Headers influencing construction: Host, X-Forwarded-Host/Proto, Referer; and server-side Location echo
</injection_points>
<parser_differentials>
<userinfo>
https://trusted.com@evil.com → many validators parse host as trusted.com, browser navigates to evil.com
Variants: trusted.com%40evil.com, a%40evil.com%40trusted.com
</userinfo>
<backslash_and_slashes>
https://trusted.com\\evil.com, https://trusted.com\\@evil.com, ///evil.com, /\\evil.com
Windows/backends may normalize \\ to /; browsers differ on interpretation of extra leading slashes
</backslash_and_slashes>
<whitespace_and_ctrl>
http%09://evil.com, http%0A://evil.com, trusted.com%09evil.com
Control/whitespace around the scheme/host can split parsers
</whitespace_and_ctrl>
<fragment_and_query>
trusted.com#@evil.com, trusted.com?//@evil.com, ?next=//evil.com#@trusted.com
Validators often stop at # while the browser parses after it
</fragment_and_query>
<unicode_and_idna>
Punycode/IDN: truѕted.com (Cyrillic), trusted.com。evil.com (full-width dot), trailing dot trusted.com.
Test with mixed Unicode normalization and IDNA conversion
</unicode_and_idna>
</parser_differentials>
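The userinfo differential above can be reproduced with Python's stdlib parser, which assigns the authority the way a browser would, while a naive substring check "sees" the trusted domain:

```python
from urllib.parse import urlparse

url = "https://trusted.com@evil.com/path"

# A substring validator accepts this URL...
naive_ok = "trusted.com" in url

# ...but a real URL parser treats trusted.com as userinfo:
# the host actually navigated to is evil.com.
parsed = urlparse(url)
print(naive_ok, parsed.hostname)  # True evil.com
```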
<encoding_bypasses>
- Double encoding: %2f%2fevil.com, %252f%252fevil.com
- Mixed case and scheme smuggling: hTtPs://evil.com, http:evil.com
- IP variants: decimal 2130706433, octal 0177.0.0.1, hex 0x7f.1, IPv6 [::ffff:127.0.0.1]
- User-controlled path bases: /out?url=/\\evil.com
</encoding_bypasses>
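A small stdlib-only generator for these variants; the list is deliberately non-exhaustive (no Unicode/IDN forms), and `trusted.com`/`evil.com` are placeholders. The last line shows numeric IP forms collapsing to the same address after canonicalization:

```python
import ipaddress
from urllib.parse import quote

def redirect_variants(evil: str = "evil.com") -> list[str]:
    """Generate common open-redirect bypass candidates for one external host."""
    base = [
        f"//{evil}",                    # protocol-relative
        f"/\\{evil}",                   # backslash normalization
        f"https://trusted.com@{evil}",  # userinfo confusion
        f"https://trusted.com.{evil}",  # substring-check bypass
        f"hTtPs://{evil}",              # mixed-case scheme
    ]
    base += [quote(p, safe="") for p in base[:2]]                  # single-encoded
    base += [quote(quote(p, safe=""), safe="") for p in base[:2]]  # double-encoded
    return base

print(redirect_variants()[:3])

# Decimal IP notation canonicalizes to loopback:
print(ipaddress.ip_address(2130706433))  # 127.0.0.1
```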
</discovery_techniques>
<allowlist_evasion>
<common_mistakes>
- Substring/regex "contains" checks: allow trusted.com.evil.com, or path-only matches that still permit external hosts
- Unanchored wildcards: *.trusted.com compiled naively also matches attacker.trusted.com.evil.net
- Missing scheme pinning: data:, javascript:, file:, gopher: accepted
- Case/IDN drift between validator and browser
</common_mistakes>
<robust_validation>
- Canonicalize with a single modern URL parser (WHATWG URL) and compare exact scheme, hostname (post-IDNA), and an explicit allowlist with optional exact path prefixes
- Require absolute HTTPS; reject protocol-relative // and unknown schemes
- Validate before following any redirects; if the fetcher must follow them, re-validate the destination server-side at every hop
</robust_validation>
</allowlist_evasion>
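A sketch of the robust-validation rules above. The allowlist entries are examples; a production version would prefer a WHATWG-compliant parser over `urllib.parse`, but even the stdlib parser closes the userinfo and protocol-relative gaps when combined with exact origin comparison:

```python
from urllib.parse import urlparse

# Example allowlist of exact (scheme, host) origins.
ALLOWED_ORIGINS = {("https", "trusted.com"), ("https", "app.trusted.com")}

def is_safe_redirect(url: str) -> bool:
    """Exact post-IDNA origin comparison, HTTPS only,
    no protocol-relative URLs, no backslash tricks."""
    if url.startswith("//") or "\\" in url:
        return False
    parsed = urlparse(url)
    # Relative same-origin paths are fine; anything absolute must match exactly.
    if not parsed.scheme and not parsed.netloc:
        return url.startswith("/")
    host = parsed.hostname
    if host is None:
        return False
    try:
        host = host.encode("idna").decode("ascii")  # IDNA-normalize before comparing
    except UnicodeError:
        return False
    return (parsed.scheme, host) in ALLOWED_ORIGINS

print(is_safe_redirect("https://trusted.com@evil.com/cb"))  # False
print(is_safe_redirect("/dashboard"))                       # True
```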
<oauth_oidc_saml>
<redirect_uri_abuse>
- Using an open redirect on a trusted domain for redirect_uri enables code interception
- Weak prefix/suffix checks: https://trusted.com → https://trusted.com.evil.com; /callback → /callback@evil.com
- Path traversal/canonicalization: /oauth/../../@evil.com
- post_logout_redirect_uri often less strictly validated; test both
- state must be unguessable and bound to client/session; do not recompute final destination from state without validation
</redirect_uri_abuse>
<defense_notes>
- Pre-register exact redirect_uri values per client (no wildcards). Enforce exact scheme/host/port/path match
- For public native apps, follow RFC 8252 guidance (loopback 127.0.0.1 redirect URIs, with any ephemeral port accepted); disallow open web redirectors
- SAML RelayState should be validated against an allowlist or ignored for absolute URLs
</defense_notes>
</oauth_oidc_saml>
<client_side_vectors>
<javascript_redirects>
- location.href/assign/replace using user input; ensure targets are normalized and restricted to same-origin or allowlist
- meta refresh content=0;url=USER_INPUT; browsers treat javascript:/data: differently; still dangerous in client-controlled redirects
- SPA routers: router.push(searchParams.get('next')); enforce same-origin and strip schemes
</javascript_redirects>
</client_side_vectors>
<reverse_proxies_and_gateways>
- Host/X-Forwarded-* may change absolute URL construction; validate against server-derived canonical origin, not client headers
- CDNs that follow redirects for link checking or prefetching can leak tokens when chained with open redirects
</reverse_proxies_and_gateways>
<ssrf_chaining>
- Some server-side fetchers (web previewers, link unfurlers, validators) follow 3xx; combine with an open redirect on an allowlisted domain to pivot to internal targets (169.254.169.254, localhost, cluster addresses)
- Confirm by observing distinct error/timing for internal vs external, or OAST callbacks when reachable
</ssrf_chaining>
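For server-side fetchers, the defense is to validate every hop, not just the first. A sketch follows; it checks only literal IPs, so it is an assumption-laden simplification — a real implementation must also resolve hostnames and pin the resolved address to defeat DNS rebinding:

```python
import ipaddress
from urllib.parse import urljoin, urlparse

def hop_is_allowed(url: str) -> bool:
    """Reject non-HTTP schemes and literal private/link-local/loopback targets."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        addr = ipaddress.ip_address(parsed.hostname)
    except ValueError:
        return True  # a hostname, not a literal — still needs DNS-pinned resolution
    return not (addr.is_private or addr.is_link_local or addr.is_loopback)

def follow(start: str, locations: list[str], max_hops: int = 5) -> list[str]:
    """Walk a simulated redirect chain, validating each hop before following it."""
    chain, current = [start], start
    for loc in locations[:max_hops]:
        current = urljoin(current, loc)  # Location headers may be relative
        if not hop_is_allowed(current):
            raise ValueError(f"blocked hop: {current}")
        chain.append(current)
    return chain

print(follow("https://trusted.example/out", ["https://cdn.example/x"]))
```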
<framework_notes>
<server_side>
- Rails: redirect_to params[:url] without URI parsing; test array params and protocol-relative
- Django: HttpResponseRedirect(request.GET['next']) without url_has_allowed_host_and_scheme (formerly is_safe_url); the helper relies on ALLOWED_HOSTS + scheme checks
- Spring: return "redirect:" + param; ensure UriComponentsBuilder normalization and allowlist
- Express: res.redirect(req.query.url); use a safe redirect helper enforcing relative paths or a vetted allowlist
</server_side>
<client_side>
- React/Next.js/Vue/Angular routing based on URLSearchParams; ensure same-origin policy and disallow external schemes in client code
</client_side>
</framework_notes>
<exploitation_scenarios>
<oauth_code_interception>
1. Set redirect_uri to https://trusted.example/out?url=https://attacker.tld/cb
2. IdP sends code to trusted.example which redirects to attacker.tld
3. Exchange code for tokens; demonstrate account access
</oauth_code_interception>
<phishing_flow>
1. Send link on trusted domain: /login?next=https://attacker.tld/fake
2. Victim authenticates; browser navigates to attacker page
3. Capture credentials/tokens via cloned UI or injected JS
</phishing_flow>
<internal_evasion>
1. Server-side link unfurler fetches https://trusted.example/out?u=http://169.254.169.254/latest/meta-data
2. Redirect follows to metadata; confirm via timing/headers or controlled endpoints
</internal_evasion>
</exploitation_scenarios>
<validation>
1. Produce a minimal URL that navigates to an external domain via the vulnerable surface; include the full address bar capture.
2. Show bypass of the stated validation (regex/allowlist) using canonicalization variants.
3. Test multi-hop: prove only first hop is validated and second hop escapes constraints.
4. For OAuth/SAML, demonstrate code/RelayState delivery to an attacker-controlled endpoint with role-separated evidence.
</validation>
<false_positives>
- Redirects constrained to relative same-origin paths with robust normalization
- Exact pre-registered OAuth redirect_uri with strict verifier
- Validators using a single canonical parser and comparing post-IDNA host and scheme
- User prompts that show the exact final destination before navigating and refuse unknown schemes
</false_positives>
<impact>
- Credential and token theft via phishing and OAuth/OIDC interception
- Internal data exposure when server fetchers follow redirects (previewers/unfurlers)
- Policy bypass where allowlists are enforced only on the first hop
- Cross-application trust erosion and brand abuse
</impact>
<pro_tips>
1. Always compare server-side canonicalization to real browser navigation; differences reveal bypasses.
2. Try userinfo, protocol-relative, Unicode/IDN, and IP numeric variants early; they catch many weak validators.
3. In OAuth, prioritize post_logout_redirect_uri and less-discussed flows; they're often looser.
4. Exercise multi-hop across distinct subdomains and paths; validators commonly check only hop 1.
5. For SSRF chaining, target services known to follow redirects and log their outbound requests.
6. Favor allowlists of exact origins plus optional path prefixes; never substring/regex contains checks.
7. Keep a curated suite of redirect payloads per runtime (Java, Node, Python, Go) reflecting each parser's quirks.
</pro_tips>
<remember>Redirection is safe only when the final destination is constrained after canonicalization. Enforce exact origins, verify per hop, and treat client-provided destinations as untrusted across every stack.</remember>
</open_redirect_vulnerability_guide>


@@ -0,0 +1,155 @@
<subdomain_takeover_guide>
<title>SUBDOMAIN TAKEOVER</title>
<critical>Subdomain takeover lets an attacker serve content from a trusted subdomain by claiming resources referenced by dangling DNS (CNAME/A/ALIAS/NS) or mis-bound provider configurations. Consequences include phishing on a trusted origin, cookie and CORS pivot, OAuth redirect abuse, and CDN cache poisoning.</critical>
<scope>
- Dangling CNAME/A/ALIAS to third-party services (hosting, storage, serverless, CDN)
- Orphaned NS delegations (child zones with abandoned/expired nameservers)
- Decommissioned SaaS integrations (support, docs, marketing, forms) referenced via CNAME
- CDN “alternate domain” mappings (CloudFront/Fastly/Azure CDN) lacking ownership verification
- Storage and static hosting endpoints (S3/Blob/GCS buckets, GitHub/GitLab Pages)
</scope>
<methodology>
1. Enumerate subdomains comprehensively (web, API, mobile, legacy): aggregate CT logs, passive DNS, and org inventory. De-duplicate and normalize.
2. Resolve DNS for all RR types: A/AAAA, CNAME, NS, MX, TXT. Keep CNAME chains; record terminal CNAME targets and provider hints.
3. HTTP/TLS probe: capture status, body, length, canonical error text, Server/alt-svc headers, certificate SANs, and CDN headers (Via, X-Served-By).
4. Fingerprint providers: map known “unclaimed/missing resource” signatures to candidate services. Maintain a living dictionary.
5. Attempt claim (only with authorization): create the missing resource on the provider with the exact required name; bind the custom domain if the provider allows.
6. Validate control: serve a minimal unique payload; confirm over HTTPS; optionally obtain a DV certificate (CT log evidence) within legal scope.
</methodology>
<discovery_techniques>
<enumeration_pipeline>
- Subdomain inventory: combine CT (crt.sh APIs), passive DNS sources, in-house asset lists, IaC/terraform outputs, mobile app assets, and historical DNS
- Resolver sweep: use IPv4/IPv6-aware resolvers; track NXDOMAIN vs SERVFAIL vs provider-branded 4xx/5xx responses
- Record graph: build a CNAME graph and collapse chains to identify external endpoints (e.g., myapp.example.com → foo.azurewebsites.net)
</enumeration_pipeline>
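Collapsing the CNAME graph to terminal targets is a short, loop-guarded traversal. The record mapping below is hypothetical resolver output, illustrating the `myapp.example.com → foo.azurewebsites.net` case above:

```python
def terminal_target(host: str, cname: dict[str, str], max_depth: int = 10) -> str:
    """Follow a CNAME chain to its terminal target, guarding against loops."""
    seen: set[str] = set()
    while host in cname and host not in seen and len(seen) < max_depth:
        seen.add(host)
        host = cname[host].rstrip(".").lower()  # normalize trailing dot / case
    return host

# Hypothetical records from a resolver sweep:
records = {
    "myapp.example.com": "foo.azurewebsites.net.",
    "docs.example.com": "pages.example.com",
    "pages.example.com": "something.github.io.",
}
print(terminal_target("myapp.example.com", records))  # foo.azurewebsites.net
print(terminal_target("docs.example.com", records))   # something.github.io
```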
<dns_indicators>
- CNAME targets ending in provider domains: github.io, amazonaws.com, cloudfront.net, azurewebsites.net, blob.core.windows.net, fastly.net, vercel.app, netlify.app, herokudns.com, trafficmanager.net, azureedge.net, akamaized.net
- Orphaned NS: subzone delegated to nameservers on a domain that has expired or no longer hosts authoritative servers, or to nonexistent NS hosts
- MX to third-party mail providers with decommissioned domains (risk: mail subdomain control or delivery manipulation)
- TXT/verification artifacts (asuid, _dnsauth, _github-pages-challenge) suggesting previous external bindings
</dns_indicators>
<http_fingerprints>
- Service-specific unclaimed messages (examples, not exhaustive):
- GitHub Pages: “There isn’t a GitHub Pages site here.”
- Fastly: “Fastly error: unknown domain”
- Heroku: “No such app” or “There’s nothing here, yet.”
- S3 static site: “NoSuchBucket” / “The specified bucket does not exist”
- CloudFront (alt domain not configured): 403/400 with “The request could not be satisfied” and no matching distribution
- Azure App Service: default 404 for azurewebsites.net unless custom-domain verified (look for asuid TXT requirement)
- Shopify: “Sorry, this shop is currently unavailable”
- TLS clues: certificate CN/SAN referencing provider default host instead of the custom subdomain indicates potential mis-binding
</http_fingerprints>
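Matching response bodies against regex families rather than exact strings keeps the corpus resilient as provider wording drifts. A minimal sketch; the entries mirror the examples above and are illustrative, not a complete corpus:

```python
import re

# Illustrative fingerprint corpus: regex families per provider.
FINGERPRINTS = {
    "github-pages": re.compile(r"There isn.?t a GitHub Pages site here", re.I),
    "fastly": re.compile(r"Fastly error:\s*unknown domain", re.I),
    "heroku": re.compile(r"No such app|nothing here,?\s*yet", re.I),
    "s3": re.compile(r"NoSuchBucket|specified bucket does not exist", re.I),
}

def fingerprint(body: str) -> list[str]:
    """Return candidate providers whose 'unclaimed resource' signature matches."""
    return [name for name, rx in FINGERPRINTS.items() if rx.search(body)]

print(fingerprint("<html>Fastly error: unknown domain: sub.example.com</html>"))
```

Note `isn.?t` deliberately tolerates both straight and curly apostrophes in the provider message.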
</discovery_techniques>
<exploitation_techniques>
<claim_third_party_resource>
- Create the resource with the exact required name:
- Storage/hosting: S3 bucket “sub.example.com” (website endpoint) or bucket named after the CNAME target if provider dictates
- Pages hosting: create repo/site and add the custom domain (when provider does not enforce prior domain verification)
- Serverless/app hosting: create app/site matching the target hostname, then add custom domain mapping
- Bind the custom domain: some providers require TXT verification (modern hardened path), others historically allowed binding without proof
</claim_third_party_resource>
<cdn_alternate_domains>
- Add the victim subdomain as an alternate domain on your CDN distribution if the provider does not enforce domain ownership checks
- Upload a TLS cert via provider or use managed cert issuance if allowed; confirm 200 on the subdomain with your content
</cdn_alternate_domains>
<ns_delegation_takeover>
- If a child zone (e.g., zone.example.com) is delegated to nameservers under an expired domain (ns1.abandoned.tld), register abandoned.tld and host authoritative NS; publish records to control all hosts under the delegated subzone
- Validate with SOA/NS queries and serve a verification token; then add A/CNAME/MX/TXT as needed
</ns_delegation_takeover>
<mail_surface>
- If MX points to a decommissioned provider that allowed inbox creation without domain re-verification (historically), a takeover could enable email receipt for that subdomain; modern providers generally require explicit TXT ownership
</mail_surface>
</exploitation_techniques>
<advanced_techniques>
<blind_and_cache_channels>
- CDN edge behavior: 404/421 vs 403 differentials reveal whether an alt name is partially configured; probe with Host header manipulation
- Cache poisoning: once taken over, exploit cache keys and Vary headers to persist malicious responses at the edge
</blind_and_cache_channels>
<ct_and_tls>
- Use CT logs to detect unexpected certificate issuance for your subdomain; for PoC, issue a DV cert post-takeover (within scope) to produce verifiable evidence
</ct_and_tls>
<oauth_and_trust_chains>
- If the subdomain is whitelisted as an OAuth redirect/callback or in CSP/script-src, a takeover elevates impact to account takeover or script injection on trusted origins
</oauth_and_trust_chains>
<provider_edges>
- Many providers hardened domain binding (TXT verification) but legacy projects or specific products remain weak; verify per-product behavior (CDN vs app hosting vs storage)
- Multi-tenant providers sometimes accept custom domains at the edge even when backend resource is missing; leverage timing and registration windows
</provider_edges>
</advanced_techniques>
<bypass_techniques>
<verification_gaps>
- Look for providers that accept domain binding prior to TXT verification, or where verification is optional for trial/legacy tiers
- Race windows: re-claim resource names immediately after victim deletion while DNS still points to provider
</verification_gaps>
<wildcards_and_fallbacks>
- Wildcard CNAMEs to providers may expose unbounded subdomains; test random hosts to identify service-wide unclaimed behavior
- Fallback origins: CDNs configured with multiple origins may expose unknown-domain responses from a default origin that is claimable
</wildcards_and_fallbacks>
</bypass_techniques>
<special_contexts>
<storage_and_static>
- S3/GCS/Azure Blob static sites: bucket naming constraints dictate whether a bucket can match hostname; website vs API endpoints differ in claimability and fingerprints
</storage_and_static>
<serverless_and_hosting>
- GitHub/GitLab Pages, Netlify, Vercel, Azure Static Web Apps: domain binding flows vary; most require TXT now, but historical projects or specific paths may not
</serverless_and_hosting>
<cdn_and_edge>
- CloudFront/Fastly/Azure CDN/Akamai: alternate domain verification differs; some products historically allowed alt-domain claims without proof
</cdn_and_edge>
<dns_delegations>
- Child-zone NS delegations outrank parent records; control of delegated NS yields full control of all hosts below that label
</dns_delegations>
</special_contexts>
<validation>
1. Before: record DNS chain, HTTP response (status/body length/fingerprint), and TLS details.
2. After claim: serve unique content and verify over HTTPS at the target subdomain.
3. Optional: issue a DV certificate (legal scope) and reference CT entry as durable evidence.
4. Demonstrate impact chains (CSP/script-src trust, OAuth redirect acceptance, cookie Domain scoping) with minimal PoCs.
</validation>
<false_positives>
- “Unknown domain” pages that are not claimable due to enforced TXT/ownership checks
- Provider-branded default pages for valid, owned resources (not a takeover) versus “unclaimed resource” states
- Soft 404s from your own infrastructure or catch-all vhosts
</false_positives>
<impact>
- Content injection under trusted subdomain: phishing, malware delivery, brand damage
- Cookie and CORS pivot: if parent site sets Domain-scoped cookies or allows subdomain origins in CORS/Trusted Types/CSP
- OAuth/SSO abuse via whitelisted redirect URIs
- Email delivery manipulation for subdomain (MX/DMARC/SPF interactions in edge cases)
</impact>
<pro_tips>
1. Build a pipeline: enumerate (subfinder/amass) → resolve (dnsx) → probe (httpx) → fingerprint (nuclei/custom) → verify claims.
2. Maintain a current fingerprint corpus; provider messages change frequently—prefer regex families over exact strings.
3. Prefer minimal PoCs: static “ownership proof” page and, where allowed, DV cert issuance for auditability.
4. Monitor CT for unexpected certs on your subdomains; alert and investigate.
5. Eliminate dangling DNS in decommission workflows first; deletion of the app/service must remove or block the DNS target.
6. For NS delegations, treat any expired nameserver domain as critical; reassign or remove delegation immediately.
7. Use CAA to limit certificate issuance while you triage; it reduces the blast radius for taken-over hosts.
</pro_tips>
<remember>Subdomain safety is lifecycle safety: if DNS points at anything, you must own and verify the thing on every provider and product path. Remove or verify—there is no safe middle.</remember>
</subdomain_takeover_guide>


@@ -358,11 +358,7 @@ class DockerRuntime(AbstractRuntime):
             container = self.client.containers.get(container_id)
             container.reload()
-            host = "127.0.0.1"
-            if "DOCKER_HOST" in os.environ:
-                docker_host = os.environ["DOCKER_HOST"]
-                if "://" in docker_host:
-                    host = docker_host.split("://")[1].split(":")[0]
+            host = self._resolve_docker_host()
         except NotFound:
             raise ValueError(f"Container {container_id} not found.") from None
@@ -371,6 +367,20 @@ class DockerRuntime(AbstractRuntime):
         else:
             return f"http://{host}:{port}"

+    def _resolve_docker_host(self) -> str:
+        docker_host = os.getenv("DOCKER_HOST", "")
+        if not docker_host:
+            return "127.0.0.1"
+
+        from urllib.parse import urlparse
+
+        parsed = urlparse(docker_host)
+        if parsed.scheme in ("tcp", "http", "https") and parsed.hostname:
+            return parsed.hostname
+
+        return "127.0.0.1"
+
     async def destroy_sandbox(self, container_id: str) -> None:
         logger.info("Destroying scan container %s", container_id)
         try:


@@ -50,6 +50,7 @@ class Tracer:
         self._run_dir: Path | None = None
         self._next_execution_id = 1
         self._next_message_id = 1
+        self._saved_vuln_ids: set[str] = set()
         self.vulnerability_found_callback: Callable[[str, str, str, str], None] | None = None
@@ -59,7 +60,7 @@ class Tracer:
     def get_run_dir(self) -> Path:
         if self._run_dir is None:
-            runs_dir = Path.cwd() / "agent_runs"
+            runs_dir = Path.cwd() / "strix_runs"
             runs_dir.mkdir(exist_ok=True)
             run_dir_name = self.run_name if self.run_name else self.run_id
@@ -92,6 +93,7 @@ class Tracer:
             report_id, title.strip(), content.strip(), severity.lower().strip()
         )
+        self.save_run_data()
         return report_id

     def set_final_scan_result(
@@ -108,6 +110,7 @@ class Tracer:
         }
         logger.info(f"Set final scan result: success={success}")
+        self.save_run_data(mark_complete=True)

     def log_agent_creation(
         self, agent_id: str, name: str, task: str, parent_id: str | None = None
@@ -197,11 +200,13 @@ class Tracer:
                 "max_iterations": config.get("max_iterations", 200),
             }
         )
+        self.get_run_dir()

-    def save_run_data(self) -> None:
+    def save_run_data(self, mark_complete: bool = False) -> None:
         try:
             run_dir = self.get_run_dir()
-            self.end_time = datetime.now(UTC).isoformat()
+            if mark_complete:
+                self.end_time = datetime.now(UTC).isoformat()

             if self.final_scan_result:
                 penetration_test_report_file = run_dir / "penetration_test_report.md"
@@ -219,13 +224,13 @@ class Tracer:
             vuln_dir = run_dir / "vulnerabilities"
             vuln_dir.mkdir(exist_ok=True)

-            severity_order = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}
-            sorted_reports = sorted(
-                self.vulnerability_reports,
-                key=lambda x: (severity_order.get(x["severity"], 5), x["timestamp"]),
-            )
+            new_reports = [
+                report
+                for report in self.vulnerability_reports
+                if report["id"] not in self._saved_vuln_ids
+            ]

-            for report in sorted_reports:
+            for report in new_reports:
                 vuln_file = vuln_dir / f"{report['id']}.md"
                 with vuln_file.open("w", encoding="utf-8") as f:
                     f.write(f"# {report['title']}\n\n")
@@ -234,30 +239,39 @@ class Tracer:
                     f.write(f"**Found:** {report['timestamp']}\n\n")
                     f.write("## Description\n\n")
                     f.write(f"{report['content']}\n")
+                self._saved_vuln_ids.add(report["id"])

-            vuln_csv_file = run_dir / "vulnerabilities.csv"
-            with vuln_csv_file.open("w", encoding="utf-8", newline="") as f:
-                import csv
+            if self.vulnerability_reports:
+                severity_order = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}
+                sorted_reports = sorted(
+                    self.vulnerability_reports,
+                    key=lambda x: (severity_order.get(x["severity"], 5), x["timestamp"]),
+                )

-                fieldnames = ["id", "title", "severity", "timestamp", "file"]
-                writer = csv.DictWriter(f, fieldnames=fieldnames)
-                writer.writeheader()
+                vuln_csv_file = run_dir / "vulnerabilities.csv"
+                with vuln_csv_file.open("w", encoding="utf-8", newline="") as f:
+                    import csv

-                for report in sorted_reports:
-                    writer.writerow(
-                        {
-                            "id": report["id"],
-                            "title": report["title"],
-                            "severity": report["severity"].upper(),
-                            "timestamp": report["timestamp"],
-                            "file": f"vulnerabilities/{report['id']}.md",
-                        }
-                    )
+                    fieldnames = ["id", "title", "severity", "timestamp", "file"]
+                    writer = csv.DictWriter(f, fieldnames=fieldnames)
+                    writer.writeheader()

-            logger.info(
-                f"Saved {len(self.vulnerability_reports)} vulnerability reports to: {vuln_dir}"
-            )
-            logger.info(f"Saved vulnerability index to: {vuln_csv_file}")
+                    for report in sorted_reports:
+                        writer.writerow(
+                            {
+                                "id": report["id"],
+                                "title": report["title"],
+                                "severity": report["severity"].upper(),
+                                "timestamp": report["timestamp"],
+                                "file": f"vulnerabilities/{report['id']}.md",
+                            }
+                        )

+            if new_reports:
+                logger.info(
+                    f"Saved {len(new_reports)} new vulnerability report(s) to: {vuln_dir}"
+                )
+                logger.info(f"Updated vulnerability index: {vuln_csv_file}")

             logger.info(f"📊 Essential scan data saved to: {run_dir}")
@@ -320,4 +334,4 @@ class Tracer:
         }

     def cleanup(self) -> None:
-        self.save_run_data()
+        self.save_run_data(mark_complete=True)


@@ -230,9 +230,18 @@ def create_agent(
     state = AgentState(task=task, agent_name=name, parent_id=parent_id, max_iterations=300)
-    llm_config = LLMConfig(prompt_modules=module_list)
     parent_agent = _agent_instances.get(parent_id)

+    timeout = None
+    if (
+        parent_agent
+        and hasattr(parent_agent, "llm_config")
+        and hasattr(parent_agent.llm_config, "timeout")
+    ):
+        timeout = parent_agent.llm_config.timeout
+    llm_config = LLMConfig(prompt_modules=module_list, timeout=timeout)

     agent_config = {
         "llm_config": llm_config,
         "state": state,


@@ -87,10 +87,6 @@ Only create a new agent if no existing agent is handling the specific task.</des
 <description>Response containing: - agent_id: Unique identifier for the created agent - success: Whether the agent was created successfully - message: Status message - agent_info: Details about the created agent</description>
 </returns>
 <examples>
-# REQUIRED: Check agent graph again before creating another agent
-<function=view_agent_graph>
-</function>
-
 # After confirming no SQL testing agent exists, create agent for vulnerability validation
 <function=create_agent>
 <parameter=task>Validate and exploit the suspected SQL injection vulnerability found in
@@ -125,11 +121,16 @@ Only create a new agent if no existing agent is handling the specific task.</des
 </tool>
 <tool name="send_message_to_agent">
 <description>Send a message to another agent in the graph for coordination and communication.</description>
-<details>This enables agents to communicate with each other during execution for:
+<details>This enables agents to communicate with each other during execution, but should be used only when essential:
 - Sharing discovered information or findings
 - Asking questions or requesting assistance
 - Providing instructions or coordination
-- Reporting status or results</details>
+- Reporting status or results
+
+Best practices:
+- Avoid routine status updates; batch non-urgent information
+- Prefer parent/child completion flows (agent_finish)
+- Do not message when the context is already known</details>
 <parameters>
 <parameter name="target_agent_id" type="string" required="true">
 <description>ID of the agent to send the message to</description>