Merge origin/main into better-whitebox

This commit is contained in:
bearsyankees
2026-03-19 19:36:17 -06:00
66 changed files with 4712 additions and 336 deletions

View File

@@ -27,7 +27,7 @@ If applicable, add screenshots to help explain your problem.
- OS: [e.g. Ubuntu 22.04]
- Strix Version or Commit: [e.g. 0.1.18]
- Python Version: [e.g. 3.12]
- LLM Used: [e.g. GPT-5, Claude Sonnet 4]
- LLM Used: [e.g. GPT-5, Claude Sonnet 4.6]
**Additional context**
Add any other context about the problem here.

View File

@@ -15,7 +15,7 @@
<a href="https://docs.strix.ai"><img src="https://img.shields.io/badge/Docs-docs.strix.ai-2b9246?style=for-the-badge&logo=gitbook&logoColor=white" alt="Docs"></a>
<a href="https://strix.ai"><img src="https://img.shields.io/badge/Website-strix.ai-f0f0f0?style=for-the-badge&logoColor=000000" alt="Website"></a>
[![](https://dcbadge.limes.pink/api/server/8Suzzd9z)](https://discord.gg/strix-ai)
[![](https://dcbadge.limes.pink/api/server/strix-ai)](https://discord.gg/strix-ai)
<a href="https://deepwiki.com/usestrix/strix"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></a>
<a href="https://github.com/usestrix/strix"><img src="https://img.shields.io/github/stars/usestrix/strix?style=flat-square" alt="GitHub Stars"></a>
@@ -32,6 +32,7 @@
</div>
> [!TIP]
> **New!** Strix integrates seamlessly with GitHub Actions and CI/CD pipelines. Automatically scan for vulnerabilities on every pull request and block insecure code before it reaches production!
@@ -72,7 +73,9 @@ Strix are autonomous AI agents that act just like real hackers - they run your c
**Prerequisites:**
- Docker (running)
- An LLM provider key (e.g. [get OpenAI API key](https://platform.openai.com/api-keys) or use a local LLM)
- An LLM API key:
- Any [supported provider](https://docs.strix.ai/llm-providers/overview) (OpenAI, Anthropic, Google, etc.)
- Or [Strix Router](https://models.strix.ai) — single API key for multiple providers
### Installation & First Scan
@@ -80,11 +83,8 @@ Strix are autonomous AI agents that act just like real hackers - they run your c
# Install Strix
curl -sSL https://strix.ai/install | bash
# Or via pipx
pipx install strix-agent
# Configure your AI provider
export STRIX_LLM="openai/gpt-5"
export STRIX_LLM="openai/gpt-5" # or "strix/gpt-5" via Strix Router (https://models.strix.ai)
export LLM_API_KEY="your-api-key"
# Run your first security assessment
@@ -96,6 +96,20 @@ strix --target ./app-directory
---
## ☁️ Strix Platform
Try the Strix full-stack security platform at **[app.strix.ai](https://app.strix.ai)** — sign up for free, connect your repos and domains, and launch a pentest in minutes.
- **Validated findings with PoCs** and reproduction steps
- **One-click autofix** as ready-to-merge pull requests
- **Continuous monitoring** across code, cloud, and infrastructure
- **Integrations** with GitHub, Slack, Jira, Linear, and CI/CD pipelines
- **Continuous learning** that builds on past findings and remediations
[**Start your first pentest →**](https://app.strix.ai)
---
## ✨ Features
### Agentic Security Tools
@@ -229,11 +243,15 @@ export STRIX_REASONING_EFFORT="high" # control thinking effort (default: high,
**Recommended models for best results:**
- [OpenAI GPT-5](https://openai.com/api/) — `openai/gpt-5`
- [Anthropic Claude Sonnet 4.5](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-5`
- [Anthropic Claude Sonnet 4.6](https://claude.com/platform/api) — `anthropic/claude-sonnet-4-6`
- [Google Gemini 3 Pro Preview](https://cloud.google.com/vertex-ai) — `vertex_ai/gemini-3-pro-preview`
See the [LLM Providers documentation](https://docs.strix.ai/llm-providers/overview) for all supported providers including Vertex AI, Bedrock, Azure, and local models.
## Enterprise
Get the same Strix experience with [enterprise-grade](https://strix.ai/demo) controls: SSO (SAML/OIDC), custom compliance reports, dedicated support & SLA, custom deployment options (VPC/self-hosted), BYOK model support, and tailored agents optimized for your environment. [Learn more](https://strix.ai/demo).
## Documentation
Full documentation is available at **[docs.strix.ai](https://docs.strix.ai)** — including detailed guides for usage, CI/CD integrations, skills, and advanced configuration.

View File

@@ -9,7 +9,7 @@ if [ ! -f /app/certs/ca.p12 ]; then
exit 1
fi
caido-cli --listen 127.0.0.1:${CAIDO_PORT} \
caido-cli --listen 0.0.0.0:${CAIDO_PORT} \
--allow-guests \
--no-logging \
--no-open \

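The change from `127.0.0.1` to `0.0.0.0` above is what makes the Caido proxy reachable from outside the sandbox: a loopback bind is confined to the container's own network namespace, while `0.0.0.0` accepts connections on every interface. A tiny standalone Python illustration of the difference (not Strix code):

```python
import socket

def bind_probe(host: str) -> str:
    # 127.0.0.1 confines the listener to loopback inside the current
    # network namespace; 0.0.0.0 accepts connections on every interface,
    # which is what lets the host reach a service running in a container.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))  # port 0: let the OS pick a free port
    bound_host = s.getsockname()[0]
    s.close()
    return bound_host
```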
View File

@@ -8,7 +8,7 @@ Configure Strix using environment variables or a config file.
## LLM Configuration
<ParamField path="STRIX_LLM" type="string" required>
Model name in LiteLLM format (e.g., `openai/gpt-5`, `anthropic/claude-sonnet-4-5`).
Model name in LiteLLM format (e.g., `openai/gpt-5`, `anthropic/claude-sonnet-4-6`).
</ParamField>
<ParamField path="LLM_API_KEY" type="string">
@@ -46,9 +46,37 @@ Configure Strix using environment variables or a config file.
</ParamField>
<ParamField path="STRIX_TELEMETRY" default="1" type="string">
Enable/disable anonymous telemetry. Set to `0`, `false`, `no`, or `off` to disable.
Global default for telemetry. Set to `0`, `false`, `no`, or `off` to disable both PostHog and OTEL unless overridden by the per-channel flags below.
</ParamField>
<ParamField path="STRIX_OTEL_TELEMETRY" type="string">
Enable/disable OpenTelemetry run observability independently. When unset, falls back to `STRIX_TELEMETRY`.
</ParamField>
<ParamField path="STRIX_POSTHOG_TELEMETRY" type="string">
Enable/disable PostHog product telemetry independently. When unset, falls back to `STRIX_TELEMETRY`.
</ParamField>
<ParamField path="TRACELOOP_BASE_URL" type="string">
OTLP/Traceloop base URL for remote OpenTelemetry export. If unset, Strix keeps traces local only.
</ParamField>
<ParamField path="TRACELOOP_API_KEY" type="string">
API key used for remote trace export. Remote export is enabled only when both `TRACELOOP_BASE_URL` and `TRACELOOP_API_KEY` are set.
</ParamField>
<ParamField path="TRACELOOP_HEADERS" type="string">
Optional custom OTEL headers (JSON object or `key=value,key2=value2`). Useful for Langfuse or custom/self-hosted OTLP gateways.
</ParamField>
When remote OTEL vars are not set, Strix still writes complete run telemetry locally to:
```bash
strix_runs/<run_name>/events.jsonl
```
When remote vars are set, Strix dual-writes telemetry to both local JSONL and the remote OTEL endpoint.
## Docker Configuration
<ParamField path="STRIX_IMAGE" default="ghcr.io/usestrix/strix-sandbox:0.1.12" type="string">
@@ -106,4 +134,5 @@ export PERPLEXITY_API_KEY="pplx-..."
# Optional: Custom timeouts
export LLM_TIMEOUT="600"
export STRIX_SANDBOX_EXECUTION_TIMEOUT="300"
```

View File

@@ -81,6 +81,21 @@ Protocol-specific testing techniques.
| --------- | ------------------------------------------------ |
| `graphql` | GraphQL introspection, batching, resolver issues |
### Tooling
Sandbox CLI playbooks for core recon and scanning tools.
| Skill | Coverage |
| ----------- | ------------------------------------------------------- |
| `nmap` | Port/service scan syntax and high-signal scan patterns |
| `nuclei` | Template selection, severity filtering, and rate tuning |
| `httpx` | HTTP probing and fingerprint output patterns |
| `ffuf` | Wordlist fuzzing, matcher/filter strategy, recursion |
| `subfinder` | Passive subdomain enumeration and source control |
| `naabu` | Fast port scanning with explicit rate/verify controls |
| `katana` | Crawl depth/JS/known-files behavior and pitfalls |
| `sqlmap` | SQLi workflow for enumeration and controlled extraction |
## Skill Structure
Each skill is a Markdown file with YAML frontmatter for metadata:

View File

@@ -32,6 +32,7 @@
"group": "LLM Providers",
"pages": [
"llm-providers/overview",
"llm-providers/models",
"llm-providers/openai",
"llm-providers/anthropic",
"llm-providers/openrouter",

View File

@@ -6,7 +6,7 @@ description: "Configure Strix with Claude models"
## Setup
```bash
export STRIX_LLM="anthropic/claude-sonnet-4-5"
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="sk-ant-..."
```
@@ -14,8 +14,8 @@ export LLM_API_KEY="sk-ant-..."
| Model | Description |
|-------|-------------|
| `anthropic/claude-sonnet-4-5` | Best balance of intelligence and speed (recommended) |
| `anthropic/claude-opus-4-5` | Maximum capability for deep analysis |
| `anthropic/claude-sonnet-4-6` | Best balance of intelligence and speed |
| `anthropic/claude-opus-4-6` | Maximum capability for deep analysis |
## Get API Key

View File

@@ -0,0 +1,75 @@
---
title: "Strix Router"
description: "Access top LLMs through a single API with high rate limits and zero data retention"
---
Strix Router gives you access to the best LLMs through a single API key.
<Note>
Strix Router is currently in **beta**. It's completely optional — Strix works with any [LiteLLM-compatible provider](/llm-providers/overview) using your own API keys, or with [local models](/llm-providers/local). Strix Router is just the setup we test and optimize for.
</Note>
## Why Use Strix Router?
- **High rate limits** — No throttling during long-running scans
- **Zero data retention** — Routes to providers with zero data retention policies enabled
- **Failover & load balancing** — Automatic fallback across providers for reliability
- **Simple setup** — One API key, one environment variable, no provider accounts needed
- **No markup** — Same token pricing as the underlying providers, no extra fees
## Quick Start
1. Get your API key at [models.strix.ai](https://models.strix.ai)
2. Set your environment:
```bash
export LLM_API_KEY='your-strix-api-key'
export STRIX_LLM='strix/gpt-5'
```
3. Run a scan:
```bash
strix --target ./your-app
```
## Available Models
### Anthropic
| Model | ID |
|-------|-----|
| Claude Sonnet 4.6 | `strix/claude-sonnet-4.6` |
| Claude Opus 4.6 | `strix/claude-opus-4.6` |
### OpenAI
| Model | ID |
|-------|-----|
| GPT-5.2 | `strix/gpt-5.2` |
| GPT-5.1 | `strix/gpt-5.1` |
| GPT-5 | `strix/gpt-5` |
### Google
| Model | ID |
|-------|-----|
| Gemini 3 Pro | `strix/gemini-3-pro-preview` |
| Gemini 3 Flash | `strix/gemini-3-flash-preview` |
### Other
| Model | ID |
|-------|-----|
| GLM-5 | `strix/glm-5` |
| GLM-4.7 | `strix/glm-4.7` |
## Configuration Reference
<ParamField path="LLM_API_KEY" type="string" required>
Your Strix API key from [models.strix.ai](https://models.strix.ai).
</ParamField>
<ParamField path="STRIX_LLM" type="string" required>
Model ID from the tables above. Must be prefixed with `strix/`.
</ParamField>

View File

@@ -19,7 +19,7 @@ Access any model on OpenRouter using the format `openrouter/<provider>/<model>`:
| Model | Configuration |
|-------|---------------|
| GPT-5 | `openrouter/openai/gpt-5` |
| Claude 4.5 Sonnet | `openrouter/anthropic/claude-sonnet-4.5` |
| Claude Sonnet 4.6 | `openrouter/anthropic/claude-sonnet-4.6` |
| Gemini 3 Pro | `openrouter/google/gemini-3-pro-preview` |
| GLM-4.7 | `openrouter/z-ai/glm-4.7` |

View File

@@ -5,31 +5,54 @@ description: "Configure your AI model for Strix"
Strix uses [LiteLLM](https://docs.litellm.ai/docs/providers) for model compatibility, supporting 100+ LLM providers.
## Recommended Models
## Strix Router (Recommended)
For best results, use one of these models:
The fastest way to get started. [Strix Router](/llm-providers/models) gives you access to tested models with the highest rate limits and zero data retention.
```bash
export STRIX_LLM="strix/gpt-5"
export LLM_API_KEY="your-strix-api-key"
```
Get your API key at [models.strix.ai](https://models.strix.ai).
## Bring Your Own Key
You can also use any LiteLLM-compatible provider with your own API keys:
| Model | Provider | Configuration |
| ----------------- | ------------- | -------------------------------- |
| GPT-5 | OpenAI | `openai/gpt-5` |
| Claude 4.5 Sonnet | Anthropic | `anthropic/claude-sonnet-4-5` |
| Claude Sonnet 4.6 | Anthropic | `anthropic/claude-sonnet-4-6` |
| Gemini 3 Pro | Google Vertex | `vertex_ai/gemini-3-pro-preview` |
## Quick Setup
```bash
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="your-api-key"
```
## Local Models
Run models locally with [Ollama](https://ollama.com), [LM Studio](https://lmstudio.ai), or any OpenAI-compatible server:
```bash
export STRIX_LLM="ollama/llama4"
export LLM_API_BASE="http://localhost:11434"
```
See the [Local Models guide](/llm-providers/local) for setup instructions and recommended models.
## Provider Guides
<CardGroup cols={2}>
<Card title="Strix Router" href="/llm-providers/models">
The recommended model router, with high rate limits.
</Card>
<Card title="OpenAI" href="/llm-providers/openai">
GPT-5 and Codex models.
GPT-5 models.
</Card>
<Card title="Anthropic" href="/llm-providers/anthropic">
Claude 4.5 Sonnet, Opus, and Haiku.
Claude Opus, Sonnet, and Haiku.
</Card>
<Card title="OpenRouter" href="/llm-providers/openrouter">
Access 100+ models through a single API.
@@ -38,7 +61,7 @@ export LLM_API_KEY="your-api-key"
Gemini 3 models via Google Cloud.
</Card>
<Card title="AWS Bedrock" href="/llm-providers/bedrock">
Claude 4.5 and Titan models via AWS.
Claude and Titan models via AWS.
</Card>
<Card title="Azure OpenAI" href="/llm-providers/azure">
GPT-5 via Azure.
@@ -54,7 +77,7 @@ Use LiteLLM's `provider/model-name` format:
```
openai/gpt-5
anthropic/claude-sonnet-4-5
anthropic/claude-sonnet-4-6
vertex_ai/gemini-3-pro-preview
bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0
ollama/llama4

View File

@@ -44,7 +44,7 @@ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
```bash
export VERTEXAI_PROJECT="your-project-id"
export VERTEXAI_LOCATION="us-central1"
export VERTEXAI_LOCATION="global"
```
## Prerequisites

View File

@@ -6,7 +6,7 @@ description: "Install Strix and run your first security scan"
## Prerequisites
- Docker (running)
- An LLM provider API key (OpenAI, Anthropic, or local model)
- An LLM API key — use [Strix Router](/llm-providers/models) for the easiest setup, or bring your own key from any [supported provider](/llm-providers/overview)
## Installation
@@ -27,13 +27,23 @@ description: "Install Strix and run your first security scan"
Set your LLM provider:
```bash
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="your-api-key"
```
<Tabs>
<Tab title="Strix Router">
```bash
export STRIX_LLM="strix/gpt-5"
export LLM_API_KEY="your-strix-api-key"
```
</Tab>
<Tab title="Bring Your Own Key">
```bash
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="your-api-key"
```
</Tab>
</Tabs>
<Tip>
For best results, use `openai/gpt-5`, `anthropic/claude-sonnet-4-5`, or `vertex_ai/gemini-3-pro-preview`.
For best results, use `strix/gpt-5`, `strix/claude-opus-4.6`, or `strix/gpt-5.2`.
</Tip>
## Run Your First Scan

View File

@@ -80,6 +80,27 @@ for req in user_requests.get('requests', []):
print(f"Potential IDOR: {test_id} returned 200")
```
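The snippet above ends mid-loop in the diff; a self-contained version of the same IDOR probe might look like this (the endpoint path and the `fetch` callback are hypothetical — inject your own HTTP client):

```python
def probe_idor(fetch, base_url: str, candidate_ids: list[int]) -> list[int]:
    """Return IDs that unexpectedly answer 200 for the current session.

    fetch(url) -> HTTP status code; injected so the probe can be
    exercised offline or wired to any HTTP client.
    """
    hits = []
    for test_id in candidate_ids:
        status = fetch(f"{base_url}/api/orders/{test_id}")  # hypothetical endpoint
        if status == 200:
            print(f"Potential IDOR: {test_id} returned 200")
            hits.append(test_id)
    return hits
```

With `requests`, for example, `fetch` could be `lambda url: requests.get(url, cookies=session_cookies).status_code`.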
## Human-in-the-Loop
Strix exposes the Caido proxy to your host machine, so you can interact with it alongside the automated scan. When the sandbox starts, the Caido URL is displayed in the TUI sidebar — click it to copy, then open it in Caido Desktop.
### Accessing Caido
1. Start a scan as usual
2. Look for the **Caido** URL in the sidebar stats panel (e.g. `localhost:52341`)
3. Open the URL in Caido Desktop
4. Click **Continue as guest** to access the instance
### What You Can Do
- **Inspect traffic** — Browse all HTTP/HTTPS requests the agent is making in real time
- **Replay requests** — Take any captured request and resend it with your own modifications
- **Intercept and modify** — Pause requests mid-flight, edit them, then forward
- **Explore the sitemap** — See the full attack surface the agent has discovered
- **Manual testing** — Use Caido's tools to test findings the agent reports, or explore areas it hasn't reached
This turns Strix from a fully automated scanner into a collaborative tool — the agent handles the heavy lifting while you focus on the interesting parts.
## Scope
Create scopes to filter traffic to relevant domains:

poetry.lock (generated, 1630 lines changed)

File diff suppressed because it is too large

View File

@@ -1,6 +1,6 @@
[tool.poetry]
name = "strix-agent"
version = "0.7.0"
version = "0.8.2"
description = "Open-source AI Hackers for your apps"
authors = ["Strix <hi@usestrix.com>"]
readme = "README.md"
@@ -56,6 +56,9 @@ textual = "^4.0.0"
xmltodict = "^0.13.0"
requests = "^2.32.0"
cvss = "^3.2"
traceloop-sdk = "^0.53.0"
opentelemetry-exporter-otlp-proto-http = "^1.40.0"
scrubadub = "^2.0.1"
# Optional LLM provider dependencies
google-cloud-aiplatform = { version = ">=1.38", optional = true }
@@ -148,6 +151,9 @@ module = [
"libtmux.*",
"pytest.*",
"cvss.*",
"opentelemetry.*",
"scrubadub.*",
"traceloop.*",
]
ignore_missing_imports = true
@@ -155,6 +161,7 @@ ignore_missing_imports = true
[[tool.mypy.overrides]]
module = ["tests.*"]
disallow_untyped_decorators = false
disallow_untyped_defs = false
# ============================================================================
# Ruff Configuration (Fast Python Linter & Formatter)

View File

@@ -4,7 +4,7 @@ set -euo pipefail
APP=strix
REPO="usestrix/strix"
STRIX_IMAGE="ghcr.io/usestrix/strix-sandbox:0.1.11"
STRIX_IMAGE="ghcr.io/usestrix/strix-sandbox:0.1.12"
MUTED='\033[0;2m'
RED='\033[0;31m'
@@ -335,14 +335,18 @@ echo -e "${MUTED} AI Penetration Testing Agent${NC}"
echo ""
echo -e "${MUTED}To get started:${NC}"
echo ""
echo -e " ${CYAN}1.${NC} Set your LLM provider:"
echo -e " ${MUTED}export STRIX_LLM='openai/gpt-5'${NC}"
echo -e " ${MUTED}export LLM_API_KEY='your-api-key'${NC}"
echo -e " ${CYAN}1.${NC} Get your Strix API key:"
echo -e " ${MUTED}https://models.strix.ai${NC}"
echo ""
echo -e " ${CYAN}2.${NC} Run a penetration test:"
echo -e " ${CYAN}2.${NC} Set your environment:"
echo -e " ${MUTED}export LLM_API_KEY='your-api-key'${NC}"
echo -e " ${MUTED}export STRIX_LLM='strix/gpt-5'${NC}"
echo ""
echo -e " ${CYAN}3.${NC} Run a penetration test:"
echo -e " ${MUTED}strix --target https://example.com${NC}"
echo ""
echo -e "${MUTED}For more information visit ${NC}https://strix.ai"
echo -e "${MUTED}Supported models ${NC}https://docs.strix.ai/llm-providers/overview"
echo -e "${MUTED}Join our community ${NC}https://discord.gg/strix-ai"
echo ""

View File

@@ -21,6 +21,18 @@ INTER-AGENT MESSAGES:
- NEVER echo agent_identity blocks; treat them as internal metadata for identity only. Do not include them in outputs or tool calls.
- Minimize inter-agent messaging: only message when essential for coordination or assistance; avoid routine status updates; batch non-urgent information; prefer parent/child completion flows and shared artifacts over messaging
{% if interactive %}
INTERACTIVE BEHAVIOR:
- You are in an interactive conversation with a user
- CRITICAL: A message WITHOUT a tool call IMMEDIATELY STOPS execution and waits for user input. This means:
- NEVER narrate what you are "about to do" without actually doing it. Statements like "I'll now launch the browser..." or "Let me scan the target..." WITHOUT a tool call will HALT your work.
- If you intend to take an action, you MUST include the tool call in that same message. Describe what you're doing AND call the tool together.
- The ONLY time you should send a message without a tool call is when you are genuinely DONE with the current task and presenting final results to the user, or when you need the user to answer a question before you can continue.
- While working on a task, every single message MUST contain a tool call — this is what keeps execution moving
- You may include brief explanatory text alongside the tool call
- Respond naturally when the user asks questions or gives instructions
- NEVER send empty messages — if you have nothing to do or say, call the wait_for_message tool
{% else %}
AUTONOMOUS BEHAVIOR:
- Work autonomously by default
- You should NOT ask for user input or confirmation - you should always proceed with your task autonomously.
@@ -28,6 +40,7 @@ AUTONOMOUS BEHAVIOR:
- NEVER send an empty or blank message. If you have no content to output or need to wait (for user input, subagent results, or any other reason), you MUST call the wait_for_message tool (or another appropriate tool) instead of emitting an empty response.
- If there is nothing to execute and no user query to answer any more: do NOT send filler/repetitive text — either call wait_for_message or finish your work (subagents: agent_finish; root: finish_scan)
- While the agent loop is running, almost every output MUST be a tool call. Do NOT send plain text messages; act via tools. If idle, use wait_for_message; when done, use agent_finish (subagents) or finish_scan (root)
{% endif %}
</communication_rules>
<execution_guidelines>
@@ -308,19 +321,55 @@ Tool call format:
</function>
CRITICAL RULES:
{% if interactive %}
0. When using tools, include exactly one tool call per message. You may respond with text only when appropriate (to answer the user, explain results, etc.).
{% else %}
0. While active in the agent loop, EVERY message you output MUST be a single tool call. Do not send plain text-only responses.
{% endif %}
1. Exactly one tool call per message — never include more than one <function>...</function> block in a single LLM message.
2. Tool call must be last in message
3. EVERY tool call MUST end with </function>. This is MANDATORY. Never omit the closing tag. End your response immediately after </function>.
4. Use ONLY the exact format shown above. NEVER use JSON/YAML/INI or any other syntax for tools or parameters.
5. When sending ANY multi-line content in tool parameters, use real newlines (actual line breaks). Do NOT emit literal "\n" sequences. Literal "\n" instead of real line breaks will cause tools to fail.
6. Tool names must match exactly the tool "name" defined (no module prefixes, dots, or variants).
- Correct: <function=think> ... </function>
- Incorrect: <thinking_tools.think> ... </function>
- Incorrect: <think> ... </think>
- Incorrect: {"think": {...}}
7. Parameters must use <parameter=param_name>value</parameter> exactly. Do NOT pass parameters as JSON or key:value lines. Do NOT add quotes/braces around values.
{% if interactive %}
8. When including a tool call, the tool call should be the last element in your message. You may include brief explanatory text before it.
{% else %}
8. Do NOT wrap tool calls in markdown/code fences or add any text before or after the tool block.
{% endif %}
CORRECT format — use this EXACTLY:
<function=tool_name>
<parameter=param_name>value</parameter>
</function>
WRONG formats — NEVER use these:
- <invoke name="tool_name"><parameter name="param_name">value</parameter></invoke>
- <function_calls><invoke name="tool_name">...</invoke></function_calls>
- <tool_call><tool_name>...</tool_name></tool_call>
- {"tool_name": {"param_name": "value"}}
- ```<function=tool_name>...</function>```
- <function=tool_name>value_without_parameter_tags</function>
EVERY argument MUST be wrapped in <parameter=name>...</parameter> tags. NEVER put values directly in the function body without parameter tags. This WILL cause the tool call to fail.
Do NOT emit any extra XML tags in your output. In particular:
- NO <thinking>...</thinking> or <thought>...</thought> blocks
- NO <scratchpad>...</scratchpad> or <reasoning>...</reasoning> blocks
- NO <answer>...</answer> or <response>...</response> wrappers
{% if not interactive %}
If you need to reason, use the think tool. Your raw output must contain ONLY the tool call — no surrounding XML tags.
{% else %}
If you need to reason, use the think tool. When using tools, do not add surrounding XML tags.
{% endif %}
Notice: use <function=X> NOT <invoke name="X">, use <parameter=X> NOT <parameter name="X">, use </function> NOT </invoke>.
Example (terminal tool):
<function=terminal_execute>
<parameter=command>nmap -sV -p 1-1000 target.com</parameter>
</function>
Example (agent creation tool):
<function=create_agent>

View File

@@ -56,7 +56,6 @@ class BaseAgent(metaclass=AgentMeta):
self.config = config
self.local_sources = config.get("local_sources", [])
self.non_interactive = config.get("non_interactive", False)
if "max_iterations" in config:
self.max_iterations = config["max_iterations"]
@@ -74,6 +73,9 @@ class BaseAgent(metaclass=AgentMeta):
max_iterations=self.max_iterations,
)
self.interactive = getattr(self.llm_config, "interactive", False)
if self.interactive and self.state.parent_id is None:
self.state.waiting_timeout = 0
self.llm = LLM(self.llm_config, agent_name=self.agent_name)
with contextlib.suppress(Exception):
@@ -169,7 +171,7 @@ class BaseAgent(metaclass=AgentMeta):
continue
if self.state.should_stop():
if self.non_interactive:
if not self.interactive:
return self.state.final_result or {}
await self._enter_waiting_state(tracer)
continue
@@ -213,8 +215,12 @@ class BaseAgent(metaclass=AgentMeta):
should_finish = await iteration_task
self._current_task = None
if should_finish is None and self.interactive:
await self._enter_waiting_state(tracer, text_response=True)
continue
if should_finish:
if self.non_interactive:
if not self.interactive:
self.state.set_completed({"success": True})
if tracer:
tracer.update_agent_status(self.state.agent_id, "completed")
@@ -230,7 +236,7 @@ class BaseAgent(metaclass=AgentMeta):
self.state.add_message(
"assistant", f"{partial_content}\n\n[ABORTED BY USER]"
)
if self.non_interactive:
if not self.interactive:
raise
await self._enter_waiting_state(tracer, error_occurred=False, was_cancelled=True)
continue
@@ -243,7 +249,7 @@ class BaseAgent(metaclass=AgentMeta):
except (RuntimeError, ValueError, TypeError) as e:
if not await self._handle_iteration_error(e, tracer):
if self.non_interactive:
if not self.interactive:
self.state.set_completed({"success": False, "error": str(e)})
if tracer:
tracer.update_agent_status(self.state.agent_id, "failed")
@@ -283,11 +289,14 @@ class BaseAgent(metaclass=AgentMeta):
task_completed: bool = False,
error_occurred: bool = False,
was_cancelled: bool = False,
text_response: bool = False,
) -> None:
self.state.enter_waiting_state()
if tracer:
if task_completed:
if text_response:
tracer.update_agent_status(self.state.agent_id, "waiting_for_input")
elif task_completed:
tracer.update_agent_status(self.state.agent_id, "completed")
elif error_occurred:
tracer.update_agent_status(self.state.agent_id, "error")
@@ -296,6 +305,9 @@ class BaseAgent(metaclass=AgentMeta):
else:
tracer.update_agent_status(self.state.agent_id, "stopped")
if text_response:
return
if task_completed:
self.state.add_message(
"assistant",
@@ -333,6 +345,14 @@ class BaseAgent(metaclass=AgentMeta):
if "agent_id" in sandbox_info:
self.state.sandbox_info["agent_id"] = sandbox_info["agent_id"]
caido_port = sandbox_info.get("caido_port")
if caido_port:
from strix.telemetry.tracer import get_global_tracer
tracer = get_global_tracer()
if tracer:
tracer.caido_url = f"localhost:{caido_port}"
except Exception as e:
from strix.telemetry import posthog
@@ -344,7 +364,7 @@ class BaseAgent(metaclass=AgentMeta):
self.state.add_message("user", task)
async def _process_iteration(self, tracer: Optional["Tracer"]) -> bool:
async def _process_iteration(self, tracer: Optional["Tracer"]) -> bool | None:
final_response = None
async for response in self.llm.generate(self.state.get_conversation_history()):
@@ -390,7 +410,7 @@ class BaseAgent(metaclass=AgentMeta):
if actions:
return await self._execute_actions(actions, tracer)
return False
return None
async def _execute_actions(self, actions: list[Any], tracer: Optional["Tracer"]) -> bool:
"""Execute actions and return True if agent should finish."""
@@ -418,7 +438,7 @@ class BaseAgent(metaclass=AgentMeta):
self.state.set_completed({"success": True})
if tracer:
tracer.update_agent_status(self.state.agent_id, "completed")
if self.non_interactive and self.state.parent_id is None:
if not self.interactive and self.state.parent_id is None:
return True
return True
@@ -518,7 +538,7 @@ class BaseAgent(metaclass=AgentMeta):
error_details = error.details
self.state.add_error(error_msg)
if self.non_interactive:
if not self.interactive:
self.state.set_completed({"success": False, "error": error_msg})
if tracer:
tracer.update_agent_status(self.state.agent_id, "failed", error_msg)
@@ -553,7 +573,7 @@ class BaseAgent(metaclass=AgentMeta):
error_details = getattr(error, "details", None)
self.state.add_error(error_msg)
if self.non_interactive:
if not self.interactive:
self.state.set_completed({"success": False, "error": error_msg})
if tracer:
tracer.update_agent_status(self.state.agent_id, "failed", error_msg)

View File

@@ -25,6 +25,7 @@ class AgentState(BaseModel):
waiting_for_input: bool = False
llm_failed: bool = False
waiting_start_time: datetime | None = None
waiting_timeout: int = 600
final_result: dict[str, Any] | None = None
max_iterations_warning_sent: bool = False
@@ -116,6 +117,9 @@ class AgentState(BaseModel):
return self.iteration >= int(self.max_iterations * threshold)
def has_waiting_timeout(self) -> bool:
if self.waiting_timeout == 0:
return False
if not self.waiting_for_input or not self.waiting_start_time:
return False
@@ -128,7 +132,7 @@ class AgentState(BaseModel):
return False
elapsed = (datetime.now(UTC) - self.waiting_start_time).total_seconds()
return elapsed > 600
return elapsed > self.waiting_timeout
def has_empty_last_messages(self, count: int = 3) -> bool:
if len(self.messages) < count:

View File

@@ -5,6 +5,9 @@ from pathlib import Path
from typing import Any
STRIX_API_BASE = "https://models.strix.ai/api/v1"
class Config:
"""Configuration Manager for Strix."""
@@ -44,6 +47,11 @@ class Config:
# Telemetry
strix_telemetry = "1"
strix_otel_telemetry = None
strix_posthog_telemetry = None
traceloop_base_url = None
traceloop_api_key = None
traceloop_headers = None
# Config file override (set via --config CLI arg)
_config_file_override: Path | None = None
@@ -177,3 +185,31 @@ def apply_saved_config(force: bool = False) -> dict[str, str]:
def save_current_config() -> bool:
return Config.save_current()
def resolve_llm_config() -> tuple[str | None, str | None, str | None]:
"""Resolve LLM model, api_key, and api_base based on STRIX_LLM prefix.
Returns:
tuple: (model_name, api_key, api_base)
- model_name: Original model name (strix/ prefix preserved for display)
- api_key: LLM API key
- api_base: API base URL (auto-set to STRIX_API_BASE for strix/ models)
"""
model = Config.get("strix_llm")
if not model:
return None, None, None
api_key = Config.get("llm_api_key")
if model.startswith("strix/"):
api_base: str | None = STRIX_API_BASE
else:
api_base = (
Config.get("llm_api_base")
or Config.get("openai_api_base")
or Config.get("litellm_base_url")
or Config.get("ollama_api_base")
)
return model, api_key, api_base

View File

@@ -77,12 +77,21 @@ Toast.-information .toast--title {
margin-bottom: 0;
}
#stats_display {
#stats_scroll {
height: auto;
max-height: 15;
background: transparent;
padding: 0;
margin: 0;
border: round #333333;
scrollbar-size: 0 0;
}
#stats_display {
height: auto;
background: transparent;
padding: 0 1;
margin: 0;
}
#vulnerabilities_panel {

View File

@@ -82,7 +82,6 @@ async def run_cli(args: Any) -> None: # noqa: PLR0915
agent_config = {
"llm_config": llm_config,
"max_iterations": 300,
"non_interactive": True,
}
if getattr(args, "local_sources", None):

View File

@@ -18,6 +18,8 @@ from rich.panel import Panel
from rich.text import Text
from strix.config import Config, apply_saved_config, save_current_config
from strix.config.config import resolve_llm_config
from strix.llm.utils import resolve_strix_model
apply_saved_config()
@@ -52,10 +54,13 @@ def validate_environment() -> None: # noqa: PLR0912, PLR0915
missing_required_vars = []
missing_optional_vars = []
if not Config.get("strix_llm"):
strix_llm = Config.get("strix_llm")
uses_strix_models = strix_llm and strix_llm.startswith("strix/")
if not strix_llm:
missing_required_vars.append("STRIX_LLM")
has_base_url = any(
has_base_url = uses_strix_models or any(
[
Config.get("llm_api_base"),
Config.get("openai_api_base"),
@@ -136,7 +141,10 @@ def validate_environment() -> None: # noqa: PLR0912, PLR0915
)
error_text.append("\nExample setup:\n", style="white")
error_text.append("export STRIX_LLM='openai/gpt-5'\n", style="dim white")
if uses_strix_models:
error_text.append("export STRIX_LLM='strix/gpt-5'\n", style="dim white")
else:
error_text.append("export STRIX_LLM='openai/gpt-5'\n", style="dim white")
if missing_optional_vars:
for var in missing_optional_vars:
@@ -202,14 +210,9 @@ async def warm_up_llm() -> None:
console = Console()
try:
model_name = Config.get("strix_llm")
api_key = Config.get("llm_api_key")
api_base = (
Config.get("llm_api_base")
or Config.get("openai_api_base")
or Config.get("litellm_base_url")
or Config.get("ollama_api_base")
)
model_name, api_key, api_base = resolve_llm_config()
litellm_model, _ = resolve_strix_model(model_name)
litellm_model = litellm_model or model_name
test_messages = [
{"role": "system", "content": "You are a helpful assistant."},
@@ -219,7 +222,7 @@ async def warm_up_llm() -> None:
llm_timeout = int(Config.get("llm_timeout") or "300")
completion_kwargs: dict[str, Any] = {
"model": model_name,
"model": litellm_model,
"messages": test_messages,
"timeout": llm_timeout,
}
@@ -433,8 +436,6 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) ->
if tracer and tracer.scan_results:
scan_completed = tracer.scan_results.get("scan_completed", False)
has_vulnerabilities = tracer and len(tracer.vulnerability_reports) > 0
completion_text = Text()
if scan_completed:
completion_text.append("Penetration test completed", style="bold #22c55e")
@@ -459,13 +460,12 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) ->
if stats_text.plain:
panel_parts.extend(["\n", stats_text])
if scan_completed or has_vulnerabilities:
results_text = Text()
results_text.append("\n")
results_text.append("Output", style="dim")
results_text.append(" ")
results_text.append(str(results_path), style="#60a5fa")
panel_parts.extend(["\n", results_text])
results_text = Text()
results_text.append("\n")
results_text.append("Output", style="dim")
results_text.append(" ")
results_text.append(str(results_path), style="#60a5fa")
panel_parts.extend(["\n", results_text])
panel_content = Text.assemble(*panel_parts)
@@ -482,7 +482,7 @@ def display_completion_message(args: argparse.Namespace, results_path: Path) ->
console.print("\n")
console.print(panel)
console.print()
console.print("[#60a5fa]strix.ai[/] [dim]·[/] [#60a5fa]discord.gg/strix-ai[/]")
console.print("[#60a5fa]models.strix.ai[/] [dim]·[/] [#60a5fa]discord.gg/strix-ai[/]")
console.print()

View File

@@ -3,8 +3,11 @@ import re
from dataclasses import dataclass
from typing import Literal
from strix.llm.utils import normalize_tool_format
_FUNCTION_TAG_PREFIX = "<function="
_INVOKE_TAG_PREFIX = "<invoke "
_FUNC_PATTERN = re.compile(r"<function=([^>]+)>")
_FUNC_END_PATTERN = re.compile(r"</function>")
@@ -21,9 +24,8 @@ def _get_safe_content(content: str) -> tuple[str, str]:
return content, ""
suffix = content[last_lt:]
target = _FUNCTION_TAG_PREFIX # "<function="
if target.startswith(suffix):
if _FUNCTION_TAG_PREFIX.startswith(suffix) or _INVOKE_TAG_PREFIX.startswith(suffix):
return content[:last_lt], suffix
return content, ""
@@ -42,6 +44,8 @@ def parse_streaming_content(content: str) -> list[StreamSegment]:
if not content:
return []
content = normalize_tool_format(content)
segments: list[StreamSegment] = []
func_matches = list(_FUNC_PATTERN.finditer(content))

View File

@@ -4,6 +4,7 @@ from . import (
browser_renderer,
file_edit_renderer,
finish_renderer,
load_skill_renderer,
notes_renderer,
proxy_renderer,
python_renderer,
@@ -28,6 +29,7 @@ __all__ = [
"file_edit_renderer",
"finish_renderer",
"get_tool_renderer",
"load_skill_renderer",
"notes_renderer",
"proxy_renderer",
"python_renderer",

View File

@@ -0,0 +1,33 @@
from typing import Any, ClassVar
from rich.text import Text
from textual.widgets import Static
from .base_renderer import BaseToolRenderer
from .registry import register_tool_renderer
@register_tool_renderer
class LoadSkillRenderer(BaseToolRenderer):
tool_name: ClassVar[str] = "load_skill"
css_classes: ClassVar[list[str]] = ["tool-call", "load-skill-tool"]
@classmethod
def render(cls, tool_data: dict[str, Any]) -> Static:
args = tool_data.get("args", {})
status = tool_data.get("status", "completed")
requested = args.get("skills", "")
text = Text()
text.append("", style="#10b981")
text.append("loading skill", style="dim")
if requested:
text.append(" ")
text.append(requested, style="#10b981")
elif not tool_data.get("result"):
text.append("\n ")
text.append("Loading...", style="dim")
return Static(text, classes=cls.get_css_classes(status))

View File

@@ -687,7 +687,7 @@ class StrixTUIApp(App): # type: ignore[misc]
CSS_PATH = "assets/tui_styles.tcss"
ALLOW_SELECT = True
SIDEBAR_MIN_WIDTH = 140
SIDEBAR_MIN_WIDTH = 120
selected_agent_id: reactive[str | None] = reactive(default=None)
show_splash: reactive[bool] = reactive(default=True)
@@ -749,7 +749,9 @@ class StrixTUIApp(App): # type: ignore[misc]
def _build_agent_config(self, args: argparse.Namespace) -> dict[str, Any]:
scan_mode = getattr(args, "scan_mode", "deep")
llm_config = LLMConfig(
scan_mode=scan_mode, is_whitebox=bool(getattr(args, "local_sources", []))
scan_mode=scan_mode,
interactive=True,
is_whitebox=bool(getattr(args, "local_sources", [])),
)
config = {
@@ -832,11 +834,11 @@ class StrixTUIApp(App): # type: ignore[misc]
agents_tree.guide_style = "dashed"
stats_display = Static("", id="stats_display")
stats_display.ALLOW_SELECT = False
stats_scroll = VerticalScroll(stats_display, id="stats_scroll")
vulnerabilities_panel = VulnerabilitiesPanel(id="vulnerabilities_panel")
sidebar = Vertical(agents_tree, vulnerabilities_panel, stats_display, id="sidebar")
sidebar = Vertical(agents_tree, vulnerabilities_panel, stats_scroll, id="sidebar")
content_container.mount(chat_area_container)
content_container.mount(sidebar)
@@ -1275,6 +1277,9 @@ class StrixTUIApp(App): # type: ignore[misc]
if not self._is_widget_safe(stats_display):
return
if self.screen.selections:
return
stats_content = Text()
stats_text = build_tui_stats_text(self.tracer, self.agent_config)
@@ -1284,15 +1289,7 @@ class StrixTUIApp(App): # type: ignore[misc]
version = get_package_version()
stats_content.append(f"\nv{version}", style="white")
from rich.panel import Panel
stats_panel = Panel(
stats_content,
border_style="#333333",
padding=(0, 1),
)
self._safe_widget_operation(stats_display.update, stats_panel)
self._safe_widget_operation(stats_display.update, stats_content)
def _update_vulnerabilities_panel(self) -> None:
"""Update the vulnerabilities panel with current vulnerability data."""

View File

@@ -392,6 +392,12 @@ def build_tui_stats_text(tracer: Any, agent_config: dict[str, Any] | None = None
stats_text.append(" · ", style="white")
stats_text.append(f"${total_stats['cost']:.2f}", style="white")
caido_url = getattr(tracer, "caido_url", None)
if caido_url:
stats_text.append("\n")
stats_text.append("Caido: ", style="bold white")
stats_text.append(caido_url, style="white")
return stats_text

View File

@@ -1,4 +1,6 @@
from strix.config import Config
from strix.config.config import resolve_llm_config
from strix.llm.utils import resolve_strix_model
class LLMConfig:
@@ -10,12 +12,18 @@ class LLMConfig:
timeout: int | None = None,
scan_mode: str = "deep",
is_whitebox: bool = False,
interactive: bool = False,
):
self.model_name = model_name or Config.get("strix_llm")
resolved_model, self.api_key, self.api_base = resolve_llm_config()
self.model_name = model_name or resolved_model
if not self.model_name:
raise ValueError("STRIX_LLM environment variable must be set and not empty")
api_model, canonical = resolve_strix_model(self.model_name)
self.litellm_model: str = api_model or self.model_name
self.canonical_model: str = canonical or self.model_name
self.enable_prompt_caching = enable_prompt_caching
self.skills = skills or []
@@ -23,3 +31,4 @@ class LLMConfig:
self.scan_mode = scan_mode if scan_mode in ["quick", "standard", "deep"] else "deep"
self.is_whitebox = is_whitebox
self.interactive = interactive

View File

@@ -5,7 +5,8 @@ from typing import Any
import litellm
from strix.config import Config
from strix.config.config import resolve_llm_config
from strix.llm.utils import resolve_strix_model
logger = logging.getLogger(__name__)
@@ -155,14 +156,9 @@ def check_duplicate(
comparison_data = {"candidate": candidate_cleaned, "existing_reports": existing_cleaned}
model_name = Config.get("strix_llm")
api_key = Config.get("llm_api_key")
api_base = (
Config.get("llm_api_base")
or Config.get("openai_api_base")
or Config.get("litellm_base_url")
or Config.get("ollama_api_base")
)
model_name, api_key, api_base = resolve_llm_config()
litellm_model, _ = resolve_strix_model(model_name)
litellm_model = litellm_model or model_name
messages = [
{"role": "system", "content": DEDUPE_SYSTEM_PROMPT},
@@ -177,7 +173,7 @@ def check_duplicate(
]
completion_kwargs: dict[str, Any] = {
"model": model_name,
"model": litellm_model,
"messages": messages,
"timeout": 120,
}

View File

@@ -14,6 +14,7 @@ from strix.llm.memory_compressor import MemoryCompressor
from strix.llm.utils import (
_truncate_to_first_function,
fix_incomplete_tool_call,
normalize_tool_format,
parse_tool_invocations,
)
from strix.skills import load_skills
@@ -62,8 +63,9 @@ class LLM:
self.config = config
self.agent_name = agent_name
self.agent_id: str | None = None
self._active_skills: list[str] = list(config.skills or [])
self._total_stats = RequestStats()
self.memory_compressor = MemoryCompressor(model_name=config.model_name)
self.memory_compressor = MemoryCompressor(model_name=config.litellm_model)
self.system_prompt = self._load_system_prompt(agent_name)
reasoning = Config.get("strix_reasoning_effort")
@@ -86,24 +88,52 @@ class LLM:
autoescape=select_autoescape(enabled_extensions=(), default_for_string=False),
)
skills_to_load = [
*list(self.config.skills or []),
f"scan_modes/{self.config.scan_mode}",
]
if self.config.is_whitebox:
skills_to_load.append("coordination/source_aware_whitebox")
skills_to_load = self._get_skills_to_load()
skill_content = load_skills(skills_to_load)
env.globals["get_skill"] = lambda name: skill_content.get(name, "")
result = env.get_template("system_prompt.jinja").render(
get_tools_prompt=get_tools_prompt,
loaded_skill_names=list(skill_content.keys()),
interactive=self.config.interactive,
**skill_content,
)
return str(result)
except Exception: # noqa: BLE001
return ""
def _get_skills_to_load(self) -> list[str]:
ordered_skills = [*self._active_skills]
ordered_skills.append(f"scan_modes/{self.config.scan_mode}")
if self.config.is_whitebox:
ordered_skills.append("coordination/source_aware_whitebox")
deduped: list[str] = []
seen: set[str] = set()
for skill_name in ordered_skills:
if skill_name not in seen:
deduped.append(skill_name)
seen.add(skill_name)
return deduped
def add_skills(self, skill_names: list[str]) -> list[str]:
added: list[str] = []
for skill_name in skill_names:
if not skill_name or skill_name in self._active_skills:
continue
self._active_skills.append(skill_name)
added.append(skill_name)
if not added:
return []
updated_prompt = self._load_system_prompt(self.agent_name)
if updated_prompt:
self.system_prompt = updated_prompt
return added
def set_agent_identity(self, agent_name: str | None, agent_id: str | None) -> None:
if agent_name:
self.agent_name = agent_name
@@ -145,10 +175,10 @@ class LLM:
delta = self._get_chunk_content(chunk)
if delta:
accumulated += delta
if "</function>" in accumulated:
accumulated = accumulated[
: accumulated.find("</function>") + len("</function>")
]
if "</function>" in accumulated or "</invoke>" in accumulated:
end_tag = "</function>" if "</function>" in accumulated else "</invoke>"
pos = accumulated.find(end_tag)
accumulated = accumulated[: pos + len(end_tag)]
yield LLMResponse(content=accumulated)
done_streaming = 1
continue
@@ -157,6 +187,7 @@ class LLM:
if chunks:
self._update_usage_stats(stream_chunk_builder(chunks))
accumulated = normalize_tool_format(accumulated)
accumulated = fix_incomplete_tool_call(_truncate_to_first_function(accumulated))
yield LLMResponse(
content=accumulated,
@@ -186,6 +217,9 @@ class LLM:
conversation_history.extend(compressed)
messages.extend(compressed)
if messages[-1].get("role") == "assistant" and not self.config.interactive:
messages.append({"role": "user", "content": "<meta>Continue the task.</meta>"})
if self._is_anthropic() and self.config.enable_prompt_caching:
messages = self._add_cache_control(messages)
@@ -196,21 +230,16 @@ class LLM:
messages = self._strip_images(messages)
args: dict[str, Any] = {
"model": self.config.model_name,
"model": self.config.litellm_model,
"messages": messages,
"timeout": self.config.timeout,
"stream_options": {"include_usage": True},
}
if api_key := Config.get("llm_api_key"):
args["api_key"] = api_key
if api_base := (
Config.get("llm_api_base")
or Config.get("openai_api_base")
or Config.get("litellm_base_url")
or Config.get("ollama_api_base")
):
args["api_base"] = api_base
if self.config.api_key:
args["api_key"] = self.config.api_key
if self.config.api_base:
args["api_base"] = self.config.api_base
if self._supports_reasoning():
args["reasoning_effort"] = self._reasoning_effort
@@ -236,8 +265,8 @@ class LLM:
def _update_usage_stats(self, response: Any) -> None:
try:
if hasattr(response, "usage") and response.usage:
input_tokens = getattr(response.usage, "prompt_tokens", 0)
output_tokens = getattr(response.usage, "completion_tokens", 0)
input_tokens = getattr(response.usage, "prompt_tokens", 0) or 0
output_tokens = getattr(response.usage, "completion_tokens", 0) or 0
cached_tokens = 0
if hasattr(response.usage, "prompt_tokens_details"):
@@ -245,14 +274,11 @@ class LLM:
if hasattr(prompt_details, "cached_tokens"):
cached_tokens = prompt_details.cached_tokens or 0
cost = self._extract_cost(response)
else:
input_tokens = 0
output_tokens = 0
cached_tokens = 0
try:
cost = completion_cost(response) or 0.0
except Exception: # noqa: BLE001
cost = 0.0
self._total_stats.input_tokens += input_tokens
@@ -263,6 +289,18 @@ class LLM:
except Exception: # noqa: BLE001, S110 # nosec B110
pass
def _extract_cost(self, response: Any) -> float:
if hasattr(response, "usage") and response.usage:
direct_cost = getattr(response.usage, "cost", None)
if direct_cost is not None:
return float(direct_cost)
try:
if hasattr(response, "_hidden_params"):
response._hidden_params.pop("custom_llm_provider", None)
return completion_cost(response, model=self.config.canonical_model) or 0.0
except Exception: # noqa: BLE001
return 0.0
def _should_retry(self, e: Exception) -> bool:
code = getattr(e, "status_code", None) or getattr(
getattr(e, "response", None), "status_code", None
@@ -282,13 +320,13 @@ class LLM:
def _supports_vision(self) -> bool:
try:
return bool(supports_vision(model=self.config.model_name))
return bool(supports_vision(model=self.config.canonical_model))
except Exception: # noqa: BLE001
return False
def _supports_reasoning(self) -> bool:
try:
return bool(supports_reasoning(model=self.config.model_name))
return bool(supports_reasoning(model=self.config.canonical_model))
except Exception: # noqa: BLE001
return False
@@ -309,7 +347,7 @@ class LLM:
return result
def _add_cache_control(self, messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
if not messages or not supports_prompt_caching(self.config.model_name):
if not messages or not supports_prompt_caching(self.config.canonical_model):
return messages
result = list(messages)

View File

@@ -3,7 +3,7 @@ from typing import Any
import litellm
from strix.config import Config
from strix.config.config import Config, resolve_llm_config
logger = logging.getLogger(__name__)
@@ -91,7 +91,7 @@ def _summarize_messages(
if not messages:
empty_summary = "<context_summary message_count='0'>{text}</context_summary>"
return {
"role": "assistant",
"role": "user",
"content": empty_summary.format(text="No messages to summarize"),
}
@@ -104,13 +104,7 @@ def _summarize_messages(
conversation = "\n".join(formatted)
prompt = SUMMARY_PROMPT_TEMPLATE.format(conversation=conversation)
api_key = Config.get("llm_api_key")
api_base = (
Config.get("llm_api_base")
or Config.get("openai_api_base")
or Config.get("litellm_base_url")
or Config.get("ollama_api_base")
)
_, api_key, api_base = resolve_llm_config()
try:
completion_args: dict[str, Any] = {
@@ -129,7 +123,7 @@ def _summarize_messages(
return messages[0]
summary_msg = "<context_summary message_count='{count}'>{text}</context_summary>"
return {
"role": "assistant",
"role": "user",
"content": summary_msg.format(count=len(messages), text=summary),
}
except Exception:
@@ -164,7 +158,7 @@ class MemoryCompressor:
):
self.max_images = max_images
self.model_name = model_name or Config.get("strix_llm")
self.timeout = timeout or int(Config.get("strix_memory_compressor_timeout") or "30")
self.timeout = timeout or int(Config.get("strix_memory_compressor_timeout") or "120")
if not self.model_name:
raise ValueError("STRIX_LLM environment variable must be set and not empty")

View File

@@ -3,11 +3,71 @@ import re
from typing import Any
_INVOKE_OPEN = re.compile(r'<invoke\s+name=["\']([^"\']+)["\']>')
_PARAM_NAME_ATTR = re.compile(r'<parameter\s+name=["\']([^"\']+)["\']>')
_FUNCTION_CALLS_TAG = re.compile(r"</?function_calls>")
_STRIP_TAG_QUOTES = re.compile(r"<(function|parameter)\s*=\s*([^>]*?)>")
def normalize_tool_format(content: str) -> str:
"""Convert alternative tool-call XML formats to the expected one.
Handles:
<function_calls>...</function_calls> → stripped
<invoke name="X"> → <function=X>
<parameter name="X"> → <parameter=X>
</invoke> → </function>
<function="X"> → <function=X>
<parameter="X"> → <parameter=X>
"""
if "<invoke" in content or "<function_calls" in content:
content = _FUNCTION_CALLS_TAG.sub("", content)
content = _INVOKE_OPEN.sub(r"<function=\1>", content)
content = _PARAM_NAME_ATTR.sub(r"<parameter=\1>", content)
content = content.replace("</invoke>", "</function>")
return _STRIP_TAG_QUOTES.sub(
lambda m: f"<{m.group(1)}={m.group(2).strip().strip(chr(34) + chr(39))}>", content
)
STRIX_MODEL_MAP: dict[str, str] = {
"claude-sonnet-4.6": "anthropic/claude-sonnet-4-6",
"claude-opus-4.6": "anthropic/claude-opus-4-6",
"gpt-5.2": "openai/gpt-5.2",
"gpt-5.1": "openai/gpt-5.1",
"gpt-5": "openai/gpt-5",
"gemini-3-pro-preview": "gemini/gemini-3-pro-preview",
"gemini-3-flash-preview": "gemini/gemini-3-flash-preview",
"glm-5": "openrouter/z-ai/glm-5",
"glm-4.7": "openrouter/z-ai/glm-4.7",
}
def resolve_strix_model(model_name: str | None) -> tuple[str | None, str | None]:
"""Resolve a strix/ model into names for API calls and capability lookups.
Returns (api_model, canonical_model):
- api_model: openai/<base> for API calls (Strix API is OpenAI-compatible)
- canonical_model: actual provider model name for litellm capability lookups
Non-strix models return the same name for both.
"""
if not model_name or not model_name.startswith("strix/"):
return model_name, model_name
base_model = model_name[6:]
api_model = f"openai/{base_model}"
canonical_model = STRIX_MODEL_MAP.get(base_model, api_model)
return api_model, canonical_model
def _truncate_to_first_function(content: str) -> str:
if not content:
return content
function_starts = [match.start() for match in re.finditer(r"<function=", content)]
function_starts = [
match.start() for match in re.finditer(r"<function=|<invoke\s+name=", content)
]
if len(function_starts) >= 2:
second_function_start = function_starts[1]
@@ -18,6 +78,7 @@ def _truncate_to_first_function(content: str) -> str:
def parse_tool_invocations(content: str) -> list[dict[str, Any]] | None:
content = normalize_tool_format(content)
content = fix_incomplete_tool_call(content)
tool_invocations: list[dict[str, Any]] = []
@@ -47,12 +108,14 @@ def parse_tool_invocations(content: str) -> list[dict[str, Any]] | None:
def fix_incomplete_tool_call(content: str) -> str:
"""Fix incomplete tool calls by adding missing </function> tag."""
if (
"<function=" in content
and content.count("<function=") == 1
and "</function>" not in content
):
"""Fix incomplete tool calls by adding missing closing tag.
Handles both ``<function=…>`` and ``<invoke name="">`` formats.
"""
has_open = "<function=" in content or "<invoke " in content
count_open = content.count("<function=") + content.count("<invoke ")
has_close = "</function>" in content or "</invoke>" in content
if has_open and count_open == 1 and not has_close:
content = content.rstrip()
content = content + "function>" if content.endswith("</") else content + "\n</function>"
return content
@@ -73,6 +136,7 @@ def clean_content(content: str) -> str:
if not content:
return ""
content = normalize_tool_format(content)
content = fix_incomplete_tool_call(content)
tool_pattern = r"<function=[^>]+>.*?</function>"

View File

@@ -22,6 +22,7 @@ from .runtime import AbstractRuntime, SandboxInfo
HOST_GATEWAY_HOSTNAME = "host.docker.internal"
DOCKER_TIMEOUT = 60
CONTAINER_TOOL_SERVER_PORT = 48081
CONTAINER_CAIDO_PORT = 48080
class DockerRuntime(AbstractRuntime):
@@ -37,6 +38,7 @@ class DockerRuntime(AbstractRuntime):
self._scan_container: Container | None = None
self._tool_server_port: int | None = None
self._tool_server_token: str | None = None
self._caido_port: int | None = None
def _find_available_port(self) -> int:
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
@@ -78,6 +80,10 @@ class DockerRuntime(AbstractRuntime):
if port_bindings.get(port_key):
self._tool_server_port = int(port_bindings[port_key][0]["HostPort"])
caido_port_key = f"{CONTAINER_CAIDO_PORT}/tcp"
if port_bindings.get(caido_port_key):
self._caido_port = int(port_bindings[caido_port_key][0]["HostPort"])
def _wait_for_tool_server(self, max_retries: int = 30, timeout: int = 5) -> None:
host = self._resolve_docker_host()
health_url = f"http://{host}:{self._tool_server_port}/health"
@@ -121,6 +127,7 @@ class DockerRuntime(AbstractRuntime):
time.sleep(1)
self._tool_server_port = self._find_available_port()
self._caido_port = self._find_available_port()
self._tool_server_token = secrets.token_urlsafe(32)
execution_timeout = Config.get("strix_sandbox_execution_timeout") or "120"
@@ -130,7 +137,10 @@ class DockerRuntime(AbstractRuntime):
detach=True,
name=container_name,
hostname=container_name,
ports={f"{CONTAINER_TOOL_SERVER_PORT}/tcp": self._tool_server_port},
ports={
f"{CONTAINER_TOOL_SERVER_PORT}/tcp": self._tool_server_port,
f"{CONTAINER_CAIDO_PORT}/tcp": self._caido_port,
},
cap_add=["NET_ADMIN", "NET_RAW"],
labels={"strix-scan-id": scan_id},
environment={
@@ -152,6 +162,7 @@ class DockerRuntime(AbstractRuntime):
if attempt < max_retries:
self._tool_server_port = None
self._tool_server_token = None
self._caido_port = None
time.sleep(2**attempt)
else:
return container
@@ -173,6 +184,7 @@ class DockerRuntime(AbstractRuntime):
self._scan_container = None
self._tool_server_port = None
self._tool_server_token = None
self._caido_port = None
try:
container = self.client.containers.get(container_name)
@@ -260,7 +272,7 @@ class DockerRuntime(AbstractRuntime):
raise RuntimeError("Docker container ID is unexpectedly None")
token = existing_token or self._tool_server_token
if self._tool_server_port is None or token is None:
if self._tool_server_port is None or self._caido_port is None or token is None:
raise RuntimeError("Tool server not initialized")
host = self._resolve_docker_host()
@@ -273,6 +285,7 @@ class DockerRuntime(AbstractRuntime):
"api_url": api_url,
"auth_token": token,
"tool_server_port": self._tool_server_port,
"caido_port": self._caido_port,
"agent_id": agent_id,
}
@@ -314,6 +327,7 @@ class DockerRuntime(AbstractRuntime):
self._scan_container = None
self._tool_server_port = None
self._tool_server_token = None
self._caido_port = None
except (NotFound, DockerException):
pass
@@ -323,6 +337,7 @@ class DockerRuntime(AbstractRuntime):
self._scan_container = None
self._tool_server_port = None
self._tool_server_token = None
self._caido_port = None
if container_name is None:
return

View File

@@ -7,6 +7,7 @@ class SandboxInfo(TypedDict):
api_url: str
auth_token: str | None
tool_server_port: int
caido_port: int
agent_id: str

View File

@@ -33,6 +33,7 @@ The skills are dynamically injected into the agent's system prompt, allowing it
| **`/frameworks`** | Specific testing methods for popular frameworks e.g. Django, Express, FastAPI, and Next.js |
| **`/technologies`** | Specialized techniques for third-party services such as Supabase, Firebase, Auth0, and payment gateways |
| **`/protocols`** | Protocol-specific testing patterns for GraphQL, WebSocket, OAuth, and other communication standards |
| **`/tooling`** | Command-line playbooks for core sandbox tools (nmap, nuclei, httpx, ffuf, subfinder, naabu, katana, sqlmap) |
| **`/cloud`** | Cloud provider security testing for AWS, Azure, GCP, and Kubernetes environments |
| **`/reconnaissance`** | Advanced information gathering and enumeration techniques for comprehensive attack surface mapping |
| **`/custom`** | Community-contributed skills for specialized or industry-specific testing scenarios |

View File

@@ -54,6 +54,30 @@ def validate_skill_names(skill_names: list[str]) -> dict[str, list[str]]:
return {"valid": valid_skills, "invalid": invalid_skills}
def parse_skill_list(skills: str | None) -> list[str]:
if not skills:
return []
return [s.strip() for s in skills.split(",") if s.strip()]
def validate_requested_skills(skill_list: list[str], max_skills: int = 5) -> str | None:
if len(skill_list) > max_skills:
return "Cannot specify more than 5 skills for an agent (use comma-separated format)"
if not skill_list:
return None
validation = validate_skill_names(skill_list)
if validation["invalid"]:
available_skills = list(get_all_skill_names())
return (
f"Invalid skills: {validation['invalid']}. "
f"Available skills: {', '.join(available_skills)}"
)
return None
def generate_skills_description() -> str:
available_skills = get_available_skills()

View File

@@ -0,0 +1,225 @@
---
name: nestjs
description: Security testing playbook for NestJS applications covering guards, pipes, decorators, module boundaries, and multi-transport auth
---
# NestJS
Security testing for NestJS applications. Focus on guard gaps across decorator stacks, validation pipe bypasses, module boundary leaks, and inconsistent auth enforcement across HTTP, WebSocket, and microservice transports.
## Attack Surface
**Decorator Pipeline**
- Guards: `@UseGuards`, `CanActivate`, execution context (HTTP/WS/RPC), `Reflector` metadata
- Pipes: `ValidationPipe` (whitelist, transform, forbidNonWhitelisted), `ParseIntPipe`, custom pipes
- Interceptors: response mapping, caching, logging, timeout — can modify request/response flow
- Filters: exception filters that may leak information
- Metadata: `@SetMetadata`, `@Public()`, `@Roles()`, `@Permissions()`
**Module System**
- `@Module` boundaries, provider scoping (DEFAULT/REQUEST/TRANSIENT)
- Dynamic modules: `forRoot`/`forRootAsync`, global modules
- DI container: provider overrides, custom providers
**Controllers & Transports**
- REST: `@Controller`, versioning (URI/Header/MediaType)
- GraphQL: `@Resolver`, playground/sandbox exposure
- WebSocket: `@WebSocketGateway`, gateway guards, room authorization
- Microservices: TCP, Redis, NATS, MQTT, gRPC, Kafka — often lack HTTP-level auth
**Data Layer**
- TypeORM: repositories, QueryBuilder, raw queries, relations
- Prisma: `$queryRaw`, `$queryRawUnsafe`
- Mongoose: operator injection, `$where`, `$regex`
**Auth & Config**
- `@nestjs/passport` strategies, `@nestjs/jwt`, session-based auth
- `@nestjs/config`, ConfigService, `.env` files
- `@nestjs/throttler`, rate limiting with `@SkipThrottle`
**API Documentation**
- `@nestjs/swagger`: OpenAPI exposure, DTO schemas, auth schemes
## High-Value Targets
- Swagger/OpenAPI endpoints in production (`/api`, `/api-docs`, `/api-json`, `/swagger`)
- Auth endpoints: login, register, token refresh, password reset, OAuth callbacks
- Admin controllers decorated with `@Roles('admin')` — test with user-level tokens
- File upload endpoints using `FileInterceptor`/`FilesInterceptor`
- WebSocket gateways sharing business logic with HTTP controllers
- Microservice handlers (`@MessagePattern`, `@EventPattern`) — often unguarded
- CRUD generators (`@nestjsx/crud`) with auto-generated endpoints
- Background jobs and scheduled tasks (`@nestjs/schedule`)
- Health/metrics endpoints (`@nestjs/terminus`, `/health`, `/metrics`)
- GraphQL playground/sandbox in production (`/graphql`)
## Reconnaissance
**Swagger Discovery**
```
GET /api
GET /api-docs
GET /api-json
GET /swagger
GET /docs
GET /v1/api-docs
GET /api/v2/docs
```
Extract: paths, parameter schemas, DTOs, auth schemes, example values. Swagger may reveal internal endpoints, deprecated routes, and admin-only paths not visible in the UI.
**Guard Mapping**
For each controller and method, identify:
- Global guards (applied in `main.ts` or app module)
- Controller-level guards (`@UseGuards` on the class)
- Method-level guards (`@UseGuards` on individual handlers)
- `@Public()` or `@SkipThrottle()` decorators that bypass protection
## Key Vulnerabilities
### Guard Bypass
**Decorator Stack Gaps**
- Guards execute: global → controller → method. A method missing `@UseGuards` when siblings have it is the #1 finding.
- `@Public()` metadata causing global `AuthGuard` to skip enforcement — check if applied too broadly.
- New methods added to existing controllers without inheriting the expected guard.
**ExecutionContext Switching**
- Guards handling only HTTP context (`getRequest()`) may fail silently on WebSocket or RPC, returning `true` by default.
- Test same business logic through alternate transports to find context-specific bypasses.
**Reflector Mismatches**
- Guard reads `SetMetadata('roles', [...])` but decorator sets `'role'` (singular) — guard sees no metadata, defaults to allow.
- `applyDecorators()` compositions accidentally overriding stricter guards with permissive ones.
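A minimal sketch of the Reflector key mismatch (names like `canActivate` and `handlerMetadata` are illustrative, not NestJS API): the decorator stores metadata under `roles`, the guard looks up `role`, finds nothing, and falls open.

```typescript
type Metadata = Map<string, string[]>;

// Metadata as set by a hypothetical @Roles('admin') decorator.
const handlerMetadata: Metadata = new Map([["roles", ["admin"]]]);

function canActivate(meta: Metadata, userRoles: string[], key: string): boolean {
  const required = meta.get(key); // guard reads whatever key it was written with
  if (!required) return true;     // no metadata found -> guard defaults to allow
  return required.some((r) => userRoles.includes(r));
}

// Correct key: a plain user is denied.
console.log(canActivate(handlerMetadata, ["user"], "roles")); // false
// Mismatched key ("role" vs "roles"): the same user is allowed through.
console.log(canActivate(handlerMetadata, ["user"], "role")); // true
```

Grep for every `SetMetadata`/`Reflector.get` key pair and confirm they match exactly.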
### Validation Pipe Exploits
**Whitelist Bypass**
- `whitelist: true` without `forbidNonWhitelisted: true`: extra properties silently stripped but may have been processed by earlier middleware/interceptors.
- Missing `@Type(() => ChildDto)` on nested objects: `@ValidateNested()` without `@Type` means nested payload is never validated.
- Array elements: `@IsArray()` doesn't validate elements without `@ValidateNested({ each: true })` and `@Type`.
**Type Coercion**
- `transform: true` enables implicit coercion: strings → numbers, `"true"``true`, `"null"``null`.
- Exploit truthiness assumptions in business logic downstream.
**Conditional Validation**
- `@ValidateIf()` and validation groups creating paths where fields skip validation entirely.
**Missing Parse Pipes**
- `@Param('id')` without `ParseIntPipe`/`ParseUUIDPipe` — string values reach ORM queries directly.
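The coercion pitfalls above come down to plain JS conversion rules; class-transformer's implicit conversion applies these primitives for number/boolean targets (behavior worth confirming against the installed version):

```typescript
const asNumber = (v: string): number => Number(v);
const asBoolean = (v: string): boolean => Boolean(v);

console.log(asNumber(""));       // 0 -> empty query param becomes a valid quantity
console.log(asNumber("1e3"));    // 1000 -> scientific notation accepted
console.log(asNumber("0x10"));   // 16 -> hex accepted
console.log(asBoolean("false")); // true -> any non-empty string is truthy
```

Send `""`, `"1e3"`, `"0x10"`, and `"false"` wherever numeric or boolean fields are transformed and watch for downstream truthiness assumptions.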
### Auth & Passport
**JWT Strategy**
- Check that `ignoreExpiration` is false and that `algorithms` is pinned (no `none` algorithm or HS/RS key confusion)
- Weak `secretOrKey` values
- Cross-service token reuse when audience/issuer not enforced
**Passport Strategy Issues**
- `validate()` return value becomes `req.user` — if it returns full DB record, sensitive fields leak downstream
- Multiple strategies (JWT + session): one may bypass restrictions of the other
- Custom guards returning `true` for unauthenticated requests to implement "optional auth"
**Timing Attacks**
- Plain string comparison instead of bcrypt/argon2 in local strategy
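A sketch of the comparison issue (real local strategies should hash with bcrypt/argon2 rather than compare plaintext at all): `===` exits at the first differing byte and leaks match length via timing, while `timingSafeEqual` does not.

```typescript
import { timingSafeEqual } from "node:crypto";

function unsafeCompare(a: string, b: string): boolean {
  return a === b; // short-circuits at the first differing byte
}

function safeCompare(a: string, b: string): boolean {
  const ba = Buffer.from(a);
  const bb = Buffer.from(b);
  if (ba.length !== bb.length) return false; // length is not secret here
  return timingSafeEqual(ba, bb);            // constant-time byte comparison
}

console.log(unsafeCompare("secret", "secreX")); // false
console.log(safeCompare("secret", "secret"));   // true
```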
### Serialization Leaks
**Missing ClassSerializerInterceptor**
- If not applied globally, `@Exclude()` fields (passwords, internal IDs) returned in responses.
- `@Expose()` with groups: admin-only fields exposed when groups not enforced per-request.
**Circular Relations**
- Eager-loaded TypeORM/Prisma relations exposing entire object graph without careful serialization.
### Interceptor Abuse
**Cache Poisoning**
- `CacheInterceptor` without user/tenant identity in cache key — responses from one user served to another.
- Test: authenticated request, then unauthenticated request returning cached data.
**Response Mapping**
- Transformation interceptors may leak internal entity fields if mapping is incomplete.
### Module Boundary Leaks
**Global Module Exposure**
- `@Global()` modules expose all providers to every module without explicit imports.
- Sensitive services (admin operations, internal APIs) accessible from untrusted modules.
**Config Leaks**
- `forRoot`/`forRootAsync` configuration secrets accessible via `ConfigService` injection in any module.
**Scope Issues**
- Request-scoped providers (`Scope.REQUEST`) incorrectly scoped as DEFAULT (singleton) — request context leaks across concurrent requests.
### WebSocket Gateway
- HTTP guards don't automatically apply to WebSocket gateways — `@UseGuards` must be explicit.
- Authentication deferred from `handleConnection` to message handlers allows unauthenticated message sending.
- Room/namespace authorization: users joining rooms they shouldn't access.
- `@SubscribeMessage()` handlers relying on connection-level auth instead of per-message validation.
### Microservice Transport
- `@MessagePattern`/`@EventPattern` handlers often lack guards (considered "internal").
- If transport (Redis, NATS, Kafka) is network-accessible, messages can be injected bypassing all HTTP security.
- `ValidationPipe` may only be configured for HTTP — microservice payloads skip validation.
### ORM Injection
**TypeORM**
- `QueryBuilder` and `.query()` with template literal interpolation → SQL injection.
- Relations: API allowing specification of which relations to load via query params.
**Mongoose**
- Query operator injection: `{ password: { $gt: "" } }` via unsanitized request body.
- `$where` and `$regex` operators from user input.
**Prisma**
- `$queryRaw`/`$executeRaw` with string interpolation (but not tagged template).
- `$queryRawUnsafe` usage.
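A quick way to probe the Mongoose operator-injection pattern above is to replace a scalar field with an operator object in the JSON body. A minimal sketch using only the standard library (the endpoint and field names are placeholders):

```python
import json

# Benign login body vs. an operator-injection probe: if the API forwards the
# parsed JSON body straight into a Mongoose query, {"$gt": ""} matches any
# non-empty password and the login check collapses to "does the user exist".
benign = {"username": "admin", "password": "guess"}
probe = {"username": "admin", "password": {"$gt": ""}}

body = json.dumps(probe)
print(body)  # {"username": "admin", "password": {"$gt": ""}}
```

Send `body` with `Content-Type: application/json`; a success response where the benign body fails is strong evidence that request data reaches the query layer unsanitized.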
### Rate Limiting
- `@SkipThrottle()` on sensitive endpoints (login, password reset, OTP).
- In-memory throttler storage: resets on restart, doesn't work across instances.
- Behind a proxy without `trust proxy`: all requests appear to share the same IP, or the client-IP header is spoofable.
### CRUD Generators
- Auto-generated CRUD endpoints may not inherit manual guard configurations.
- Bulk operations (`createMany`, `updateMany`) bypassing per-entity authorization.
- Query parameter injection in CRUD libraries: `filter`, `sort`, `join`, `select` exposing unauthorized data.
## Bypass Techniques
- `@Public()` / skip-auth metadata applied via composed decorators at the method level, causing global guards to skip enforcement when they check `Reflector` metadata
- Route param pollution: `/users/123?id=456` — which `id` wins in guards vs handlers?
- Version routing: v1 of an endpoint may still be registered without the guard that was added to v2
- `X-HTTP-Method-Override` or `_method` processed by Express before guards
- Content-type switching: `application/x-www-form-urlencoded` instead of JSON to bypass JSON-specific validation
- Exception filter differences: a throwing guard produces a generic error that leaks route-existence information
## Testing Methodology
1. **Enumerate** — Fetch Swagger/OpenAPI, map all controllers, resolvers, and gateways
2. **Guard audit** — Map decorator stack per method: which guards, pipes, interceptors are applied at each level
3. **Matrix testing** — Test each endpoint across: unauth/user/admin × HTTP/WS/microservice
4. **Validation probing** — Send extra fields, wrong types, nested objects, arrays to find pipe gaps
5. **Transport parity** — Same operation via HTTP, WebSocket, and microservice transport
6. **Module boundaries** — Check if providers from one module are accessible without proper imports
7. **Serialization check** — Compare raw entity fields with API response fields
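The matrix in step 3 can be enumerated mechanically before any request is sent. A minimal sketch (endpoint names are illustrative):

```python
from itertools import product

roles = ["unauth", "user", "admin"]
transports = ["http", "websocket", "microservice"]
endpoints = ["/users/:id", "/admin/settings"]  # illustrative targets

# Every (endpoint, role, transport) triple becomes one test case; a guard is
# suspect when a weaker triple succeeds where a stricter one on the same
# endpoint is denied.
matrix = [
    {"endpoint": e, "role": r, "transport": t}
    for e, r, t in product(endpoints, roles, transports)
]
print(len(matrix))  # 2 endpoints x 3 roles x 3 transports = 18 cases
```

Driving real requests from this matrix is left to the harness; the point is that coverage gaps are visible before testing starts.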
## Validation Requirements
- Guard bypass: request to guarded endpoint succeeding without auth, showing guard chain break point
- Validation bypass: payload with extra/malformed fields affecting business logic
- Cross-transport inconsistency: same action authorized via HTTP but exploitable via WebSocket/microservice
- Module boundary leak: accessing provider or data across unauthorized module boundaries
- Serialization leak: response containing excluded fields (passwords, internal metadata)
- IDOR: side-by-side requests from different users showing unauthorized data access
- ORM injection: raw query with user-controlled input returning unauthorized data, or error-based evidence of query structure
- Cache poisoning: response from unauthenticated or different-user request matching a prior authenticated user's cached response

View File

@@ -0,0 +1,66 @@
---
name: ffuf
description: ffuf fuzzing syntax with matcher/filter strategy and non-interactive defaults.
---
# ffuf CLI Playbook
Official docs:
- https://github.com/ffuf/ffuf
Canonical syntax:
`ffuf -w <wordlist> -u <url_with_FUZZ> [flags]`
High-signal flags:
- `-u <url>` target URL containing `FUZZ`
- `-w <wordlist>` wordlist input (supports `KEYWORD` mapping via `-w file:KEYWORD`)
- `-mc <codes>` match status codes
- `-fc <codes>` filter status codes
- `-fs <size>` filter by body size
- `-ac` auto-calibration
- `-t <n>` threads
- `-rate <n>` request rate
- `-timeout <seconds>` HTTP timeout
- `-x <proxy_url>` upstream proxy (HTTP/SOCKS)
- `-ignore-body` skip downloading response body
- `-noninteractive` disable interactive console mode
- `-recursion` and `-recursion-depth <n>` recursive discovery
- `-H <header>` custom headers
- `-X <method>` and `-d <body>` for non-GET fuzzing
- `-o <file> -of <json|ejson|md|html|csv|ecsv>` structured output
Agent-safe baseline for automation:
`ffuf -w wordlist.txt -u https://target.tld/FUZZ -mc 200,204,301,302,307,401,403,405 -ac -t 20 -rate 50 -timeout 10 -noninteractive -of json -o ffuf.json`
Common patterns:
- Basic path fuzzing:
`ffuf -w /path/wordlist.txt -u https://target.tld/FUZZ -mc 200,204,301,302,307,401,403 -ac -t 40 -rate 200 -noninteractive`
- Vhost fuzzing:
`ffuf -w vhosts.txt -u https://target.tld -H 'Host: FUZZ.target.tld' -fs 0 -ac -noninteractive`
- Parameter value fuzzing:
`ffuf -w values.txt -u 'https://target.tld/search?q=FUZZ' -mc all -fs 0 -ac -t 30 -noninteractive`
- POST body fuzzing:
`ffuf -w payloads.txt -u https://target.tld/login -X POST -H 'Content-Type: application/x-www-form-urlencoded' -d 'username=admin&password=FUZZ' -fc 401 -noninteractive`
- Recursive discovery:
`ffuf -w dirs.txt -u https://target.tld/FUZZ -recursion -recursion-depth 2 -ac -t 30 -noninteractive`
- Proxy-instrumented run:
`ffuf -w wordlist.txt -u https://target.tld/FUZZ -x http://127.0.0.1:48080 -mc 200,301,302,403 -ac -noninteractive`
Critical correctness rules:
- `FUZZ` must appear exactly at the mutation point in URL/header/body.
- If using `-w file:KEYWORD`, that same `KEYWORD` must be present in URL/header/body.
- Always include `-noninteractive` in agent/script execution to prevent ffuf console mode from swallowing subsequent shell commands.
- Save structured output with `-of json -o <file>` for deterministic parsing.
Usage rules:
- Prefer explicit matcher/filter strategy (`-mc`/`-fc`/`-fs`) over default-only output.
- Start conservative (`-rate`, `-t`) and scale only if target tolerance is known.
- Do not use `-h`/`--help` during normal execution unless absolutely necessary.
Failure recovery:
- If ffuf drops into interactive mode, send `C-c` and rerun with `-noninteractive`.
- If response noise is too high, tighten `-mc/-fc/-fs` instead of increasing load.
- If runtime is too long, lower `-rate/-t` and tighten scope.
If uncertain, query web_search with:
`site:github.com/ffuf/ffuf <flag> README`
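After a run with `-of json -o ffuf.json`, hits can be triaged deterministically. A sketch assuming ffuf's JSON layout of a top-level `results` array with `url`, `status`, and `length` fields per entry (verify field names against your ffuf version):

```python
import json


def load_hits(path: str,
              interesting: frozenset = frozenset({200, 301, 302, 401, 403})
              ) -> list[tuple[str, int, int]]:
    # ffuf -of json writes a single object whose "results" array holds one
    # entry per hit.
    with open(path) as f:
        data = json.load(f)
    hits = [
        (r["url"], r["status"], r["length"])
        for r in data.get("results", [])
        if r["status"] in interesting
    ]
    # Sort by status, then descending size, so unusually large 403s surface.
    return sorted(hits, key=lambda h: (h[1], -h[2]))
```

Pairs naturally with the agent-safe baseline above: run ffuf once, then review `load_hits("ffuf.json")`.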

View File

@@ -0,0 +1,77 @@
---
name: httpx
description: ProjectDiscovery httpx probing syntax, exact probe flags, and automation-safe output patterns.
---
# httpx CLI Playbook
Official docs:
- https://docs.projectdiscovery.io/opensource/httpx/usage
- https://docs.projectdiscovery.io/opensource/httpx/running
- https://github.com/projectdiscovery/httpx
Canonical syntax:
`httpx [flags]`
High-signal flags:
- `-u, -target <url>` single target
- `-l, -list <file>` target list
- `-nf, -no-fallback` probe both HTTP and HTTPS
- `-nfs, -no-fallback-scheme` do not auto-switch schemes
- `-sc` status code
- `-title` page title
- `-server, -web-server` server header
- `-td, -tech-detect` technology detection
- `-fr, -follow-redirects` follow redirects
- `-mc <codes>` / `-fc <codes>` match or filter status codes
- `-path <path_or_file>` probe specific paths
- `-p, -ports <ports>` probe custom ports
- `-proxy, -http-proxy <url>` proxy target requests
- `-tlsi, -tls-impersonate` experimental TLS impersonation
- `-j, -json` JSONL output
- `-sr, -store-response` store request/response artifacts
- `-srd, -store-response-dir <dir>` custom directory for stored artifacts
- `-silent` compact output
- `-rl <n>` requests/second cap
- `-t <n>` threads
- `-timeout <seconds>` request timeout
- `-retries <n>` retry attempts
- `-o <file>` output file
Agent-safe baseline for automation:
`httpx -l hosts.txt -sc -title -server -td -fr -timeout 10 -retries 1 -rl 50 -t 25 -silent -j -o httpx.jsonl`
Common patterns:
- Quick live+fingerprint check:
`httpx -l hosts.txt -sc -title -server -td -silent -o httpx.txt`
- Probe known admin paths:
`httpx -l hosts.txt -path /,/login,/admin -sc -title -silent -j -o httpx_paths.jsonl`
- Probe both schemes explicitly:
`httpx -l hosts.txt -nf -sc -title -silent`
- Vhost detection pass:
`httpx -l hosts.txt -vhost -sc -title -silent -j -o httpx_vhost.jsonl`
- Proxy-instrumented probing:
`httpx -l hosts.txt -sc -title -proxy http://127.0.0.1:48080 -silent -j -o httpx_proxy.jsonl`
- Response-storage pass for downstream content parsing:
`httpx -l hosts.txt -fr -sr -srd recon/httpx_store -sc -title -server -cl -ct -location -probe -silent`
Critical correctness rules:
- For machine parsing, prefer `-j -o <file>`.
- Keep `-rl` and `-t` explicit for reproducible throughput.
- Use `-nf` when you need dual-scheme probing from host-only input.
- When using `-path` or `-ports`, keep scope tight to avoid accidental scan inflation.
- Use `-sr -srd <dir>` when later steps need raw response artifacts (JS/route extraction, grepping, replay).
Usage rules:
- Use `-silent` for pipeline-friendly output.
- Use `-mc/-fc` when downstream steps depend on specific response classes.
- Prefer `-proxy` flag over global proxy env vars when only httpx traffic should be proxied.
- Do not use `-h`/`--help` for routine runs unless absolutely necessary.
Failure recovery:
- If too many timeouts occur, reduce `-rl/-t` and/or increase `-timeout`.
- If output is noisy, add `-fc` filters or `-fd` duplicate filtering.
- If HTTPS-only probing misses HTTP services, rerun with `-nf` (and avoid `-nfs`).
If uncertain, query web_search with:
`site:docs.projectdiscovery.io httpx <flag> usage`
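The JSONL written by `-j -o httpx.jsonl` is one object per line. A parsing sketch that assumes the current httpx field names `url`, `status_code`, and `tech` (confirm against your httpx version):

```python
import json


def summarize_probes(jsonl_path: str) -> list[dict]:
    # httpx -j emits one JSON object per line; blank lines are skipped.
    rows = []
    with open(jsonl_path) as f:
        for line in f:
            if not line.strip():
                continue
            probe = json.loads(line)
            rows.append({
                "url": probe.get("url"),
                "status": probe.get("status_code"),
                "tech": probe.get("tech", []),
            })
    # Surface auth-interesting classes (401/403) first, then the rest by URL.
    return sorted(rows, key=lambda r: (r["status"] not in (401, 403), r["url"] or ""))
```

The resulting rows feed directly into target selection for nuclei or manual review.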

View File

@@ -0,0 +1,76 @@
---
name: katana
description: Katana crawler syntax, depth/js/known-files behavior, and stable concurrency controls.
---
# Katana CLI Playbook
Official docs:
- https://docs.projectdiscovery.io/opensource/katana/usage
- https://docs.projectdiscovery.io/opensource/katana/running
- https://github.com/projectdiscovery/katana
Canonical syntax:
`katana [flags]`
High-signal flags:
- `-u, -list <url|file>` target URL(s)
- `-d, -depth <n>` crawl depth
- `-jc, -js-crawl` parse JavaScript-discovered endpoints
- `-jsl, -jsluice` deeper JS parsing (memory intensive)
- `-kf, -known-files <all|robotstxt|sitemapxml>` known-file crawling mode
- `-proxy <http|socks5 proxy>` explicit proxy setting
- `-c, -concurrency <n>` concurrent fetchers
- `-p, -parallelism <n>` concurrent input targets
- `-rl, -rate-limit <n>` request rate limit
- `-timeout <seconds>` request timeout
- `-retry <n>` retry count
- `-ef, -extension-filter <list>` extension exclusions
- `-tlsi, -tls-impersonate` experimental JA3/TLS impersonation
- `-hl, -headless` enable hybrid headless crawling
- `-sc, -system-chrome` use local Chrome for headless mode
- `-ho, -headless-options <csv>` extra Chrome options (for example proxy-server)
- `-nos, -no-sandbox` run Chrome headless with no-sandbox
- `-noi, -no-incognito` disable incognito in headless mode
- `-cdd, -chrome-data-dir <dir>` persist browser profile/session
- `-xhr, -xhr-extraction` include XHR endpoints in JSONL output
- `-silent`, `-j, -jsonl`, `-o <file>` output controls
Agent-safe baseline for automation:
`mkdir -p crawl && katana -u https://target.tld -d 3 -jc -kf robotstxt -c 10 -p 10 -rl 50 -timeout 10 -retry 1 -ef png,jpg,jpeg,gif,svg,css,woff,woff2,ttf,eot,map -silent -j -o crawl/katana.jsonl`
Common patterns:
- Fast crawl baseline:
`katana -u https://target.tld -d 3 -jc -silent`
- Deeper JS-aware crawl:
`katana -u https://target.tld -d 5 -jc -jsl -kf all -c 10 -p 10 -rl 50 -o katana_urls.txt`
- Multi-target run with JSONL output:
`katana -list urls.txt -d 3 -jc -silent -j -o katana.jsonl`
- Headless crawl with local Chrome:
`katana -u https://target.tld -hl -sc -nos -xhr -j -o crawl/katana_headless.jsonl`
- Headless crawl through proxy:
`katana -u https://target.tld -hl -sc -ho proxy-server=http://127.0.0.1:48080 -j -o crawl/katana_proxy.jsonl`
Critical correctness rules:
- `-kf` must be followed by one of `all`, `robotstxt`, or `sitemapxml`.
- Use documented `-hl` for headless mode.
- `-proxy` expects a single proxy URL string (for example `http://127.0.0.1:8080`).
- `-ho` expects comma-separated Chrome options (example: `-ho --disable-gpu,proxy-server=http://127.0.0.1:8080`).
- For `-kf`, keep depth at least `-d 3` so known files are fully covered.
- If writing to a file, ensure parent directory exists before `-o`.
Usage rules:
- Keep `-d`, `-c`, `-p`, and `-rl` explicit for reproducible runs.
- Use `-ef` early to reduce static-file noise before fuzzing.
- Prefer `-proxy` over environment proxy variables when proxying only Katana traffic.
- Use `-hc` (health check) only for one-time diagnostics, not routine crawling loops.
- Do not use `-h`/`--help` for routine runs unless absolutely necessary.
Failure recovery:
- If crawl runs too long, lower `-d` and optionally add `-ct` (max crawl duration).
- If memory spikes, disable `-jsl` and lower `-c/-p`.
- If headless fails with Chrome errors, drop `-sc` or install system Chrome.
- If output is noisy, tighten scope and add `-ef` filters.
If uncertain, query web_search with:
`site:docs.projectdiscovery.io katana <flag> usage`
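Katana's JSONL output can be reduced to a unique endpoint list for follow-up fuzzing. This sketch assumes each line carries the discovered URL at `request.endpoint`, per current katana output (verify against your version):

```python
import json


def unique_endpoints(jsonl_path: str) -> list[str]:
    # katana -j emits one crawl record per line; the discovered URL is
    # assumed to live at request.endpoint.
    seen = set()
    with open(jsonl_path) as f:
        for line in f:
            if not line.strip():
                continue
            record = json.loads(line)
            endpoint = record.get("request", {}).get("endpoint")
            if endpoint:
                seen.add(endpoint)
    return sorted(seen)
```

Deduplicated output keeps downstream ffuf/nuclei runs from re-testing the same URL.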

View File

@@ -0,0 +1,68 @@
---
name: naabu
description: Naabu port-scanning syntax with host input, scan-type, verification, and rate controls.
---
# Naabu CLI Playbook
Official docs:
- https://docs.projectdiscovery.io/opensource/naabu/usage
- https://docs.projectdiscovery.io/opensource/naabu/running
- https://github.com/projectdiscovery/naabu
Canonical syntax:
`naabu [flags]`
High-signal flags:
- `-host <host>` single host
- `-list, -l <file>` hosts list
- `-p <ports>` explicit ports (supports ranges)
- `-top-ports <n|full>` top ports profile
- `-exclude-ports <ports>` exclusions
- `-s, -scan-type <s|c>` SYN (`s`) or CONNECT (`c`) scan
- `-Pn` skip host discovery
- `-rate <n>` packets per second
- `-c <n>` worker count
- `-timeout <ms>` per-probe timeout in milliseconds
- `-retries <n>` retry attempts
- `-proxy <socks5://host:port>` SOCKS5 proxy
- `-verify` verify discovered open ports
- `-j, -json` JSONL output
- `-silent` compact output
- `-o <file>` output file
Agent-safe baseline for automation:
`naabu -list hosts.txt -top-ports 100 -scan-type c -Pn -rate 300 -c 25 -timeout 1000 -retries 1 -verify -silent -j -o naabu.jsonl`
Common patterns:
- Top ports with controlled rate:
`naabu -list hosts.txt -top-ports 100 -scan-type c -rate 300 -c 25 -timeout 1000 -retries 1 -verify -silent -o naabu.txt`
- Focused web-ports sweep:
`naabu -list hosts.txt -p 80,443,8080,8443 -scan-type c -rate 300 -c 25 -timeout 1000 -retries 1 -verify -silent`
- Single-host quick check:
`naabu -host target.tld -p 22,80,443 -scan-type c -rate 300 -c 25 -timeout 1000 -retries 1 -verify`
- Root SYN mode (if available):
`sudo naabu -list hosts.txt -top-ports 100 -scan-type s -rate 500 -c 25 -timeout 1000 -retries 1 -verify -silent`
Critical correctness rules:
- Use `-scan-type c` (CONNECT) when running without root/privileged raw socket access.
- Always set `-timeout` explicitly; it is in milliseconds, not seconds.
- Set `-rate` explicitly to avoid unstable or noisy scans.
- Keep port scope tight: prefer explicit important ports or a small `-top-ports` value unless broader coverage is explicitly required.
- Do not spam traffic; start with the smallest useful port set and conservative rate/worker settings.
- Prefer `-verify` before handing ports to follow-up scanners.
Usage rules:
- Keep host discovery behavior explicit (`-Pn` or default discovery).
- Use `-j -o <file>` for automation pipelines.
- Prefer `-p 22,80,443,8080,8443` or `-top-ports 100` before considering larger sweeps.
- Do not use `-h`/`--help` for normal flow unless absolutely necessary.
Failure recovery:
- If privileged socket errors occur, switch to `-scan-type c`.
- If scans are slow or lossy, lower `-rate`, lower `-c`, and tighten `-p`/`-top-ports`.
- If many hosts appear down, compare runs with and without `-Pn`.
If uncertain, query web_search with:
`site:docs.projectdiscovery.io naabu <flag> usage`
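Naabu's JSONL can be converted into `host:port` targets for httpx. A sketch assuming each line carries `host`, `ip`, and `port` fields, per current naabu output (verify against your version):

```python
import json


def to_httpx_targets(jsonl_path: str) -> list[str]:
    # naabu -j emits one {"host": ..., "ip": ..., "port": ...} object per
    # line; prefer the hostname, fall back to the IP.
    targets = set()
    with open(jsonl_path) as f:
        for line in f:
            if not line.strip():
                continue
            entry = json.loads(line)
            host = entry.get("host") or entry.get("ip")
            if host and "port" in entry:
                targets.add(f"{host}:{entry['port']}")
    return sorted(targets)
```

Write the result to a file and feed it to `httpx -l` for the verification pass.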

View File

@@ -0,0 +1,66 @@
---
name: nmap
description: Canonical Nmap CLI syntax, two-pass scanning workflow, and sandbox-safe bounded scan patterns.
---
# Nmap CLI Playbook
Official docs:
- https://nmap.org/book/man-briefoptions.html
- https://nmap.org/book/man.html
- https://nmap.org/book/man-performance.html
Canonical syntax:
`nmap [Scan Type(s)] [Options] {target specification}`
High-signal flags:
- `-n` skip DNS resolution
- `-Pn` skip host discovery when ICMP/ping is filtered
- `-sS` SYN scan (root/privileged)
- `-sT` TCP connect scan (no raw-socket privilege)
- `-sV` detect service versions
- `-sC` run default NSE scripts
- `-p <ports>` explicit ports (`-p-` for all TCP ports)
- `--top-ports <n>` quick common-port sweep
- `--open` show only hosts with open ports
- `-T<0-5>` timing template (`-T4` common)
- `--max-retries <n>` cap retransmissions
- `--host-timeout <time>` give up on very slow hosts
- `--script-timeout <time>` bound NSE script runtime
- `-oA <prefix>` output in normal/XML/grepable formats
Agent-safe baseline for automation:
`nmap -n -Pn --open --top-ports 100 -T4 --max-retries 1 --host-timeout 90s -oA nmap_quick <host>`
Common patterns:
- Fast first pass:
`nmap -n -Pn --top-ports 100 --open -T4 --max-retries 1 --host-timeout 90s <host>`
- Very small important-port pass:
`nmap -n -Pn -p 22,80,443,8080,8443 --open -T4 --max-retries 1 --host-timeout 90s <host>`
- Service/script enrichment on discovered ports:
`nmap -n -Pn -sV -sC -p <comma_ports> --script-timeout 30s --host-timeout 3m -oA nmap_services <host>`
- No-root fallback:
`nmap -n -Pn -sT --top-ports 100 --open --host-timeout 90s <host>`
Critical correctness rules:
- Always set target scope explicitly.
- Prefer two-pass scanning: discovery pass, then enrichment pass.
- Always set a timeout boundary with `--host-timeout`; add `--script-timeout` whenever NSE scripts are involved.
- Keep discovery scans tight: use explicit important ports or a small `--top-ports` profile unless broader coverage is explicitly required.
- In sandboxed runs, avoid exhaustive sweeps (`-p-`, very high `--top-ports`, or wide host ranges) unless explicitly required.
- Do not spam traffic; start with the smallest port set that can answer the question.
- Prefer `naabu` for broad port discovery; use `nmap` for scoped verification/enrichment.
Usage rules:
- Add `-n` by default in automation to avoid DNS delays.
- Use `-oA` for reusable artifacts.
- Prefer `-p 22,80,443,8080,8443` or `--top-ports 100` before considering larger sweeps.
- Do not use `-h`/`--help` for routine usage unless absolutely necessary.
Failure recovery:
- If host appears down unexpectedly, rerun with `-Pn`.
- If scan stalls, tighten scope (`-p` or smaller `--top-ports`) and lower retries.
- If scripts run too long, add `--script-timeout`.
If uncertain, query web_search with:
`site:nmap.org/book nmap <flag>`
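The XML written by `-oA`/`-oX` is the most reliable artifact to parse for the second pass. A sketch using the standard library against nmap's documented `host > ports > port > state/service` structure:

```python
import xml.etree.ElementTree as ET


def open_ports(xml_path: str) -> list[tuple[str, int, str]]:
    # Parse nmap XML output and keep only ports whose <state> is "open".
    results = []
    root = ET.parse(xml_path).getroot()
    for host in root.iter("host"):
        addr_el = host.find("address")
        addr = addr_el.get("addr") if addr_el is not None else "?"
        for port in host.iter("port"):
            state = port.find("state")
            if state is not None and state.get("state") == "open":
                svc = port.find("service")
                name = svc.get("name") if svc is not None else ""
                results.append((addr, int(port.get("portid")), name))
    return results
```

The `(addr, port, service)` tuples define the `-p` scope for the enrichment pass.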

View File

@@ -0,0 +1,67 @@
---
name: nuclei
description: Exact Nuclei command structure, template selection, and bounded high-throughput execution controls.
---
# Nuclei CLI Playbook
Official docs:
- https://docs.projectdiscovery.io/opensource/nuclei/running
- https://docs.projectdiscovery.io/opensource/nuclei/mass-scanning-cli
- https://github.com/projectdiscovery/nuclei
Canonical syntax:
`nuclei [flags]`
High-signal flags:
- `-u, -target <url>` single target
- `-l, -list <file>` targets file
- `-im, -input-mode <mode>` list/burp/jsonl/yaml/openapi/swagger
- `-t, -templates <path|tag>` explicit template path(s)
- `-tags <tag1,tag2>` run by tag
- `-s, -severity <critical,high,...>` severity filter
- `-as, -automatic-scan` tech-mapped automatic scan
- `-ni, -no-interactsh` disable OAST/interactsh requests
- `-rl, -rate-limit <n>` global request rate cap
- `-c, -concurrency <n>` template concurrency
- `-bs, -bulk-size <n>` hosts in parallel per template
- `-timeout <seconds>` request timeout
- `-retries <n>` retries
- `-stats` periodic scan stats output
- `-silent` findings-only output
- `-j, -jsonl` JSONL output
- `-o <file>` output file
Agent-safe baseline for automation:
`nuclei -l targets.txt -as -s critical,high -rl 50 -c 20 -bs 20 -timeout 10 -retries 1 -silent -j -o nuclei.jsonl`
Common patterns:
- Focused severity scan:
`nuclei -u https://target.tld -s critical,high -silent -o nuclei_high.txt`
- List-driven controlled scan:
`nuclei -l targets.txt -as -rl 50 -c 20 -bs 20 -timeout 10 -retries 1 -j -o nuclei.jsonl`
- Tag-driven run:
`nuclei -l targets.txt -tags cve,misconfig -s critical,high,medium -silent`
- Explicit templates:
`nuclei -l targets.txt -t http/cves/ -t dns/ -rl 30 -c 10 -bs 10 -j -o nuclei_templates.jsonl`
- Deterministic non-OAST run:
`nuclei -l targets.txt -as -s critical,high -ni -stats -rl 30 -c 10 -bs 10 -timeout 10 -retries 1 -j -o nuclei_no_oast.jsonl`
Critical correctness rules:
- Provide a template selection method (`-as`, `-t`, or `-tags`); avoid unscoped broad runs.
- Keep `-rl`, `-c`, and `-bs` explicit for predictable resource use.
- Use `-ni` when outbound interactsh/OAST traffic is not expected or not allowed.
- Use structured output (`-j -o <file>`) for automation.
Usage rules:
- Start with severity/tags/templates filters to keep runs explainable.
- Keep retries conservative (`-retries 1`) unless transport instability is proven.
- Do not use `-h`/`--help` for routine operation unless absolutely necessary.
Failure recovery:
- If performance degrades, lower `-c/-bs` before lowering `-rl`.
- If findings are unexpectedly empty, verify template selection (`-as` vs explicit `-t/-tags`).
- If scan duration grows, reduce target set and enforce stricter template/severity filters.
If uncertain, query web_search with:
`site:docs.projectdiscovery.io nuclei <flag> running`
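Findings in `nuclei.jsonl` can be tallied by severity for quick triage. This sketch assumes the severity sits at `info.severity` in each finding object, per current nuclei output (verify against your version):

```python
import json
from collections import Counter


def severity_counts(jsonl_path: str) -> Counter:
    # nuclei -j writes one finding object per line.
    counts: Counter = Counter()
    with open(jsonl_path) as f:
        for line in f:
            if not line.strip():
                continue
            finding = json.loads(line)
            counts[finding.get("info", {}).get("severity", "unknown")] += 1
    return counts
```

A non-empty `critical`/`high` bucket is the signal to stop scanning and start validating.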

View File

@@ -0,0 +1,72 @@
---
name: semgrep
description: Exact Semgrep CLI structure, metrics-off scanning, scoped ruleset selection, and automation-safe output patterns.
---
# Semgrep CLI Playbook
Official docs:
- https://semgrep.dev/docs/cli-reference
- https://semgrep.dev/docs/getting-started/cli
- https://semgrep.dev/docs/semgrep-code/semgrep-pro-engine-intro
Canonical syntax:
`semgrep scan [flags]`
High-signal flags:
- `--config <rule_or_ruleset>` ruleset, registry pack, local rule file, or directory
- `--metrics=off` disable telemetry and metrics reporting
- `--json` JSON output
- `--sarif` SARIF output
- `--output <file>` write findings to file
- `--severity <level>` filter by severity
- `--error` return non-zero exit when findings exist
- `--quiet` suppress progress noise
- `--jobs <n>` parallel workers
- `--timeout <seconds>` per-file timeout
- `--exclude <pattern>` exclude path pattern
- `--include <pattern>` include path pattern
- `--exclude-rule <rule_id>` suppress specific rule
- `--baseline-commit <sha>` only report findings introduced after baseline
- `--pro` enable Pro engine if available
- `--oss-only` force OSS engine only
Agent-safe baseline for automation:
`semgrep scan --config p/default --metrics=off --json --output semgrep.json --quiet --jobs 4 --timeout 20 /workspace`
Common patterns:
- Default security scan:
`semgrep scan --config p/default --metrics=off --json --output semgrep.json --quiet /workspace`
- High-severity focused pass:
`semgrep scan --config p/default --severity ERROR --metrics=off --json --output semgrep_high.json --quiet /workspace`
- OWASP-oriented scan:
`semgrep scan --config p/owasp-top-ten --metrics=off --sarif --output semgrep.sarif --quiet /workspace`
- Language- or framework-specific rules:
`semgrep scan --config p/python --config p/secrets --metrics=off --json --output semgrep_python.json --quiet /workspace`
- Scoped directory scan:
`semgrep scan --config p/default --metrics=off --json --output semgrep_api.json --quiet /workspace/services/api`
- Pro engine check or run:
`semgrep scan --config p/default --pro --metrics=off --json --output semgrep_pro.json --quiet /workspace`
Critical correctness rules:
- Always include `--metrics=off`; Semgrep sends telemetry by default.
- Always provide an explicit `--config`; do not rely on vague or implied defaults.
- Prefer `--json --output <file>` or `--sarif --output <file>` for machine-readable downstream processing.
- Keep the target path explicit; use an absolute or clearly scoped workspace path instead of `.` when possible.
- If Pro availability matters, check it explicitly with a bounded command before assuming cross-file analysis exists.
Usage rules:
- Start with `p/default` unless the task clearly calls for a narrower pack.
- Add focused packs such as `p/secrets`, `p/python`, or `p/javascript` only when they match the target stack.
- Use `--quiet` in automation to reduce noisy logs.
- Use `--jobs` and `--timeout` explicitly for reproducible runtime behavior.
- Do not use `-h`/`--help` for routine operation unless absolutely necessary.
Failure recovery:
- If scans are too slow, narrow the target path and reduce the active rulesets before changing engine settings.
- If scans time out, increase `--timeout` modestly or lower `--jobs`.
- If output is too broad, scope `--config`, add `--severity`, or exclude known irrelevant paths.
- If Pro mode fails, rerun with `--oss-only` or without `--pro` and note the loss of cross-file coverage.
If uncertain, query web_search with:
`site:semgrep.dev semgrep <flag> cli`
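The `--json` report is a single object with a `results` array; each result carries `check_id`, `path`, `start.line`, and `extra.severity`. A filtering sketch for the high-severity pass:

```python
import json


def high_severity(report_path: str) -> list[str]:
    # semgrep --json writes {"results": [...]}; keep only ERROR-severity
    # findings, formatted as path:line check_id for quick review.
    with open(report_path) as f:
        report = json.load(f)
    findings = []
    for r in report.get("results", []):
        if r.get("extra", {}).get("severity") == "ERROR":
            findings.append(f"{r['path']}:{r['start']['line']} {r['check_id']}")
    return findings
```

Each entry points straight at a source location for manual confirmation.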

View File

@@ -0,0 +1,67 @@
---
name: sqlmap
description: sqlmap target syntax, non-interactive execution, and common validation/enumeration workflows.
---
# sqlmap CLI Playbook
Official docs:
- https://github.com/sqlmapproject/sqlmap/wiki/usage
- https://sqlmap.org
Canonical syntax:
`sqlmap -u "<target_url_with_params>" [options]`
High-signal flags:
- `-u, --url <url>` target URL
- `-r <request_file>` raw HTTP request input
- `-p <param>` test specific parameter(s)
- `--batch` non-interactive mode
- `--level <1-5>` test depth
- `--risk <1-3>` payload risk profile
- `--threads <n>` concurrency
- `--technique <letters>` technique selection
- `--forms` parse and test forms from target page
- `--cookie <cookie>` and `--headers <headers>` authenticated context
- `--timeout <seconds>` and `--retries <n>` transport stability
- `--tamper <scripts>` WAF/input-filter evasion
- `--random-agent` randomize user-agent
- `--ignore-proxy` bypass configured proxy
- `--dbs`, `-D <db> --tables`, `-D <db> -T <table> --columns`, `-D <db> -T <table> -C <cols> --dump`
- `--flush-session` clear cached scan state
Agent-safe baseline for automation:
`sqlmap -u "https://target.tld/item?id=1" -p id --batch --level 2 --risk 1 --threads 5 --timeout 10 --retries 1 --random-agent`
Common patterns:
- Baseline injection check:
`sqlmap -u "https://target.tld/item?id=1" -p id --batch --level 2 --risk 1 --threads 5`
- POST parameter testing:
`sqlmap -u "https://target.tld/login" --data "user=admin&pass=test" -p pass --batch --level 2 --risk 1`
- Form-driven testing:
`sqlmap -u "https://target.tld/login" --forms --batch --level 2 --risk 1 --random-agent`
- Enumerate DBs:
`sqlmap -u "https://target.tld/item?id=1" -p id --batch --dbs`
- Enumerate tables in DB:
`sqlmap -u "https://target.tld/item?id=1" -p id --batch -D appdb --tables`
- Dump selected columns:
`sqlmap -u "https://target.tld/item?id=1" -p id --batch -D appdb -T users -C id,email,role --dump`
Critical correctness rules:
- Always include `--batch` in automation to avoid interactive prompts.
- Keep target parameter explicit with `-p` when possible.
- Use `--flush-session` when retesting after request/profile changes.
- Start conservative (`--level 1-2`, `--risk 1`) and escalate only when needed.
Usage rules:
- Keep authenticated context (`--cookie`/`--headers`) aligned with manual validation state.
- Prefer narrow extraction (`-D/-T/-C`) over broad dump-first behavior.
- Do not use `-h`/`--help` during normal execution unless absolutely necessary.
Failure recovery:
- If results conflict with manual testing, rerun with `--flush-session`.
- If blocked by filtering/WAF, reduce `--threads` and test targeted `--tamper` chains.
- If initial detection misses likely injection, increment `--level`/`--risk` gradually.
If uncertain, query web_search with:
`site:github.com/sqlmapproject/sqlmap/wiki/usage sqlmap <flag>`
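When driving sqlmap programmatically, building the argv as a list avoids shell-quoting bugs in URLs with `?` and `&`. A sketch composing the conservative baseline above (defaults mirror the playbook, not sqlmap's own):

```python
def sqlmap_argv(url: str, param: str, level: int = 2, risk: int = 1) -> list[str]:
    # Compose the non-interactive baseline as an argv list suitable for
    # subprocess.run(argv); no shell means no quoting pitfalls.
    return [
        "sqlmap", "-u", url, "-p", param,
        "--batch",
        "--level", str(level), "--risk", str(risk),
        "--threads", "5", "--timeout", "10", "--retries", "1",
        "--random-agent",
    ]
```

Escalate `level`/`risk` by regenerating the argv rather than editing a shell string in place.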

View File

@@ -0,0 +1,66 @@
---
name: subfinder
description: Subfinder passive subdomain enumeration syntax, source controls, and pipeline-ready output patterns.
---
# Subfinder CLI Playbook
Official docs:
- https://docs.projectdiscovery.io/opensource/subfinder/usage
- https://docs.projectdiscovery.io/opensource/subfinder/running
- https://github.com/projectdiscovery/subfinder
Canonical syntax:
`subfinder [flags]`
High-signal flags:
- `-d <domain>` single domain
- `-dL <file>` domain list
- `-all` include all sources
- `-recursive` use recursive-capable sources
- `-s <sources>` include specific sources
- `-es <sources>` exclude specific sources
- `-rl <n>` global rate limit
- `-rls <source=n/s,...>` per-source rate limits
- `-proxy <http://host:port>` proxy outbound source requests
- `-silent` compact output
- `-o <file>` output file
- `-oJ, -json` JSONL output
- `-cs, -collect-sources` include source metadata (`-oJ` output)
- `-nW, -active` show only active subdomains
- `-timeout <seconds>` request timeout
- `-max-time <minutes>` overall enumeration cap
Agent-safe baseline for automation:
`subfinder -d example.com -all -recursive -rl 20 -timeout 30 -silent -oJ -o subfinder.jsonl`
Common patterns:
- Standard passive enum:
`subfinder -d example.com -silent -o subs.txt`
- Broad-source passive enum:
`subfinder -d example.com -all -recursive -silent -o subs_all.txt`
- Multi-domain run:
`subfinder -dL domains.txt -all -recursive -rl 20 -silent -o subfinder_out.txt`
- Source-attributed JSONL output:
`subfinder -d example.com -all -oJ -cs -o subfinder_sources.jsonl`
- Passive enum via explicit proxy:
`subfinder -d example.com -all -recursive -proxy http://127.0.0.1:48080 -silent -oJ -o subfinder_proxy.jsonl`
Critical correctness rules:
- `-cs` is useful only with JSON output (`-oJ`).
- Many sources require API keys in provider config; low results can be config-related, not target-related.
- `-nW` performs active resolution/filtering and can drop passive-only hits.
- Keep passive enum first, then validate with `httpx`.
Usage rules:
- Keep output files explicit when chaining to `httpx`/`nuclei`.
- Use `-rl/-rls` when providers throttle aggressively.
- Do not use `-h`/`--help` for routine tasks unless absolutely necessary.
Failure recovery:
- If results are unexpectedly low, rerun with `-all` and verify provider config/API keys.
- If provider errors appear, lower `-rl` and apply `-rls` per source.
- If runs take too long, lower scope or split domain batches.
If uncertain, query web_search with:
`site:docs.projectdiscovery.io subfinder <flag> usage`
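The `-oJ` output merges cleanly into a host list for the `httpx` hand-off. This sketch assumes each line carries the subdomain at the `host` key, per current subfinder output:

```python
import json


def hosts_from_jsonl(jsonl_path: str) -> list[str]:
    # subfinder -oJ emits one {"host": ...} object per line (plus a "source"
    # field when -cs is set); normalize, dedupe, and sort for httpx -l.
    hosts = set()
    with open(jsonl_path) as f:
        for line in f:
            if not line.strip():
                continue
            host = json.loads(line).get("host")
            if host:
                hosts.add(host.lower().rstrip("."))
    return sorted(hosts)
```

Write the sorted list to `hosts.txt` and continue with the httpx baseline from its playbook.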

strix/telemetry/flags.py Normal file
View File

@@ -0,0 +1,23 @@
from strix.config import Config
_DISABLED_VALUES = {"0", "false", "no", "off"}
def _is_enabled(raw_value: str | None, default: str = "1") -> bool:
value = (raw_value if raw_value is not None else default).strip().lower()
return value not in _DISABLED_VALUES
def is_otel_enabled() -> bool:
explicit = Config.get("strix_otel_telemetry")
if explicit is not None:
return _is_enabled(explicit)
return _is_enabled(Config.get("strix_telemetry"), default="1")
def is_posthog_enabled() -> bool:
explicit = Config.get("strix_posthog_telemetry")
if explicit is not None:
return _is_enabled(explicit)
return _is_enabled(Config.get("strix_telemetry"), default="1")

View File

@@ -6,7 +6,7 @@ from pathlib import Path
from typing import TYPE_CHECKING, Any
from uuid import uuid4
from strix.config import Config
from strix.telemetry.flags import is_posthog_enabled
if TYPE_CHECKING:
@@ -19,7 +19,7 @@ _SESSION_ID = uuid4().hex[:16]
def _is_enabled() -> bool:
return (Config.get("strix_telemetry") or "1").lower() not in ("0", "false", "no", "off")
return is_posthog_enabled()
def _is_first_run() -> bool:

View File

@@ -1,20 +1,40 @@
import json
import logging
import threading
from datetime import UTC, datetime
from pathlib import Path
from typing import TYPE_CHECKING, Any, Optional
from typing import Any, Callable, Optional
from uuid import uuid4
from opentelemetry import trace
from opentelemetry.trace import SpanContext, SpanKind
from strix.config import Config
from strix.telemetry import posthog
from strix.telemetry.flags import is_otel_enabled
from strix.telemetry.utils import (
TelemetrySanitizer,
append_jsonl_record,
bootstrap_otel,
format_span_id,
format_trace_id,
get_events_write_lock,
)
if TYPE_CHECKING:
from collections.abc import Callable
try:
from traceloop.sdk import Traceloop
except ImportError: # pragma: no cover - exercised when dependency is absent
Traceloop = None # type: ignore[assignment,unused-ignore]
logger = logging.getLogger(__name__)
_global_tracer: Optional["Tracer"] = None
_OTEL_BOOTSTRAP_LOCK = threading.Lock()
_OTEL_BOOTSTRAPPED = False
_OTEL_REMOTE_ENABLED = False
def get_global_tracer() -> Optional["Tracer"]:
return _global_tracer
@@ -52,15 +72,225 @@ class Tracer:
"status": "running",
}
self._run_dir: Path | None = None
self._events_file_path: Path | None = None
self._next_execution_id = 1
self._next_message_id = 1
self._saved_vuln_ids: set[str] = set()
self._run_completed_emitted = False
self._telemetry_enabled = is_otel_enabled()
self._sanitizer = TelemetrySanitizer()
self._otel_tracer: Any = None
self._remote_export_enabled = False
self.caido_url: str | None = None
self.vulnerability_found_callback: Callable[[dict[str, Any]], None] | None = None
self._setup_telemetry()
self._emit_run_started_event()
@property
def events_file_path(self) -> Path:
if self._events_file_path is None:
self._events_file_path = self.get_run_dir() / "events.jsonl"
return self._events_file_path
def _active_events_file_path(self) -> Path:
active = get_global_tracer()
if active and active._events_file_path is not None:
return active._events_file_path
return self.events_file_path
def _get_events_write_lock(self, output_path: Path | None = None) -> threading.Lock:
path = output_path or self.events_file_path
return get_events_write_lock(path)
def _active_run_metadata(self) -> dict[str, Any]:
active = get_global_tracer()
if active:
return active.run_metadata
return self.run_metadata
def _setup_telemetry(self) -> None:
global _OTEL_BOOTSTRAPPED, _OTEL_REMOTE_ENABLED
if not self._telemetry_enabled:
self._otel_tracer = None
self._remote_export_enabled = False
return
run_dir = self.get_run_dir()
self._events_file_path = run_dir / "events.jsonl"
base_url = (Config.get("traceloop_base_url") or "").strip()
api_key = (Config.get("traceloop_api_key") or "").strip()
headers_raw = Config.get("traceloop_headers") or ""
(
self._otel_tracer,
self._remote_export_enabled,
_OTEL_BOOTSTRAPPED,
_OTEL_REMOTE_ENABLED,
) = bootstrap_otel(
bootstrapped=_OTEL_BOOTSTRAPPED,
remote_enabled_state=_OTEL_REMOTE_ENABLED,
bootstrap_lock=_OTEL_BOOTSTRAP_LOCK,
traceloop=Traceloop,
base_url=base_url,
api_key=api_key,
headers_raw=headers_raw,
output_path_getter=self._active_events_file_path,
run_metadata_getter=self._active_run_metadata,
sanitizer=self._sanitize_data,
write_lock_getter=self._get_events_write_lock,
tracer_name="strix.telemetry.tracer",
)
def _set_association_properties(self, properties: dict[str, Any]) -> None:
if Traceloop is None:
return
sanitized = self._sanitize_data(properties)
try:
Traceloop.set_association_properties(sanitized)
except Exception: # noqa: BLE001
logger.debug("Failed to set Traceloop association properties")
def _sanitize_data(self, data: Any, key_hint: str | None = None) -> Any:
return self._sanitizer.sanitize(data, key_hint=key_hint)
def _append_event_record(self, record: dict[str, Any]) -> None:
try:
append_jsonl_record(self.events_file_path, record)
except OSError:
logger.exception("Failed to append JSONL event record")
def _enrich_actor(self, actor: dict[str, Any] | None) -> dict[str, Any] | None:
if not actor:
return None
enriched = dict(actor)
if "agent_name" in enriched:
return enriched
agent_id = enriched.get("agent_id")
if not isinstance(agent_id, str):
return enriched
agent_data = self.agents.get(agent_id, {})
agent_name = agent_data.get("name")
if isinstance(agent_name, str) and agent_name:
enriched["agent_name"] = agent_name
return enriched
def _emit_event(
self,
event_type: str,
actor: dict[str, Any] | None = None,
payload: Any | None = None,
status: str | None = None,
error: Any | None = None,
source: str = "strix.tracer",
include_run_metadata: bool = False,
) -> None:
if not self._telemetry_enabled:
return
enriched_actor = self._enrich_actor(actor)
sanitized_actor = self._sanitize_data(enriched_actor) if enriched_actor else None
sanitized_payload = self._sanitize_data(payload) if payload is not None else None
sanitized_error = self._sanitize_data(error) if error is not None else None
trace_id: str | None = None
span_id: str | None = None
parent_span_id: str | None = None
current_context = trace.get_current_span().get_span_context()
if isinstance(current_context, SpanContext) and current_context.is_valid:
parent_span_id = format_span_id(current_context.span_id)
if self._otel_tracer is not None:
try:
with self._otel_tracer.start_as_current_span(
f"strix.{event_type}",
kind=SpanKind.INTERNAL,
) as span:
span_context = span.get_span_context()
trace_id = format_trace_id(span_context.trace_id)
span_id = format_span_id(span_context.span_id)
span.set_attribute("strix.event_type", event_type)
span.set_attribute("strix.source", source)
span.set_attribute("strix.run_id", self.run_id)
span.set_attribute("strix.run_name", self.run_name or "")
if status:
span.set_attribute("strix.status", status)
if sanitized_actor is not None:
span.set_attribute(
"strix.actor",
json.dumps(sanitized_actor, ensure_ascii=False),
)
if sanitized_payload is not None:
span.set_attribute(
"strix.payload",
json.dumps(sanitized_payload, ensure_ascii=False),
)
if sanitized_error is not None:
span.set_attribute(
"strix.error",
json.dumps(sanitized_error, ensure_ascii=False),
)
except Exception: # noqa: BLE001
logger.debug("Failed to create OTEL span for event type '%s'", event_type)
if trace_id is None:
trace_id = format_trace_id(uuid4().int & ((1 << 128) - 1)) or uuid4().hex
if span_id is None:
span_id = format_span_id(uuid4().int & ((1 << 64) - 1)) or uuid4().hex[:16]
record = {
"timestamp": datetime.now(UTC).isoformat(),
"event_type": event_type,
"run_id": self.run_id,
"trace_id": trace_id,
"span_id": span_id,
"parent_span_id": parent_span_id,
"actor": sanitized_actor,
"payload": sanitized_payload,
"status": status,
"error": sanitized_error,
"source": source,
}
if include_run_metadata:
record["run_metadata"] = self._sanitize_data(self.run_metadata)
self._append_event_record(record)
def set_run_name(self, run_name: str) -> None:
self.run_name = run_name
self.run_id = run_name
self.run_metadata["run_name"] = run_name
self.run_metadata["run_id"] = run_name
self._run_dir = None
self._events_file_path = None
self._run_completed_emitted = False
self._set_association_properties({"run_id": self.run_id, "run_name": self.run_name or ""})
self._emit_run_started_event()
def _emit_run_started_event(self) -> None:
if not self._telemetry_enabled:
return
self._emit_event(
"run.started",
payload={
"run_name": self.run_name,
"start_time": self.start_time,
"local_jsonl_path": str(self.events_file_path),
"remote_export_enabled": self._remote_export_enabled,
},
status="running",
include_run_metadata=True,
)
def get_run_dir(self) -> Path:
if self._run_dir is None:
@@ -133,6 +363,12 @@ class Tracer:
self.vulnerability_reports.append(report)
logger.info(f"Added vulnerability report: {report_id} - {title}")
posthog.finding(severity)
self._emit_event(
"finding.created",
payload={"report": report},
status=report["severity"],
source="strix.findings",
)
if self.vulnerability_found_callback:
self.vulnerability_found_callback(report)
@@ -177,11 +413,24 @@ class Tracer:
"""
logger.info("Updated scan final fields")
self._emit_event(
"finding.reviewed",
payload={
"scan_completed": True,
"vulnerability_count": len(self.vulnerability_reports),
},
status="completed",
source="strix.findings",
)
self.save_run_data(mark_complete=True)
posthog.end(self, exit_reason="finished_by_tool")
def log_agent_creation(
self, agent_id: str, name: str, task: str, parent_id: str | None = None
self,
agent_id: str,
name: str,
task: str,
parent_id: str | None = None,
) -> None:
agent_data: dict[str, Any] = {
"id": agent_id,
@@ -195,6 +444,13 @@ class Tracer:
}
self.agents[agent_id] = agent_data
self._emit_event(
"agent.created",
actor={"agent_id": agent_id, "agent_name": name},
payload={"task": task, "parent_id": parent_id},
status="running",
source="strix.agents",
)
def log_chat_message(
self,
@@ -216,9 +472,21 @@ class Tracer:
}
self.chat_messages.append(message_data)
self._emit_event(
"chat.message",
actor={"agent_id": agent_id, "role": role},
payload={"message_id": message_id, "content": content, "metadata": metadata or {}},
status="logged",
source="strix.chat",
)
return message_id
def log_tool_execution_start(self, agent_id: str, tool_name: str, args: dict[str, Any]) -> int:
def log_tool_execution_start(
self,
agent_id: str,
tool_name: str,
args: dict[str, Any],
) -> int:
execution_id = self._next_execution_id
self._next_execution_id += 1
@@ -240,18 +508,67 @@ class Tracer:
if agent_id in self.agents:
self.agents[agent_id]["tool_executions"].append(execution_id)
self._emit_event(
"tool.execution.started",
actor={
"agent_id": agent_id,
"tool_name": tool_name,
"execution_id": execution_id,
},
payload={"args": args},
status="running",
source="strix.tools",
)
return execution_id
def update_tool_execution(
self, execution_id: int, status: str, result: Any | None = None
self,
execution_id: int,
status: str,
result: Any | None = None,
) -> None:
if execution_id in self.tool_executions:
self.tool_executions[execution_id]["status"] = status
self.tool_executions[execution_id]["result"] = result
self.tool_executions[execution_id]["completed_at"] = datetime.now(UTC).isoformat()
if execution_id not in self.tool_executions:
return
tool_data = self.tool_executions[execution_id]
tool_data["status"] = status
tool_data["result"] = result
tool_data["completed_at"] = datetime.now(UTC).isoformat()
tool_name = str(tool_data.get("tool_name", "unknown"))
agent_id = str(tool_data.get("agent_id", "unknown"))
error_payload = result if status in {"error", "failed"} else None
self._emit_event(
"tool.execution.updated",
actor={
"agent_id": agent_id,
"tool_name": tool_name,
"execution_id": execution_id,
},
payload={"result": result},
status=status,
error=error_payload,
source="strix.tools",
)
if tool_name == "create_vulnerability_report":
finding_status = "reviewed" if status == "completed" else "rejected"
self._emit_event(
"finding.reviewed",
actor={"agent_id": agent_id, "tool_name": tool_name},
payload={"execution_id": execution_id, "result": result},
status=finding_status,
error=error_payload,
source="strix.findings",
)
def update_agent_status(
self, agent_id: str, status: str, error_message: str | None = None
self,
agent_id: str,
status: str,
error_message: str | None = None,
) -> None:
if agent_id in self.agents:
self.agents[agent_id]["status"] = status
@@ -259,6 +576,15 @@ class Tracer:
if error_message:
self.agents[agent_id]["error_message"] = error_message
self._emit_event(
"agent.status.updated",
actor={"agent_id": agent_id},
payload={"error_message": error_message},
status=status,
error=error_message,
source="strix.agents",
)
def set_scan_config(self, config: dict[str, Any]) -> None:
self.scan_config = config
self.run_metadata.update(
@@ -268,13 +594,29 @@ class Tracer:
"max_iterations": config.get("max_iterations", 200),
}
)
self.get_run_dir()
self._set_association_properties(
{
"run_id": self.run_id,
"run_name": self.run_name or "",
"targets": config.get("targets", []),
"max_iterations": config.get("max_iterations", 200),
}
)
self._emit_event(
"run.configured",
payload={"scan_config": config},
status="configured",
source="strix.run",
)
def save_run_data(self, mark_complete: bool = False) -> None: # noqa: PLR0912, PLR0915
def save_run_data(self, mark_complete: bool = False) -> None:
try:
run_dir = self.get_run_dir()
if mark_complete:
self.end_time = datetime.now(UTC).isoformat()
if self.end_time is None:
self.end_time = datetime.now(UTC).isoformat()
self.run_metadata["end_time"] = self.end_time
self.run_metadata["status"] = "completed"
if self.final_scan_result:
penetration_test_report_file = run_dir / "penetration_test_report.md"
@@ -285,7 +627,8 @@ class Tracer:
)
f.write(f"{self.final_scan_result}\n")
logger.info(
f"Saved final penetration test report to: {penetration_test_report_file}"
"Saved final penetration test report to: %s",
penetration_test_report_file,
)
if self.vulnerability_reports:
@@ -301,7 +644,10 @@ class Tracer:
severity_order = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}
sorted_reports = sorted(
self.vulnerability_reports,
key=lambda x: (severity_order.get(x["severity"], 5), x["timestamp"]),
key=lambda report: (
severity_order.get(report["severity"], 5),
report["timestamp"],
),
)
for report in new_reports:
@@ -328,8 +674,8 @@ class Tracer:
f.write(f"**{label}:** {value}\n")
f.write("\n## Description\n\n")
desc = report.get("description") or "No description provided."
f.write(f"{desc}\n\n")
description = report.get("description") or "No description provided."
f.write(f"{description}\n\n")
if report.get("impact"):
f.write("## Impact\n\n")
@@ -403,11 +749,25 @@ class Tracer:
if new_reports:
logger.info(
f"Saved {len(new_reports)} new vulnerability report(s) to: {vuln_dir}"
"Saved %d new vulnerability report(s) to: %s",
len(new_reports),
vuln_dir,
)
logger.info(f"Updated vulnerability index: {vuln_csv_file}")
logger.info("Updated vulnerability index: %s", vuln_csv_file)
logger.info(f"📊 Essential scan data saved to: {run_dir}")
logger.info("📊 Essential scan data saved to: %s", run_dir)
if mark_complete and not self._run_completed_emitted:
self._emit_event(
"run.completed",
payload={
"duration_seconds": self._calculate_duration(),
"vulnerability_count": len(self.vulnerability_reports),
},
status="completed",
source="strix.run",
include_run_metadata=True,
)
self._run_completed_emitted = True
except (OSError, RuntimeError):
logger.exception("Failed to save scan data")

strix/telemetry/utils.py Normal file
View File

@@ -0,0 +1,413 @@
import json
import logging
import re
import threading
from collections.abc import Callable, Sequence
from datetime import UTC, datetime
from pathlib import Path
from typing import Any
from opentelemetry import trace
from opentelemetry.sdk.trace import ReadableSpan, TracerProvider
from opentelemetry.sdk.trace.export import (
BatchSpanProcessor,
SimpleSpanProcessor,
SpanExporter,
SpanExportResult,
)
from scrubadub import Scrubber
from scrubadub.detectors import RegexDetector
from scrubadub.filth import Filth
logger = logging.getLogger(__name__)
_REDACTED = "[REDACTED]"
_SCREENSHOT_OMITTED = "[SCREENSHOT_OMITTED]"
_SCREENSHOT_KEY_PATTERN = re.compile(r"screenshot", re.IGNORECASE)
_SENSITIVE_KEY_PATTERN = re.compile(
r"(api[_-]?key|token|secret|password|authorization|cookie|session|credential|private[_-]?key)",
re.IGNORECASE,
)
_SENSITIVE_TOKEN_PATTERN = re.compile(
r"(?i)\b("
r"bearer\s+[a-z0-9._-]+|"
r"sk-[a-z0-9_-]{8,}|"
r"gh[pousr]_[a-z0-9_-]{12,}|"
r"xox[baprs]-[a-z0-9-]{12,}"
r")\b"
)
_SCRUBADUB_PLACEHOLDER_PATTERN = re.compile(r"\{\{[^}]+\}\}")
_EVENTS_FILE_LOCKS_LOCK = threading.Lock()
_EVENTS_FILE_LOCKS: dict[str, threading.Lock] = {}
_NOISY_OTEL_CONTENT_PREFIXES = (
"gen_ai.prompt.",
"gen_ai.completion.",
"llm.input_messages.",
"llm.output_messages.",
)
_NOISY_OTEL_EXACT_KEYS = {
"llm.input",
"llm.output",
"llm.prompt",
"llm.completion",
}
class _SecretFilth(Filth): # type: ignore[misc]
type = "secret"
class _SecretTokenDetector(RegexDetector): # type: ignore[misc]
name = "strix_secret_token_detector"
filth_cls = _SecretFilth
regex = _SENSITIVE_TOKEN_PATTERN
class TelemetrySanitizer:
def __init__(self) -> None:
self._scrubber = Scrubber(detector_list=[_SecretTokenDetector])
def sanitize(self, data: Any, key_hint: str | None = None) -> Any: # noqa: PLR0911
if data is None:
return None
if isinstance(data, dict):
sanitized: dict[str, Any] = {}
for key, value in data.items():
key_str = str(key)
if _SCREENSHOT_KEY_PATTERN.search(key_str):
sanitized[key_str] = _SCREENSHOT_OMITTED
elif _SENSITIVE_KEY_PATTERN.search(key_str):
sanitized[key_str] = _REDACTED
else:
sanitized[key_str] = self.sanitize(value, key_hint=key_str)
return sanitized
if isinstance(data, list):
return [self.sanitize(item, key_hint=key_hint) for item in data]
if isinstance(data, tuple):
return [self.sanitize(item, key_hint=key_hint) for item in data]
if isinstance(data, str):
if key_hint and _SENSITIVE_KEY_PATTERN.search(key_hint):
return _REDACTED
cleaned = self._scrubber.clean(data)
return _SCRUBADUB_PLACEHOLDER_PATTERN.sub(_REDACTED, cleaned)
if isinstance(data, int | float | bool):
return data
return str(data)
def format_trace_id(trace_id: int | None) -> str | None:
if trace_id is None or trace_id == 0:
return None
return f"{trace_id:032x}"
def format_span_id(span_id: int | None) -> str | None:
if span_id is None or span_id == 0:
return None
return f"{span_id:016x}"
def iso_from_unix_ns(unix_ns: int | None) -> str | None:
if unix_ns is None:
return None
try:
return datetime.fromtimestamp(unix_ns / 1_000_000_000, tz=UTC).isoformat()
except (OSError, OverflowError, ValueError):
return None
def get_events_write_lock(output_path: Path) -> threading.Lock:
path_key = str(output_path.resolve(strict=False))
with _EVENTS_FILE_LOCKS_LOCK:
lock = _EVENTS_FILE_LOCKS.get(path_key)
if lock is None:
lock = threading.Lock()
_EVENTS_FILE_LOCKS[path_key] = lock
return lock
def reset_events_write_locks() -> None:
with _EVENTS_FILE_LOCKS_LOCK:
_EVENTS_FILE_LOCKS.clear()
def append_jsonl_record(output_path: Path, record: dict[str, Any]) -> None:
output_path.parent.mkdir(parents=True, exist_ok=True)
with get_events_write_lock(output_path), output_path.open("a", encoding="utf-8") as f:
f.write(json.dumps(record, ensure_ascii=False) + "\n")
def default_resource_attributes() -> dict[str, str]:
return {
"service.name": "strix-agent",
"service.namespace": "strix",
}
def parse_traceloop_headers(raw_headers: str) -> dict[str, str]:
headers = raw_headers.strip()
if not headers:
return {}
if headers.startswith("{"):
try:
parsed = json.loads(headers)
except json.JSONDecodeError:
logger.warning("Invalid TRACELOOP_HEADERS JSON, ignoring custom headers")
return {}
if isinstance(parsed, dict):
return {str(key): str(value) for key, value in parsed.items() if value is not None}
logger.warning("TRACELOOP_HEADERS JSON must be an object, ignoring custom headers")
return {}
result: dict[str, str] = {}
for part in headers.split(","):
key, sep, value = part.partition("=")
if not sep:
continue
key = key.strip()
value = value.strip()
if key and value:
result[key] = value
return result
def prune_otel_span_attributes(attributes: dict[str, Any]) -> dict[str, Any]:
"""Drop high-volume LLM payload attributes to keep JSONL event files compact."""
filtered: dict[str, Any] = {}
filtered_count = 0
for key, value in attributes.items():
key_str = str(key)
if key_str in _NOISY_OTEL_EXACT_KEYS:
filtered_count += 1
continue
if key_str.endswith(".content") and key_str.startswith(_NOISY_OTEL_CONTENT_PREFIXES):
filtered_count += 1
continue
filtered[key_str] = value
if filtered_count:
filtered["strix.filtered_attributes_count"] = filtered_count
return filtered
class JsonlSpanExporter(SpanExporter): # type: ignore[misc]
"""Append OTEL spans to JSONL for local run artifacts."""
def __init__(
self,
output_path_getter: Callable[[], Path],
run_metadata_getter: Callable[[], dict[str, Any]],
sanitizer: Callable[[Any], Any],
write_lock_getter: Callable[[Path], threading.Lock],
):
self._output_path_getter = output_path_getter
self._run_metadata_getter = run_metadata_getter
self._sanitize = sanitizer
self._write_lock_getter = write_lock_getter
def export(self, spans: Sequence[ReadableSpan]) -> SpanExportResult:
records: list[dict[str, Any]] = []
for span in spans:
attributes = prune_otel_span_attributes(dict(span.attributes or {}))
if "strix.event_type" in attributes:
# Tracer events are written directly in Tracer._emit_event.
continue
records.append(self._span_to_record(span, attributes))
if not records:
return SpanExportResult.SUCCESS
try:
output_path = self._output_path_getter()
output_path.parent.mkdir(parents=True, exist_ok=True)
with self._write_lock_getter(output_path), output_path.open("a", encoding="utf-8") as f:
for record in records:
f.write(json.dumps(record, ensure_ascii=False) + "\n")
except OSError:
logger.exception("Failed to write OTEL span records to JSONL")
return SpanExportResult.FAILURE
return SpanExportResult.SUCCESS
def shutdown(self) -> None:
return None
def force_flush(self, timeout_millis: int = 30_000) -> bool: # noqa: ARG002
return True
def _span_to_record(
self,
span: ReadableSpan,
attributes: dict[str, Any],
) -> dict[str, Any]:
span_context = span.get_span_context()
parent_context = span.parent
status = None
if span.status and span.status.status_code:
status = span.status.status_code.name.lower()
event_type = str(attributes.get("gen_ai.operation.name", span.name))
run_metadata = self._run_metadata_getter()
run_id_attr = (
attributes.get("strix.run_id")
or attributes.get("strix_run_id")
or run_metadata.get("run_id")
or span.resource.attributes.get("strix.run_id")
)
record: dict[str, Any] = {
"timestamp": iso_from_unix_ns(span.end_time) or datetime.now(UTC).isoformat(),
"event_type": event_type,
"run_id": str(run_id_attr or run_metadata.get("run_id") or ""),
"trace_id": format_trace_id(span_context.trace_id),
"span_id": format_span_id(span_context.span_id),
"parent_span_id": format_span_id(parent_context.span_id if parent_context else None),
"actor": None,
"payload": None,
"status": status,
"error": None,
"source": "otel.span",
"span_name": span.name,
"span_kind": span.kind.name.lower(),
"attributes": self._sanitize(attributes),
}
if span.events:
record["otel_events"] = self._sanitize(
[
{
"name": event.name,
"timestamp": iso_from_unix_ns(event.timestamp),
"attributes": dict(event.attributes or {}),
}
for event in span.events
]
)
return record
def bootstrap_otel(
*,
bootstrapped: bool,
remote_enabled_state: bool,
bootstrap_lock: threading.Lock,
traceloop: Any,
base_url: str,
api_key: str,
headers_raw: str,
output_path_getter: Callable[[], Path],
run_metadata_getter: Callable[[], dict[str, Any]],
sanitizer: Callable[[Any], Any],
write_lock_getter: Callable[[Path], threading.Lock],
tracer_name: str = "strix.telemetry.tracer",
) -> tuple[Any, bool, bool, bool]:
with bootstrap_lock:
if bootstrapped:
return (
trace.get_tracer(tracer_name),
remote_enabled_state,
bootstrapped,
remote_enabled_state,
)
local_exporter = JsonlSpanExporter(
output_path_getter=output_path_getter,
run_metadata_getter=run_metadata_getter,
sanitizer=sanitizer,
write_lock_getter=write_lock_getter,
)
local_processor = SimpleSpanProcessor(local_exporter)
headers = parse_traceloop_headers(headers_raw)
remote_enabled = bool(base_url and api_key)
otlp_headers = headers
if remote_enabled:
otlp_headers = {"Authorization": f"Bearer {api_key}"}
otlp_headers.update(headers)
otel_init_ok = False
if traceloop:
try:
from traceloop.sdk.instruments import Instruments
init_kwargs: dict[str, Any] = {
"app_name": "strix-agent",
"processor": local_processor,
"telemetry_enabled": False,
"resource_attributes": default_resource_attributes(),
"block_instruments": {
Instruments.URLLIB3,
Instruments.REQUESTS,
},
}
if remote_enabled:
init_kwargs.update(
{
"api_endpoint": base_url,
"api_key": api_key,
"headers": headers,
}
)
import io
import sys
_stdout = sys.stdout
sys.stdout = io.StringIO()
try:
traceloop.init(**init_kwargs)
finally:
sys.stdout = _stdout
otel_init_ok = True
except Exception:
logger.exception("Failed to initialize Traceloop/OpenLLMetry")
remote_enabled = False
if not otel_init_ok:
from opentelemetry.sdk.resources import Resource
provider = TracerProvider(resource=Resource.create(default_resource_attributes()))
provider.add_span_processor(local_processor)
if remote_enabled:
try:
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
OTLPSpanExporter,
)
endpoint = base_url.rstrip("/") + "/v1/traces"
provider.add_span_processor(
BatchSpanProcessor(
OTLPSpanExporter(endpoint=endpoint, headers=otlp_headers)
)
)
except Exception:
logger.exception("Failed to configure OTLP HTTP exporter")
remote_enabled = False
try:
trace.set_tracer_provider(provider)
otel_init_ok = True
except Exception:
logger.exception("Failed to set OpenTelemetry tracer provider")
remote_enabled = False
otel_tracer = trace.get_tracer(tracer_name)
if otel_init_ok:
return otel_tracer, remote_enabled, True, remote_enabled
return otel_tracer, remote_enabled, bootstrapped, remote_enabled_state
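
The sanitizer's key-based redaction pass can be illustrated without the scrubadub dependency. This simplified sketch redacts by key name only; the real `TelemetrySanitizer` additionally scrubs token-shaped values inside strings and replaces screenshot payloads:

```python
import re

_REDACTED = "[REDACTED]"
_SENSITIVE_KEY_PATTERN = re.compile(
    r"(api[_-]?key|token|secret|password|authorization|cookie|session|credential|private[_-]?key)",
    re.IGNORECASE,
)


def redact_keys(data):
    # Recursively walk dicts/lists, replacing values under sensitive keys.
    if isinstance(data, dict):
        return {
            str(k): _REDACTED if _SENSITIVE_KEY_PATTERN.search(str(k)) else redact_keys(v)
            for k, v in data.items()
        }
    if isinstance(data, (list, tuple)):
        return [redact_keys(item) for item in data]
    return data
```

Matching on key names first is what lets the sanitizer drop secrets even when the value itself does not look like a token.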

View File

@@ -24,9 +24,22 @@ from .registry import (
SANDBOX_MODE = os.getenv("STRIX_SANDBOX_MODE", "false").lower() == "true"
HAS_PERPLEXITY_API = bool(Config.get("perplexity_api_key"))
DISABLE_BROWSER = (Config.get("strix_disable_browser") or "false").lower() == "true"
def _is_browser_disabled() -> bool:
if os.getenv("STRIX_DISABLE_BROWSER", "").lower() == "true":
return True
val: str = Config.load().get("env", {}).get("STRIX_DISABLE_BROWSER", "")
return str(val).lower() == "true"
DISABLE_BROWSER = _is_browser_disabled()
def _has_perplexity_api() -> bool:
if os.getenv("PERPLEXITY_API_KEY"):
return True
return bool(Config.load().get("env", {}).get("PERPLEXITY_API_KEY"))
if not SANDBOX_MODE:
from .agents_graph import * # noqa: F403
@@ -35,6 +48,7 @@ if not SANDBOX_MODE:
from .browser import * # noqa: F403
from .file_edit import * # noqa: F403
from .finish import * # noqa: F403
from .load_skill import * # noqa: F403
from .notes import * # noqa: F403
from .proxy import * # noqa: F403
from .python import * # noqa: F403
@@ -43,7 +57,7 @@ if not SANDBOX_MODE:
from .thinking import * # noqa: F403
from .todo import * # noqa: F403
if HAS_PERPLEXITY_API:
if _has_perplexity_api():
from .web_search import * # noqa: F403
else:
if not DISABLE_BROWSER:

View File

@@ -203,45 +203,27 @@ def create_agent(
try:
parent_id = agent_state.agent_id
skill_list = []
if skills:
skill_list = [s.strip() for s in skills.split(",") if s.strip()]
from strix.skills import parse_skill_list, validate_requested_skills
if len(skill_list) > 5:
skill_list = parse_skill_list(skills)
validation_error = validate_requested_skills(skill_list)
if validation_error:
return {
"success": False,
"error": (
"Cannot specify more than 5 skills for an agent (use comma-separated format)"
),
"error": validation_error,
"agent_id": None,
}
if skill_list:
from strix.skills import get_all_skill_names, validate_skill_names
validation = validate_skill_names(skill_list)
if validation["invalid"]:
available_skills = list(get_all_skill_names())
return {
"success": False,
"error": (
f"Invalid skills: {validation['invalid']}. "
f"Available skills: {', '.join(available_skills)}"
),
"agent_id": None,
}
from strix.agents import StrixAgent
from strix.agents.state import AgentState
from strix.llm.config import LLMConfig
state = AgentState(task=task, agent_name=name, parent_id=parent_id, max_iterations=300)
parent_agent = _agent_instances.get(parent_id)
timeout = None
scan_mode = "deep"
is_whitebox = False
interactive = False
if parent_agent and hasattr(parent_agent, "llm_config"):
if hasattr(parent_agent.llm_config, "timeout"):
timeout = parent_agent.llm_config.timeout
@@ -249,20 +231,29 @@ def create_agent(
scan_mode = parent_agent.llm_config.scan_mode
if hasattr(parent_agent.llm_config, "is_whitebox"):
is_whitebox = parent_agent.llm_config.is_whitebox
interactive = getattr(parent_agent.llm_config, "interactive", False)
if hasattr(parent_agent.llm_config, "is_whitebox"):
is_whitebox = parent_agent.llm_config.is_whitebox
state = AgentState(
task=task,
agent_name=name,
parent_id=parent_id,
max_iterations=300,
waiting_timeout=300 if interactive else 600,
)
llm_config = LLMConfig(
skills=skill_list,
timeout=timeout,
scan_mode=scan_mode,
is_whitebox=is_whitebox,
interactive=interactive,
)
agent_config = {
"llm_config": llm_config,
"state": state,
}
if parent_agent and hasattr(parent_agent, "non_interactive"):
agent_config["non_interactive"] = parent_agent.non_interactive
agent = StrixAgent(agent_config)

View File

@@ -0,0 +1,4 @@
from .load_skill_actions import load_skill
__all__ = ["load_skill"]

View File

@@ -0,0 +1,71 @@
from typing import Any
from strix.tools.registry import register_tool
@register_tool(sandbox_execution=False)
def load_skill(agent_state: Any, skills: str) -> dict[str, Any]:
try:
from strix.skills import parse_skill_list, validate_requested_skills
requested_skills = parse_skill_list(skills)
if not requested_skills:
return {
"success": False,
"error": "No skills provided. Pass one or more comma-separated skill names.",
"requested_skills": [],
}
validation_error = validate_requested_skills(requested_skills)
if validation_error:
return {
"success": False,
"error": validation_error,
"requested_skills": requested_skills,
"loaded_skills": [],
}
from strix.tools.agents_graph.agents_graph_actions import _agent_instances
current_agent = _agent_instances.get(agent_state.agent_id)
if current_agent is None or not hasattr(current_agent, "llm"):
return {
"success": False,
"error": (
"Could not find running agent instance for runtime skill loading. "
"Try again in the current active agent."
),
"requested_skills": requested_skills,
"loaded_skills": [],
}
newly_loaded = current_agent.llm.add_skills(requested_skills)
already_loaded = [skill for skill in requested_skills if skill not in newly_loaded]
prior = agent_state.context.get("loaded_skills", [])
if not isinstance(prior, list):
prior = []
merged_skills = sorted(set(prior).union(requested_skills))
agent_state.update_context("loaded_skills", merged_skills)
except Exception as e: # noqa: BLE001
fallback_requested_skills = (
requested_skills
if "requested_skills" in locals()
else [s.strip() for s in skills.split(",") if s.strip()]
)
return {
"success": False,
"error": f"Failed to load skill(s): {e!s}",
"requested_skills": fallback_requested_skills,
"loaded_skills": [],
}
else:
return {
"success": True,
"requested_skills": requested_skills,
"loaded_skills": requested_skills,
"newly_loaded_skills": newly_loaded,
"already_loaded_skills": already_loaded,
"message": "Skills loaded into this agent prompt context.",
}

View File

@@ -0,0 +1,33 @@
<tools>
<tool name="load_skill">
<description>Dynamically load one or more skills into the current agent at runtime.
Use this when you need exact guidance right before acting (tool syntax, exploit workflow, or protocol details).
This updates the current agent's prompt context immediately.</description>
<details>Accepts one skill or a comma-separated skill bundle. Works for root agents and subagents.
Examples:
- Single skill: `xss`
- Bundle: `sql_injection,business_logic`</details>
<parameters>
<parameter name="skills" type="string" required="true">
<description>Comma-separated list of skills to load into the current agent (MAXIMUM 5 skills per call). Load only what you need right now - agents work best highly specialized, so 1-3 closely related skills is typical; use up to 5 only for complex contexts. {{DYNAMIC_SKILLS_DESCRIPTION}}</description>
</parameter>
</parameters>
<returns type="Dict[str, Any]">
<description>Response containing: - success: Whether runtime loading succeeded - requested_skills: Skills requested - loaded_skills: Skills validated and applied - newly_loaded_skills: Skills newly injected into prompt - already_loaded_skills: Skills already present in prompt context</description>
</returns>
<examples>
<function=load_skill>
<parameter=skills>xss</parameter>
</function>
<function=load_skill>
<parameter=skills>sql_injection,business_logic</parameter>
</function>
<function=load_skill>
<parameter=skills>nmap,httpx</parameter>
</function>
</examples>
</tool>
</tools>

View File

@@ -113,30 +113,59 @@ Do NOT use broad/parent CWEs like CWE-74, CWE-20, CWE-200, CWE-284, or CWE-693.<
<parameter name="code_locations" type="string" required="false">
<description>Nested XML list of code locations where the vulnerability exists. MANDATORY for white-box testing.
Order: first location is where the issue manifests (typically the sink). Additional locations provide data flow context (source → propagation → sink).
CRITICAL — HOW fix_before/fix_after WORK:
fix_before and fix_after are LITERAL BLOCK-LEVEL REPLACEMENTS used directly for GitHub/GitLab PR suggestion blocks. When a reviewer clicks "Accept suggestion", the platform replaces the EXACT lines from start_line to end_line with the fix_after content. This means:
1. fix_before MUST be an EXACT, VERBATIM copy of the source code at lines start_line through end_line. Same whitespace, same indentation, same line breaks. If fix_before does not match the actual file content character-for-character, the suggestion will be wrong or will corrupt the code when accepted.
2. fix_after is the COMPLETE replacement for that entire block. It replaces ALL lines from start_line to end_line. It can be more lines, fewer lines, or the same number of lines as fix_before.
3. start_line and end_line define the EXACT line range being replaced. They must precisely cover the lines in fix_before — no more, no less. If the vulnerable code spans lines 45-48, then start_line=45 and end_line=48, and fix_before must contain all 4 lines exactly as they appear in the file.
MULTI-PART FIXES:
Many fixes require changes in multiple non-contiguous parts of a file (e.g., adding an import at the top AND changing code lower down), or across multiple files. Since each fix_before/fix_after pair covers ONE contiguous block, you MUST create SEPARATE location entries for each part of the fix:
- Each location covers one contiguous block of lines to change
- Use the label field to describe how each part relates to the overall fix (e.g., "Add import for parameterized query library", "Replace string interpolation with parameterized query")
- Order fix locations logically: primary fix first (where the vulnerability manifests), then supporting changes (imports, config, etc.)
COMMON MISTAKES TO AVOID:
- Do NOT guess line numbers. Read the file and verify the exact lines before reporting.
- Do NOT paraphrase or reformat code in fix_before. It must be a verbatim copy.
- Do NOT set start_line=end_line when the vulnerable code spans multiple lines. Cover the full range.
- Do NOT put an import addition and a code change in the same fix_before/fix_after if they are not on adjacent lines. Split them into separate locations.
- Do NOT include lines outside the vulnerable/fixed code in fix_before just to "pad" the range.
- Do NOT duplicate changes across locations. Each location's fix_after must ONLY contain changes for its own line range. Never repeat a change that is already covered by another location.
Each location element fields:
- file (REQUIRED): Path relative to repository root. No leading slash, no absolute paths, no ".." traversal.
Correct: "src/db/queries.ts" or "app/routes/users.py"
Wrong: "/workspace/repo/src/db/queries.ts", "./src/db/queries.ts", "../../etc/passwd"
- start_line (REQUIRED): Exact 1-based line number where the vulnerable code begins. Must be a positive integer. You must be certain of this number — do not guess or approximate. Go back and verify against the actual file content if needed.
- end_line (REQUIRED): Exact 1-based line number where the vulnerable code ends. Must be >= start_line. Set equal to start_line if the vulnerability is on a single line.
- snippet (optional): The actual source code at this location, copied verbatim from the file. Do not paraphrase or summarize code — paste it exactly as it appears.
- label (optional): Short role description for this location in the data flow, e.g. "User input from request parameter (source)", "Unsanitized input passed to SQL query (sink)".
- fix_before (optional): The vulnerable code to be replaced, copied verbatim. Must match the actual source exactly — do not paraphrase, summarize, or add/remove whitespace. Only include on locations where a fix is proposed.
- fix_after (optional): The corrected code that should replace fix_before. Must be syntactically valid and ready to apply as a direct replacement. Only include on locations where a fix is proposed.
- start_line (REQUIRED): Exact 1-based line number where the vulnerable/affected code begins. Must be a positive integer. You must be certain of this number — go back and verify against the actual file content if needed.
- end_line (REQUIRED): Exact 1-based line number where the vulnerable/affected code ends. Must be >= start_line. Set equal to start_line ONLY if the code is truly on a single line.
- snippet (optional): The actual source code at this location, copied verbatim from the file.
- label (optional): Short role description for this location. For multi-part fixes, use this to explain the purpose of each change (e.g., "Add import for escape utility", "Sanitize user input before SQL query").
- fix_before (optional): The vulnerable code to be replaced — VERBATIM copy of lines start_line through end_line. Must match the actual source character-for-character including whitespace and indentation.
- fix_after (optional): The corrected code that replaces the entire fix_before block. Must be syntactically valid and ready to apply as a direct replacement.
Locations without fix_before/fix_after are informational context (e.g. showing the source of tainted data).
Locations with fix_before/fix_after are actionable fixes (used for PR review suggestions).</description>
Locations with fix_before/fix_after are actionable fixes (used directly for PR suggestion blocks).</description>
<format>
<location>
<file>src/db/queries.ts</file>
<start_line>42</start_line>
<end_line>42</end_line>
<snippet>db.query(`SELECT * FROM users WHERE id = ${id}`)</snippet>
<end_line>45</end_line>
<snippet>const query = (
`SELECT * FROM users ` +
`WHERE id = ${id}`
);</snippet>
<label>Unsanitized input used in SQL query (sink)</label>
<fix_before>db.query(`SELECT * FROM users WHERE id = ${id}`)</fix_before>
<fix_after>db.query('SELECT * FROM users WHERE id = $1', [id])</fix_after>
<fix_before>const query = (
`SELECT * FROM users ` +
`WHERE id = ${id}`
);</fix_before>
<fix_after>const query = 'SELECT * FROM users WHERE id = $1';
const result = await db.query(query, [id]);</fix_after>
</location>
<location>
<file>src/routes/users.ts</file>
@@ -299,14 +328,33 @@ if __name__ == "__main__":
<parameter=code_locations>
<location>
<file>src/services/link-preview.ts</file>
<start_line>47</start_line>
<end_line>47</end_line>
<snippet>const response = await fetch(userUrl)</snippet>
<start_line>45</start_line>
<end_line>48</end_line>
<snippet> const options = { timeout: 5000 };
const response = await fetch(userUrl, options);
const html = await response.text();
return extractMetadata(html);</snippet>
<label>Unvalidated user URL passed to server-side fetch (sink)</label>
<fix_before>const response = await fetch(userUrl)</fix_before>
<fix_after>const validated = await validateAndResolveUrl(userUrl)
if (!validated) throw new ForbiddenError('URL not allowed')
const response = await fetch(validated)</fix_after>
<fix_before> const options = { timeout: 5000 };
const response = await fetch(userUrl, options);
const html = await response.text();
return extractMetadata(html);</fix_before>
<fix_after> const validated = await validateAndResolveUrl(userUrl);
if (!validated) throw new ForbiddenError('URL not allowed');
const options = { timeout: 5000 };
const response = await fetch(validated, options);
const html = await response.text();
return extractMetadata(html);</fix_after>
</location>
<location>
<file>src/services/link-preview.ts</file>
<start_line>2</start_line>
<end_line>2</end_line>
<snippet>import { extractMetadata } from '../utils/html';</snippet>
<label>Add import for URL validation utility</label>
<fix_before>import { extractMetadata } from '../utils/html';</fix_before>
<fix_after>import { extractMetadata } from '../utils/html';
import { validateAndResolveUrl } from '../utils/url-validator';</fix_after>
</location>
<location>
<file>src/routes/api/v1/links.ts</file>

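The fix_before/fix_after contract documented above — fix_after literally replaces lines start_line through end_line, and fix_before must match those lines verbatim — can be illustrated with a small hypothetical patch helper (for illustration only, not part of Strix):

```python
def apply_suggestion(lines: list[str], start_line: int, end_line: int,
                     fix_before: str, fix_after: str) -> list[str]:
    """Replace lines start_line..end_line (1-based, inclusive) after verifying fix_before."""
    block = "\n".join(lines[start_line - 1 : end_line])
    if block != fix_before:
        raise ValueError("fix_before does not match the file content character-for-character")
    return lines[: start_line - 1] + fix_after.split("\n") + lines[end_line:]


source = [
    "const id = req.params.id;",
    "db.query(`SELECT * FROM users WHERE id = ${id}`)",
]
patched = apply_suggestion(
    source, 2, 2,
    "db.query(`SELECT * FROM users WHERE id = ${id}`)",
    "db.query('SELECT * FROM users WHERE id = $1', [id])",
)
```

This is why a mismatched fix_before is rejected outright: applying it anyway would corrupt the file.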
View File

@@ -46,7 +46,7 @@ def web_search(query: str) -> dict[str, Any]:
headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
payload = {
"model": "sonar-reasoning",
"model": "sonar-reasoning-pro",
"messages": [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": query},

1
tests/config/__init__.py Normal file
View File

@@ -0,0 +1 @@
"""Tests for strix.config module."""

View File

@@ -0,0 +1,55 @@
import json

from strix.config.config import Config


def test_traceloop_vars_are_tracked() -> None:
    tracked = Config.tracked_vars()

    assert "STRIX_OTEL_TELEMETRY" in tracked
    assert "STRIX_POSTHOG_TELEMETRY" in tracked
    assert "TRACELOOP_BASE_URL" in tracked
    assert "TRACELOOP_API_KEY" in tracked
    assert "TRACELOOP_HEADERS" in tracked


def test_apply_saved_uses_saved_traceloop_vars(monkeypatch, tmp_path) -> None:
    config_path = tmp_path / "cli-config.json"
    config_path.write_text(
        json.dumps(
            {
                "env": {
                    "TRACELOOP_BASE_URL": "https://otel.example.com",
                    "TRACELOOP_API_KEY": "api-key",
                    "TRACELOOP_HEADERS": "x-test=value",
                }
            }
        ),
        encoding="utf-8",
    )
    monkeypatch.setattr(Config, "_config_file_override", config_path)
    monkeypatch.delenv("TRACELOOP_BASE_URL", raising=False)
    monkeypatch.delenv("TRACELOOP_API_KEY", raising=False)
    monkeypatch.delenv("TRACELOOP_HEADERS", raising=False)

    applied = Config.apply_saved()

    assert applied["TRACELOOP_BASE_URL"] == "https://otel.example.com"
    assert applied["TRACELOOP_API_KEY"] == "api-key"
    assert applied["TRACELOOP_HEADERS"] == "x-test=value"


def test_apply_saved_respects_existing_env_traceloop_vars(monkeypatch, tmp_path) -> None:
    config_path = tmp_path / "cli-config.json"
    config_path.write_text(
        json.dumps({"env": {"TRACELOOP_BASE_URL": "https://otel.example.com"}}),
        encoding="utf-8",
    )
    monkeypatch.setattr(Config, "_config_file_override", config_path)
    monkeypatch.setenv("TRACELOOP_BASE_URL", "https://env.example.com")

    applied = Config.apply_saved(force=False)

    assert "TRACELOOP_BASE_URL" not in applied
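The precedence these tests pin down — a variable already present in the real environment wins over the saved config file — can be sketched in isolation (a hypothetical helper, not the actual Config implementation):

```python
import os


def apply_saved_env(saved: dict[str, str], *, force: bool = False) -> dict[str, str]:
    """Copy saved vars into os.environ, skipping ones already set unless force=True."""
    applied: dict[str, str] = {}
    for key, value in saved.items():
        if force or key not in os.environ:
            os.environ[key] = value
            applied[key] = value
    return applied
```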

View File

@@ -0,0 +1,15 @@
import litellm

from strix.llm.config import LLMConfig
from strix.llm.llm import LLM


def test_llm_does_not_modify_litellm_callbacks(monkeypatch) -> None:
    monkeypatch.setenv("STRIX_TELEMETRY", "1")
    monkeypatch.setenv("STRIX_OTEL_TELEMETRY", "1")
    monkeypatch.setattr(litellm, "callbacks", ["custom-callback"])

    llm = LLM(LLMConfig(model_name="openai/gpt-5"), agent_name=None)

    assert llm is not None
    assert litellm.callbacks == ["custom-callback"]

1
tests/skills/__init__.py Normal file
View File

@@ -0,0 +1 @@
"""Tests for skill-related runtime behavior."""

View File

@@ -0,0 +1,28 @@
from strix.telemetry.flags import is_otel_enabled, is_posthog_enabled


def test_flags_fallback_to_strix_telemetry(monkeypatch) -> None:
    monkeypatch.delenv("STRIX_OTEL_TELEMETRY", raising=False)
    monkeypatch.delenv("STRIX_POSTHOG_TELEMETRY", raising=False)
    monkeypatch.setenv("STRIX_TELEMETRY", "0")

    assert is_otel_enabled() is False
    assert is_posthog_enabled() is False


def test_otel_flag_overrides_global_telemetry(monkeypatch) -> None:
    monkeypatch.setenv("STRIX_TELEMETRY", "0")
    monkeypatch.setenv("STRIX_OTEL_TELEMETRY", "1")
    monkeypatch.delenv("STRIX_POSTHOG_TELEMETRY", raising=False)

    assert is_otel_enabled() is True
    assert is_posthog_enabled() is False


def test_posthog_flag_overrides_global_telemetry(monkeypatch) -> None:
    monkeypatch.setenv("STRIX_TELEMETRY", "0")
    monkeypatch.setenv("STRIX_POSTHOG_TELEMETRY", "1")
    monkeypatch.delenv("STRIX_OTEL_TELEMETRY", raising=False)

    assert is_otel_enabled() is False
    assert is_posthog_enabled() is True
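The fallback semantics these tests describe — a channel-specific flag wins when set, otherwise the global STRIX_TELEMETRY value applies — could be implemented roughly like this (a sketch; the real logic lives in strix.telemetry.flags and may differ in details such as accepted truthy values):

```python
import os


def _flag_enabled(name: str, *, default: str = "1") -> bool:
    """Read a boolean env flag, falling back to the global STRIX_TELEMETRY switch."""
    value = os.environ.get(name)
    if value is None:
        value = os.environ.get("STRIX_TELEMETRY", default)
    return value.strip().lower() not in ("0", "false", "no", "")


def is_otel_enabled() -> bool:
    return _flag_enabled("STRIX_OTEL_TELEMETRY")


def is_posthog_enabled() -> bool:
    return _flag_enabled("STRIX_POSTHOG_TELEMETRY")
```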

View File

@@ -0,0 +1,379 @@
import json
import sys
import types
from pathlib import Path
from typing import Any, ClassVar

import pytest
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, SpanExportResult

from strix.telemetry import tracer as tracer_module
from strix.telemetry import utils as telemetry_utils
from strix.telemetry.tracer import Tracer, set_global_tracer


def _load_events(events_path: Path) -> list[dict[str, Any]]:
    lines = events_path.read_text(encoding="utf-8").splitlines()
    return [json.loads(line) for line in lines if line]


@pytest.fixture(autouse=True)
def _reset_tracer_globals(monkeypatch) -> None:
    monkeypatch.setattr(tracer_module, "_global_tracer", None)
    monkeypatch.setattr(tracer_module, "_OTEL_BOOTSTRAPPED", False)
    monkeypatch.setattr(tracer_module, "_OTEL_REMOTE_ENABLED", False)
    telemetry_utils.reset_events_write_locks()
    monkeypatch.delenv("STRIX_TELEMETRY", raising=False)
    monkeypatch.delenv("STRIX_OTEL_TELEMETRY", raising=False)
    monkeypatch.delenv("STRIX_POSTHOG_TELEMETRY", raising=False)
    monkeypatch.delenv("TRACELOOP_BASE_URL", raising=False)
    monkeypatch.delenv("TRACELOOP_API_KEY", raising=False)
    monkeypatch.delenv("TRACELOOP_HEADERS", raising=False)


def test_tracer_local_mode_writes_jsonl_with_correlation(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)

    tracer = Tracer("local-observability")
    set_global_tracer(tracer)
    tracer.set_scan_config({"targets": ["https://example.com"], "user_instructions": "focus auth"})
    tracer.log_agent_creation("agent-1", "Root Agent", "scan auth")
    tracer.log_chat_message("starting scan", "user", "agent-1")
    execution_id = tracer.log_tool_execution_start(
        "agent-1",
        "send_request",
        {"url": "https://example.com/login"},
    )
    tracer.update_tool_execution(execution_id, "completed", {"status_code": 200, "body": "ok"})

    events_path = tmp_path / "strix_runs" / "local-observability" / "events.jsonl"
    assert events_path.exists()

    events = _load_events(events_path)
    assert any(event["event_type"] == "tool.execution.updated" for event in events)
    assert not any(event["event_type"] == "traffic.intercepted" for event in events)
    for event in events:
        assert event["run_id"] == "local-observability"
        assert event["trace_id"]
        assert event["span_id"]


def test_tracer_redacts_sensitive_payloads(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)

    tracer = Tracer("redaction-run")
    set_global_tracer(tracer)
    execution_id = tracer.log_tool_execution_start(
        "agent-1",
        "send_request",
        {
            "url": "https://example.com",
            "api_key": "sk-secret-token-value",
            "authorization": "Bearer super-secret-token",
        },
    )
    tracer.update_tool_execution(
        execution_id,
        "error",
        {"error": "request failed with token sk-secret-token-value"},
    )

    events_path = tmp_path / "strix_runs" / "redaction-run" / "events.jsonl"
    events = _load_events(events_path)
    serialized = json.dumps(events)

    assert "sk-secret-token-value" not in serialized
    assert "super-secret-token" not in serialized
    assert "[REDACTED]" in serialized


def test_tracer_remote_mode_configures_traceloop_export(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)

    class FakeTraceloop:
        init_calls: ClassVar[list[dict[str, Any]]] = []

        @staticmethod
        def init(**kwargs: Any) -> None:
            FakeTraceloop.init_calls.append(kwargs)

        @staticmethod
        def set_association_properties(properties: dict[str, Any]) -> None:  # noqa: ARG004
            return None

    monkeypatch.setattr(tracer_module, "Traceloop", FakeTraceloop)
    monkeypatch.setenv("TRACELOOP_BASE_URL", "https://otel.example.com")
    monkeypatch.setenv("TRACELOOP_API_KEY", "test-api-key")
    monkeypatch.setenv("TRACELOOP_HEADERS", '{"x-custom":"header"}')

    tracer = Tracer("remote-observability")
    set_global_tracer(tracer)
    tracer.log_chat_message("hello", "user", "agent-1")

    assert tracer._remote_export_enabled is True
    assert FakeTraceloop.init_calls
    init_kwargs = FakeTraceloop.init_calls[-1]
    assert init_kwargs["api_endpoint"] == "https://otel.example.com"
    assert init_kwargs["api_key"] == "test-api-key"
    assert init_kwargs["headers"] == {"x-custom": "header"}
    assert isinstance(init_kwargs["processor"], SimpleSpanProcessor)
    assert "strix.run_id" not in init_kwargs["resource_attributes"]
    assert "strix.run_name" not in init_kwargs["resource_attributes"]

    events_path = tmp_path / "strix_runs" / "remote-observability" / "events.jsonl"
    events = _load_events(events_path)
    run_started = next(event for event in events if event["event_type"] == "run.started")
    assert run_started["payload"]["remote_export_enabled"] is True


def test_tracer_local_mode_avoids_traceloop_remote_endpoint(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)

    class FakeTraceloop:
        init_calls: ClassVar[list[dict[str, Any]]] = []

        @staticmethod
        def init(**kwargs: Any) -> None:
            FakeTraceloop.init_calls.append(kwargs)

        @staticmethod
        def set_association_properties(properties: dict[str, Any]) -> None:  # noqa: ARG004
            return None

    monkeypatch.setattr(tracer_module, "Traceloop", FakeTraceloop)

    tracer = Tracer("local-traceloop")
    set_global_tracer(tracer)
    tracer.log_chat_message("hello", "user", "agent-1")

    assert FakeTraceloop.init_calls
    init_kwargs = FakeTraceloop.init_calls[-1]
    assert "api_endpoint" not in init_kwargs
    assert "api_key" not in init_kwargs
    assert "headers" not in init_kwargs
    assert isinstance(init_kwargs["processor"], SimpleSpanProcessor)
    assert tracer._remote_export_enabled is False


def test_otlp_fallback_includes_auth_and_custom_headers(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)
    monkeypatch.setattr(tracer_module, "Traceloop", None)
    monkeypatch.setenv("TRACELOOP_BASE_URL", "https://otel.example.com")
    monkeypatch.setenv("TRACELOOP_API_KEY", "test-api-key")
    monkeypatch.setenv("TRACELOOP_HEADERS", '{"x-custom":"header"}')

    captured: dict[str, Any] = {}

    class FakeOTLPSpanExporter:
        def __init__(self, endpoint: str, headers: dict[str, str] | None = None, **kwargs: Any):
            captured["endpoint"] = endpoint
            captured["headers"] = headers or {}
            captured["kwargs"] = kwargs

        def export(self, spans: Any) -> SpanExportResult:  # noqa: ARG002
            return SpanExportResult.SUCCESS

        def shutdown(self) -> None:
            return None

        def force_flush(self, timeout_millis: int = 30_000) -> bool:  # noqa: ARG002
            return True

    fake_module = types.ModuleType("opentelemetry.exporter.otlp.proto.http.trace_exporter")
    fake_module.OTLPSpanExporter = FakeOTLPSpanExporter
    monkeypatch.setitem(
        sys.modules,
        "opentelemetry.exporter.otlp.proto.http.trace_exporter",
        fake_module,
    )

    tracer = Tracer("otlp-fallback")
    set_global_tracer(tracer)

    assert tracer._remote_export_enabled is True
    assert captured["endpoint"] == "https://otel.example.com/v1/traces"
    assert captured["headers"]["Authorization"] == "Bearer test-api-key"
    assert captured["headers"]["x-custom"] == "header"


def test_traceloop_init_failure_does_not_mark_bootstrapped_on_provider_failure(
    monkeypatch, tmp_path
) -> None:
    monkeypatch.chdir(tmp_path)

    class FakeTraceloop:
        @staticmethod
        def init(**kwargs: Any) -> None:  # noqa: ARG004
            raise RuntimeError("traceloop init failed")

        @staticmethod
        def set_association_properties(properties: dict[str, Any]) -> None:  # noqa: ARG004
            return None

    monkeypatch.setattr(tracer_module, "Traceloop", FakeTraceloop)

    def _raise_provider_error(provider: Any) -> None:
        raise RuntimeError("provider setup failed")

    monkeypatch.setattr(tracer_module.trace, "set_tracer_provider", _raise_provider_error)

    tracer = Tracer("bootstrap-failure")
    set_global_tracer(tracer)

    assert tracer_module._OTEL_BOOTSTRAPPED is False
    assert tracer._remote_export_enabled is False


def test_run_completed_event_emitted_once(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)

    tracer = Tracer("single-complete")
    set_global_tracer(tracer)
    tracer.save_run_data(mark_complete=True)
    tracer.save_run_data(mark_complete=True)

    events_path = tmp_path / "strix_runs" / "single-complete" / "events.jsonl"
    events = _load_events(events_path)
    run_completed = [event for event in events if event["event_type"] == "run.completed"]
    assert len(run_completed) == 1


def test_events_with_agent_id_include_agent_name(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)

    tracer = Tracer("agent-name-enrichment")
    set_global_tracer(tracer)
    tracer.log_agent_creation("agent-1", "Root Agent", "scan auth")
    tracer.log_chat_message("hello", "assistant", "agent-1")

    events_path = tmp_path / "strix_runs" / "agent-name-enrichment" / "events.jsonl"
    events = _load_events(events_path)
    chat_event = next(event for event in events if event["event_type"] == "chat.message")
    assert chat_event["actor"]["agent_id"] == "agent-1"
    assert chat_event["actor"]["agent_name"] == "Root Agent"


def test_run_metadata_is_only_on_run_lifecycle_events(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)

    tracer = Tracer("metadata-scope")
    set_global_tracer(tracer)
    tracer.log_chat_message("hello", "assistant", "agent-1")
    tracer.save_run_data(mark_complete=True)

    events_path = tmp_path / "strix_runs" / "metadata-scope" / "events.jsonl"
    events = _load_events(events_path)
    run_started = next(event for event in events if event["event_type"] == "run.started")
    run_completed = next(event for event in events if event["event_type"] == "run.completed")
    chat_event = next(event for event in events if event["event_type"] == "chat.message")
    assert "run_metadata" in run_started
    assert "run_metadata" in run_completed
    assert "run_metadata" not in chat_event


def test_set_run_name_resets_cached_paths(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)

    tracer = Tracer()
    set_global_tracer(tracer)
    old_events_path = tracer.events_file_path
    tracer.set_run_name("renamed-run")
    tracer.log_chat_message("hello", "assistant", "agent-1")

    new_events_path = tracer.events_file_path
    assert new_events_path != old_events_path
    assert new_events_path == tmp_path / "strix_runs" / "renamed-run" / "events.jsonl"

    events = _load_events(new_events_path)
    assert any(event["event_type"] == "run.started" for event in events)
    assert any(event["event_type"] == "chat.message" for event in events)


def test_set_run_name_resets_run_completed_flag(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)

    tracer = Tracer()
    set_global_tracer(tracer)
    tracer.save_run_data(mark_complete=True)
    tracer.set_run_name("renamed-complete")
    tracer.save_run_data(mark_complete=True)

    events_path = tmp_path / "strix_runs" / "renamed-complete" / "events.jsonl"
    events = _load_events(events_path)
    run_completed = [event for event in events if event["event_type"] == "run.completed"]
    assert any(event["event_type"] == "run.started" for event in events)
    assert len(run_completed) == 1


def test_set_run_name_updates_traceloop_association_properties(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)

    class FakeTraceloop:
        associations: ClassVar[list[dict[str, Any]]] = []

        @staticmethod
        def init(**kwargs: Any) -> None:  # noqa: ARG004
            return None

        @staticmethod
        def set_association_properties(properties: dict[str, Any]) -> None:
            FakeTraceloop.associations.append(properties)

    monkeypatch.setattr(tracer_module, "Traceloop", FakeTraceloop)

    tracer = Tracer()
    set_global_tracer(tracer)
    tracer.set_run_name("renamed-run")

    assert FakeTraceloop.associations
    assert FakeTraceloop.associations[-1]["run_id"] == "renamed-run"
    assert FakeTraceloop.associations[-1]["run_name"] == "renamed-run"


def test_events_write_locks_are_scoped_by_events_file(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)
    monkeypatch.setenv("STRIX_TELEMETRY", "0")

    tracer_one = Tracer("lock-run-a")
    tracer_two = Tracer("lock-run-b")

    lock_a_from_one = tracer_one._get_events_write_lock(tracer_one.events_file_path)
    lock_a_from_two = tracer_two._get_events_write_lock(tracer_one.events_file_path)
    lock_b = tracer_two._get_events_write_lock(tracer_two.events_file_path)

    assert lock_a_from_one is lock_a_from_two
    assert lock_a_from_one is not lock_b


def test_tracer_skips_jsonl_when_telemetry_disabled(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)
    monkeypatch.setenv("STRIX_TELEMETRY", "0")

    tracer = Tracer("telemetry-disabled")
    set_global_tracer(tracer)
    tracer.log_chat_message("hello", "assistant", "agent-1")
    tracer.save_run_data(mark_complete=True)

    events_path = tmp_path / "strix_runs" / "telemetry-disabled" / "events.jsonl"
    assert not events_path.exists()


def test_tracer_otel_flag_overrides_global_telemetry(monkeypatch, tmp_path) -> None:
    monkeypatch.chdir(tmp_path)
    monkeypatch.setenv("STRIX_TELEMETRY", "0")
    monkeypatch.setenv("STRIX_OTEL_TELEMETRY", "1")

    tracer = Tracer("otel-enabled")
    set_global_tracer(tracer)
    tracer.log_chat_message("hello", "assistant", "agent-1")
    tracer.save_run_data(mark_complete=True)

    events_path = tmp_path / "strix_runs" / "otel-enabled" / "events.jsonl"
    assert events_path.exists()
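The redaction test earlier in this file exercises two behaviors: values under sensitive key names are masked, and the secrets can no longer be found anywhere in the serialized event stream. A minimal key-based sketch of the first behavior (the key list is illustrative; the Strix implementation also scrubs known secret values that reappear inside free-form strings, which this sketch omits):

```python
SENSITIVE_KEYS = {"api_key", "authorization", "password", "secret", "token"}


def redact(payload: object) -> object:
    """Recursively replace values of sensitive keys with a placeholder."""
    if isinstance(payload, dict):
        return {
            key: "[REDACTED]" if key.lower() in SENSITIVE_KEYS else redact(value)
            for key, value in payload.items()
        }
    if isinstance(payload, list):
        return [redact(item) for item in payload]
    return payload
```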

View File

@@ -0,0 +1,39 @@
from strix.telemetry.utils import prune_otel_span_attributes


def test_prune_otel_span_attributes_drops_high_volume_prompt_content() -> None:
    attributes = {
        "gen_ai.operation.name": "openai.chat",
        "gen_ai.request.model": "gpt-5.2",
        "gen_ai.prompt.0.role": "system",
        "gen_ai.prompt.0.content": "a" * 20_000,
        "gen_ai.completion.0.content": "b" * 10_000,
        "llm.input_messages.0.content": "c" * 5_000,
        "llm.output_messages.0.content": "d" * 5_000,
        "llm.input": "x" * 3_000,
        "llm.output": "y" * 3_000,
    }

    pruned = prune_otel_span_attributes(attributes)

    assert "gen_ai.prompt.0.content" not in pruned
    assert "gen_ai.completion.0.content" not in pruned
    assert "llm.input_messages.0.content" not in pruned
    assert "llm.output_messages.0.content" not in pruned
    assert "llm.input" not in pruned
    assert "llm.output" not in pruned
    assert pruned["gen_ai.operation.name"] == "openai.chat"
    assert pruned["gen_ai.prompt.0.role"] == "system"
    assert pruned["strix.filtered_attributes_count"] == 6


def test_prune_otel_span_attributes_keeps_metadata_when_nothing_is_dropped() -> None:
    attributes = {
        "gen_ai.operation.name": "openai.chat",
        "gen_ai.request.model": "gpt-5.2",
        "gen_ai.prompt.0.role": "user",
    }

    pruned = prune_otel_span_attributes(attributes)

    assert pruned == attributes
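A compact sketch of the pruning contract these tests encode: drop bulky prompt/completion content, keep roles and metadata, and record how many attributes were filtered. The prefix list here is inferred from the test data; the real filter is prune_otel_span_attributes in strix.telemetry.utils and may differ:

```python
_CONTENT_PREFIXES = (
    "gen_ai.prompt.",
    "gen_ai.completion.",
    "llm.input_messages.",
    "llm.output_messages.",
)


def prune_span_attributes(attributes: dict[str, object]) -> dict[str, object]:
    """Drop high-volume prompt/completion content attributes from a span."""

    def _is_high_volume(key: str) -> bool:
        if key in ("llm.input", "llm.output"):
            return True
        return key.startswith(_CONTENT_PREFIXES) and key.endswith(".content")

    kept = {key: value for key, value in attributes.items() if not _is_high_volume(key)}
    dropped = len(attributes) - len(kept)
    if dropped:
        # Only annotate when something was actually removed, so clean spans pass through unchanged.
        kept["strix.filtered_attributes_count"] = dropped
    return kept
```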

View File

@@ -0,0 +1,139 @@
from typing import Any

from strix.tools.agents_graph import agents_graph_actions
from strix.tools.load_skill import load_skill_actions


class _DummyLLM:
    def __init__(self, initial_skills: list[str] | None = None) -> None:
        self.loaded: set[str] = set(initial_skills or [])

    def add_skills(self, skill_names: list[str]) -> list[str]:
        newly_loaded = [skill for skill in skill_names if skill not in self.loaded]
        self.loaded.update(newly_loaded)
        return newly_loaded


class _DummyAgent:
    def __init__(self, initial_skills: list[str] | None = None) -> None:
        self.llm = _DummyLLM(initial_skills)


class _DummyAgentState:
    def __init__(self, agent_id: str) -> None:
        self.agent_id = agent_id
        self.context: dict[str, Any] = {}

    def update_context(self, key: str, value: Any) -> None:
        self.context[key] = value


def test_load_skill_success_and_context_update() -> None:
    instances = agents_graph_actions.__dict__["_agent_instances"]
    original_instances = dict(instances)
    try:
        state = _DummyAgentState("agent_test_load_skill_success")
        instances.clear()
        instances[state.agent_id] = _DummyAgent()

        result = load_skill_actions.load_skill(state, "ffuf,xss")

        assert result["success"] is True
        assert result["loaded_skills"] == ["ffuf", "xss"]
        assert result["newly_loaded_skills"] == ["ffuf", "xss"]
        assert state.context["loaded_skills"] == ["ffuf", "xss"]
    finally:
        instances.clear()
        instances.update(original_instances)


def test_load_skill_uses_same_plain_skill_format_as_create_agent() -> None:
    instances = agents_graph_actions.__dict__["_agent_instances"]
    original_instances = dict(instances)
    try:
        state = _DummyAgentState("agent_test_load_skill_short_name")
        instances.clear()
        instances[state.agent_id] = _DummyAgent()

        result = load_skill_actions.load_skill(state, "nmap")

        assert result["success"] is True
        assert result["loaded_skills"] == ["nmap"]
        assert result["newly_loaded_skills"] == ["nmap"]
        assert state.context["loaded_skills"] == ["nmap"]
    finally:
        instances.clear()
        instances.update(original_instances)


def test_load_skill_invalid_skill_returns_error() -> None:
    instances = agents_graph_actions.__dict__["_agent_instances"]
    original_instances = dict(instances)
    try:
        state = _DummyAgentState("agent_test_load_skill_invalid")
        instances.clear()
        instances[state.agent_id] = _DummyAgent()

        result = load_skill_actions.load_skill(state, "definitely_not_a_real_skill")

        assert result["success"] is False
        assert "Invalid skills" in result["error"]
        assert "Available skills" in result["error"]
    finally:
        instances.clear()
        instances.update(original_instances)


def test_load_skill_rejects_more_than_five_skills() -> None:
    instances = agents_graph_actions.__dict__["_agent_instances"]
    original_instances = dict(instances)
    try:
        state = _DummyAgentState("agent_test_load_skill_too_many")
        instances.clear()
        instances[state.agent_id] = _DummyAgent()

        result = load_skill_actions.load_skill(state, "a,b,c,d,e,f")

        assert result["success"] is False
        assert result["error"] == (
            "Cannot specify more than 5 skills for an agent (use comma-separated format)"
        )
    finally:
        instances.clear()
        instances.update(original_instances)


def test_load_skill_missing_agent_instance_returns_error() -> None:
    instances = agents_graph_actions.__dict__["_agent_instances"]
    original_instances = dict(instances)
    try:
        state = _DummyAgentState("agent_test_load_skill_missing_instance")
        instances.clear()

        result = load_skill_actions.load_skill(state, "httpx")

        assert result["success"] is False
        assert "running agent instance" in result["error"]
    finally:
        instances.clear()
        instances.update(original_instances)


def test_load_skill_does_not_reload_skill_already_present_from_agent_creation() -> None:
    instances = agents_graph_actions.__dict__["_agent_instances"]
    original_instances = dict(instances)
    try:
        state = _DummyAgentState("agent_test_load_skill_existing_config_skill")
        instances.clear()
        instances[state.agent_id] = _DummyAgent(["xss"])

        result = load_skill_actions.load_skill(state, "xss,sql_injection")

        assert result["success"] is True
        assert result["loaded_skills"] == ["xss", "sql_injection"]
        assert result["newly_loaded_skills"] == ["sql_injection"]
        assert result["already_loaded_skills"] == ["xss"]
        assert state.context["loaded_skills"] == ["sql_injection", "xss"]
    finally:
        instances.clear()
        instances.update(original_instances)
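The save/restore boilerplate repeated in every test above could be factored into a context manager; a hypothetical refactoring sketch (not part of this commit):

```python
from contextlib import contextmanager
from typing import Any, Iterator


@contextmanager
def patched_agent_instances(
    instances: dict[str, Any], agent_id: str, agent: Any
) -> Iterator[None]:
    """Temporarily register a single agent instance, restoring the registry afterwards."""
    original = dict(instances)
    try:
        instances.clear()
        instances[agent_id] = agent
        yield
    finally:
        # Restore even if the test body raises, mirroring the try/finally in each test.
        instances.clear()
        instances.update(original)
```

A pytest fixture built on this would shrink each test to its arrange/act/assert core.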