Merge branch 'worktree-agent-a090b6ec'

This commit is contained in:
salvacybersec
2026-04-05 14:44:26 +03:00
21 changed files with 435 additions and 0 deletions


@@ -0,0 +1,89 @@
---
phase: 03-tier-3-9-providers
plan: 05
subsystem: providers
tags: [providers, tier-8, self-hosted, runtimes, keyword-only]
requires: [pkg/providers/schema.go, pkg/providers/registry.go]
provides:
- Ollama, vLLM, LocalAI, LM Studio, llama.cpp provider definitions
- GPT4All, text-generation-webui, TensorRT-LLM, Triton, Jan AI provider definitions
- 10 Tier 8 self-hosted runtime keyword anchors for OSINT correlation
affects: [pkg/providers/definitions/]
tech_added: []
patterns: [keyword-only detection for providers without documented key formats]
files_created:
- providers/ollama.yaml
- providers/vllm.yaml
- providers/localai.yaml
- providers/lmstudio.yaml
- providers/llamacpp.yaml
- providers/gpt4all.yaml
- providers/text-gen-webui.yaml
- providers/tensorrt-llm.yaml
- providers/triton.yaml
- providers/jan.yaml
- pkg/providers/definitions/ollama.yaml
- pkg/providers/definitions/vllm.yaml
- pkg/providers/definitions/localai.yaml
- pkg/providers/definitions/lmstudio.yaml
- pkg/providers/definitions/llamacpp.yaml
- pkg/providers/definitions/gpt4all.yaml
- pkg/providers/definitions/text-gen-webui.yaml
- pkg/providers/definitions/tensorrt-llm.yaml
- pkg/providers/definitions/triton.yaml
- pkg/providers/definitions/jan.yaml
files_modified: []
decisions:
- "Used keyword-only detection (no regex patterns) for all 10 runtimes — self-hosted stacks typically lack standardized key formats; this avoids Phase 2's false-positive lessons"
- "Captured localhost port anchors (11434, 8080, 1234, 5000) to enable later OSINT/Shodan correlation even when no key is present"
- "Included CLI-flag and env-var keywords (OLLAMA_HOST, VLLM_API_KEY, LOCALAI_API_KEY, etc.) as detection anchors"
requirements: [PROV-08]
metrics:
tasks_completed: 2
files_created: 20
duration: "~3 min"
completed: "2026-04-05"
---
# Phase 3 Plan 05: Tier 8 Self-Hosted Runtimes Summary
**One-liner:** 10 Tier 8 self-hosted LLM runtime provider YAMLs (Ollama, vLLM, LocalAI, LM Studio, llama.cpp, GPT4All, text-generation-webui, TensorRT-LLM, Triton, Jan AI) using keyword-only detection.
## What Was Built
Satisfies PROV-08. Twenty YAML files dual-located in `providers/` and `pkg/providers/definitions/`, each defining a Tier 8 self-hosted runtime. Because self-hosted stacks rarely use bearer-token API keys with documented formats, all definitions rely exclusively on keyword-based anchors — localhost endpoints, CLI flags, environment variable names, and project identifiers — enabling Aho-Corasick pre-filter matches during scanning and OSINT/recon correlation in later phases.
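The keyword-anchor approach can be illustrated with a minimal sketch. This is a naive linear scan standing in for the engine's Aho-Corasick pre-filter, using the keyword list from `providers/ollama.yaml`; function and variable names here are illustrative, not taken from the codebase:

```go
package main

import (
	"fmt"
	"strings"
)

// Keyword anchors copied from providers/ollama.yaml.
var ollamaKeywords = []string{
	"ollama", "OLLAMA_HOST", "OLLAMA_API_KEY", "OLLAMA_MODELS",
	"localhost:11434", "127.0.0.1:11434", "api/generate",
}

// matchKeywords reports which anchors appear in a scanned blob.
// (Illustrative one-provider scan; the real engine would build a
// single Aho-Corasick automaton over all providers' keywords and
// match every anchor in one pass over the input.)
func matchKeywords(blob string, keywords []string) []string {
	var hits []string
	for _, kw := range keywords {
		if strings.Contains(blob, kw) {
			hits = append(hits, kw)
		}
	}
	return hits
}

func main() {
	blob := `export OLLAMA_HOST=127.0.0.1:11434`
	fmt.Println(matchKeywords(blob, ollamaKeywords))
	// → [OLLAMA_HOST 127.0.0.1:11434]
}
```

Because there is no key regex to confirm, any hit here only flags the blob for later OSINT correlation; it is a recall-oriented pre-filter, not a verdict.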
## Tasks Executed
| Task | Name | Commit | Files |
| ---- | ---- | ------ | ----- |
| 1 | Ollama, vLLM, LocalAI, LM Studio, llama.cpp YAMLs | 370dca0 | 10 files |
| 2 | GPT4All, text-gen-webui, TensorRT-LLM, Triton, Jan AI YAMLs | 367cfed | 10 files |
## Verification
- `go test ./pkg/providers/... -count=1` — PASS
- `go test ./pkg/engine/... -count=1` — PASS
- `grep -l 'tier: 8' providers/*.yaml | wc -l` — 10
- All 20 files byte-identical between `providers/` and `pkg/providers/definitions/`
- All 10 YAMLs omit `patterns:` field (keyword-only)
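The byte-identical twin check above can be reproduced with a short shell loop. The demo below builds its own miniature layout so it runs anywhere; in the real repo the loop body would point at `providers/` and `pkg/providers/definitions/` directly:

```shell
# Self-contained sketch of the twin-file check (paths are illustrative).
mkdir -p demo/providers demo/pkg/providers/definitions
printf 'name: ollama\ntier: 8\n' > demo/providers/ollama.yaml
cp demo/providers/ollama.yaml demo/pkg/providers/definitions/ollama.yaml

mismatches=0
for f in demo/providers/*.yaml; do
  # cmp -s exits 0 only when the files are byte-identical
  cmp -s "$f" "demo/pkg/providers/definitions/$(basename "$f")" \
    || mismatches=$((mismatches + 1))
done
echo "mismatches=$mismatches"
```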
## Deviations from Plan
None — plan executed exactly as written.
## Self-Check: PASSED
- providers/ollama.yaml — FOUND
- providers/vllm.yaml — FOUND
- providers/localai.yaml — FOUND
- providers/lmstudio.yaml — FOUND
- providers/llamacpp.yaml — FOUND
- providers/gpt4all.yaml — FOUND
- providers/text-gen-webui.yaml — FOUND
- providers/tensorrt-llm.yaml — FOUND
- providers/triton.yaml — FOUND
- providers/jan.yaml — FOUND
- All 10 pkg/providers/definitions/*.yaml twins — FOUND
- Commit 370dca0 — FOUND
- Commit 367cfed — FOUND
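For reference, the YAML fields used by these definitions might map onto a Go struct along the following lines. This is a hypothetical mirror inferred purely from the YAML keys shown in this commit, not the actual contents of `pkg/providers/schema.go`:

```go
package main

import "fmt"

// Verify describes an optional live-check endpoint.
// Field names are inferred from the YAML keys (assumption).
type Verify struct {
	Method        string            `yaml:"method"`
	URL           string            `yaml:"url"`
	Headers       map[string]string `yaml:"headers"`
	ValidStatus   []int             `yaml:"valid_status"`
	InvalidStatus []int             `yaml:"invalid_status"`
}

// Provider mirrors one definition file such as providers/vllm.yaml.
type Provider struct {
	FormatVersion int      `yaml:"format_version"`
	Name          string   `yaml:"name"`
	DisplayName   string   `yaml:"display_name"`
	Tier          int      `yaml:"tier"`
	LastVerified  string   `yaml:"last_verified"`
	Keywords      []string `yaml:"keywords"`
	Verify        Verify   `yaml:"verify"`
}

// Keyword-only Tier 8 providers leave verify.url empty; a scanner
// could treat that as "no live verification possible".
func (p Provider) Verifiable() bool { return p.Verify.URL != "" }

func main() {
	vllm := Provider{Name: "vllm", Tier: 8, Keywords: []string{"vllm", "VLLM_API_KEY"}}
	fmt.Println(vllm.Name, vllm.Verifiable()) // → vllm false
}
```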


@@ -0,0 +1,16 @@
format_version: 1
name: gpt4all
display_name: GPT4All
tier: 8
last_verified: "2026-04-05"
keywords:
- "gpt4all"
- "nomic-ai"
- "GPT4ALL_API_KEY"
- "gpt4all.io"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []


@@ -0,0 +1,17 @@
format_version: 1
name: jan
display_name: Jan AI
tier: 8
last_verified: "2026-04-05"
keywords:
- "jan-ai"
- "janhq"
- "JAN_API_KEY"
- "jan.ai"
- "cortex-cpp"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []


@@ -0,0 +1,18 @@
format_version: 1
name: llamacpp
display_name: llama.cpp server
tier: 8
last_verified: "2026-04-05"
keywords:
- "llama.cpp"
- "llama-cpp"
- "llama_cpp"
- "LLAMA_API_KEY"
- "ggml"
- "gguf"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []


@@ -0,0 +1,17 @@
format_version: 1
name: lmstudio
display_name: LM Studio
tier: 8
last_verified: "2026-04-05"
keywords:
- "lmstudio"
- "lm-studio"
- "LMSTUDIO_API_KEY"
- "localhost:1234"
- "lmstudio.ai"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []


@@ -0,0 +1,17 @@
format_version: 1
name: localai
display_name: LocalAI
tier: 8
last_verified: "2026-04-05"
keywords:
- "localai"
- "LOCALAI_API_KEY"
- "go-skynet"
- "localai.io"
- "localhost:8080"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []


@@ -0,0 +1,19 @@
format_version: 1
name: ollama
display_name: Ollama
tier: 8
last_verified: "2026-04-05"
keywords:
- "ollama"
- "OLLAMA_HOST"
- "OLLAMA_API_KEY"
- "OLLAMA_MODELS"
- "localhost:11434"
- "127.0.0.1:11434"
- "api/generate"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []


@@ -0,0 +1,17 @@
format_version: 1
name: tensorrt-llm
display_name: NVIDIA TensorRT-LLM
tier: 8
last_verified: "2026-04-05"
keywords:
- "tensorrt-llm"
- "trtllm"
- "TRTLLM_API_KEY"
- "tensorrt_llm"
- "nvidia-nim"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []


@@ -0,0 +1,17 @@
format_version: 1
name: text-gen-webui
display_name: text-generation-webui (oobabooga)
tier: 8
last_verified: "2026-04-05"
keywords:
- "text-generation-webui"
- "oobabooga"
- "TEXTGEN_API_KEY"
- "text-gen-webui"
- "localhost:5000"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []


@@ -0,0 +1,17 @@
format_version: 1
name: triton
display_name: NVIDIA Triton Inference Server
tier: 8
last_verified: "2026-04-05"
keywords:
- "triton-inference-server"
- "tritonserver"
- "TRITON_API_KEY"
- "triton_grpc"
- "v2/models"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []


@@ -0,0 +1,18 @@
format_version: 1
name: vllm
display_name: vLLM
tier: 8
last_verified: "2026-04-05"
keywords:
- "vllm"
- "VLLM_API_KEY"
- "vllm-openai"
- "--api-key"
- "openai.api_server"
- "vllm.entrypoints"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []

providers/gpt4all.yaml

@@ -0,0 +1,16 @@
format_version: 1
name: gpt4all
display_name: GPT4All
tier: 8
last_verified: "2026-04-05"
keywords:
- "gpt4all"
- "nomic-ai"
- "GPT4ALL_API_KEY"
- "gpt4all.io"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []

providers/jan.yaml

@@ -0,0 +1,17 @@
format_version: 1
name: jan
display_name: Jan AI
tier: 8
last_verified: "2026-04-05"
keywords:
- "jan-ai"
- "janhq"
- "JAN_API_KEY"
- "jan.ai"
- "cortex-cpp"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []

providers/llamacpp.yaml

@@ -0,0 +1,18 @@
format_version: 1
name: llamacpp
display_name: llama.cpp server
tier: 8
last_verified: "2026-04-05"
keywords:
- "llama.cpp"
- "llama-cpp"
- "llama_cpp"
- "LLAMA_API_KEY"
- "ggml"
- "gguf"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []

providers/lmstudio.yaml

@@ -0,0 +1,17 @@
format_version: 1
name: lmstudio
display_name: LM Studio
tier: 8
last_verified: "2026-04-05"
keywords:
- "lmstudio"
- "lm-studio"
- "LMSTUDIO_API_KEY"
- "localhost:1234"
- "lmstudio.ai"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []

providers/localai.yaml

@@ -0,0 +1,17 @@
format_version: 1
name: localai
display_name: LocalAI
tier: 8
last_verified: "2026-04-05"
keywords:
- "localai"
- "LOCALAI_API_KEY"
- "go-skynet"
- "localai.io"
- "localhost:8080"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []

providers/ollama.yaml

@@ -0,0 +1,19 @@
format_version: 1
name: ollama
display_name: Ollama
tier: 8
last_verified: "2026-04-05"
keywords:
- "ollama"
- "OLLAMA_HOST"
- "OLLAMA_API_KEY"
- "OLLAMA_MODELS"
- "localhost:11434"
- "127.0.0.1:11434"
- "api/generate"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []


@@ -0,0 +1,17 @@
format_version: 1
name: tensorrt-llm
display_name: NVIDIA TensorRT-LLM
tier: 8
last_verified: "2026-04-05"
keywords:
- "tensorrt-llm"
- "trtllm"
- "TRTLLM_API_KEY"
- "tensorrt_llm"
- "nvidia-nim"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []


@@ -0,0 +1,17 @@
format_version: 1
name: text-gen-webui
display_name: text-generation-webui (oobabooga)
tier: 8
last_verified: "2026-04-05"
keywords:
- "text-generation-webui"
- "oobabooga"
- "TEXTGEN_API_KEY"
- "text-gen-webui"
- "localhost:5000"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []

providers/triton.yaml

@@ -0,0 +1,17 @@
format_version: 1
name: triton
display_name: NVIDIA Triton Inference Server
tier: 8
last_verified: "2026-04-05"
keywords:
- "triton-inference-server"
- "tritonserver"
- "TRITON_API_KEY"
- "triton_grpc"
- "v2/models"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []

providers/vllm.yaml

@@ -0,0 +1,18 @@
format_version: 1
name: vllm
display_name: vLLM
tier: 8
last_verified: "2026-04-05"
keywords:
- "vllm"
- "VLLM_API_KEY"
- "vllm-openai"
- "--api-key"
- "openai.api_server"
- "vllm.entrypoints"
verify:
method: GET
url: ""
headers: {}
valid_status: []
invalid_status: []