Every API call:
- 5 retries with progressive backoff (Olla routes to random instances)
- Body error detection (API returns 200 but the response body contains an embed error)
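A minimal sketch of this retry policy. `send(payload)` stands in for the actual HTTP call and returns the parsed JSON body (injected so the policy is testable); the `error` field name in the body check is an assumption, not the documented AnythingLLM response shape.

```python
import time

class EmbedError(Exception):
    """HTTP 200, but the response body reports an embed failure."""

def post_with_retry(send, payload, retries=5, base_delay=1.0):
    # Olla routes each attempt to a random instance, so a retry can land
    # on a healthy one; back off progressively between attempts.
    last_err = None
    for attempt in range(retries):
        try:
            body = send(payload)
            if isinstance(body, dict) and body.get("error"):
                # The 200-with-error-in-body case: surface it as a failure.
                raise EmbedError(str(body["error"]))
            return body
        except Exception as e:
            last_err = e
            if attempt < retries - 1:
                time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, 8s
    raise last_err
```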
Per-persona verification:
- First batch: LanceDB must physically grow + query must return sources
- Every 10th batch: LanceDB growth check
- Final: triple check (LanceDB size + workspace doc count API + search query)
- Abort on model-not-found errors, skip after 5 consecutive failures
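The "LanceDB must physically grow" check can be sketched as an on-disk size comparison before and after a batch; the directory layout and helper names here are illustrative, not taken from the actual script.

```python
import pathlib

def lancedb_size(db_dir):
    # Sum on-disk bytes under the LanceDB directory; if embeds actually
    # wrote vectors, this number must increase after a batch.
    return sum(f.stat().st_size
               for f in pathlib.Path(db_dir).rglob("*") if f.is_file())

def verify_batch(db_dir, size_before):
    # Called after batch 1 and every 10th batch per the checks above.
    after = lancedb_size(db_dir)
    if after <= size_before:
        raise RuntimeError("LanceDB did not grow; embeds likely failed silently")
    return after
```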
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Pre-flight: test embedding model with 3 retries (120s timeout for cold start)
- First-batch verify: after batch 1, query workspace to confirm vectors searchable
- Abort on model errors: "not found" or "failed to embed" stops immediately
- Consecutive failure guard: 3 fails in a row → skip persona, continue others
- Response error check: API 200 but embed error in body → caught and logged
- Never record progress for failed embeds
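The abort/skip/progress rules above can be sketched as a small driver loop; `embed_fn` and the return values are illustrative names, and the matched error strings are the two cases named in this commit.

```python
def embed_with_guard(batches, embed_fn, max_consecutive=3):
    # Abort on model errors, skip the persona after 3 consecutive
    # failures, and record progress only for successful embeds.
    done, streak = [], 0
    for batch in batches:
        try:
            embed_fn(batch)
        except Exception as e:
            msg = str(e).lower()
            if "not found" in msg or "failed to embed" in msg:
                raise  # model error: stop the whole run immediately
            streak += 1
            if streak >= max_consecutive:
                return done, "skipped"  # give up on this persona, continue others
            continue
        streak = 0
        done.append(batch)  # never record progress for failed embeds
    return done, "ok"
```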
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Reduce embed batch size to 5; AnythingLLM hangs on batches larger than 10
- Fix check_script_running() to reliably detect the setup.py process
(it returned false because the pgrep pattern also matched monitor.py)
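A sketch of the fix, assuming procps `pgrep` is available on the host: `pgrep -f` matches against the full command line, so the pattern must anchor on the script name as a whole word or it also matches monitor.py. The matcher is split out as a pure function so the logic is testable.

```python
import re
import subprocess

SCRIPT_RE = re.compile(r"(^|/|\s)setup\.py(\s|$)")

def matches_target(cmdline: str) -> bool:
    # Require "setup.py" as a standalone token; a loose pattern like
    # ".py" is what made monitor.py match before the fix.
    return bool(SCRIPT_RE.search(cmdline))

def check_script_running() -> bool:
    # pgrep -f applies the pattern to the full command line; exit code 0
    # means at least one matching process exists.
    out = subprocess.run(["pgrep", "-f", r"setup\.py"], capture_output=True)
    return out.returncode == 0
```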
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Skips the slow folder scan (50K+ files) and upload phases; instead, directly
re-embeds already-uploaded documents into workspaces using saved progress state.
Use with --reset to clear assignment tracking first.
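The flag wiring might look like the sketch below. Only `--reset` is named in this commit; `--embed-only` is a hypothetical name for the re-embed-only mode described above.

```python
import argparse

def build_parser():
    # --embed-only is an assumed flag name for the skip-scan/skip-upload
    # mode; --reset is taken from the commit message.
    p = argparse.ArgumentParser(description="AnythingLLM workspace setup")
    p.add_argument("--embed-only", action="store_true",
                   help="skip folder scan and upload; re-embed from progress state")
    p.add_argument("--reset", action="store_true",
                   help="clear workspace assignment tracking first")
    return p
```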
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
28 persona workspaces with document upload, OCR pipeline, and vector embedding
assignment via the AnythingLLM API. Supports 5 clusters (intel, cyber, military,
humanities, engineering) with batch processing and resume capability.
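The resume capability implies persisted progress state; a minimal sketch, assuming a JSON file mapping persona workspace slugs to already-embedded document ids (the file name and schema are illustrative).

```python
import json
import pathlib

CLUSTERS = ["intel", "cyber", "military", "humanities", "engineering"]

def load_progress(path="progress.json"):
    # On rerun, anything listed here is skipped, giving resume-after-crash
    # behavior without re-uploading or re-embedding completed documents.
    p = pathlib.Path(path)
    if p.exists():
        return json.loads(p.read_text())
    return {}
```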
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>