feat(tui): add real-time streaming LLM output with full content display

- Convert LiteLLM requests to streaming mode with stream_request()
- Add streaming parser to handle live LLM output segments
- Update TUI for real-time streaming content rendering
- Add tracer methods for streaming content tracking
- Strip function tags from streamed content so they are not displayed
- Remove all truncation from tool renderers for full content visibility
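
The streaming flow described above can be sketched roughly as follows. This is a hypothetical illustration: `stream_request()` and its chunk format are assumptions for the sketch, not the actual strix or LiteLLM API.

```python
from collections.abc import Iterator


def stream_request(chunks: list[str]) -> Iterator[str]:
    """Stand-in for a streaming LLM call: yields content deltas as they arrive."""
    yield from chunks


def render_stream(chunks: list[str]) -> str:
    """Accumulate streamed deltas the way a TUI would render them live."""
    buffer: list[str] = []
    for delta in stream_request(chunks):
        buffer.append(delta)  # in the real TUI, each delta triggers a re-render
    return "".join(buffer)
```

For example, `render_stream(["Hel", "lo"])` accumulates the two deltas into `"Hello"`, mirroring how the TUI appends each segment to the visible message.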
0xallam
2026-01-05 09:52:05 -08:00
committed by Ahmed Allam
parent a2142cc985
commit a6dcb7756e
21 changed files with 345 additions and 135 deletions


@@ -181,4 +181,10 @@ class AgentMessageRenderer(BaseToolRenderer):
         if not content:
             return Text()
-        return _apply_markdown_styles(content)
+        from strix.llm.utils import clean_content
+
+        cleaned = clean_content(content)
+        if not cleaned:
+            return Text()
+
+        return _apply_markdown_styles(cleaned)
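
The body of `strix.llm.utils.clean_content` is not shown in this diff. A minimal sketch of what such a helper might do, assuming a tag-stripping approach and a hypothetical `<function_call>` tag name chosen purely for illustration:

```python
import re

# Assumed tag name; the real tag format used by strix is not shown in the diff.
_FUNCTION_TAG = re.compile(r"<function_call>.*?</function_call>", re.DOTALL)


def clean_content(content: str) -> str:
    """Strip function-call tags from streamed text so they never reach the UI."""
    return _FUNCTION_TAG.sub("", content).strip()
```

With this sketch, content consisting only of a function tag cleans to an empty string, which is why the renderer checks `if not cleaned` and returns an empty `Text()` before applying markdown styles.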