feat(tui): add real-time streaming LLM output with full content display

- Convert LiteLLM requests to streaming mode with stream_request()
- Add streaming parser to handle live LLM output segments
- Update TUI for real-time streaming content rendering
- Add tracer methods for streaming content tracking
- Strip function tags from streamed content so they are not displayed
- Remove all truncation from tool renderers for full content visibility
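The tag-stripping step above has to cope with tags that arrive split across stream chunks. A minimal sketch of that idea follows; the `<function>` marker, the function names, and the buffering strategy are assumptions for illustration, not the commit's actual parser:

```python
import re

# Hypothetical tag format; the real markers used by the streaming parser
# are not shown in this commit.
TAG_RE = re.compile(r"<function>.*?</function>", re.DOTALL)
OPEN_TAG = "<function>"


def clean_function_tags(text: str) -> str:
    """Drop complete <function>...</function> blocks from a text segment."""
    return TAG_RE.sub("", text)


def stream_display_segments(chunks):
    """Yield displayable text from streamed chunks, holding back any
    partially received tag until its closing marker arrives."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        buffer = clean_function_tags(buffer)
        # Find the start of an unclosed tag, or of a partially
        # received opening marker at the end of the buffer.
        cut = buffer.find(OPEN_TAG)
        if cut == -1:
            for i in range(1, len(OPEN_TAG)):
                if buffer.endswith(OPEN_TAG[:i]):
                    cut = len(buffer) - i
                    break
        if cut == -1:
            # Nothing tag-like pending: emit everything.
            yield buffer
            buffer = ""
        else:
            # Emit the clean prefix, keep the suspect tail buffered.
            if cut:
                yield buffer[:cut]
            buffer = buffer[cut:]
```

For example, feeding the chunks `["Hello <func", "tion>call</function> world"]` emits only the text outside the tag, with the split marker held back until it resolves.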

Author: 0xallam
Date: 2026-01-05 09:52:05 -08:00
Committed by: Ahmed Allam
Parent: a2142cc985
Commit: a6dcb7756e
21 changed files with 345 additions and 135 deletions


@@ -23,8 +23,7 @@ class ThinkRenderer(BaseToolRenderer):
         text.append("\n  ")
         if thought:
-            thought_display = cls.truncate(thought, 600)
-            text.append(thought_display, style="italic dim")
+            text.append(thought, style="italic dim")
         else:
             text.append("Thinking...", style="italic dim")