feat(tui): add real-time streaming LLM output with full content display
- Convert LiteLLM requests to streaming mode with stream_request()
- Add a streaming parser to handle live LLM output segments
- Update the TUI for real-time streaming content rendering
- Add tracer methods for streaming content tracking
- Clean function tags from streamed content so they are not displayed
- Remove all truncation from tool renderers for full content visibility
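As a rough sketch of the tag-cleaning step described above (the names `stream_request` and `FUNCTION_TAG_RE`, and the exact tag format, are assumptions for illustration, not this project's actual API), stripping function tags from streamed segments before they reach the display might look like:

```python
import re

# Hypothetical pattern for function-call tags that must not reach the display.
FUNCTION_TAG_RE = re.compile(r"</?function[^>]*>")

def clean_segment(segment: str) -> str:
    """Strip function tags from a streamed segment before rendering."""
    return FUNCTION_TAG_RE.sub("", segment)

def stream_request(chunks):
    """Yield cleaned display text from an iterable of raw streamed chunks.

    `chunks` stands in for the streaming LLM response; each chunk is
    assumed to carry a plain-text delta.
    """
    for chunk in chunks:
        cleaned = clean_segment(chunk)
        if cleaned:
            yield cleaned

# Usage: render each cleaned segment as it arrives.
for segment in stream_request(["Hello ", "<function=think>", "world"]):
    print(segment, end="")
```

Filtering per-segment like this keeps the TUI responsive: the display never waits for the full response, and tag fragments are dropped as soon as a complete tag appears in a chunk.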
@@ -23,8 +23,7 @@ class ThinkRenderer(BaseToolRenderer):
         text.append("\n  ")

         if thought:
-            thought_display = cls.truncate(thought, 600)
-            text.append(thought_display, style="italic dim")
+            text.append(thought, style="italic dim")
         else:
             text.append("Thinking...", style="italic dim")
