feat(tui): add real-time streaming LLM output with full content display
- Convert LiteLLM requests to streaming mode with stream_request()
- Add streaming parser to handle live LLM output segments
- Update TUI for real-time streaming content rendering
- Add tracer methods for streaming content tracking
- Clean function tags from streamed content to prevent display
- Remove all truncation from tool renderers for full content visibility
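The real-time rendering described above follows a common pattern: accumulate streamed deltas and redraw the full message on every chunk. The sketch below is a hypothetical illustration of that pattern, not the actual strix code; `render_stream` is an invented name.

```python
from collections.abc import Iterable, Iterator


def render_stream(deltas: Iterable[str]) -> Iterator[str]:
    """Accumulate streamed LLM deltas into progressively fuller snapshots.

    Each yielded snapshot is the full content so far, so a TUI can
    redraw the complete message on every chunk instead of truncating.
    """
    content = ""
    for delta in deltas:
        content += delta
        yield content
```

With LiteLLM, the deltas would typically come from `completion(..., stream=True)`, reading `chunk.choices[0].delta.content` from each streamed chunk.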
```diff
@@ -181,4 +181,10 @@ class AgentMessageRenderer(BaseToolRenderer):
-        if not content:
-            return Text()
-
-        return _apply_markdown_styles(content)
+        from strix.llm.utils import clean_content
+
+        cleaned = clean_content(content)
+        if not cleaned:
+            return Text()
+
+        return _apply_markdown_styles(cleaned)
```
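The body of `strix.llm.utils.clean_content` is not shown in this hunk. A minimal sketch of what such a cleaner might look like, assuming function calls are embedded in the stream as `<function=...>...</function>` tags (the tag format and regex are assumptions, not the real implementation):

```python
import re

# Assumed tag format; the real strix.llm.utils.clean_content may differ.
_FUNCTION_TAG_RE = re.compile(r"<function=[^>]*>.*?</function>", re.DOTALL)


def clean_content(text: str) -> str:
    """Strip embedded function-call tags so they never reach the display."""
    return _FUNCTION_TAG_RE.sub("", text).strip()
```

A fully stripped message becomes the empty string, which is why the renderer above checks `if not cleaned` and falls back to an empty `Text()`.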