feat: add interactive mode for agent loop
Re-architects the agent loop to support an interactive (chat-like) mode in
which text-only responses pause execution and wait for user input, while
tool-call responses continue looping autonomously.

- Add `interactive` flag to `LLMConfig` (default False, no regression)
- Add configurable `waiting_timeout` to `AgentState` (0 = disabled)
- `_process_iteration` returns None for text-only → `agent_loop` pauses
- Conditional system prompt: interactive allows natural text responses
- Skip `<meta>Continue the task.</meta>` injection in interactive mode
- Sub-agents inherit `interactive` from parent (300s auto-resume timeout)
- Root interactive agents wait indefinitely for user input (timeout=0)
- TUI sets `interactive=True`; CLI unchanged (`non_interactive=True`)
@@ -747,7 +747,7 @@ class StrixTUIApp(App):  # type: ignore[misc]
     def _build_agent_config(self, args: argparse.Namespace) -> dict[str, Any]:
         scan_mode = getattr(args, "scan_mode", "deep")
-        llm_config = LLMConfig(scan_mode=scan_mode)
+        llm_config = LLMConfig(scan_mode=scan_mode, interactive=True)

         config = {
             "llm_config": llm_config,