---
description: Simulate an AI research peer review with likely objections, severity, and a concrete revision plan.
args: <artifact>
section: Research Workflows
topLevelCli: true
---

Review this AI research artifact: $@

Requirements:

- Before starting, outline what will be reviewed and the review criteria (novelty, empirical rigor, baselines, reproducibility, etc.). Present the plan to the user and confirm before proceeding.
- Spawn a `researcher` subagent to gather evidence on the artifact — inspect the paper, code, cited work, and any linked experimental artifacts. Save to `research.md`.
- Spawn a `reviewer` subagent with `research.md` to produce the final peer review with inline annotations.
- For small or simple artifacts where evidence gathering is overkill, run the `reviewer` subagent directly instead.
- Save exactly one review artifact to `outputs/` as markdown.
- End with a `Sources` section containing direct URLs for every inspected external source.
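
The requirements above leave the review's exact shape to the agent; one possible artifact layout, matching the objections/severity/revision-plan structure in the description, is sketched below (the filename, headings, and table rows are illustrative assumptions, not part of the spec):

```markdown
<!-- outputs/review-<artifact-slug>.md (hypothetical filename) -->
# Peer Review: <artifact title>

## Summary
One-paragraph assessment of the contribution and overall recommendation.

## Objections
| # | Objection                           | Severity |
|---|-------------------------------------|----------|
| 1 | Missing comparison to key baselines | Major    |
| 2 | Unclear reproducibility details     | Minor    |

## Revision Plan
1. Concrete, ordered steps addressing each objection above.

## Sources
- https://example.com/inspected-source
```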