feynman/prompts/review.md
description: Simulate an AI research peer review with likely objections, severity, and a concrete revision plan.
args: <artifact>
section: Research Workflows
topLevelCli: true

Review this AI research artifact: $@

Derive a short slug from the artifact name (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
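The slug derivation above amounts to a small string transform. A minimal sketch, assuming a hypothetical filler-word list (the prompt does not specify one):

```python
import re

# Hypothetical filler-word list; the prompt leaves the exact set to the agent.
FILLER = {"a", "an", "the", "of", "for", "and", "on", "in", "to", "with"}

def derive_slug(name: str, max_words: int = 5) -> str:
    """Lowercase, hyphen-joined slug: drop filler words, keep at most max_words."""
    words = re.findall(r"[a-z0-9]+", name.lower())
    kept = [w for w in words if w not in FILLER]
    return "-".join(kept[:max_words])

# e.g. derive_slug("A Survey of In-Context Learning for LLMs")
# → "survey-context-learning-llms"
```

The same slug is then reused verbatim for `<slug>-research.md` and `outputs/<slug>-review.md`, so it should be computed once at the start of the run.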

Requirements:

  • Before starting, outline what will be reviewed, the review criteria (novelty, empirical rigor, baselines, reproducibility, etc.), and any verification-specific checks needed for claims, figures, and reported metrics. Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
  • Spawn a researcher subagent to gather evidence on the artifact — inspect the paper, code, cited work, and any linked experimental artifacts. Save to <slug>-research.md.
  • Spawn a reviewer subagent with <slug>-research.md to produce the final peer review with inline annotations.
  • For small or simple artifacts where evidence gathering is overkill, run the reviewer subagent directly instead.
  • If the first review finds FATAL issues and you fix them, run one more verification-style review pass before delivering.
  • Save exactly one review artifact to outputs/<slug>-review.md.
  • End with a Sources section containing direct URLs for every inspected external source.
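The requirements above can be sketched as a small orchestration function. This is a rough illustration only: `spawn(role, prompt)` is a hypothetical subagent launcher, not a real API, and the FATAL-detection check is a stand-in for whatever signal the first review actually produces.

```python
from pathlib import Path

def run_review(slug: str, spawn, artifact: str, simple: bool = False) -> Path:
    """Sketch of the review pipeline. `spawn(role, prompt)` is a hypothetical
    subagent launcher that returns the subagent's output as text."""
    research_file = Path(f"{slug}-research.md")
    if not simple:
        # Evidence-gathering pass: paper, code, cited work, linked artifacts.
        research_file.write_text(spawn("researcher", f"Gather evidence on {artifact}"))
    # Review pass, grounded in the research notes when they exist.
    context = research_file.read_text() if research_file.exists() else artifact
    review = spawn("reviewer", context)
    if "FATAL" in review:
        # One extra verification-style pass after fixes, per the requirements.
        review = spawn("reviewer", f"Verify fixes for {artifact}")
    out = Path("outputs") / f"{slug}-review.md"
    out.parent.mkdir(exist_ok=True)
    out.write_text(review)  # exactly one review artifact per run
    return out
```

For small artifacts, calling `run_review(slug, spawn, artifact, simple=True)` skips the research pass and runs the reviewer directly, matching the shortcut described above.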