Overhaul Feynman harness: streamline agents, prompts, and extensions
Remove legacy chains, skills, and config modules. Add citation agent, SYSTEM.md, modular research-tools extension, and web-access layer. Add ralph-wiggum to Pi package stack for long-running loops.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -4,21 +4,8 @@ description: Simulate an AI research peer review with likely objections, severit
Review this AI research artifact: $@

Requirements:

- Prefer the project `review` chain or the `researcher` + `verifier` + `reviewer` subagents when the artifact is large or the review needs to inspect paper, code, and experiments together.
- Inspect the strongest relevant sources directly before making strong review claims.
- If the artifact is a paper or draft, evaluate:
  - novelty and related-work positioning
  - clarity of claims
  - baseline fairness
  - evaluation design
  - missing ablations
  - reproducibility details
  - whether conclusions outrun the evidence
- If code or experiment artifacts exist, compare them against the claimed method and evaluation.
- Produce:
  - short verdict
  - likely reviewer objections
  - severity for each issue
  - revision plan in priority order
- Spawn a `researcher` subagent to gather evidence on the artifact: inspect the paper, code, cited work, and any linked experimental artifacts. Save to `research.md`.
- Spawn a `reviewer` subagent with `research.md` to produce the final peer review with inline annotations.
- For small or simple artifacts where evidence gathering is overkill, run the `reviewer` subagent directly instead.
- Save exactly one review artifact to `outputs/` as markdown.
- End with a `Sources` section containing direct URLs for every inspected external source.