- Rename project config dir from .pi/ to .feynman/ (Pi supports this via piConfig.configDir)
- Rename citation agent to verifier across all prompts, agents, skills, and docs
- Add website with homepage and 24 doc pages (Astro + Tailwind)
- Add skills for all workflows (deep-research, lit, review, audit, replicate, compare, draft, autoresearch, watch, jobs, session-log, agentcomputer)
- Add Pi-native prompt frontmatter (args, section, topLevelCli) and read at runtime
- Remove sync-docs generation layer — docs are standalone
- Remove metadata/prompts.mjs and metadata/packages.mjs — not needed at runtime
- Rewrite README and homepage copy
- Add environment selection to /replicate before executing
- Add prompts/delegate.md and AGENTS.md

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---
description: Simulate an AI research peer review with likely objections, severity, and a concrete revision plan.
args: <artifact>
section: Research Workflows
topLevelCli: true
---

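The frontmatter above uses a flat `key: value` layout between `---` fences. As a minimal sketch of how such metadata could be read at runtime (a hypothetical illustration, not Pi's actual loader), one might split on the fences and collect the pairs into a dict:

```python
# Hypothetical sketch: parse flat `key: value` frontmatter from a prompt file.
# This is NOT Pi's real implementation, just an illustration of the layout.
def parse_frontmatter(text: str) -> dict:
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no frontmatter block present
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing fence ends the block
            break
        key, _, value = line.partition(":")
        if key.strip():
            meta[key.strip()] = value.strip()
    return meta

prompt = """---
description: Simulate an AI research peer review.
args: <artifact>
section: Research Workflows
topLevelCli: true
---
Review this AI research artifact: $@
"""

meta = parse_frontmatter(prompt)
print(meta["args"])         # → <artifact>
print(meta["topLevelCli"])  # → true
```

Note this sketch handles only flat string values; nested YAML would need a real YAML parser.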
Review this AI research artifact: $@

Requirements:

- Spawn a `researcher` subagent to gather evidence on the artifact — inspect the paper, code, cited work, and any linked experimental artifacts. Save to `research.md`.
- Spawn a `reviewer` subagent with `research.md` to produce the final peer review with inline annotations.
- For small or simple artifacts where evidence gathering is overkill, run the `reviewer` subagent directly instead.
- Save exactly one review artifact to `outputs/` as markdown.
- End with a `Sources` section containing direct URLs for every inspected external source.