- Rename project config dir from .pi/ to .feynman/ (Pi supports this via piConfig.configDir)
- Rename citation agent to verifier across all prompts, agents, skills, and docs
- Add website with homepage and 24 doc pages (Astro + Tailwind)
- Add skills for all workflows (deep-research, lit, review, audit, replicate, compare, draft, autoresearch, watch, jobs, session-log, agentcomputer)
- Add Pi-native prompt frontmatter (args, section, topLevelCli) and read at runtime
- Remove sync-docs generation layer — docs are standalone
- Remove metadata/prompts.mjs and metadata/packages.mjs — not needed at runtime
- Rewrite README and homepage copy
- Add environment selection to /replicate before executing
- Add prompts/delegate.md and AGENTS.md

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
| description | args | section | topLevelCli |
|---|---|---|---|
| Simulate an AI research peer review with likely objections, severity, and a concrete revision plan. | <artifact> | Research Workflows | true |
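The frontmatter fields in the table above are read at runtime rather than baked in by a generation step. A minimal sketch of what that parsing could look like, using only Node stdlib — the actual Pi parsing mechanism and `parsePromptFrontmatter` name are assumptions, not the real API:

```javascript
// Minimal sketch: parse simple `key: value` frontmatter from a prompt file
// string. parsePromptFrontmatter is a hypothetical helper, not Pi's real API.
function parsePromptFrontmatter(text) {
  const match = text.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { meta: {}, body: text };
  const meta = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx === -1) continue;
    const key = line.slice(0, idx).trim();
    let value = line.slice(idx + 1).trim();
    // Coerce the booleans used by fields like topLevelCli.
    if (value === "true") value = true;
    else if (value === "false") value = false;
    meta[key] = value;
  }
  return { meta, body: match[2] };
}

const prompt = `---
description: Simulate an AI research peer review.
args: <artifact>
section: Research Workflows
topLevelCli: true
---
Review this AI research artifact: $@
`;

const { meta, body } = parsePromptFrontmatter(prompt);
console.log(meta.topLevelCli); // true
```

Keeping the parse at runtime is what allows the sync-docs generation layer and the metadata/*.mjs files to be dropped: the prompt file itself is the single source of truth.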
Review this AI research artifact: $@
Requirements:
- Spawn a `researcher` subagent to gather evidence on the artifact — inspect the paper, code, cited work, and any linked experimental artifacts. Save to `research.md`.
- Spawn a `reviewer` subagent with `research.md` to produce the final peer review with inline annotations.
- For small or simple artifacts where evidence gathering is overkill, run the `reviewer` subagent directly instead.
- Save exactly one review artifact to `outputs/` as markdown.
- End with a `Sources` section containing direct URLs for every inspected external source.
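The orchestration described above can be sketched as plain control flow. Everything here is hypothetical: `spawnSubagent`, its options, and the `small` flag are stand-ins for illustration, not Pi's actual subagent API:

```javascript
// Hypothetical stub so the sketch runs standalone; the real spawn
// mechanism belongs to Pi and is not shown here.
async function spawnSubagent(name, opts) {
  return { agent: name, ...opts };
}

// Sketch of the review workflow: evidence gathering first, then review,
// with a fast path for small artifacts.
async function reviewArtifact(artifact, { small = false } = {}) {
  if (small) {
    // Simple artifact: evidence gathering is overkill, review directly.
    return spawnSubagent("reviewer", { input: artifact });
  }
  // 1. Gather evidence on the artifact into research.md.
  await spawnSubagent("researcher", { input: artifact, output: "research.md" });
  // 2. Produce the final peer review from the gathered evidence.
  return spawnSubagent("reviewer", { input: artifact, evidence: "research.md" });
}
```

The two-step default keeps the reviewer grounded in collected evidence, while the `small` branch mirrors the "run the reviewer directly" escape hatch in the prompt.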