Add AI research review workflows

This commit is contained in:
Advait Paliwal
2026-03-22 14:36:47 -07:00
parent dd701e9967
commit dbdad94adc
10 changed files with 163 additions and 4 deletions

prompts/ablate.md Normal file

@@ -0,0 +1,17 @@
---
description: Design the smallest convincing ablation set for an AI research project.
---
Design an ablation plan for: $@
Requirements:
- Identify the exact claims the paper is making.
- For each claim, determine what ablation or control is necessary to support it.
- Prefer the `verifier` subagent when the claim structure is complicated.
- Distinguish:
- must-have ablations
- nice-to-have ablations
- unnecessary experiments
- Call out where benchmark norms imply mandatory controls.
- Optimize for the minimum convincing set, not experiment sprawl.
- Save the plan to `outputs/` as markdown if the user wants a durable artifact.
- End with a `Sources` section containing direct URLs for any external sources used.

prompts/rebuttal.md Normal file

@@ -0,0 +1,18 @@
---
description: Turn reviewer comments into a structured rebuttal and revision plan for an AI research paper.
---
Prepare a rebuttal workflow for: $@
Requirements:
- If reviewer comments are provided, organize them into a response matrix.
- If reviewer comments are not yet provided, infer the strongest likely objections from the current draft and review them before drafting responses.
- Prefer the `reviewer` subagent or the project `review` chain when fresh critical review is still needed.
- For each issue, produce:
- reviewer concern
- whether it is valid
- evidence available now
- paper changes needed
- rebuttal language
- Do not overclaim fixes that have not been implemented.
- Save the rebuttal matrix to `outputs/` as markdown.
- End with a `Sources` section containing direct URLs for all inspected external sources.
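The five per-issue fields in the matrix above can be sketched as one typed row (a minimal illustration only; the field names and example values are hypothetical and not part of the prompt):

```python
from dataclasses import dataclass

@dataclass
class MatrixRow:
    concern: str        # reviewer concern, quoted or summarized
    valid: bool         # whether the concern is judged valid
    evidence: str       # evidence available now
    paper_changes: str  # paper changes needed
    rebuttal: str       # rebuttal language for the response

# One hypothetical row of the response matrix.
row = MatrixRow(
    concern="Baseline X is missing from the main comparison",
    valid=True,
    evidence="Baseline X run completed; logs available",
    paper_changes="Add a Baseline X row to the main results table",
    rebuttal="We have added Baseline X to the comparison in the revision.",
)
```

Rendering one such row per reviewer issue, in priority order, yields the markdown matrix the prompt asks to save to `outputs/`.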

prompts/related.md Normal file

@@ -0,0 +1,19 @@
---
description: Build a related-work map and justify why an AI research project needs to exist.
---
Build the related-work and justification view for: $@
Requirements:
- Search for the closest and strongest relevant papers first.
- Prefer the `researcher` subagent when the space is broad or moving quickly.
- Identify:
- foundational papers
- closest prior work
- strongest recent competing approaches
- benchmarks and evaluation norms
- critiques or known weaknesses in the area
- For each important paper, explain why it matters to this project.
- Be explicit about what real gap remains after considering the strongest prior work.
- If the project is not differentiated enough, say so clearly.
- Save the artifact to `outputs/` as markdown if the user wants a durable result.
- End with a `Sources` section containing direct URLs.

prompts/review.md Normal file

@@ -0,0 +1,24 @@
---
description: Simulate an AI research peer review with likely objections, severity, and a concrete revision plan.
---
Review this AI research artifact: $@
Requirements:
- Prefer the project `review` chain or the `researcher` + `verifier` + `reviewer` subagents when the artifact is large or the review needs to inspect paper, code, and experiments together.
- Inspect the strongest relevant sources directly before making strong review claims.
- If the artifact is a paper or draft, evaluate:
- novelty and related-work positioning
- clarity of claims
- baseline fairness
- evaluation design
- missing ablations
- reproducibility details
- whether conclusions outrun the evidence
- If code or experiment artifacts exist, compare them against the claimed method and evaluation.
- Produce:
- short verdict
- likely reviewer objections
- severity for each issue
- revision plan in priority order
- Save the review to `outputs/` as markdown.
- End with a `Sources` section containing direct URLs for every inspected external source.