Refine Feynman research workflows

Advait Paliwal
2026-03-22 12:19:33 -07:00
parent be97ac7a38
commit dd701e9967
24 changed files with 905 additions and 43 deletions


@@ -4,6 +4,7 @@ description: Compare a paper's claims against its public codebase and identify m
Audit the paper and codebase for: $@
Requirements:
- Prefer the `researcher` subagent for evidence gathering and the `verifier` subagent for the mismatch pass when the audit is non-trivial.
- Identify the canonical paper first with `alpha_search` and `alpha_get_paper`.
- Extract implementation-sensitive claims with `alpha_ask_paper`.
- If a public repo exists, inspect it with `alpha_read_code`.


@@ -4,6 +4,8 @@ description: Turn a research idea into a paper-oriented end-to-end run with lite
Run an autoresearch workflow for: $@
Requirements:
- Prefer the project `auto` chain or the `planner` + `researcher` + `verifier` + `writer` subagents when the task is broad enough to benefit from decomposition.
- If the run is likely to take a while, or the user wants it detached, launch the subagent workflow in the background with `clarify: false, async: true` and report how to inspect status.
- Start by clarifying the research objective, scope, and target contribution.
- Search for the strongest relevant primary sources first.
- If the topic is current, product-oriented, market-facing, or asks about latest developments, start with `web_search` and `fetch_content`.


@@ -4,6 +4,7 @@ description: Compare multiple sources on a topic and produce a source-grounded m
Compare sources for: $@
Requirements:
- Use the `researcher` subagent to gather source material when the comparison set is broad, and the `verifier` subagent to pressure-test the resulting matrix when needed.
- Identify the strongest relevant primary sources first.
- For current or market-facing topics, use `web_search` and `fetch_content` to gather up-to-date primary sources before comparing them.
- For academic claims, use `alpha_search` and inspect the strongest papers directly.


@@ -4,6 +4,8 @@ description: Run a thorough, source-heavy investigation on a topic and produce a
Run a deep research workflow for: $@
Requirements:
- If the task is broad, multi-source, or obviously long-running, prefer delegating through the `subagent` tool. Use the project `researcher`, `verifier`, and `writer` agents, or the project `deep` chain when that decomposition fits.
- If the user wants it to run unattended, or the sweep will clearly take a while, prefer background execution with `subagent` using `clarify: false, async: true`, then report how to inspect status.
- If the topic is current, product-oriented, market-facing, regulatory, or asks about latest developments, start with `web_search` and `fetch_content`.
- If the topic has an academic literature component, use `alpha_search`, `alpha_get_paper`, and `alpha_ask_paper` for the strongest papers.
- Do not rely on a single source type when the topic spans both current reality and academic background.


@@ -4,6 +4,7 @@ description: Turn research findings into a polished paper-style draft with equat
Write a paper-style draft for: $@
Requirements:
- Prefer the `writer` subagent when the draft should be produced from already-collected notes, and use `verifier` first if the evidence still looks shaky.
- Ground every claim in inspected sources, experiments, or explicit inference.
- Use clean Markdown structure with LaTeX where equations materially help.
- Include at minimum:

prompts/jobs.md (new file)

@@ -0,0 +1,14 @@
---
description: Inspect active background research work, including running processes and scheduled follow-ups.
---
Inspect active background work for this project.
Requirements:
- Use the `process` tool with the `list` action to inspect running and finished managed background processes.
- Use the scheduling tooling to list active recurring or deferred jobs if any are configured.
- Summarize:
- active background processes
- queued or recurring research watches
- failures that need attention
- the next concrete command the user should run if they want logs or detailed status
- Be concise and operational.
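The inspection flow above can be sketched roughly as follows. The `process` tool's actual interface is not shown in this commit, so `list_processes` and its fields are hypothetical stand-ins, not the real API:

```python
# Rough sketch of the jobs.md inspection flow. `list_processes` is a
# hypothetical stub standing in for the `process` tool's `list` action;
# its field names are assumptions, not the real interface.

def list_processes() -> list[dict]:
    """Stub returning managed background processes (assumed shape)."""
    return [
        {"id": "bg-1", "name": "deep-research sweep", "state": "running"},
        {"id": "bg-2", "name": "replication run", "state": "failed"},
    ]

processes = list_processes()
active = [p for p in processes if p["state"] == "running"]
failures = [p for p in processes if p["state"] == "failed"]

# Concise, operational summary, as the prompt asks for.
print(f"active: {len(active)}, failures needing attention: {len(failures)}")
for p in failures:
    print(f"- {p['id']} ({p['name']}) failed; check its logs")
```

The same filtering would extend naturally to whatever fields the real tool exposes for queued or recurring jobs.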


@@ -4,6 +4,7 @@ description: Run a literature review on a topic using paper search and primary-s
Investigate the following topic as a literature review: $@
Requirements:
- Use the `researcher` subagent when the sweep is wide enough to benefit from delegated paper triage before synthesis.
- If the topic is academic or paper-centric, use `alpha_search` first.
- If the topic is current, product-oriented, market-facing, or asks about latest developments, use `web_search` and `fetch_content` first, then use `alpha_search` only for academic background.
- Use `alpha_get_paper` on the most relevant papers before making strong claims.

prompts/log.md (new file)

@@ -0,0 +1,12 @@
---
description: Write a durable session log with completed work, findings, open questions, and next steps.
---
Write a session log for the current research work.
Requirements:
- Summarize what was done in this session.
- Capture the strongest findings or decisions.
- List open questions, unresolved risks, and concrete next steps.
- Reference any important artifacts written to `notes/`, `outputs/`, `experiments/`, or `papers/`.
- If any external claims matter, include direct source URLs.
- Save the log to `notes/` as markdown with a date-oriented filename.


@@ -4,6 +4,7 @@ description: Produce a general research memo grounded in explicit sources and di
Write a research memo about: $@
Requirements:
- Use the `researcher` and `writer` subagents when decomposition will improve quality or reduce context pressure.
- Start by finding the strongest relevant sources.
- If the topic is current, market-facing, product-oriented, regulatory, or asks about latest developments, use `web_search` and `fetch_content` first.
- Use `alpha_search` for academic background where relevant, but do not rely on it alone for current topics.


@@ -4,6 +4,7 @@ description: Build a prioritized reading list on a research topic with rationale
Create a research reading list for: $@
Requirements:
- Use the `researcher` subagent when a wider literature sweep would help before curating the final list.
- If the topic is academic, use `alpha_search` with `all` mode.
- If the topic is current, product-oriented, or asks for the latest landscape, use `web_search` and `fetch_content` first, then add `alpha_search` for academic background when relevant.
- Inspect the strongest papers or primary sources directly before recommending them.


@@ -4,6 +4,7 @@ description: Plan or execute a replication workflow for a paper, claim, or bench
Design a replication plan for: $@
Requirements:
- Use the `subagent` tool for decomposition when the replication needs separate planning, evidence extraction, and execution passes.
- Identify the canonical paper or source material first.
- Use `alpha_get_paper` for the target paper.
- Use `alpha_ask_paper` to extract the exact implementation or evaluation details you still need.

prompts/watch.md (new file)

@@ -0,0 +1,14 @@
---
description: Set up a recurring or deferred research watch on a topic, company, paper area, or product surface.
---
Create a research watch for: $@
Requirements:
- Start with a baseline sweep of the topic using the strongest relevant sources.
- If the watch is about current events, products, markets, regulations, or releases, use `web_search` and `fetch_content` first.
- If the watch has a literature component, add `alpha_search` and inspect the strongest papers directly.
- Summarize what should be monitored, what signals matter, and what counts as a meaningful change.
- Use `schedule_prompt` to create the recurring or delayed follow-up instead of merely promising to check later.
- If the user wants detached execution for the initial sweep, use `subagent` in background mode and report how to inspect status.
- Save a durable baseline artifact to `outputs/`.
- End with a `Sources` section containing direct URLs for every source used.
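As a hedged illustration of the watch prompt's scheduling step: the real `schedule_prompt` signature is not shown in this commit, so the stub below and its parameter names (`prompt`, `every`) are assumptions, not the actual tool interface.

```python
# Hypothetical sketch of the watch prompt's scheduling step. The stub below
# stands in for the real `schedule_prompt` tool, whose signature this commit
# does not show; the parameter names are assumptions.

def schedule_prompt(prompt: str, every: str) -> dict:
    """Stub: register a recurring follow-up prompt (assumed interface)."""
    return {"prompt": prompt, "every": every, "status": "scheduled"}

job = schedule_prompt(
    prompt=(
        "Re-run the research watch sweep, diff against the baseline in "
        "outputs/, and report meaningful changes with source URLs."
    ),
    every="7d",  # assumed recurrence format
)
print(job["status"])
```

The point of the requirement is captured here: the follow-up is registered as a concrete scheduled job rather than a promise in prose.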