Files
feynman/prompts/autoresearch.md
Advait Paliwal f5570b4e5a Rename .pi to .feynman, rename citation agent to verifier, add website, skills, and docs
- Rename project config dir from .pi/ to .feynman/ (Pi supports this via piConfig.configDir)
- Rename citation agent to verifier across all prompts, agents, skills, and docs
- Add website with homepage and 24 doc pages (Astro + Tailwind)
- Add skills for all workflows (deep-research, lit, review, audit, replicate, compare, draft, autoresearch, watch, jobs, session-log, agentcomputer)
- Add Pi-native prompt frontmatter (args, section, topLevelCli) and read at runtime
- Remove sync-docs generation layer — docs are standalone
- Remove metadata/prompts.mjs and metadata/packages.mjs — not needed at runtime
- Rewrite README and homepage copy
- Add environment selection to /replicate before executing
- Add prompts/delegate.md and AGENTS.md

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 17:35:35 -07:00


description: Autonomous experiment loop — try ideas, measure results, keep what works, discard what doesn't, repeat.
args: <idea>
section: Research Workflows
topLevelCli: true

Start an autoresearch optimization loop for: $@

This command uses pi-autoresearch. Enter autoresearch mode and begin the autonomous experiment loop.

Behavior

  • If autoresearch.md and autoresearch.jsonl already exist in the project, resume the existing session with the user's input as additional context.
  • Otherwise, gather the optimization target from the user:
    • What to optimize (test speed, bundle size, training loss, build time, etc.)
    • The benchmark command to run
    • The metric name, unit, and direction (lower/higher is better)
    • Files in scope for changes
  • Then initialize the session: create autoresearch.md, autoresearch.sh, run the baseline, and start looping.
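The session log referenced above, autoresearch.jsonl, holds one record per experiment. A minimal sketch of what one line might look like — the field names here are illustrative assumptions, not a documented schema:

```python
import json

# Hypothetical shape of one autoresearch.jsonl record (field names are
# assumptions for illustration; the real schema is defined by the tool).
entry = {
    "iteration": 1,
    "idea": "cache dependencies between runs",
    "metric": "build_time",
    "unit": "s",
    "value": 42.7,
    "kept": True,
}
line = json.dumps(entry)      # one JSON object per line in the .jsonl file
restored = json.loads(line)   # each record round-trips independently
```

Storing one self-contained JSON object per line is what lets a session resume cheaply: the agent can append without rewriting the file and replay history by reading it line by line.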

Loop

Each iteration: edit → commit → run_experiment → log_experiment → keep or revert → repeat. Do not stop unless interrupted or maxIterations is reached.
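The keep-or-revert decision at the heart of each iteration can be sketched as follows. This is a simplified stand-in, not the tool's implementation: in the real workflow the agent edits files and git handles commits and reverts, and all names here are hypothetical.

```python
# Minimal sketch of the autoresearch loop: run, compare against the best
# result so far, keep improvements, discard regressions.
def autoresearch_loop(run_benchmark, best, lower_is_better=True, max_iterations=3):
    history = []
    for i in range(max_iterations):
        value = run_benchmark()  # run_experiment: measure the current state
        improved = value < best if lower_is_better else value > best
        history.append((i, value, improved))  # log_experiment: record the result
        if improved:
            best = value  # keep the change (auto-commit in the real workflow)
        # otherwise the edit would be reverted before the next iteration
    return best, history

# Toy usage: three "experiments" measure 12.0, 9.5, 11.0 seconds
# against a 10.0 s baseline; only the second one is kept.
measurements = iter([12.0, 9.5, 11.0])
best, hist = autoresearch_loop(lambda: next(measurements), best=10.0)
```

The direction flag mirrors the setup question above: for latency or loss, lower is better; for throughput or accuracy, higher is.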

Key tools

  • init_experiment — one-time session config (name, metric, unit, direction)
  • run_experiment — run the benchmark command, capture output and wall-clock time
  • log_experiment — record result, auto-commit, update dashboard

Subcommands

  • /autoresearch <text> — start or resume the loop
  • /autoresearch off — stop the loop, keep data
  • /autoresearch clear — delete all state and start fresh