Rename .pi to .feynman, rename citation agent to verifier, add website, skills, and docs
- Rename project config dir from .pi/ to .feynman/ (Pi supports this via piConfig.configDir)
- Rename citation agent to verifier across all prompts, agents, skills, and docs
- Add website with homepage and 24 doc pages (Astro + Tailwind)
- Add skills for all workflows (deep-research, lit, review, audit, replicate, compare, draft, autoresearch, watch, jobs, session-log, agentcomputer)
- Add Pi-native prompt frontmatter (args, section, topLevelCli) and read it at runtime
- Remove sync-docs generation layer — docs are standalone
- Remove metadata/prompts.mjs and metadata/packages.mjs — not needed at runtime
- Rewrite README and homepage copy
- Add environment selection to /replicate before executing
- Add prompts/delegate.md and AGENTS.md

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
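As a rough illustration of the "Pi-native prompt frontmatter" bullet: the field names (`args`, `section`, `topLevelCli`) come from the commit message, but the values and exact shapes below are assumptions, not the actual schema.

```text
---
name: lit
description: Literature review from paper search and primary sources.
args: <topic>                 # assumed: argument signature for /lit
section: Research Workflows   # assumed: grouping used by /help
topLevelCli: true             # assumed: also exposed as `feynman lit`
---
```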
@@ -15,9 +15,9 @@ Operating rules:
 - Never answer a latest/current question from arXiv or alpha-backed paper search alone.
 - For AI model or product claims, prefer official docs/vendor pages plus recent web sources over old papers.
 - Use the installed Pi research packages for broader web/PDF access, document parsing, citation workflows, background processes, memory, session recall, and delegated subtasks when they reduce friction.
-- Feynman ships project subagents for research work. Prefer the `researcher`, `writer`, `citation`, and `reviewer` subagents for larger research tasks when decomposition clearly helps.
+- Feynman ships project subagents for research work. Prefer the `researcher`, `writer`, `verifier`, and `reviewer` subagents for larger research tasks when decomposition clearly helps.
 - Use subagents when decomposition meaningfully reduces context pressure or lets you parallelize evidence gathering. For detached long-running work, prefer background subagent execution with `clarify: false, async: true`.
-- For deep research, act like a lead researcher by default: plan first, use hidden worker batches only when breadth justifies them, synthesize batch results, and finish with a verification/citation pass.
+- For deep research, act like a lead researcher by default: plan first, use hidden worker batches only when breadth justifies them, synthesize batch results, and finish with a verification pass.
 - Do not force chain-shaped orchestration onto the user. Multi-agent decomposition is an internal tactic, not the primary UX.
 - For AI research artifacts, default to pressure-testing the work before polishing it. Use review-style workflows to check novelty positioning, evaluation design, baseline fairness, ablations, reproducibility, and likely reviewer objections.
 - Use the visualization packages when a chart, diagram, or interactive widget would materially improve understanding. Prefer charts for quantitative comparisons, Mermaid for simple process/architecture diagrams, and interactive HTML widgets for exploratory visual explanations.
@@ -51,6 +51,12 @@ Numbered list matching the evidence table:
 1. Author/Title — URL
 2. Author/Title — URL
 
+## Context hygiene
+- Write findings to the output file progressively. Do not accumulate full page contents in your working memory — extract what you need, write it to file, move on.
+- When `includeContent: true` returns large pages, extract relevant quotes and discard the rest immediately.
+- If your search produces 10+ results, triage by title/snippet first. Only fetch full content for the top candidates.
+- Return a one-line summary to the parent, not full findings. The parent reads the output file.
+
 ## Output contract
 - Save to the output file (default: `research.md`).
 - Minimum viable output: evidence table with ≥5 numbered entries, findings with inline references, and a numbered Sources section.
@@ -1,5 +1,5 @@
 ---
-name: citation
+name: verifier
 description: Post-process a draft to add inline citations and verify every source URL.
 thinking: medium
 tools: read, bash, grep, find, ls, write, edit
@@ -7,7 +7,7 @@ output: cited.md
 defaultProgress: true
 ---
 
-You are Feynman's citation agent.
+You are Feynman's verifier agent.
 
 You receive a draft document and the research files it was built from. Your job is to:
@@ -36,8 +36,8 @@ Unresolved issues, disagreements between sources, gaps in evidence.
 - Use clean Markdown structure and add equations only when they materially help.
 - Keep the narrative readable, but never outrun the evidence.
 - Produce artifacts that are ready to review in a browser or PDF preview.
-- Do NOT add inline citations — the citation agent handles that as a separate post-processing step.
-- Do NOT add a Sources section — the citation agent builds that.
+- Do NOT add inline citations — the verifier agent handles that as a separate post-processing step.
+- Do NOT add a Sources section — the verifier agent builds that.
 
 ## Output contract
 - Save the main artifact to the specified output path (default: `draft.md`).
.gitignore (vendored, 8 changed lines)
@@ -1,9 +1,9 @@
 node_modules
 .env
-.feynman
-.pi/npm
-.pi/git
-.pi/schedule-prompts.json
+.feynman/npm
+.feynman/git
+.feynman/sessions
+.feynman/schedule-prompts.json
 dist
 *.tgz
 outputs/*
AGENTS.md (new file, 53 lines)
@@ -0,0 +1,53 @@
+# Agents
+
+`AGENTS.md` is the repo-level contract for agents working in this repository.
+
+Pi subagent behavior does **not** live here. The source of truth for bundled Pi subagents is `.feynman/agents/*.md`, which the runtime syncs into the Pi agent directory. If you need to change how `researcher`, `reviewer`, `writer`, or `verifier` behave, edit the corresponding file in `.feynman/agents/` instead of duplicating those prompts here.
+
+## Pi subagents
+
+Feynman ships four bundled research subagents:
+
+- `researcher`
+- `reviewer`
+- `writer`
+- `verifier`
+
+They are defined in `.feynman/agents/` and invoked via the Pi `subagent` tool.
+
+## What belongs here
+
+Keep this file focused on cross-agent repo conventions:
+
+- output locations and file naming expectations
+- provenance and verification requirements
+- handoff rules between the lead agent and subagents
+- remote delegation conventions
+
+Do **not** restate per-agent prompt text here unless there is a repo-wide constraint that applies to all agents.
+
+## Output conventions
+
+- Research outputs go in `outputs/`.
+- Paper-style drafts go in `papers/`.
+- Session logs go in `notes/`.
+- Plan artifacts for long-running workflows go in `outputs/.plans/`.
+- Intermediate research artifacts such as `research-web.md` and `research-papers.md` are written to disk by subagents and read by the lead agent. They are not returned inline unless the user explicitly asks for them.
+
+## Provenance and verification
+
+- Every output from `/deepresearch` and `/lit` must include a `.provenance.md` sidecar.
+- Provenance sidecars should record source accounting and verification status.
+- Source verification and citation cleanup belong in the `verifier` stage, not in ad hoc edits after delivery.
+- Verification passes should happen before delivery when the workflow calls for them.
+
+## Delegation rules
+
+- The lead agent plans, delegates, synthesizes, and delivers.
+- Use subagents when the work is meaningfully decomposable; do not spawn them for trivial work.
+- Prefer file-based handoffs over dumping large intermediate results back into parent context.
+- When delegating to remote machines, retrieve final artifacts back into the local workspace and save them locally.
+
+## Remote delegation
+
+Feynman can delegate tasks to remote cloud machines via the `computer-fleet` and `computer-acp` skills. Load those skills on demand for CLI usage, session management, ACP bridging, and file retrieval.
README.md (214 changed lines)
@@ -1,161 +1,99 @@
 # Feynman
 
-`feynman` is a research-first CLI built on `@mariozechner/pi-coding-agent`.
+The open source AI research agent
 
-It keeps the useful parts of a coding agent:
-- file access
-- shell execution
-- persistent sessions
-- custom extensions
-
-But it biases the runtime toward general research work:
-- literature review
-- source discovery and paper lookup
-- source comparison
-- research memo writing
-- paper and report drafting
-- session recall and durable research memory
-- recurring and deferred research jobs
-- replication planning when relevant
-
-The primary paper backend is `@companion-ai/alpha-hub` and your alphaXiv account.
-The rest of the workflow is augmented through a curated `.pi/settings.json` package stack.
-
-## Install
-
 ```bash
 npm install -g @companion-ai/feynman
 ```
 
-Then authenticate alphaXiv and start the CLI:
-
 ```bash
 feynman setup
 feynman
 ```
 
-For local development:
+---
+
+## What you type → what happens
+
+| Prompt | Result |
+| --- | --- |
+| `feynman "what do we know about scaling laws"` | Searches papers and web, produces a cited research brief |
+| `feynman deepresearch "mechanistic interpretability"` | Multi-agent investigation with parallel researchers, synthesis, verification |
+| `feynman lit "RLHF alternatives"` | Literature review with consensus, disagreements, open questions |
+| `feynman audit 2401.12345` | Compares paper claims against the public codebase |
+| `feynman replicate "chain-of-thought improves math"` | Asks where to run, then builds a replication plan |
+| `feynman "summarize this PDF" --prompt paper.pdf` | One-shot mode, no REPL |
+
+---
+
+## Workflows
+
+Ask naturally or use slash commands as shortcuts.
+
+| Command | What it does |
+| --- | --- |
+| `/deepresearch <topic>` | Source-heavy multi-agent investigation |
+| `/lit <topic>` | Literature review from paper search and primary sources |
+| `/review <artifact>` | Simulated peer review with severity and revision plan |
+| `/audit <item>` | Paper vs. codebase mismatch audit |
+| `/replicate <paper>` | Replication plan with environment selection |
+| `/compare <topic>` | Source comparison matrix |
+| `/draft <topic>` | Paper-style draft from research findings |
+| `/autoresearch <idea>` | Autonomous experiment loop |
+| `/watch <topic>` | Recurring research watch |
+
+---
+
+## Agents
+
+Four bundled research agents, dispatched automatically or via subagent commands.
+
+- **Researcher** — gather evidence across papers, web, repos, docs
+- **Reviewer** — simulated peer review with severity-graded feedback
+- **Writer** — structured drafts from research notes
+- **Verifier** — inline citations, source URL verification, dead link cleanup
+
+---
+
+## Tools
+
+- **[AlphaXiv](https://www.alphaxiv.org/)** — paper search, Q&A, code reading, persistent annotations
+- **Web search** — Gemini or Perplexity, zero-config default via signed-in Chromium
+- **Session search** — indexed recall across prior research sessions
+- **Preview** — browser and PDF export of generated artifacts
+
+---
+
+## CLI
 
 ```bash
-cd /Users/advaitpaliwal/Companion/Code/feynman
-npm install
-cp .env.example .env
-npm run start
+feynman                            # REPL
+feynman setup                      # guided setup
+feynman doctor                     # diagnose everything
+feynman status                     # current config summary
+feynman model login [provider]     # model auth
+feynman model set <provider/model> # set default model
+feynman alpha login                # alphaXiv auth
+feynman search status              # web search config
 ```
 
-Feynman uses Pi under the hood, but the user-facing entrypoint is `feynman`, not `pi`.
-When you run `feynman`, it launches the real Pi interactive TUI with Feynman's research extensions, prompt templates, package stack, memory snapshot, and branded defaults preloaded.
+---
 
-Most users should not need slash commands. The intended default is:
-- ask naturally
-- let Feynman route into the right workflow
-- use slash commands only as explicit shortcuts or overrides
+## How it works
 
-## Commands
+Built on [Pi](https://github.com/mariozechner/pi-coding-agent) and [Alpha Hub](https://github.com/getcompanion-ai/alpha-hub). Pi provides the agent runtime — file access, shell execution, persistent sessions, custom extensions. Alpha Hub connects to [alphaXiv](https://www.alphaxiv.org/) for paper search, Q&A, code reading, and annotations.
 
-Inside the REPL:
+Every output is source-grounded. Claims link to papers, docs, or repos with direct URLs.
 
-- `/help` shows local commands
-- `/init` bootstraps `AGENTS.md` and `notes/session-logs/`
-- `/alpha-login` signs in to alphaXiv
-- `/alpha-status` checks alphaXiv auth
-- `/new` starts a new persisted session
-- `/exit` quits
-- `/deepresearch <topic>` runs a thorough source-heavy investigation workflow
-- `/lit <topic>` expands the literature-review prompt template
-- `/review <artifact>` simulates a peer review for an AI research artifact
-- `/audit <item>` expands the paper/code audit prompt template
-- `/replicate <paper or claim>` expands the replication prompt template
-- `/draft <topic>` expands the paper-style writing prompt template
-- `/compare <topic>` expands the source comparison prompt template
-- `/autoresearch <idea>` expands the autonomous experiment loop
-- `/watch <topic>` schedules or prepares a recurring research watch
-- `/log` writes a durable session log to `notes/`
-- `/jobs` inspects active background work
+---
 
-Package-powered workflows inside the REPL:
+## Contributing
 
-- `/agents` opens the subagent and chain manager
-- `/run` and `/parallel` delegate work to subagents when you want explicit decomposition
-- `/ps` opens the background process panel
-- `/schedule-prompt` manages recurring and deferred jobs
-- `/search` opens indexed session search
-- `/preview` previews generated artifacts in the terminal, browser, or PDF
+```bash
+git clone https://github.com/getcompanion-ai/feynman.git
+cd feynman && npm install && npm run start
 
-Outside the REPL:
-
-- `feynman setup` runs the guided setup for model auth, alpha login, Pi web access, and preview deps
-- `feynman model login <provider>` logs into a Pi OAuth model provider from the outer Feynman CLI
-- `feynman --alpha-login` signs in to alphaXiv
-- `feynman --alpha-status` checks alphaXiv auth
-- `feynman --doctor` checks models, auth, preview dependencies, and branded settings
-- `feynman --setup-preview` installs `pandoc` automatically on macOS/Homebrew systems when preview support is missing
-
-## Web Search Routing
-
-Feynman v1 keeps web access simple: it uses the bundled `pi-web-access` package directly instead of maintaining a second Feynman-owned provider layer.
-
-The Pi web stack underneath supports three runtime routes:
-
-- `auto` — prefer Perplexity when configured, otherwise fall back to Gemini
-- `perplexity` — force Perplexity Sonar
-- `gemini` — force Gemini
-
-By default, the expected path is zero-config Gemini Browser via a signed-in Chromium profile. Advanced users can edit `~/.pi/web-search.json` directly if they want Gemini API keys, Perplexity keys, or a different route.
-
-Useful commands:
-
-- `feynman search status` — show the active Pi web-access route and config path
-
-## Custom Tools
-
-The starter extension adds:
-
-- `alpha_search` for alphaXiv-backed paper discovery
-- `alpha_get_paper` for fetching paper reports or raw text
-- `alpha_ask_paper` for targeted paper Q&A
-- `alpha_annotate_paper` for persistent local notes
-- `alpha_list_annotations` for recall across sessions
-- `alpha_read_code` for reading a paper repository
-- `session_search` for recovering prior Feynman work from stored transcripts
-- `preview_file` for browser/PDF review of generated artifacts
-
-Feynman also ships bundled research subagents in `.pi/agents/`:
-
-- `researcher` for evidence gathering
-- `reviewer` for peer-review style criticism
-- `writer` for polished memo and draft writing
-- `citation` for inline citations and source verification
-
-Feynman uses `@companion-ai/alpha-hub` directly in-process rather than shelling out to the CLI.
-
-## Curated Pi Stack
-
-Feynman loads a lean research stack from [.pi/settings.json](/Users/advaitpaliwal/Companion/Code/feynman/.pi/settings.json):
-
-- `pi-subagents` for parallel literature gathering and decomposition
-- `pi-btw` for fast side-thread /btw conversations without interrupting the main run
-- `pi-docparser` for PDFs, Office docs, spreadsheets, and images
-- `pi-web-access` for broader web, GitHub, PDF, and media access
-- `pi-markdown-preview` for polished Markdown and LaTeX-heavy research writeups
-- `@walterra/pi-charts` for charts and quantitative visualizations
-- `pi-generative-ui` for interactive HTML-style widgets
-- `pi-mermaid` for diagrams in the TUI
-- `@aliou/pi-processes` for long-running experiments and log tails
-- `pi-zotero` for citation-library workflows
-- `@kaiserlich-dev/pi-session-search` for indexed session recall and summarize/resume UI
-- `pi-schedule-prompt` for recurring and deferred research jobs
-- `@samfp/pi-memory` for automatic preference/correction memory across sessions
-
-The default expectation is source-grounded outputs with explicit `Sources` sections containing direct URLs and durable artifacts written to `outputs/`, `notes/`, `experiments/`, or `papers/`.
-
-## Layout
-
-```text
-feynman/
-├── .pi/agents/   # Bundled research subagents and chains
-├── extensions/   # Custom research tools
-├── papers/       # Polished paper-style drafts and writeups
-├── prompts/      # Slash-style prompt templates
-└── src/          # Branded launcher around the embedded Pi TUI
 ```
 
+[Docs](https://feynman.companion.ai/docs) · [MIT License](LICENSE)
+
+Built on [Pi](https://github.com/mariozechner/pi-coding-agent) and [Alpha Hub](https://github.com/getcompanion-ai/alpha-hub).
@@ -6,7 +6,7 @@ import { registerHelpCommand } from "./research-tools/help.js";
 import { registerInitCommand, registerPreviewTool, registerSessionSearchTool } from "./research-tools/project.js";
 
 export default function researchTools(pi: ExtensionAPI): void {
-	const cache: { agentSummaryPromise?: Promise<{ count: number; lines: string[] }> } = {};
+	const cache: { agentSummaryPromise?: Promise<{ agents: string[]; chains: string[] }> } = {};
 
 	pi.on("session_start", async (_event, ctx) => {
 		await installFeynmanHeader(pi, ctx, cache);
@@ -15,11 +15,12 @@ import {
 import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
 import { Type } from "@sinclair/typebox";
 
+import { getExtensionCommandSpec } from "../../metadata/commands.mjs";
 import { formatToolText } from "./shared.js";
 
 export function registerAlphaCommands(pi: ExtensionAPI): void {
 	pi.registerCommand("alpha-login", {
-		description: "Sign in to alphaXiv from inside Feynman.",
+		description: getExtensionCommandSpec("alpha-login")?.description ?? "Sign in to alphaXiv from inside Feynman.",
 		handler: async (_args, ctx) => {
 			if (isAlphaLoggedIn()) {
 				const name = getAlphaUserName();
@@ -34,7 +35,7 @@ export function registerAlphaCommands(pi: ExtensionAPI): void {
 	});
 
 	pi.registerCommand("alpha-logout", {
-		description: "Clear alphaXiv auth from inside Feynman.",
+		description: getExtensionCommandSpec("alpha-logout")?.description ?? "Clear alphaXiv auth from inside Feynman.",
 		handler: async (_args, ctx) => {
 			logoutAlpha();
 			ctx.ui.notify("alphaXiv auth cleared", "info");
@@ -42,7 +43,7 @@ export function registerAlphaCommands(pi: ExtensionAPI): void {
 	});
 
 	pi.registerCommand("alpha-status", {
-		description: "Show alphaXiv authentication status.",
+		description: getExtensionCommandSpec("alpha-status")?.description ?? "Show alphaXiv authentication status.",
 		handler: async (_args, ctx) => {
 			if (!isAlphaLoggedIn()) {
 				ctx.ui.notify("alphaXiv not connected", "warning");
@@ -106,7 +106,7 @@ async function buildAgentCatalogSummary(): Promise<{ agents: string[]; chains: string[] }> {
 	const agents: string[] = [];
 	const chains: string[] = [];
 	try {
-		const entries = await readdir(resolvePath(APP_ROOT, ".pi", "agents"), { withFileTypes: true });
+		const entries = await readdir(resolvePath(APP_ROOT, ".feynman", "agents"), { withFileTypes: true });
 		for (const entry of entries) {
 			if (!entry.isFile() || !entry.name.endsWith(".md")) continue;
 			if (entry.name.endsWith(".chain.md")) {
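The catalog scan above distinguishes chains from agents by filename suffix: `*.chain.md` files are chains, other `*.md` files are agents. A standalone sketch of that rule (the helper name and the extension stripping are illustrative, not lifted from the codebase):

```typescript
// Hypothetical standalone version of the suffix rule used when scanning
// .feynman/agents/: *.chain.md files are chains, other *.md files are agents.
function classifyAgentFiles(names: string[]): { agents: string[]; chains: string[] } {
  const agents: string[] = [];
  const chains: string[] = [];
  for (const name of names) {
    if (!name.endsWith(".md")) continue; // skip non-markdown entries
    if (name.endsWith(".chain.md")) {
      chains.push(name.slice(0, -".chain.md".length));
    } else {
      agents.push(name.slice(0, -".md".length));
    }
  }
  return { agents, chains };
}
```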
@@ -243,9 +243,13 @@ export function installFeynmanHeader(
 	pushList("Chains", agentData.chains);
 
 	if (activity) {
+		const maxActivityLen = leftW * 2;
+		const trimmed = activity.length > maxActivityLen
+			? `${activity.slice(0, maxActivityLen - 1)}…`
+			: activity;
 		leftLines.push("");
 		leftLines.push(theme.fg("accent", theme.bold("Last Activity")));
-		for (const line of wrapWords(activity, leftW)) {
+		for (const line of wrapWords(trimmed, leftW)) {
 			leftLines.push(theme.fg("dim", line));
 		}
 	}
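The trimming added in this hunk caps the header's activity string at two wrapped lines of the left column before word-wrapping it, appending an ellipsis when it cuts. A self-contained sketch (the function name is hypothetical; the real code inlines this inside `installFeynmanHeader`):

```typescript
// Hypothetical extraction of the activity-trimming logic: cap the string at
// leftW * 2 characters, ending with "…" when truncated.
function trimActivity(activity: string, leftW: number): string {
  const maxActivityLen = leftW * 2;
  return activity.length > maxActivityLen
    ? `${activity.slice(0, maxActivityLen - 1)}…`
    : activity;
}
```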
@@ -1,59 +1,82 @@
|
|||||||
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
|
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
|
||||||
|
import {
|
||||||
|
extensionCommandSpecs,
|
||||||
|
formatSlashUsage,
|
||||||
|
getExtensionCommandSpec,
|
||||||
|
livePackageCommandGroups,
|
||||||
|
readPromptSpecs,
|
||||||
|
} from "../../metadata/commands.mjs";
|
||||||
|
import { APP_ROOT } from "./shared.js";
|
||||||
|
|
||||||
type HelpCommand = { usage: string; description: string };
|
type HelpCommand = { usage: string; description: string };
|
||||||
type HelpSection = { title: string; commands: HelpCommand[] };
|
type HelpSection = { title: string; commands: HelpCommand[] };
|
||||||
|
|
||||||
function buildHelpSections(): HelpSection[] {
|
function buildHelpSections(pi: ExtensionAPI): HelpSection[] {
|
||||||
|
const liveCommands = new Map(pi.getCommands().map((command) => [command.name, command]));
|
||||||
|
const promptSpecs = readPromptSpecs(APP_ROOT);
|
||||||
|
const sections = new Map<string, HelpCommand[]>();
|
||||||
|
|
||||||
|
for (const command of promptSpecs.filter((entry) => entry.section !== "Internal")) {
|
||||||
|
const live = liveCommands.get(command.name);
|
||||||
|
if (!live) continue;
|
||||||
|
const items = sections.get(command.section) ?? [];
|
||||||
|
items.push({
|
||||||
|
usage: formatSlashUsage(command),
|
||||||
|
description: live.description ?? command.description,
|
||||||
|
});
|
||||||
|
sections.set(command.section, items);
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const command of extensionCommandSpecs.filter((entry) => entry.publicDocs)) {
|
||||||
|
const live = liveCommands.get(command.name);
|
||||||
|
if (!live) continue;
|
||||||
|
const items = sections.get(command.section) ?? [];
|
||||||
|
items.push({
|
||||||
|
usage: formatSlashUsage(command),
|
||||||
|
description: live.description ?? command.description,
|
||||||
|
});
|
||||||
|
sections.set(command.section, items);
|
||||||
|
}
|
||||||
|
|
||||||
|
const ownedNames = new Set([
|
||||||
|
...promptSpecs.filter((entry) => entry.section !== "Internal").map((entry) => entry.name),
|
||||||
|
...extensionCommandSpecs.filter((entry) => entry.publicDocs).map((entry) => entry.name),
|
||||||
|
]);
|
||||||
|
|
||||||
|
for (const group of livePackageCommandGroups) {
|
||||||
|
const commands: HelpCommand[] = [];
|
||||||
|
for (const spec of group.commands) {
|
||||||
|
const command = liveCommands.get(spec.name);
|
||||||
|
if (!command || ownedNames.has(command.name)) continue;
|
||||||
|
commands.push({
|
||||||
|
usage: spec.usage,
|
||||||
|
description: command.description ?? "",
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
if (commands.length > 0) {
|
||||||
|
sections.set(group.title, commands);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
   return [
-    {
-      title: "Research Workflows",
-      commands: [
-        { usage: "/deepresearch <topic>", description: "Source-heavy investigation with parallel researchers." },
-        { usage: "/lit <topic>", description: "Literature review using paper search." },
-        { usage: "/review <artifact>", description: "Simulated peer review with objections and revision plan." },
-        { usage: "/audit <item>", description: "Audit a paper against its public codebase." },
-        { usage: "/replicate <paper>", description: "Replication workflow for a paper or claim." },
-        { usage: "/draft <topic>", description: "Paper-style draft from research findings." },
-        { usage: "/compare <topic>", description: "Compare sources with agreements and disagreements." },
-        { usage: "/autoresearch <target>", description: "Autonomous experiment optimization loop." },
-        { usage: "/watch <topic>", description: "Recurring research watch on a topic." },
-      ],
-    },
-    {
-      title: "Agents & Delegation",
-      commands: [
-        { usage: "/agents", description: "Open the agent and chain manager." },
-        { usage: "/run <agent> <task>", description: "Run a single subagent." },
-        { usage: "/chain agent1 -> agent2", description: "Run agents in sequence." },
-        { usage: "/parallel agent1 -> agent2", description: "Run agents in parallel." },
-      ],
-    },
-    {
-      title: "Project & Session",
-      commands: [
-        { usage: "/init", description: "Bootstrap AGENTS.md and session-log folders." },
-        { usage: "/log", description: "Write a session log to notes/." },
-        { usage: "/jobs", description: "Inspect active background work." },
-        { usage: "/search", description: "Search prior sessions." },
-        { usage: "/preview", description: "Preview a generated artifact." },
-      ],
-    },
-    {
-      title: "Setup",
-      commands: [
-        { usage: "/alpha-login", description: "Sign in to alphaXiv." },
-        { usage: "/alpha-status", description: "Check alphaXiv auth." },
-        { usage: "/alpha-logout", description: "Clear alphaXiv auth." },
-      ],
-    },
-  ];
+    "Research Workflows",
+    "Project & Session",
+    "Setup",
+    "Agents & Delegation",
+    "Bundled Package Commands",
+  ]
+    .map((title) => ({ title, commands: sections.get(title) ?? [] }))
+    .filter((section) => section.commands.length > 0);
 }
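The rewritten return above can be exercised in isolation; the titles and the single populated entry below are sample data, not the real Feynman registry:

```javascript
// Sketch of the new title -> sections lookup (sample data, not the real registry).
const sections = new Map([
  ["Research Workflows", [{ usage: "/lit <topic>", description: "Literature review using paper search." }]],
  ["Setup", []],
]);

const result = ["Research Workflows", "Project & Session", "Setup"]
  .map((title) => ({ title, commands: sections.get(title) ?? [] }))
  .filter((section) => section.commands.length > 0);

console.log(result.length); // 1
```

Empty and unknown titles drop out, so the help screen only lists sections that actually resolved commands at runtime.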
 export function registerHelpCommand(pi: ExtensionAPI): void {
   pi.registerCommand("help", {
-    description: "Show grouped Feynman commands and prefill the editor with a selected command.",
+    description:
+      getExtensionCommandSpec("help")?.description ??
+      "Show grouped Feynman commands and prefill the editor with a selected command.",
     handler: async (_args, ctx) => {
-      const sections = buildHelpSections();
+      const sections = buildHelpSections(pi);
       const items = sections.flatMap((section) => [
         `--- ${section.title} ---`,
         ...section.commands.map((cmd) => `${cmd.usage} — ${cmd.description}`),
@@ -4,13 +4,14 @@ import { dirname, resolve as resolvePath } from "node:path";
 import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
 import { Type } from "@sinclair/typebox";

+import { getExtensionCommandSpec } from "../../metadata/commands.mjs";
 import { renderHtmlPreview, renderPdfPreview, openWithDefaultApp, pathExists, buildProjectAgentsTemplate, buildSessionLogsReadme } from "./preview.js";
 import { formatToolText } from "./shared.js";
 import { searchSessionTranscripts } from "./session-search.js";

 export function registerInitCommand(pi: ExtensionAPI): void {
   pi.registerCommand("init", {
-    description: "Initialize AGENTS.md and session-log folders for a research project.",
+    description: getExtensionCommandSpec("init")?.description ?? "Initialize AGENTS.md and session-log folders for a research project.",
     handler: async (_args, ctx) => {
       const agentsPath = resolvePath(ctx.cwd, "AGENTS.md");
       const notesDir = resolvePath(ctx.cwd, "notes");
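The `getExtensionCommandSpec(...)?.description ?? "..."` pattern above keeps the literal string only as a fallback when no spec is found. A minimal sketch, with illustrative spec data:

```javascript
// Sketch of the description fallback: the spec wins, the literal is the fallback.
const extensionCommandSpecs = [
  { name: "init", description: "Bootstrap AGENTS.md and session-log folders." },
];

const getExtensionCommandSpec = (name) =>
  extensionCommandSpecs.find((command) => command.name === name);

const description =
  getExtensionCommandSpec("init")?.description ??
  "Initialize AGENTS.md and session-log folders for a research project.";

const missing =
  getExtensionCommandSpec("nope")?.description ?? "fallback text";
```

This keeps the metadata file as the single source of truth while the hardcoded string still covers a missing or renamed spec.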
metadata/commands.d.mts (new file, 46 lines)
@@ -0,0 +1,46 @@
+export type PromptSpec = {
+  name: string;
+  description: string;
+  args: string;
+  section: string;
+  topLevelCli: boolean;
+};
+
+export type ExtensionCommandSpec = {
+  name: string;
+  args: string;
+  section: string;
+  description: string;
+  publicDocs: boolean;
+};
+
+export type LivePackageCommandSpec = {
+  name: string;
+  usage: string;
+};
+
+export type LivePackageCommandGroup = {
+  title: string;
+  commands: LivePackageCommandSpec[];
+};
+
+export type CliCommand = {
+  usage: string;
+  description: string;
+};
+
+export type CliCommandSection = {
+  title: string;
+  commands: CliCommand[];
+};
+
+export declare function readPromptSpecs(appRoot: string): PromptSpec[];
+export declare const extensionCommandSpecs: ExtensionCommandSpec[];
+export declare const livePackageCommandGroups: LivePackageCommandGroup[];
+export declare const cliCommandSections: CliCommandSection[];
+export declare const legacyFlags: CliCommand[];
+export declare const topLevelCommandNames: string[];
+
+export declare function formatSlashUsage(command: { name: string; args?: string }): string;
+export declare function formatCliWorkflowUsage(command: { name: string; args?: string }): string;
+export declare function getExtensionCommandSpec(name: string): ExtensionCommandSpec | undefined;
metadata/commands.mjs (new file, 133 lines)
@@ -0,0 +1,133 @@
+import { readFileSync, readdirSync } from "node:fs";
+import { resolve } from "node:path";
+
+function parseFrontmatter(text) {
+  const match = text.match(/^---\n([\s\S]*?)\n---\n?/);
+  if (!match) return {};
+
+  const frontmatter = {};
+  for (const line of match[1].split("\n")) {
+    const separator = line.indexOf(":");
+    if (separator === -1) continue;
+    const key = line.slice(0, separator).trim();
+    const value = line.slice(separator + 1).trim();
+    if (!key) continue;
+    frontmatter[key] = value;
+  }
+  return frontmatter;
+}
+
+export function readPromptSpecs(appRoot) {
+  const dir = resolve(appRoot, "prompts");
+  return readdirSync(dir)
+    .filter((f) => f.endsWith(".md"))
+    .map((f) => {
+      const text = readFileSync(resolve(dir, f), "utf8");
+      const fm = parseFrontmatter(text);
+      return {
+        name: f.replace(/\.md$/, ""),
+        description: fm.description ?? "",
+        args: fm.args ?? "",
+        section: fm.section ?? "Research Workflows",
+        topLevelCli: fm.topLevelCli === "true",
+      };
+    });
+}
+
+export const extensionCommandSpecs = [
+  { name: "help", args: "", section: "Project & Session", description: "Show grouped Feynman commands and prefill the editor with a selected command.", publicDocs: true },
+  { name: "init", args: "", section: "Project & Session", description: "Bootstrap AGENTS.md and session-log folders for a research project.", publicDocs: true },
+  { name: "alpha-login", args: "", section: "Setup", description: "Sign in to alphaXiv from inside Feynman.", publicDocs: true },
+  { name: "alpha-status", args: "", section: "Setup", description: "Show alphaXiv authentication status.", publicDocs: true },
+  { name: "alpha-logout", args: "", section: "Setup", description: "Clear alphaXiv auth from inside Feynman.", publicDocs: true },
+];
+
+export const livePackageCommandGroups = [
+  {
+    title: "Agents & Delegation",
+    commands: [
+      { name: "agents", usage: "/agents" },
+      { name: "run", usage: "/run <agent> <task>" },
+      { name: "chain", usage: "/chain agent1 -> agent2" },
+      { name: "parallel", usage: "/parallel agent1 -> agent2" },
+    ],
+  },
+  {
+    title: "Bundled Package Commands",
+    commands: [
+      { name: "ps", usage: "/ps" },
+      { name: "schedule-prompt", usage: "/schedule-prompt" },
+      { name: "search", usage: "/search" },
+      { name: "preview", usage: "/preview" },
+      { name: "new", usage: "/new" },
+      { name: "quit", usage: "/quit" },
+      { name: "exit", usage: "/exit" },
+    ],
+  },
+];
+
+export const cliCommandSections = [
+  {
+    title: "Core",
+    commands: [
+      { usage: "feynman", description: "Launch the interactive REPL." },
+      { usage: "feynman chat [prompt]", description: "Start chat explicitly, optionally with an initial prompt." },
+      { usage: "feynman help", description: "Show CLI help." },
+      { usage: "feynman setup", description: "Run the guided setup wizard." },
+      { usage: "feynman doctor", description: "Diagnose config, auth, Pi runtime, and preview dependencies." },
+      { usage: "feynman status", description: "Show the current setup summary." },
+    ],
+  },
+  {
+    title: "Model Management",
+    commands: [
+      { usage: "feynman model list", description: "List available models in Pi auth storage." },
+      { usage: "feynman model login [id]", description: "Login to a Pi OAuth model provider." },
+      { usage: "feynman model logout [id]", description: "Logout from a Pi OAuth model provider." },
+      { usage: "feynman model set <provider/model>", description: "Set the default model." },
+    ],
+  },
+  {
+    title: "AlphaXiv",
+    commands: [
+      { usage: "feynman alpha login", description: "Sign in to alphaXiv." },
+      { usage: "feynman alpha logout", description: "Clear alphaXiv auth." },
+      { usage: "feynman alpha status", description: "Check alphaXiv auth status." },
+    ],
+  },
+  {
+    title: "Utilities",
+    commands: [
+      { usage: "feynman search status", description: "Show Pi web-access status and config path." },
+      { usage: "feynman update [package]", description: "Update installed packages, or a specific package." },
+    ],
+  },
+];
+
+export const legacyFlags = [
+  { usage: '--prompt "<text>"', description: "Run one prompt and exit." },
+  { usage: "--alpha-login", description: "Sign in to alphaXiv and exit." },
+  { usage: "--alpha-logout", description: "Clear alphaXiv auth and exit." },
+  { usage: "--alpha-status", description: "Show alphaXiv auth status and exit." },
+  { usage: "--model <provider:model>", description: "Force a specific model." },
+  { usage: "--thinking <level>", description: "Set thinking level: off | minimal | low | medium | high | xhigh." },
+  { usage: "--cwd <path>", description: "Set the working directory for tools." },
+  { usage: "--session-dir <path>", description: "Set the session storage directory." },
+  { usage: "--new-session", description: "Start a new persisted session." },
+  { usage: "--doctor", description: "Alias for `feynman doctor`." },
+  { usage: "--setup-preview", description: "Alias for `feynman setup preview`." },
+];
+
+export const topLevelCommandNames = ["alpha", "chat", "doctor", "help", "model", "search", "setup", "status", "update"];
+
+export function formatSlashUsage(command) {
+  return `/${command.name}${command.args ? ` ${command.args}` : ""}`;
+}
+
+export function formatCliWorkflowUsage(command) {
+  return `feynman ${command.name}${command.args ? ` ${command.args}` : ""}`;
+}
+
+export function getExtensionCommandSpec(name) {
+  return extensionCommandSpecs.find((command) => command.name === name);
+}
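For reference, the frontmatter parser above reduces to one regex capture plus line splitting. This standalone sketch runs it on a sample prompt header (the sample contents are illustrative):

```javascript
// Run the frontmatter parser on a sample prompt file header.
const sample = `---
description: Run a literature review on a topic.
args: <topic>
section: Research Workflows
topLevelCli: true
---
Investigate the following topic: $@
`;

function parseFrontmatter(text) {
  const match = text.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!match) return {};
  const frontmatter = {};
  for (const line of match[1].split("\n")) {
    const separator = line.indexOf(":");
    if (separator === -1) continue;
    const key = line.slice(0, separator).trim();
    const value = line.slice(separator + 1).trim();
    if (!key) continue;
    frontmatter[key] = value;
  }
  return frontmatter;
}

const fm = parseFrontmatter(sample);
console.log(fm.args); // "<topic>"
```

Note that values stay strings, which is why `readPromptSpecs` compares `fm.topLevelCli === "true"` rather than treating the field as a boolean.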
package.json
@@ -9,13 +9,16 @@
   "files": [
     "bin/",
     "dist/",
-    ".pi/agents/",
-    ".pi/settings.json",
-    ".pi/SYSTEM.md",
-    ".pi/themes/",
+    "metadata/",
+    ".feynman/agents/",
+    ".feynman/settings.json",
+    ".feynman/SYSTEM.md",
+    ".feynman/themes/",
     "extensions/",
     "prompts/",
     "scripts/",
+    "skills/",
+    "AGENTS.md",
     "README.md",
     ".env.example"
   ],
@@ -1,10 +1,13 @@
 ---
 description: Compare a paper's claims against its public codebase and identify mismatches, omissions, and reproducibility risks.
+args: <item>
+section: Research Workflows
+topLevelCli: true
 ---
 Audit the paper and codebase for: $@

 Requirements:
-- Use the `researcher` subagent for evidence gathering and the `citation` subagent to verify sources and add inline citations when the audit is non-trivial.
+- Use the `researcher` subagent for evidence gathering and the `verifier` subagent to verify sources and add inline citations when the audit is non-trivial.
 - Compare claimed methods, defaults, metrics, and data handling against the actual code.
 - Call out missing code, mismatches, ambiguous defaults, and reproduction risks.
 - Save exactly one audit artifact to `outputs/` as markdown.
@@ -1,5 +1,8 @@
 ---
 description: Autonomous experiment loop — try ideas, measure results, keep what works, discard what doesn't, repeat.
+args: <idea>
+section: Research Workflows
+topLevelCli: true
 ---
 Start an autoresearch optimization loop for: $@

@@ -1,10 +1,13 @@
 ---
 description: Compare multiple sources on a topic and produce a source-grounded matrix of agreements, disagreements, and confidence.
+args: <topic>
+section: Research Workflows
+topLevelCli: true
 ---
 Compare sources for: $@

 Requirements:
-- Use the `researcher` subagent to gather source material when the comparison set is broad, and the `citation` subagent to verify sources and add inline citations to the final matrix.
+- Use the `researcher` subagent to gather source material when the comparison set is broad, and the `verifier` subagent to verify sources and add inline citations to the final matrix.
 - Build a comparison matrix covering: source, key claim, evidence type, caveats, confidence.
 - Distinguish agreement, disagreement, and uncertainty clearly.
 - Save exactly one comparison to `outputs/` as markdown.
@@ -1,9 +1,12 @@
 ---
 description: Run a thorough, source-heavy investigation on a topic and produce a durable research brief with inline citations.
+args: <topic>
+section: Research Workflows
+topLevelCli: true
 ---
 Run a deep research workflow for: $@

-You are the Lead Researcher. You plan, delegate, evaluate, loop, write, and cite. Internal orchestration is invisible to the user unless they ask.
+You are the Lead Researcher. You plan, delegate, evaluate, verify, write, and cite. Internal orchestration is invisible to the user unless they ask.

 ## 1. Plan

@@ -12,8 +15,30 @@ Analyze the research question using extended thinking. Develop a research strate
 - Evidence types needed (papers, web, code, data, docs)
 - Sub-questions disjoint enough to parallelize
 - Source types and time periods that matter
+- Acceptance criteria: what evidence would make the answer "sufficient"

-Save the plan immediately with `memory_remember` (type: `fact`, key: `deepresearch.plan`). Context windows get truncated on long runs — the plan must survive.
+Write the plan to `outputs/.plans/deepresearch-plan.md` as a self-contained artifact:
+
+```markdown
+# Research Plan: [topic]
+
+## Questions
+1. ...
+
+## Strategy
+- Researcher allocations and dimensions
+- Expected rounds
+
+## Acceptance Criteria
+- [ ] All key questions answered with ≥2 independent sources
+- [ ] Contradictions identified and addressed
+- [ ] No single-source claims on critical findings
+
+## Decision Log
+(Updated as the workflow progresses)
+```
+
+Also save the plan with `memory_remember` (type: `fact`, key: `deepresearch.plan`) so it survives context truncation.

 ## 2. Scale decision

@@ -57,7 +82,9 @@ After researchers return, read their output files and critically assess:
 - Are there contradictions needing resolution?
 - Is any key angle missing entirely?

-If gaps are significant, spawn another targeted batch of researchers. No fixed cap on rounds — iterate until evidence is sufficient or sources are exhausted. Update the stored plan with `memory_remember` as it evolves.
+If gaps are significant, spawn another targeted batch of researchers. No fixed cap on rounds — iterate until evidence is sufficient or sources are exhausted.
+
+Update the plan artifact (`outputs/.plans/deepresearch-plan.md`) decision log after each round.

 Most topics need 1-2 rounds. Stop when additional rounds would not materially change conclusions.

@@ -84,22 +111,51 @@ Save this draft to a temp file (e.g., `draft.md` in the chain artifacts dir or a

 ## 6. Cite

-Spawn the `citation` agent to post-process YOUR draft. The citation agent adds inline citations, verifies every source URL, and produces the final output:
+Spawn the `verifier` agent to post-process YOUR draft. The verifier agent adds inline citations, verifies every source URL, and produces the final output:

 ```
-{ agent: "citation", task: "Add inline citations to draft.md using the research files as source material. Verify every URL.", output: "brief.md" }
+{ agent: "verifier", task: "Add inline citations to draft.md using the research files as source material. Verify every URL.", output: "brief.md" }
 ```

-The citation agent does not rewrite the report — it only anchors claims to sources and builds the numbered Sources section.
+The verifier agent does not rewrite the report — it only anchors claims to sources and builds the numbered Sources section.

-## 7. Deliver
+## 7. Verify

-Copy the final cited output to the appropriate folder:
+Spawn the `reviewer` agent against the cited draft. The reviewer checks for:
+- Unsupported claims that slipped past citation
+- Logical gaps or contradictions between sections
+- Single-source claims on critical findings
+- Overstated confidence relative to evidence quality
+
+```
+{ agent: "reviewer", task: "Verify brief.md — flag any claims that lack sufficient source backing, identify logical gaps, and check that confidence levels match evidence strength. This is a verification pass, not a peer review.", output: "verification.md" }
+```
+
+If the reviewer flags FATAL issues, fix them in the brief before delivering. MAJOR issues get noted in the Open Questions section. MINOR issues are accepted.
+
+## 8. Deliver
+
+Copy the final cited and verified output to the appropriate folder:
 - Paper-style drafts → `papers/`
 - Everything else → `outputs/`

 Use a descriptive filename based on the topic.
+
+Write a provenance record alongside the main artifact as `<filename>.provenance.md`:
+
+```markdown
+# Provenance: [topic]
+
+- **Date:** [date]
+- **Rounds:** [number of researcher rounds]
+- **Sources consulted:** [total unique sources across all research files]
+- **Sources accepted:** [sources that survived citation verification]
+- **Sources rejected:** [dead links, unverifiable, or removed]
+- **Verification:** [PASS / PASS WITH NOTES — summary of reviewer findings]
+- **Plan:** outputs/.plans/deepresearch-plan.md
+- **Research files:** [list of intermediate research-*.md files]
+```

 ## Background execution

 If the user wants unattended execution or the sweep will clearly take a while:
prompts/delegate.md (new file, 21 lines)
@@ -0,0 +1,21 @@
+---
+description: Delegate a research task to a remote Agent Computer machine for cloud execution.
+args: <task>
+section: Internal
+---
+Delegate the following task to a remote Agent Computer machine: $@
+
+## Workflow
+
+1. **Check CLI** — Verify `computer` or `aicomputer` is installed and authenticated. If not, install with `npm install -g aicomputer` and run `computer login`.
+2. **Pick a machine** — Run `computer ls --json` and choose an appropriate machine. If none are running, tell the user to create one with `computer create`.
+3. **Pick an agent** — Run `computer agent agents <machine> --json` and choose an installed agent with credentials (prefer Claude).
+4. **Create a session** — Use `computer agent sessions new <machine> --agent claude --name research --json`.
+5. **Send the task** — Translate the user's research task into a self-contained prompt and send it via `computer agent prompt`. The prompt must include:
+   - The full research objective
+   - Where to write outputs (default: `/workspace/outputs/`)
+   - What artifact to produce when done (summary file)
+   - Any tools or data sources to use
+6. **Monitor** — Use `computer agent watch <machine> --session <session_id>` to stream progress. Report status to the user at meaningful milestones.
+7. **Retrieve results** — When the remote agent finishes, pull the summary back with `computer agent prompt <machine> "cat /workspace/outputs/summary.md" --session <session_id>`. Present results to the user.
+8. **Clean up** — Close the session with `computer agent close <machine> --session <session_id>` unless the user wants to continue.
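The CLI calls named in steps 2-8 can be assembled ahead of time as plain command strings; the machine name and session id below are placeholders, and the flags are exactly the ones the prompt assumes rather than a verified CLI surface:

```javascript
// Dry-run sketch: build the Agent Computer command sequence without executing it.
const machine = "research-box"; // placeholder machine name
const sessionId = "sess-1"; // placeholder session id

const steps = [
  `computer ls --json`,
  `computer agent agents ${machine} --json`,
  `computer agent sessions new ${machine} --agent claude --name research --json`,
  `computer agent watch ${machine} --session ${sessionId}`,
  `computer agent close ${machine} --session ${sessionId}`,
];

console.log(steps.length); // 5
```

In practice each string would be passed to a shell runner, with the `--json` outputs parsed between steps to pick the machine, agent, and session id.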
@@ -1,10 +1,13 @@
 ---
 description: Turn research findings into a polished paper-style draft with equations, sections, and explicit claims.
+args: <topic>
+section: Research Workflows
+topLevelCli: true
 ---
 Write a paper-style draft for: $@

 Requirements:
-- Use the `writer` subagent when the draft should be produced from already-collected notes, then use the `citation` subagent to add inline citations and verify sources.
+- Use the `writer` subagent when the draft should be produced from already-collected notes, then use the `verifier` subagent to add inline citations and verify sources.
 - Include at minimum: title, abstract, problem statement, related work, method or synthesis, evidence or experiments, limitations, conclusion.
 - Use clean Markdown with LaTeX where equations materially help.
 - Save exactly one draft to `papers/` as markdown.
@@ -1,5 +1,7 @@
 ---
 description: Inspect active background research work, including running processes and scheduled follow-ups.
+section: Project & Session
+topLevelCli: true
 ---
 Inspect active background work for this project.

@@ -1,11 +1,15 @@
 ---
 description: Run a literature review on a topic using paper search and primary-source synthesis.
+args: <topic>
+section: Research Workflows
+topLevelCli: true
 ---
 Investigate the following topic as a literature review: $@

-Requirements:
-- Use the `researcher` subagent when the sweep is wide enough to benefit from delegated paper triage before synthesis.
-- Separate consensus, disagreements, and open questions.
-- When useful, propose concrete next experiments or follow-up reading.
-- Save exactly one literature review to `outputs/` as markdown.
-- End with a `Sources` section containing direct URLs for every source used.
+## Workflow
+
+1. **Gather** — Use the `researcher` subagent when the sweep is wide enough to benefit from delegated paper triage before synthesis. For narrow topics, search directly.
+2. **Synthesize** — Separate consensus, disagreements, and open questions. When useful, propose concrete next experiments or follow-up reading.
+3. **Cite** — Spawn the `verifier` agent to add inline citations and verify every source URL in the draft.
+4. **Verify** — Spawn the `reviewer` agent to check the cited draft for unsupported claims, logical gaps, and single-source critical findings. Fix FATAL issues before delivering. Note MAJOR issues in Open Questions.
+5. **Deliver** — Save exactly one literature review to `outputs/` as markdown. Write a provenance record alongside it as `<filename>.provenance.md` listing: date, sources consulted vs. accepted vs. rejected, verification status, and intermediate research files used.
@@ -1,5 +1,7 @@
 ---
 description: Write a durable session log with completed work, findings, open questions, and next steps.
+section: Project & Session
+topLevelCli: true
 ---
 Write a session log for the current research work.

@@ -1,12 +1,21 @@
 ---
 description: Plan or execute a replication workflow for a paper, claim, or benchmark.
+args: <paper>
+section: Research Workflows
+topLevelCli: true
 ---
 Design a replication plan for: $@

-Requirements:
-- Use the `researcher` subagent to extract implementation details from the target paper and any linked code.
-- Determine what code, datasets, metrics, and environment are needed.
-- If enough information is available locally, implement and run the replication steps.
-- Save notes, scripts, and results to disk in a reproducible layout.
-- Be explicit about what is verified, what is inferred, and what is still missing.
-- End with a `Sources` section containing paper and repository URLs.
+## Workflow
+
+1. **Extract** — Use the `researcher` subagent to pull implementation details from the target paper and any linked code.
+2. **Plan** — Determine what code, datasets, metrics, and environment are needed. Be explicit about what is verified, what is inferred, and what is still missing.
+3. **Environment** — Before running anything, ask the user where to execute:
+   - **Local** — run in the current working directory
+   - **Virtual environment** — create an isolated venv/conda env first
+   - **Cloud** — delegate to a remote Agent Computer machine via `/delegate`
+   - **Plan only** — produce the replication plan without executing
+4. **Execute** — If the user chose an execution environment, implement and run the replication steps there. Save notes, scripts, and results to disk in a reproducible layout.
+5. **Report** — End with a `Sources` section containing paper and repository URLs.
+
+Do not install packages, run training, or execute experiments without confirming the execution environment first.
@@ -1,5 +1,8 @@
|
|||||||
---
|
---
|
||||||
description: Simulate an AI research peer review with likely objections, severity, and a concrete revision plan.
|
description: Simulate an AI research peer review with likely objections, severity, and a concrete revision plan.
|
||||||
|
args: <artifact>
|
||||||
|
section: Research Workflows
|
||||||
|
topLevelCli: true
|
||||||
---
|
---
|
||||||
Review this AI research artifact: $@
|
Review this AI research artifact: $@
|
||||||
|
|
||||||

@@ -1,5 +1,8 @@
 ---
 description: Set up a recurring or deferred research watch on a topic, company, paper area, or product surface.
+args: <topic>
+section: Research Workflows
+topLevelCli: true
 ---
 Create a research watch for: $@
 

@@ -13,7 +13,7 @@ const interactiveModePath = resolve(piPackageRoot, "dist", "modes", "interactive
 const interactiveThemePath = resolve(piPackageRoot, "dist", "modes", "interactive", "theme", "theme.js");
 const piTuiRoot = resolve(appRoot, "node_modules", "@mariozechner", "pi-tui");
 const editorPath = resolve(piTuiRoot, "dist", "components", "editor.js");
-const workspaceRoot = resolve(appRoot, ".pi", "npm", "node_modules");
+const workspaceRoot = resolve(appRoot, ".feynman", "npm", "node_modules");
 const webAccessPath = resolve(workspaceRoot, "pi-web-access", "index.ts");
 const sessionSearchIndexerPath = resolve(
 	workspaceRoot,
@@ -23,8 +23,8 @@ const sessionSearchIndexerPath = resolve(
 	"indexer.ts",
 );
 const piMemoryPath = resolve(workspaceRoot, "@samfp", "pi-memory", "src", "index.ts");
-const settingsPath = resolve(appRoot, ".pi", "settings.json");
-const workspaceDir = resolve(appRoot, ".pi", "npm");
+const settingsPath = resolve(appRoot, ".feynman", "settings.json");
+const workspaceDir = resolve(appRoot, ".feynman", "npm");
 const workspacePackageJsonPath = resolve(workspaceDir, "package.json");
 
 function ensurePackageWorkspace() {
@@ -69,7 +69,7 @@ function ensurePackageWorkspace() {
 	});
 
 	if (install.status !== 0) {
-		console.warn("[feynman] warning: failed to preinstall default Pi packages into .pi/npm");
+		console.warn("[feynman] warning: failed to preinstall default Pi packages into .feynman/npm");
 	}
 }
 

skills/agentcomputer/SKILL.md (new file, 108 lines)
@@ -0,0 +1,108 @@
+---
+name: agentcomputer
+description: Delegate research tasks to remote Agent Computer machines for cloud execution. Manages machine discovery, remote agent sessions, task delegation, progress monitoring, result retrieval, and ACP bridging via the aicomputer CLI.
+allowed-tools: Bash(npm:*), Bash(npx aicomputer@latest:*), Bash(aicomputer:*), Bash(computer:*)
+---
+
+# Agent Computer
+
+Use Agent Computer to run Feynman research workflows on remote cloud machines when local compute is insufficient or when tasks should run unattended.
+
+## When to use
+
+- A research task needs GPU, large memory, or long-running compute
+- `/autoresearch` or `/deepresearch` should run unattended in the cloud
+- The user explicitly asks to delegate work to a remote machine
+- An experiment loop would take hours and should not block the local session
+
+## Prerequisites
+
+The `aicomputer` CLI must be installed and authenticated:
+
+```bash
+if command -v computer >/dev/null 2>&1; then
+  COMPUTER=computer
+elif command -v aicomputer >/dev/null 2>&1; then
+  COMPUTER=aicomputer
+else
+  npm install -g aicomputer
+  COMPUTER=computer
+fi
+$COMPUTER whoami || $COMPUTER login
+```
+
+## Fleet control
+
+### Discover machines and agents
+
+```bash
+$COMPUTER ls --json
+$COMPUTER agent agents <machine> --json
+```
+
+### Sessions
+
+Create, reuse, and manage named sessions on a machine:
+
+```bash
+$COMPUTER agent sessions new <machine> --agent claude --name research --json
+$COMPUTER agent sessions list <machine> --json
+$COMPUTER agent status <machine> --session <session_id> --json
+```
+
+### Prompting and monitoring
+
+```bash
+$COMPUTER agent prompt <machine> "<task>" --agent claude --name research
+$COMPUTER agent watch <machine> --session <session_id>
+```
+
+### Stopping and cleanup
+
+```bash
+$COMPUTER agent cancel <machine> --session <session_id> --json
+$COMPUTER agent interrupt <machine> --session <session_id> --json
+$COMPUTER agent close <machine> --session <session_id>
+```
+
+## Research delegation workflow
+
+1. Pick a machine: `$COMPUTER ls --json`
+2. Create a session: `$COMPUTER agent sessions new <machine> --agent claude --name research --json`
+3. Send a self-contained research prompt:
+
+```bash
+$COMPUTER agent prompt <machine> \
+  "Run a deep research workflow on <topic>. Write all outputs to /workspace/outputs/. When done, write a summary to /workspace/outputs/summary.md." \
+  --agent claude --name research
+```
+
+4. Monitor: `$COMPUTER agent watch <machine> --session <session_id>`
+5. Retrieve: `$COMPUTER agent prompt <machine> "cat /workspace/outputs/summary.md" --session <session_id>`
+6. Clean up: `$COMPUTER agent close <machine> --session <session_id>`
+
+## ACP bridge
+
+Expose a remote machine agent as a local ACP-compatible stdio process:
+
+```bash
+$COMPUTER acp serve <machine> --agent claude --name research
+```
+
+This lets local ACP clients (including Feynman's subagents) talk to a remote agent as if it were local. Keep the bridge process running; reconnect by restarting the command with the same session name.
+
+## Session naming
+
+Use short stable names that match the task:
+
+- `research` — general research delegation
+- `experiment` — autoresearch loops
+- `review` — verification passes
+- `literature` — literature sweeps
+
+Reuse the same name when continuing the same line of work.
+
+## References
+
+- [CLI cheatsheet](references/cli-cheatsheet.md) — full command reference
+- [ACP flow](references/acp-flow.md) — protocol details for the ACP bridge
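The self-contained prompt in step 3 of the delegation workflow above can be assembled programmatically. This is a sketch only: the topic is a placeholder and nothing here contacts a machine.

```shell
# Assemble the step-3 delegation prompt for a given topic.
# The result is what would be passed as the quoted argument to
# `$COMPUTER agent prompt`; this function only builds the string.
make_research_prompt() {
  topic="$1"
  printf 'Run a deep research workflow on %s. Write all outputs to /workspace/outputs/. When done, write a summary to /workspace/outputs/summary.md.' "$topic"
}

make_research_prompt "sparse attention"
```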

skills/agentcomputer/references/acp-flow.md (new file, 23 lines)
@@ -0,0 +1,23 @@
+# ACP Flow
+
+The `computer acp serve` bridge makes a remote machine agent look like a local ACP server over stdio.
+
+## Basic shape
+
+1. The local client starts `computer acp serve <machine> --agent <agent> --name <session>`.
+2. The bridge handles ACP initialization on stdin/stdout.
+3. The bridge maps ACP session operations onto Agent Computer session APIs.
+4. Remote session updates are streamed back as ACP `session/update` notifications.
+
+## Good commands
+
+```bash
+computer acp serve my-box --agent claude --name research
+computer acp serve gpu-worker --agent claude --name experiment
+```
+
+## Recommended client behavior
+
+- Reuse a stable session name when reconnecting.
+- Treat the bridge as the single local command for remote-agent interaction.
+- Use the normal `computer agent ...` commands outside ACP when you need manual inspection or cleanup.

skills/agentcomputer/references/cli-cheatsheet.md (new file, 68 lines)
@@ -0,0 +1,68 @@
+# CLI Cheatsheet
+
+## Authentication
+
+```bash
+computer whoami
+computer login
+computer claude-login   # install Claude credentials on a machine
+computer codex-login    # install Codex credentials on a machine
+```
+
+## Machine discovery
+
+```bash
+computer ls --json
+computer fleet status --json
+```
+
+## Agent discovery
+
+```bash
+computer agent agents <machine> --json
+```
+
+## Sessions
+
+```bash
+computer agent sessions list <machine> --json
+computer agent sessions new <machine> --agent claude --name research --json
+computer agent status <machine> --session <session_id> --json
+```
+
+## Prompting
+
+```bash
+computer agent prompt <machine> "run the experiment" --agent claude --name research
+computer agent prompt <machine> "continue" --session <session_id>
+```
+
+## Streaming and control
+
+```bash
+computer agent watch <machine> --session <session_id>
+computer agent cancel <machine> --session <session_id> --json
+computer agent interrupt <machine> --session <session_id> --json
+computer agent close <machine> --session <session_id>
+```
+
+## ACP bridge
+
+```bash
+computer acp serve <machine> --agent claude --name research
+```
+
+## Machine lifecycle
+
+```bash
+computer create my-box
+computer open my-box
+computer open my-box --terminal
+computer ssh my-box
+```
+
+## Good defaults
+
+- Prefer machine handles over machine ids when both are available.
+- Prefer `--name` for human-meaningful persistent sessions.
+- Prefer `--json` when another program or agent needs to read the result.

skills/autoresearch/SKILL.md (new file, 12 lines)
@@ -0,0 +1,12 @@
+---
+name: autoresearch
+description: Autonomous experiment loop that tries ideas, measures results, keeps what works, and discards what doesn't. Use when the user asks to optimize a metric, run an experiment loop, improve performance iteratively, or automate benchmarking.
+---
+
+# Autoresearch
+
+Run the `/autoresearch` workflow. Read the prompt template at `prompts/autoresearch.md` for the full procedure.
+
+Tools used: `init_experiment`, `run_experiment`, `log_experiment` (from pi-autoresearch)
+
+Session files: `autoresearch.md`, `autoresearch.sh`, `autoresearch.jsonl`

skills/deep-research/SKILL.md (new file, 12 lines)
@@ -0,0 +1,12 @@
+---
+name: deep-research
+description: Run a thorough, source-heavy investigation on any topic. Use when the user asks for deep research, a comprehensive analysis, an in-depth report, or a multi-source investigation. Produces a cited research brief with provenance tracking.
+---
+
+# Deep Research
+
+Run the `/deepresearch` workflow. Read the prompt template at `prompts/deepresearch.md` for the full procedure.
+
+Agents used: `researcher`, `verifier`, `reviewer`
+
+Output: cited brief in `outputs/` with `.provenance.md` sidecar.

skills/jobs/SKILL.md (new file, 10 lines)
@@ -0,0 +1,10 @@
+---
+name: jobs
+description: Inspect active background research work including running processes, scheduled follow-ups, and pending tasks. Use when the user asks what's running, checks on background work, or wants to see scheduled jobs.
+---
+
+# Jobs
+
+Run the `/jobs` workflow. Read the prompt template at `prompts/jobs.md` for the full procedure.
+
+Shows active `pi-processes`, scheduled `pi-schedule-prompt` entries, and running subagent tasks.

skills/literature-review/SKILL.md (new file, 12 lines)
@@ -0,0 +1,12 @@
+---
+name: literature-review
+description: Run a literature review using paper search and primary-source synthesis. Use when the user asks for a lit review, paper survey, state of the art, or academic landscape summary on a research topic.
+---
+
+# Literature Review
+
+Run the `/lit` workflow. Read the prompt template at `prompts/lit.md` for the full procedure.
+
+Agents used: `researcher`, `verifier`, `reviewer`
+
+Output: literature review in `outputs/` with `.provenance.md` sidecar.

skills/paper-code-audit/SKILL.md (new file, 12 lines)
@@ -0,0 +1,12 @@
+---
+name: paper-code-audit
+description: Compare a paper's claims against its public codebase. Use when the user asks to audit a paper, check code-claim consistency, verify reproducibility of a specific paper, or find mismatches between a paper and its implementation.
+---
+
+# Paper-Code Audit
+
+Run the `/audit` workflow. Read the prompt template at `prompts/audit.md` for the full procedure.
+
+Agents used: `researcher`, `verifier`
+
+Output: audit report in `outputs/`.

skills/paper-writing/SKILL.md (new file, 12 lines)
@@ -0,0 +1,12 @@
+---
+name: paper-writing
+description: Turn research findings into a polished paper-style draft with sections, equations, and citations. Use when the user asks to write a paper, draft a report, write up findings, or produce a technical document from collected research.
+---
+
+# Paper Writing
+
+Run the `/draft` workflow. Read the prompt template at `prompts/draft.md` for the full procedure.
+
+Agents used: `writer`, `verifier`
+
+Output: paper draft in `papers/`.

skills/peer-review/SKILL.md (new file, 12 lines)
@@ -0,0 +1,12 @@
+---
+name: peer-review
+description: Simulate a tough but constructive peer review of an AI research artifact. Use when the user asks for a review, critique, feedback on a paper or draft, or wants to identify weaknesses before submission.
+---
+
+# Peer Review
+
+Run the `/review` workflow. Read the prompt template at `prompts/review.md` for the full procedure.
+
+Agents used: `researcher`, `reviewer`
+
+Output: structured review in `outputs/`.

skills/replication/SKILL.md (new file, 14 lines)
@@ -0,0 +1,14 @@
+---
+name: replication
+description: Plan or execute a replication of a paper, claim, or benchmark. Use when the user asks to replicate results, reproduce an experiment, verify a claim empirically, or build a replication package.
+---
+
+# Replication
+
+Run the `/replicate` workflow. Read the prompt template at `prompts/replicate.md` for the full procedure.
+
+Agents used: `researcher`
+
+Asks the user to choose an execution environment (local, virtual env, cloud, or plan-only) before running any code.
+
+Output: replication plan, scripts, and results saved to disk.

skills/session-log/SKILL.md (new file, 10 lines)
@@ -0,0 +1,10 @@
+---
+name: session-log
+description: Write a durable session log capturing completed work, findings, open questions, and next steps. Use when the user asks to log progress, save session notes, write up what was done, or create a research diary entry.
+---
+
+# Session Log
+
+Run the `/log` workflow. Read the prompt template at `prompts/log.md` for the full procedure.
+
+Output: session log in `notes/session-logs/`.

skills/source-comparison/SKILL.md (new file, 12 lines)
@@ -0,0 +1,12 @@
+---
+name: source-comparison
+description: Compare multiple sources on a topic and produce a grounded comparison matrix. Use when the user asks to compare papers, tools, approaches, frameworks, or claims across multiple sources.
+---
+
+# Source Comparison
+
+Run the `/compare` workflow. Read the prompt template at `prompts/compare.md` for the full procedure.
+
+Agents used: `researcher`, `verifier`
+
+Output: comparison matrix in `outputs/`.

skills/watch/SKILL.md (new file, 12 lines)
@@ -0,0 +1,12 @@
+---
+name: watch
+description: Set up a recurring research watch on a topic, company, paper area, or product surface. Use when the user asks to monitor a field, track new papers, watch for updates, or set up alerts on a research area.
+---
+
+# Watch
+
+Run the `/watch` workflow. Read the prompt template at `prompts/watch.md` for the full procedure.
+
+Agents used: `researcher`
+
+Output: baseline survey in `outputs/`, recurring checks via `pi-schedule-prompt`.

@@ -128,8 +128,8 @@ export function syncBundledAssets(appRoot: string, agentDir: string): BootstrapS
 		skipped: [],
 	};
 
-	syncManagedFiles(resolve(appRoot, ".pi", "themes"), resolve(agentDir, "themes"), state, result);
-	syncManagedFiles(resolve(appRoot, ".pi", "agents"), resolve(agentDir, "agents"), state, result);
+	syncManagedFiles(resolve(appRoot, ".feynman", "themes"), resolve(agentDir, "themes"), state, result);
+	syncManagedFiles(resolve(appRoot, ".feynman", "agents"), resolve(agentDir, "agents"), state, result);
 
 	writeBootstrapState(statePath, state);
 	return result;

src/cli.ts (85 lines)
@@ -28,23 +28,27 @@ import { runDoctor, runStatus } from "./setup/doctor.js";
 import { setupPreviewDependencies } from "./setup/preview.js";
 import { runSetup } from "./setup/setup.js";
 import { printInfo, printPanel, printSection } from "./ui/terminal.js";
+import {
+	cliCommandSections,
+	formatCliWorkflowUsage,
+	legacyFlags,
+	readPromptSpecs,
+	topLevelCommandNames,
+} from "../metadata/commands.mjs";
+
-const TOP_LEVEL_COMMANDS = new Set(["alpha", "chat", "doctor", "help", "model", "search", "setup", "status", "update"]);
-const RESEARCH_WORKFLOW_COMMANDS = new Set([
-	"audit",
-	"autoresearch",
-	"compare",
-	"deepresearch",
-	"draft",
-	"jobs",
-	"lit",
-	"log",
-	"replicate",
-	"review",
-	"watch",
-]);
-
-function printHelp(): void {
+const TOP_LEVEL_COMMANDS = new Set(topLevelCommandNames);
+
+function printHelpLine(usage: string, description: string): void {
+	const width = 30;
+	const padding = Math.max(1, width - usage.length);
+	printInfo(`${usage}${" ".repeat(padding)}${description}`);
+}
+
+function printHelp(appRoot: string): void {
+	const workflowCommands = readPromptSpecs(appRoot).filter(
+		(command) => command.section === "Research Workflows" && command.topLevelCli,
+	);
+
 	printPanel("Feynman", [
 		"Research-first agent shell built on Pi.",
 		"Use `feynman setup` first if this is a new machine.",
@@ -58,39 +62,21 @@ function printHelp(): void {
 	printInfo("feynman search status");
 
 	printSection("Commands");
-	printInfo("feynman chat [prompt] Start chat explicitly, optionally with an initial prompt");
-	printInfo("feynman setup Run the guided setup");
-	printInfo("feynman doctor Diagnose config, auth, Pi runtime, and preview deps");
-	printInfo("feynman status Show the current setup summary");
-	printInfo("feynman model list Show available models in auth storage");
-	printInfo("feynman model login [id] Login to a Pi OAuth model provider");
-	printInfo("feynman model logout [id] Logout from a Pi OAuth model provider");
-	printInfo("feynman model set <spec> Set the default model");
-	printInfo("feynman update [package] Update installed packages (or a specific one)");
-	printInfo("feynman search status Show Pi web-access status and config path");
-	printInfo("feynman alpha login|logout|status");
+	for (const section of cliCommandSections) {
+		for (const command of section.commands) {
+			printHelpLine(command.usage, command.description);
+		}
+	}
 
 	printSection("Research Workflows");
-	printInfo("feynman deepresearch <topic> Start a thorough source-heavy investigation");
-	printInfo("feynman lit <topic> Start the literature-review workflow");
-	printInfo("feynman review <artifact> Start the peer-review workflow");
-	printInfo("feynman audit <item> Start the paper/code audit workflow");
-	printInfo("feynman replicate <target> Start the replication workflow");
-	printInfo("feynman draft <topic> Start the paper-style draft workflow");
-	printInfo("feynman compare <topic> Start the source-comparison workflow");
-	printInfo("feynman watch <topic> Start the recurring research watch workflow");
+	for (const command of workflowCommands) {
+		printHelpLine(formatCliWorkflowUsage(command), command.description);
+	}
 
 	printSection("Legacy Flags");
-	printInfo('--prompt "<text>" Run one prompt and exit');
-	printInfo("--alpha-login Sign in to alphaXiv and exit");
-	printInfo("--alpha-logout Clear alphaXiv auth and exit");
-	printInfo("--alpha-status Show alphaXiv auth status and exit");
-	printInfo("--model provider:model Force a specific model");
-	printInfo("--thinking level off | minimal | low | medium | high | xhigh");
-	printInfo("--cwd /path/to/workdir Working directory for tools");
-	printInfo("--session-dir /path Session storage directory");
-	printInfo("--doctor Alias for `feynman doctor`");
-	printInfo("--setup-preview Alias for `feynman setup preview`");
+	for (const flag of legacyFlags) {
+		printHelpLine(flag.usage, flag.description);
+	}
 
 	printSection("REPL");
 	printInfo("Inside the REPL, slash workflows come from the live prompt-template and extension command set.");
@@ -201,6 +187,7 @@ export function resolveInitialPrompt(
 	command: string | undefined,
 	rest: string[],
 	oneShotPrompt: string | undefined,
+	workflowCommands: Set<string>,
 ): string | undefined {
 	if (oneShotPrompt) {
 		return oneShotPrompt;
@@ -211,7 +198,7 @@ export function resolveInitialPrompt(
 	if (command === "chat") {
 		return rest.length > 0 ? rest.join(" ") : undefined;
 	}
-	if (RESEARCH_WORKFLOW_COMMANDS.has(command)) {
+	if (workflowCommands.has(command)) {
 		return [`/${command}`, ...rest].join(" ").trim();
 	}
 	if (!TOP_LEVEL_COMMANDS.has(command)) {
@@ -224,7 +211,7 @@ export async function main(): Promise<void> {
 	const here = dirname(fileURLToPath(import.meta.url));
 	const appRoot = resolve(here, "..");
 	const feynmanVersion = loadPackageVersion(appRoot).version;
-	const bundledSettingsPath = resolve(appRoot, ".pi", "settings.json");
+	const bundledSettingsPath = resolve(appRoot, ".feynman", "settings.json");
 	const feynmanHome = getFeynmanHome();
 	const feynmanAgentDir = getFeynmanAgentDir(feynmanHome);
 
@@ -251,7 +238,7 @@ export async function main(): Promise<void> {
 	});
 
 	if (values.help) {
-		printHelp();
+		printHelp(appRoot);
 		return;
 	}
 
@@ -297,7 +284,7 @@ export async function main(): Promise<void> {
 
 	const [command, ...rest] = positionals;
 	if (command === "help") {
-		printHelp();
+		printHelp(appRoot);
 		return;
 	}
 
@@ -374,6 +361,6 @@ export async function main(): Promise<void> {
 		thinkingLevel,
 		explicitModelSpec,
 		oneShotPrompt: values.prompt,
-		initialPrompt: resolveInitialPrompt(command, rest, values.prompt),
+		initialPrompt: resolveInitialPrompt(command, rest, values.prompt, new Set(readPromptSpecs(appRoot).filter((s) => s.topLevelCli).map((s) => s.name))),
 	});
 }
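The `printHelpLine` helper added in this diff aligns descriptions to a fixed usage column. A standalone sketch of the same logic, with a plain return value standing in for `printInfo`:

```typescript
// Sketch of printHelpLine's column alignment; width 30 matches the diff.
// Usage strings longer than the column still get at least one space.
function helpLine(usage: string, description: string): string {
  const width = 30;
  const padding = Math.max(1, width - usage.length);
  return `${usage}${" ".repeat(padding)}${description}`;
}

console.log(helpLine("feynman lit <topic>", "Start the literature-review workflow"));
```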

@@ -27,8 +27,8 @@ export function resolvePiPaths(appRoot: string) {
 		promisePolyfillPath: resolve(appRoot, "dist", "system", "promise-polyfill.js"),
 		researchToolsPath: resolve(appRoot, "extensions", "research-tools.ts"),
 		promptTemplatePath: resolve(appRoot, "prompts"),
-		systemPromptPath: resolve(appRoot, ".pi", "SYSTEM.md"),
-		piWorkspaceNodeModulesPath: resolve(appRoot, ".pi", "npm", "node_modules"),
+		systemPromptPath: resolve(appRoot, ".feynman", "SYSTEM.md"),
+		piWorkspaceNodeModulesPath: resolve(appRoot, ".feynman", "npm", "node_modules"),
 	};
 }
 

@@ -24,7 +24,7 @@ export type PiWebAccessStatus = {
 };
 
 export function getPiWebSearchConfigPath(home = process.env.HOME ?? homedir()): string {
-	return resolve(home, ".pi", "web-search.json");
+	return resolve(home, ".feynman", "web-search.json");
 }
 
 function normalizeProvider(value: unknown): PiWebSearchProvider | undefined {

@@ -8,10 +8,10 @@ import { syncBundledAssets } from "../src/bootstrap/sync.js";
 
 function createAppRoot(): string {
 	const appRoot = mkdtempSync(join(tmpdir(), "feynman-app-"));
-	mkdirSync(join(appRoot, ".pi", "themes"), { recursive: true });
-	mkdirSync(join(appRoot, ".pi", "agents"), { recursive: true });
-	writeFileSync(join(appRoot, ".pi", "themes", "feynman.json"), '{"theme":"v1"}\n', "utf8");
-	writeFileSync(join(appRoot, ".pi", "agents", "researcher.md"), "# v1\n", "utf8");
+	mkdirSync(join(appRoot, ".feynman", "themes"), { recursive: true });
+	mkdirSync(join(appRoot, ".feynman", "agents"), { recursive: true });
+	writeFileSync(join(appRoot, ".feynman", "themes", "feynman.json"), '{"theme":"v1"}\n', "utf8");
+	writeFileSync(join(appRoot, ".feynman", "agents", "researcher.md"), "# v1\n", "utf8");
 	return appRoot;
 }
 
@@ -38,8 +38,8 @@ test("syncBundledAssets preserves user-modified files and updates managed files"
 
 	syncBundledAssets(appRoot, agentDir);
 
-	writeFileSync(join(appRoot, ".pi", "themes", "feynman.json"), '{"theme":"v2"}\n', "utf8");
-	writeFileSync(join(appRoot, ".pi", "agents", "researcher.md"), "# v2\n", "utf8");
+	writeFileSync(join(appRoot, ".feynman", "themes", "feynman.json"), '{"theme":"v2"}\n', "utf8");
+	writeFileSync(join(appRoot, ".feynman", "agents", "researcher.md"), "# v2\n", "utf8");
 	writeFileSync(join(agentDir, "agents", "researcher.md"), "# user-custom\n", "utf8");
 
 	const result = syncBundledAssets(appRoot, agentDir);

@@ -58,10 +58,11 @@ test("buildModelStatusSnapshotFromRecords flags an invalid current model and sug
 });
 
 test("resolveInitialPrompt maps top-level research commands to Pi slash workflows", () => {
-	assert.equal(resolveInitialPrompt("lit", ["tool-using", "agents"], undefined), "/lit tool-using agents");
-	assert.equal(resolveInitialPrompt("watch", ["openai"], undefined), "/watch openai");
-	assert.equal(resolveInitialPrompt("jobs", [], undefined), "/jobs");
-	assert.equal(resolveInitialPrompt("chat", ["hello"], undefined), "hello");
-	assert.equal(resolveInitialPrompt("unknown", ["topic"], undefined), "unknown topic");
+	const workflows = new Set(["lit", "watch", "jobs", "deepresearch"]);
+	assert.equal(resolveInitialPrompt("lit", ["tool-using", "agents"], undefined, workflows), "/lit tool-using agents");
+	assert.equal(resolveInitialPrompt("watch", ["openai"], undefined, workflows), "/watch openai");
+	assert.equal(resolveInitialPrompt("jobs", [], undefined, workflows), "/jobs");
+	assert.equal(resolveInitialPrompt("chat", ["hello"], undefined, workflows), "hello");
+	assert.equal(resolveInitialPrompt("unknown", ["topic"], undefined, workflows), "unknown topic");
 });
 
|||||||
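The assertions in the hunk above fully pin down the behavior of the new `workflows` parameter. A minimal sketch consistent with those assertions (an illustrative reimplementation, not the project's actual source) is:

```javascript
// Illustrative sketch only — the real resolveInitialPrompt lives in the Feynman CLI.
// A command in the workflows set maps to a slash workflow; "chat" passes its
// args through verbatim; anything else is echoed back as plain text.
function resolveInitialPrompt(command, args, prompt, workflows) {
  if (prompt !== undefined) return prompt;
  if (workflows && workflows.has(command)) {
    return ["/" + command, ...args].join(" ");
  }
  if (command === "chat") return args.join(" ");
  return [command, ...args].join(" ");
}

console.log(
  resolveInitialPrompt("lit", ["tool-using", "agents"], undefined, new Set(["lit"]))
);
// → /lit tool-using agents
```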
@@ -21,7 +21,7 @@ test("loadPiWebAccessConfig returns empty config when Pi web config is missing",
 test("getPiWebAccessStatus reads Pi web-access config directly", () => {
 const root = mkdtempSync(join(tmpdir(), "feynman-pi-web-"));
 const configPath = getPiWebSearchConfigPath(root);
-mkdirSync(join(root, ".pi"), { recursive: true });
+mkdirSync(join(root, ".feynman"), { recursive: true });
 writeFileSync(
 configPath,
 JSON.stringify({
website/.astro/collections/docs.schema.json (new file, 33 lines)
{
  "$ref": "#/definitions/docs",
  "definitions": {
    "docs": {
      "type": "object",
      "properties": {
        "title": {
          "type": "string"
        },
        "description": {
          "type": "string"
        },
        "section": {
          "type": "string"
        },
        "order": {
          "type": "number"
        },
        "$schema": {
          "type": "string"
        }
      },
      "required": [
        "title",
        "description",
        "section",
        "order"
      ],
      "additionalProperties": false
    }
  },
  "$schema": "http://json-schema.org/draft-07/schema#"
}
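The generated schema above mirrors the zod collection schema in `website/src/content/config.ts`. A quick standalone way to sanity-check a doc's frontmatter against the same required fields (a hypothetical helper for illustration, not part of the repo — Astro enforces this itself at build time):

```javascript
// Hypothetical check mirroring docs.schema.json's required fields and
// the "order must be a number" constraint.
function validateDocFrontmatter(fm) {
  const required = ["title", "description", "section", "order"];
  const missing = required.filter((key) => !(key in fm));
  if (missing.length > 0) {
    throw new Error("missing frontmatter fields: " + missing.join(", "));
  }
  if (typeof fm.order !== "number") {
    throw new Error("order must be a number");
  }
  return true;
}

validateDocFrontmatter({
  title: "Researcher",
  description: "Gather primary evidence.",
  section: "Agents",
  order: 1,
}); // passes; omitting "order" would throw
```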
website/.astro/content-assets.mjs (new file, 1 line)
export default new Map();

website/.astro/content-modules.mjs (new file, 1 line)
export default new Map();
website/.astro/content.d.ts (new file, vendored, 209 lines)
declare module 'astro:content' {
  export interface RenderResult {
    Content: import('astro/runtime/server/index.js').AstroComponentFactory;
    headings: import('astro').MarkdownHeading[];
    remarkPluginFrontmatter: Record<string, any>;
  }
  interface Render {
    '.md': Promise<RenderResult>;
  }

  export interface RenderedContent {
    html: string;
    metadata?: {
      imagePaths: Array<string>;
      [key: string]: unknown;
    };
  }
}

declare module 'astro:content' {
  type Flatten<T> = T extends { [K: string]: infer U } ? U : never;

  export type CollectionKey = keyof AnyEntryMap;
  export type CollectionEntry<C extends CollectionKey> = Flatten<AnyEntryMap[C]>;

  export type ContentCollectionKey = keyof ContentEntryMap;
  export type DataCollectionKey = keyof DataEntryMap;

  type AllValuesOf<T> = T extends any ? T[keyof T] : never;
  type ValidContentEntrySlug<C extends keyof ContentEntryMap> = AllValuesOf<
    ContentEntryMap[C]
  >['slug'];

  export type ReferenceDataEntry<
    C extends CollectionKey,
    E extends keyof DataEntryMap[C] = string,
  > = {
    collection: C;
    id: E;
  };
  export type ReferenceContentEntry<
    C extends keyof ContentEntryMap,
    E extends ValidContentEntrySlug<C> | (string & {}) = string,
  > = {
    collection: C;
    slug: E;
  };
  export type ReferenceLiveEntry<C extends keyof LiveContentConfig['collections']> = {
    collection: C;
    id: string;
  };

  /** @deprecated Use `getEntry` instead. */
  export function getEntryBySlug<
    C extends keyof ContentEntryMap,
    E extends ValidContentEntrySlug<C> | (string & {}),
  >(
    collection: C,
    // Note that this has to accept a regular string too, for SSR
    entrySlug: E,
  ): E extends ValidContentEntrySlug<C>
    ? Promise<CollectionEntry<C>>
    : Promise<CollectionEntry<C> | undefined>;

  /** @deprecated Use `getEntry` instead. */
  export function getDataEntryById<C extends keyof DataEntryMap, E extends keyof DataEntryMap[C]>(
    collection: C,
    entryId: E,
  ): Promise<CollectionEntry<C>>;

  export function getCollection<C extends keyof AnyEntryMap, E extends CollectionEntry<C>>(
    collection: C,
    filter?: (entry: CollectionEntry<C>) => entry is E,
  ): Promise<E[]>;
  export function getCollection<C extends keyof AnyEntryMap>(
    collection: C,
    filter?: (entry: CollectionEntry<C>) => unknown,
  ): Promise<CollectionEntry<C>[]>;

  export function getLiveCollection<C extends keyof LiveContentConfig['collections']>(
    collection: C,
    filter?: LiveLoaderCollectionFilterType<C>,
  ): Promise<
    import('astro').LiveDataCollectionResult<LiveLoaderDataType<C>, LiveLoaderErrorType<C>>
  >;

  export function getEntry<
    C extends keyof ContentEntryMap,
    E extends ValidContentEntrySlug<C> | (string & {}),
  >(
    entry: ReferenceContentEntry<C, E>,
  ): E extends ValidContentEntrySlug<C>
    ? Promise<CollectionEntry<C>>
    : Promise<CollectionEntry<C> | undefined>;
  export function getEntry<
    C extends keyof DataEntryMap,
    E extends keyof DataEntryMap[C] | (string & {}),
  >(
    entry: ReferenceDataEntry<C, E>,
  ): E extends keyof DataEntryMap[C]
    ? Promise<DataEntryMap[C][E]>
    : Promise<CollectionEntry<C> | undefined>;
  export function getEntry<
    C extends keyof ContentEntryMap,
    E extends ValidContentEntrySlug<C> | (string & {}),
  >(
    collection: C,
    slug: E,
  ): E extends ValidContentEntrySlug<C>
    ? Promise<CollectionEntry<C>>
    : Promise<CollectionEntry<C> | undefined>;
  export function getEntry<
    C extends keyof DataEntryMap,
    E extends keyof DataEntryMap[C] | (string & {}),
  >(
    collection: C,
    id: E,
  ): E extends keyof DataEntryMap[C]
    ? string extends keyof DataEntryMap[C]
      ? Promise<DataEntryMap[C][E]> | undefined
      : Promise<DataEntryMap[C][E]>
    : Promise<CollectionEntry<C> | undefined>;
  export function getLiveEntry<C extends keyof LiveContentConfig['collections']>(
    collection: C,
    filter: string | LiveLoaderEntryFilterType<C>,
  ): Promise<import('astro').LiveDataEntryResult<LiveLoaderDataType<C>, LiveLoaderErrorType<C>>>;

  /** Resolve an array of entry references from the same collection */
  export function getEntries<C extends keyof ContentEntryMap>(
    entries: ReferenceContentEntry<C, ValidContentEntrySlug<C>>[],
  ): Promise<CollectionEntry<C>[]>;
  export function getEntries<C extends keyof DataEntryMap>(
    entries: ReferenceDataEntry<C, keyof DataEntryMap[C]>[],
  ): Promise<CollectionEntry<C>[]>;

  export function render<C extends keyof AnyEntryMap>(
    entry: AnyEntryMap[C][string],
  ): Promise<RenderResult>;

  export function reference<C extends keyof AnyEntryMap>(
    collection: C,
  ): import('astro/zod').ZodEffects<
    import('astro/zod').ZodString,
    C extends keyof ContentEntryMap
      ? ReferenceContentEntry<C, ValidContentEntrySlug<C>>
      : ReferenceDataEntry<C, keyof DataEntryMap[C]>
  >;
  // Allow generic `string` to avoid excessive type errors in the config
  // if `dev` is not running to update as you edit.
  // Invalid collection names will be caught at build time.
  export function reference<C extends string>(
    collection: C,
  ): import('astro/zod').ZodEffects<import('astro/zod').ZodString, never>;

  type ReturnTypeOrOriginal<T> = T extends (...args: any[]) => infer R ? R : T;
  type InferEntrySchema<C extends keyof AnyEntryMap> = import('astro/zod').infer<
    ReturnTypeOrOriginal<Required<ContentConfig['collections'][C]>['schema']>
  >;

  type ContentEntryMap = {

  };

  type DataEntryMap = {
    "docs": Record<string, {
      id: string;
      render(): Render[".md"];
      slug: string;
      body: string;
      collection: "docs";
      data: InferEntrySchema<"docs">;
      rendered?: RenderedContent;
      filePath?: string;
    }>;

  };

  type AnyEntryMap = ContentEntryMap & DataEntryMap;

  type ExtractLoaderTypes<T> = T extends import('astro/loaders').LiveLoader<
    infer TData,
    infer TEntryFilter,
    infer TCollectionFilter,
    infer TError
  >
    ? { data: TData; entryFilter: TEntryFilter; collectionFilter: TCollectionFilter; error: TError }
    : { data: never; entryFilter: never; collectionFilter: never; error: never };
  type ExtractDataType<T> = ExtractLoaderTypes<T>['data'];
  type ExtractEntryFilterType<T> = ExtractLoaderTypes<T>['entryFilter'];
  type ExtractCollectionFilterType<T> = ExtractLoaderTypes<T>['collectionFilter'];
  type ExtractErrorType<T> = ExtractLoaderTypes<T>['error'];

  type LiveLoaderDataType<C extends keyof LiveContentConfig['collections']> =
    LiveContentConfig['collections'][C]['schema'] extends undefined
      ? ExtractDataType<LiveContentConfig['collections'][C]['loader']>
      : import('astro/zod').infer<
          Exclude<LiveContentConfig['collections'][C]['schema'], undefined>
        >;
  type LiveLoaderEntryFilterType<C extends keyof LiveContentConfig['collections']> =
    ExtractEntryFilterType<LiveContentConfig['collections'][C]['loader']>;
  type LiveLoaderCollectionFilterType<C extends keyof LiveContentConfig['collections']> =
    ExtractCollectionFilterType<LiveContentConfig['collections'][C]['loader']>;
  type LiveLoaderErrorType<C extends keyof LiveContentConfig['collections']> = ExtractErrorType<
    LiveContentConfig['collections'][C]['loader']
  >;

  export type ContentConfig = typeof import("../src/content/config.js");
  export type LiveContentConfig = never;
}
website/.astro/data-store.json (new file; diff suppressed because one or more lines are too long)

website/.astro/settings.json (new file, 5 lines)
{
  "_variables": {
    "lastUpdateCheck": 1774305535217
  }
}

website/.astro/types.d.ts (new file, vendored, 2 lines)
/// <reference types="astro/client" />
/// <reference path="content.d.ts" />
website/astro.config.mjs (new file, 15 lines)
import { defineConfig } from 'astro/config';
import tailwind from '@astrojs/tailwind';

export default defineConfig({
  integrations: [tailwind()],
  site: 'https://feynman.companion.ai',
  markdown: {
    shikiConfig: {
      themes: {
        light: 'github-light',
        dark: 'github-dark',
      },
    },
  },
});
website/package-lock.json (generated, new file, 6876 lines; diff suppressed because it is too large)

website/package.json (new file, 17 lines)
{
  "name": "feynman-website",
  "type": "module",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "dev": "astro dev",
    "build": "astro build",
    "preview": "astro preview"
  },
  "dependencies": {
    "astro": "^5.7.0",
    "@astrojs/tailwind": "^6.0.2",
    "tailwindcss": "^3.4.0",
    "sharp": "^0.33.0"
  }
}
website/src/components/Footer.astro (new file, 9 lines)
<footer class="py-8 mt-16">
  <div class="max-w-6xl mx-auto px-6 flex flex-col sm:flex-row items-center justify-between gap-4">
    <span class="text-sm text-text-dim">© 2026 Companion Inc.</span>
    <div class="flex gap-6">
      <a href="https://github.com/getcompanion-ai/feynman" target="_blank" rel="noopener" class="text-sm text-text-dim hover:text-text-primary transition-colors">GitHub</a>
      <a href="/docs/getting-started/installation" class="text-sm text-text-dim hover:text-text-primary transition-colors">Docs</a>
    </div>
  </div>
</footer>
website/src/components/Nav.astro (new file, 26 lines)
---
import ThemeToggle from './ThemeToggle.astro';

interface Props {
  active?: 'home' | 'docs';
}

const { active = 'home' } = Astro.props;
---

<nav class="sticky top-0 z-50 bg-bg">
  <div class="max-w-6xl mx-auto px-6 h-14 flex items-center justify-between">
    <a href="/" class="text-xl font-bold text-accent tracking-tight">Feynman</a>
    <div class="flex items-center gap-6">
      <a href="/docs/getting-started/installation"
        class:list={["text-sm transition-colors", active === 'docs' ? 'text-text-primary' : 'text-text-muted hover:text-text-primary']}>
        Docs
      </a>
      <a href="https://github.com/getcompanion-ai/feynman" target="_blank" rel="noopener"
        class="text-sm text-text-muted hover:text-text-primary transition-colors">
        GitHub
      </a>
      <ThemeToggle />
    </div>
  </div>
</nav>
website/src/components/Sidebar.astro (new file, 80 lines)
---
interface Props {
  currentSlug: string;
}

const { currentSlug } = Astro.props;

const sections = [
  {
    title: 'Getting Started',
    items: [
      { label: 'Installation', slug: 'getting-started/installation' },
      { label: 'Quick Start', slug: 'getting-started/quickstart' },
      { label: 'Setup', slug: 'getting-started/setup' },
      { label: 'Configuration', slug: 'getting-started/configuration' },
    ],
  },
  {
    title: 'Workflows',
    items: [
      { label: 'Deep Research', slug: 'workflows/deep-research' },
      { label: 'Literature Review', slug: 'workflows/literature-review' },
      { label: 'Peer Review', slug: 'workflows/review' },
      { label: 'Code Audit', slug: 'workflows/audit' },
      { label: 'Replication', slug: 'workflows/replication' },
      { label: 'Source Comparison', slug: 'workflows/compare' },
      { label: 'Draft Writing', slug: 'workflows/draft' },
      { label: 'Autoresearch', slug: 'workflows/autoresearch' },
      { label: 'Watch', slug: 'workflows/watch' },
    ],
  },
  {
    title: 'Agents',
    items: [
      { label: 'Researcher', slug: 'agents/researcher' },
      { label: 'Reviewer', slug: 'agents/reviewer' },
      { label: 'Writer', slug: 'agents/writer' },
      { label: 'Verifier', slug: 'agents/verifier' },
    ],
  },
  {
    title: 'Tools',
    items: [
      { label: 'AlphaXiv', slug: 'tools/alphaxiv' },
      { label: 'Web Search', slug: 'tools/web-search' },
      { label: 'Session Search', slug: 'tools/session-search' },
      { label: 'Preview', slug: 'tools/preview' },
    ],
  },
  {
    title: 'Reference',
    items: [
      { label: 'CLI Commands', slug: 'reference/cli-commands' },
      { label: 'Slash Commands', slug: 'reference/slash-commands' },
      { label: 'Package Stack', slug: 'reference/package-stack' },
    ],
  },
];
---

<aside id="sidebar" class="w-64 shrink-0 h-[calc(100vh-3.5rem)] sticky top-14 overflow-y-auto py-6 pr-4 hidden lg:block border-r border-border">
  {sections.map((section) => (
    <div class="mb-6">
      <div class="text-xs font-semibold text-accent uppercase tracking-wider px-3 mb-2">{section.title}</div>
      {section.items.map((item) => (
        <a
          href={`/docs/${item.slug}`}
          class:list={[
            'block px-3 py-1.5 text-sm border-l-[2px] transition-colors',
            currentSlug === item.slug
              ? 'border-accent text-text-primary'
              : 'border-transparent text-text-muted hover:text-text-primary',
          ]}
        >
          {item.label}
        </a>
      ))}
    </div>
  ))}
</aside>
website/src/components/ThemeToggle.astro (new file, 33 lines)
<button id="theme-toggle" class="p-1.5 rounded-md text-text-muted hover:text-text-primary hover:bg-surface transition-colors" aria-label="Toggle theme">
  <svg id="sun-icon" class="hidden w-[18px] h-[18px]" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
    <circle cx="12" cy="12" r="5" />
    <path d="M12 1v2M12 21v2M4.22 4.22l1.42 1.42M18.36 18.36l1.42 1.42M1 12h2M21 12h2M4.22 19.78l1.42-1.42M18.36 5.64l1.42-1.42" />
  </svg>
  <svg id="moon-icon" class="hidden w-[18px] h-[18px]" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
    <path d="M21 12.79A9 9 0 1 1 11.21 3 7 7 0 0 0 21 12.79z" />
  </svg>
</button>

<script is:inline>
  (function() {
    var stored = localStorage.getItem('theme');
    var prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
    var dark = stored === 'dark' || (!stored && prefersDark);
    if (dark) document.documentElement.classList.add('dark');
    function update() {
      var isDark = document.documentElement.classList.contains('dark');
      document.getElementById('sun-icon').style.display = isDark ? 'block' : 'none';
      document.getElementById('moon-icon').style.display = isDark ? 'none' : 'block';
    }
    update();
    document.addEventListener('DOMContentLoaded', function() {
      update();
      document.getElementById('theme-toggle').addEventListener('click', function() {
        document.documentElement.classList.toggle('dark');
        var isDark = document.documentElement.classList.contains('dark');
        localStorage.setItem('theme', isDark ? 'dark' : 'light');
        update();
      });
    });
  })();
</script>
website/src/content/config.ts (new file, 13 lines)
import { defineCollection, z } from 'astro:content';

const docs = defineCollection({
  type: 'content',
  schema: z.object({
    title: z.string(),
    description: z.string(),
    section: z.string(),
    order: z.number(),
  }),
});

export const collections = { docs };
website/src/content/docs/agents/researcher.md (new file, 75 lines)
---
title: Researcher
description: Gather primary evidence across papers, web sources, repos, docs, and local artifacts.
section: Agents
order: 1
---

## Source

Generated from `.feynman/agents/researcher.md`. Edit that prompt file, not this docs page.

## Role

Gather primary evidence across papers, web sources, repos, docs, and local artifacts.

## Tools

`read`, `bash`, `grep`, `find`, `ls`

## Default Output

`research.md`

## Integrity commandments

1. **Never fabricate a source.** Every named tool, project, paper, product, or dataset must have a verifiable URL. If you cannot find a URL, do not mention it.
2. **Never claim a project exists without checking.** Before citing a GitHub repo, search for it. Before citing a paper, find it. If a search returns zero results, the thing does not exist — do not invent it.
3. **Never extrapolate details you haven't read.** If you haven't fetched and inspected a source, you may note its existence but must not describe its contents, metrics, or claims.
4. **URL or it didn't happen.** Every entry in your evidence table must include a direct, checkable URL. No URL = not included.

## Search strategy

1. **Start wide.** Begin with short, broad queries to map the landscape. Use the `queries` array in `web_search` with 2–4 varied-angle queries simultaneously — never one query at a time when exploring.
2. **Evaluate availability.** After the first round, assess what source types exist and which are highest quality. Adjust strategy accordingly.
3. **Progressively narrow.** Drill into specifics using terminology and names discovered in initial results. Refine queries, don't repeat them.
4. **Cross-source.** When the topic spans current reality and academic literature, always use both `web_search` and `alpha_search`.

Use `recencyFilter` on `web_search` for fast-moving topics. Use `includeContent: true` on the most important results to get full page content rather than snippets.

## Source quality

- **Prefer:** academic papers, official documentation, primary datasets, verified benchmarks, government filings, reputable journalism, expert technical blogs, official vendor pages
- **Accept with caveats:** well-cited secondary sources, established trade publications
- **Deprioritize:** SEO-optimized listicles, undated blog posts, content aggregators, social media without primary links
- **Reject:** sources with no author and no date, content that appears AI-generated with no primary backing

When initial results skew toward low-quality sources, re-search with `domainFilter` targeting authoritative domains.

## Output format

Assign each source a stable numeric ID. Use these IDs consistently so downstream agents can trace claims to exact sources.

### Evidence table

| # | Source | URL | Key claim | Type | Confidence |
|---|--------|-----|-----------|------|------------|
| 1 | ... | ... | ... | primary / secondary / self-reported | high / medium / low |

### Findings

Write findings using inline source references: `[1]`, `[2]`, etc. Every factual claim must cite at least one source by number.

### Sources

Numbered list matching the evidence table:

1. Author/Title — URL
2. Author/Title — URL

## Context hygiene

- Write findings to the output file progressively. Do not accumulate full page contents in your working memory — extract what you need, write it to file, move on.
- When `includeContent: true` returns large pages, extract relevant quotes and discard the rest immediately.
- If your search produces 10+ results, triage by title/snippet first. Only fetch full content for the top candidates.
- Return a one-line summary to the parent, not full findings. The parent reads the output file.

## Output contract

- Save to the output file (default: `research.md`).
- Minimum viable output: evidence table with ≥5 numbered entries, findings with inline references, and a numbered Sources section.
- Write to the file and pass a lightweight reference back — do not dump full content into the parent context.
website/src/content/docs/agents/reviewer.md (new file, 93 lines)
---
title: Reviewer
description: Simulate a tough but constructive AI research peer reviewer with inline annotations.
section: Agents
order: 2
---

## Source

Generated from `.feynman/agents/reviewer.md`. Edit that prompt file, not this docs page.

## Role

Simulate a tough but constructive AI research peer reviewer with inline annotations.

## Default Output

`review.md`

Your job is to act like a skeptical but fair peer reviewer for AI/ML systems work.

## Review checklist

- Evaluate novelty, clarity, empirical rigor, reproducibility, and likely reviewer pushback.
- Do not praise vaguely. Every positive claim should be tied to specific evidence.
- Look for:
  - missing or weak baselines
  - missing ablations
  - evaluation mismatches
  - unclear claims of novelty
  - weak related-work positioning
  - insufficient statistical evidence
  - benchmark leakage or contamination risks
  - under-specified implementation details
  - claims that outrun the experiments
- Distinguish between fatal issues, strong concerns, and polish issues.
- Preserve uncertainty. If the draft might pass depending on venue norms, say so explicitly.

## Output format

Produce two sections: a structured review and inline annotations.

### Part 1: Structured Review

```markdown
## Summary
1-2 paragraph summary of the paper's contributions and approach.

## Strengths
- [S1] ...
- [S2] ...

## Weaknesses
- [W1] **FATAL:** ...
- [W2] **MAJOR:** ...
- [W3] **MINOR:** ...

## Questions for Authors
- [Q1] ...

## Verdict
Overall assessment and confidence score. Would this pass at [venue]?

## Revision Plan
Prioritized, concrete steps to address each weakness.
```

### Part 2: Inline Annotations

Quote specific passages from the paper and annotate them directly:

```markdown
## Inline Annotations

> "We achieve state-of-the-art results on all benchmarks"

**[W1] FATAL:** This claim is unsupported — Table 3 shows the method underperforms on 2 of 5 benchmarks. Revise to accurately reflect results.

> "Our approach is novel in combining X with Y"

**[W3] MINOR:** Z et al. (2024) combined X with Y in a different domain. Acknowledge this and clarify the distinction.

> "We use a learning rate of 1e-4"

**[Q1]:** Was this tuned? What range was searched? This matters for reproducibility.
```

Reference the weakness/question IDs from Part 1 so annotations link back to the structured review.

## Operating rules

- Every weakness must reference a specific passage or section in the paper.
- Inline annotations must quote the exact text being critiqued.
- End with a `Sources` section containing direct URLs for anything additionally inspected during review.

## Output contract

- Save the main artifact to `review.md`.
- The review must contain both the structured review AND inline annotations.
50
website/src/content/docs/agents/verifier.md
Normal file
50
website/src/content/docs/agents/verifier.md
Normal file
@@ -0,0 +1,50 @@
---
title: Verifier
description: Post-process a draft to add inline citations and verify every source URL.
section: Agents
order: 4
---

## Source

Generated from `.feynman/agents/verifier.md`. Edit that prompt file, not this docs page.

## Role

Post-process a draft to add inline citations and verify every source URL.

## Tools

`read`, `bash`, `grep`, `find`, `ls`, `write`, `edit`

## Default Output

`cited.md`

You receive a draft document and the research files it was built from. Your job is to:

1. **Anchor every factual claim** in the draft to a specific source from the research files. Insert inline citations `[1]`, `[2]`, etc. directly after each claim.
2. **Verify every source URL** — use `fetch_content` to confirm each URL resolves and contains the claimed content. Flag dead links.
3. **Build the final Sources section** — a numbered list at the end where every number matches at least one inline citation in the body.
4. **Remove unsourced claims** — if a factual claim in the draft cannot be traced to any source in the research files, either find a source for it or remove it. Do not leave unsourced factual claims.

## Citation rules

- Every factual claim gets at least one citation: "Transformers achieve 94.2% on MMLU [3]."
- Multiple sources for one claim: "Recent work questions benchmark validity [7, 12]."
- No orphan citations — every `[N]` in the body must appear in Sources.
- No orphan sources — every entry in Sources must be cited at least once.
- Hedged or opinion statements do not need citations.
- When multiple research files use different numbering, merge into a single unified sequence starting from [1]. Deduplicate sources that appear in multiple files.
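The two orphan rules are mechanically checkable. A minimal sketch in Python, assuming the document keeps its sources under a `## Sources` heading with `N. URL` entries (the regexes and the sample document are illustrative, not part of Feynman):

```python
import re

def find_orphans(markdown):
    """Return (citations with no Sources entry, Sources entries never cited)."""
    body, _, sources = markdown.rpartition("## Sources")
    # Inline citations like [3] or [7, 12] in the body.
    cited = {n for group in re.findall(r"\[(\d+(?:,\s*\d+)*)\]", body)
             for n in re.split(r",\s*", group)}
    # Numbered entries like "3. https://..." in the Sources section.
    listed = set(re.findall(r"^(\d+)\.\s", sources, flags=re.M))
    return cited - listed, listed - cited

doc = ("Transformers achieve 94.2% on MMLU [3].\n\n"
       "## Sources\n3. https://example.org\n4. https://example.net\n")
orphan_citations, orphan_sources = find_orphans(doc)  # set(), {"4"}
```

A non-empty result in either direction means the citation pass is not yet finished.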
## Source verification

For each source URL:

- **Live:** keep as-is.
- **Dead/404:** search for an alternative URL (archived version, mirror, updated link). If none found, remove the source and all claims that depended solely on it.
- **Redirects to unrelated content:** treat as dead.
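The three outcomes map onto a simple decision rule. A sketch of that classification with the HTTP fetching left out (the status and host values would come from whatever fetch tool performs the request; the bucket names and the 404/410 choice are assumptions for the sketch):

```python
def classify(status, final_host, original_host):
    """Map a fetch result onto the live/dead/redirected buckets.

    status is None when the request failed outright (DNS error, timeout).
    """
    if status is None or status in (404, 410):
        return "dead"          # hunt for an archive or mirror before dropping
    if final_host is not None and final_host != original_host:
        return "redirected"    # inspect: a redirect to unrelated content is dead
    return "live"              # keep as-is
```

Real verification still needs a manual step for the `redirected` bucket, since only reading the page tells you whether the content is related.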
## Output contract

- Save to the output file (default: `cited.md`).
- The output is the complete final document — same structure as the input draft, but with inline citations added throughout and a verified Sources section.
- Do not change the substance or structure of the draft. Only add citations and fix dead sources.
56
website/src/content/docs/agents/writer.md
Normal file
@@ -0,0 +1,56 @@
---
title: Writer
description: Turn research notes into clear, structured briefs and drafts.
section: Agents
order: 3
---

## Source

Generated from `.feynman/agents/writer.md`. Edit that prompt file, not this docs page.

## Role

Turn research notes into clear, structured briefs and drafts.

## Tools

`read`, `bash`, `grep`, `find`, `ls`, `write`, `edit`

## Default Output

`draft.md`

## Integrity commandments

1. **Write only from supplied evidence.** Do not introduce claims, tools, or sources that are not in the input research files.
2. **Preserve caveats and disagreements.** Never smooth away uncertainty.
3. **Be explicit about gaps.** If the research files have unresolved questions or conflicting evidence, surface them — do not paper over them.

## Output structure

```markdown
# Title

## Executive Summary
2-3 paragraph overview of key findings.

## Section 1: ...
Detailed findings organized by theme or question.

## Section N: ...
...

## Open Questions
Unresolved issues, disagreements between sources, gaps in evidence.
```

## Operating rules

- Use clean Markdown structure and add equations only when they materially help.
- Keep the narrative readable, but never outrun the evidence.
- Produce artifacts that are ready to review in a browser or PDF preview.
- Do NOT add inline citations — the verifier agent handles that as a separate post-processing step.
- Do NOT add a Sources section — the verifier agent builds that.

## Output contract

- Save the main artifact to the specified output path (default: `draft.md`).
- Focus on clarity, structure, and evidence traceability.
66
website/src/content/docs/getting-started/configuration.md
Normal file
@@ -0,0 +1,66 @@
---
title: Configuration
description: Configure models, search, and runtime options
section: Getting Started
order: 4
---

## Model

Set the default model:

```bash
feynman model set <provider:model>
```

Override at runtime:

```bash
feynman --model anthropic:claude-opus-4-6
```

List available models:

```bash
feynman model list
```

## Thinking level

Control the reasoning depth:

```bash
feynman --thinking high
```

Levels: `off`, `minimal`, `low`, `medium`, `high`, `xhigh`.

## Web search

Check the current search configuration:

```bash
feynman search status
```

For advanced configuration, edit `~/.feynman/web-search.json` directly to set Gemini API keys, Perplexity keys, or a different route.

## Working directory

```bash
feynman --cwd /path/to/project
```

## Session storage

```bash
feynman --session-dir /path/to/sessions
```

## One-shot mode

Run a single prompt and exit:

```bash
feynman --prompt "summarize the key findings of 2401.12345"
```
34
website/src/content/docs/getting-started/installation.md
Normal file
@@ -0,0 +1,34 @@
---
title: Installation
description: Install Feynman and get started
section: Getting Started
order: 1
---

## Requirements

- Node.js 20 or later
- npm 9 or later

## Install

```bash
npm install -g @companion-ai/feynman
```

## Verify

```bash
feynman --version
```

## Local Development

For contributing or local development:

```bash
git clone https://github.com/getcompanion-ai/feynman.git
cd feynman
npm install
npm run start
```
44
website/src/content/docs/getting-started/quickstart.md
Normal file
@@ -0,0 +1,44 @@
---
title: Quick Start
description: Get up and running with Feynman in 60 seconds
section: Getting Started
order: 2
---

## First run

```bash
feynman setup
feynman
```

`feynman setup` walks you through model authentication, alphaXiv login, web search configuration, and preview dependencies.

## Ask naturally

Feynman routes your questions into the right workflow automatically. You don't need slash commands to get started.

```
> What are the main approaches to RLHF alignment?
```

Feynman will search papers, gather web sources, and produce a structured answer with citations.

## Use workflows directly

For explicit control, use slash commands inside the REPL:

```
> /deepresearch transformer scaling laws
> /lit multimodal reasoning benchmarks
> /review paper.pdf
```

## Output locations

Feynman writes durable artifacts to canonical directories:

- `outputs/` — Reviews, reading lists, summaries
- `papers/` — Polished paper-style drafts
- `experiments/` — Runnable code and result logs
- `notes/` — Scratch notes and session logs
66
website/src/content/docs/getting-started/setup.md
Normal file
@@ -0,0 +1,66 @@
---
title: Setup
description: Detailed setup guide for Feynman
section: Getting Started
order: 3
---

## Guided setup

```bash
feynman setup
```

This walks through four steps:

### Model provider authentication

Feynman uses Pi's OAuth system for model access. The setup wizard prompts you to log in to your preferred provider.

```bash
feynman model login
```

### AlphaXiv login

AlphaXiv powers Feynman's paper search and analysis tools. Sign in with:

```bash
feynman alpha login
```

Check status anytime:

```bash
feynman alpha status
```

### Web search routing

Feynman supports three web search backends:

- **auto** — Prefer Perplexity when configured, fall back to Gemini
- **perplexity** — Force Perplexity Sonar
- **gemini** — Force Gemini (default, zero-config via signed-in Chromium)

The default path requires no API keys — it uses Gemini Browser via your signed-in Chromium profile.

### Preview dependencies

For PDF and HTML export of generated artifacts, Feynman needs `pandoc`:

```bash
feynman --setup-preview
```

This installs pandoc automatically on macOS/Homebrew systems.

## Diagnostics

Run the doctor to check everything:

```bash
feynman doctor
```

This verifies model auth, alphaXiv credentials, preview dependencies, and the Pi runtime.
61
website/src/content/docs/reference/cli-commands.md
Normal file
@@ -0,0 +1,61 @@
---
title: CLI Commands
description: Complete reference for Feynman CLI commands
section: Reference
order: 1
---

This page covers the dedicated Feynman CLI commands and compatibility flags.

Workflow prompt templates such as `/deepresearch` also run directly from the shell as `feynman <workflow> ...`. Those workflow entries live in the slash-command reference instead of being duplicated here.

## Core

| Command | Description |
| --- | --- |
| `feynman` | Launch the interactive REPL. |
| `feynman chat [prompt]` | Start chat explicitly, optionally with an initial prompt. |
| `feynman help` | Show CLI help. |
| `feynman setup` | Run the guided setup wizard. |
| `feynman doctor` | Diagnose config, auth, Pi runtime, and preview dependencies. |
| `feynman status` | Show the current setup summary. |

## Model Management

| Command | Description |
| --- | --- |
| `feynman model list` | List available models in Pi auth storage. |
| `feynman model login [id]` | Log in to a Pi OAuth model provider. |
| `feynman model logout [id]` | Log out from a Pi OAuth model provider. |
| `feynman model set <provider:model>` | Set the default model. |

## AlphaXiv

| Command | Description |
| --- | --- |
| `feynman alpha login` | Sign in to alphaXiv. |
| `feynman alpha logout` | Clear alphaXiv auth. |
| `feynman alpha status` | Check alphaXiv auth status. |

## Utilities

| Command | Description |
| --- | --- |
| `feynman search status` | Show Pi web-access status and config path. |
| `feynman update [package]` | Update installed packages, or a specific package. |

## Flags

| Flag | Description |
| --- | --- |
| `--prompt "<text>"` | Run one prompt and exit. |
| `--alpha-login` | Sign in to alphaXiv and exit. |
| `--alpha-logout` | Clear alphaXiv auth and exit. |
| `--alpha-status` | Show alphaXiv auth status and exit. |
| `--model <provider:model>` | Force a specific model. |
| `--thinking <level>` | Set thinking level: `off`, `minimal`, `low`, `medium`, `high`, or `xhigh`. |
| `--cwd <path>` | Set the working directory for tools. |
| `--session-dir <path>` | Set the session storage directory. |
| `--new-session` | Start a new persisted session. |
| `--doctor` | Alias for `feynman doctor`. |
| `--setup-preview` | Alias for `feynman setup preview`. |
25
website/src/content/docs/reference/package-stack.md
Normal file
@@ -0,0 +1,25 @@
---
title: Package Stack
description: Curated Pi packages bundled with Feynman
section: Reference
order: 3
---

Curated Pi packages bundled with Feynman. The runtime package list lives in `.feynman/settings.json`.

| Package | Purpose |
|---------|---------|
| `pi-subagents` | Parallel literature gathering and decomposition. |
| `pi-btw` | Fast side-thread `/btw` conversations without interrupting the main run. |
| `pi-docparser` | PDFs, Office docs, spreadsheets, and images. |
| `pi-web-access` | Web, GitHub, PDF, and media access. |
| `pi-markdown-preview` | Polished Markdown and LaTeX-heavy research writeups. |
| `@walterra/pi-charts` | Charts and quantitative visualizations. |
| `pi-generative-ui` | Interactive HTML-style widgets. |
| `pi-mermaid` | Diagrams in the TUI. |
| `@aliou/pi-processes` | Long-running experiments and log tails. |
| `pi-zotero` | Citation-library workflows. |
| `@kaiserlich-dev/pi-session-search` | Indexed session recall and summarize/resume UI. |
| `pi-schedule-prompt` | Recurring and deferred research jobs. |
| `@samfp/pi-memory` | Automatic preference and correction memory across sessions. |
| `@tmustier/pi-ralph-wiggum` | Long-running agent loops for iterative development. |
41
website/src/content/docs/reference/slash-commands.md
Normal file
@@ -0,0 +1,41 @@
---
title: Slash Commands
description: Repo-owned REPL slash commands
section: Reference
order: 2
---

This page documents the slash commands that Feynman owns in this repository: prompt templates from `prompts/` and extension commands from `extensions/research-tools/`.

Additional slash commands can appear at runtime from Pi core and bundled packages such as subagents, preview, session search, and scheduling. Use `/help` inside the REPL for the live command list instead of relying on a static copy of package-provided commands.

## Research Workflows

| Command | Description |
| --- | --- |
| `/deepresearch <topic>` | Run a thorough, source-heavy investigation on a topic and produce a durable research brief with inline citations. |
| `/lit <topic>` | Run a literature review on a topic using paper search and primary-source synthesis. |
| `/review <artifact>` | Simulate an AI research peer review with likely objections, severity, and a concrete revision plan. |
| `/audit <item>` | Compare a paper's claims against its public codebase and identify mismatches, omissions, and reproducibility risks. |
| `/replicate <paper>` | Plan or execute a replication workflow for a paper, claim, or benchmark. |
| `/compare <topic>` | Compare multiple sources on a topic and produce a source-grounded matrix of agreements, disagreements, and confidence. |
| `/draft <topic>` | Turn research findings into a polished paper-style draft with equations, sections, and explicit claims. |
| `/autoresearch <idea>` | Autonomous experiment loop — try ideas, measure results, keep what works, discard what doesn't, repeat. |
| `/watch <topic>` | Set up a recurring or deferred research watch on a topic, company, paper area, or product surface. |

## Project & Session

| Command | Description |
| --- | --- |
| `/log` | Write a durable session log with completed work, findings, open questions, and next steps. |
| `/jobs` | Inspect active background research work, including running processes and scheduled follow-ups. |
| `/help` | Show grouped Feynman commands and prefill the editor with a selected command. |
| `/init` | Bootstrap AGENTS.md and session-log folders for a research project. |

## Setup

| Command | Description |
| --- | --- |
| `/alpha-login` | Sign in to alphaXiv from inside Feynman. |
| `/alpha-status` | Show alphaXiv authentication status. |
| `/alpha-logout` | Clear alphaXiv auth from inside Feynman. |
40
website/src/content/docs/tools/alphaxiv.md
Normal file
@@ -0,0 +1,40 @@
---
title: AlphaXiv
description: Paper search and analysis tools
section: Tools
order: 1
---

## Overview

AlphaXiv powers Feynman's academic paper workflows. All tools require an alphaXiv account — sign in with `feynman alpha login`.

## Tools

### alpha_search

Paper discovery with three search modes:

- **semantic** — Meaning-based search across paper content
- **keyword** — Traditional keyword matching
- **agentic** — AI-powered search that interprets your intent

### alpha_get_paper

Fetch a paper's report (structured summary) or full raw text by arXiv ID.

### alpha_ask_paper

Ask a targeted question about a specific paper. Returns an answer grounded in the paper's content.

### alpha_annotate_paper

Add notes to a paper. Annotations are stored locally and persist across sessions.

### alpha_list_annotations

Recall all annotations across papers and sessions.

### alpha_read_code

Read source code from a paper's linked GitHub repository. Useful for auditing or replication planning.
34
website/src/content/docs/tools/preview.md
Normal file
@@ -0,0 +1,34 @@
---
title: Preview
description: Preview generated artifacts in browser or PDF
section: Tools
order: 4
---

## Overview

The `preview_file` tool opens generated artifacts in your browser or PDF viewer.

## Usage

Inside the REPL:

```
/preview
```

Feynman will also suggest previewing when you generate artifacts that benefit from rendered output (Markdown with LaTeX, HTML reports, etc.).

## Requirements

Preview requires `pandoc` for PDF/HTML rendering. Install it with:

```bash
feynman --setup-preview
```

## Supported formats

- Markdown (with LaTeX math rendering)
- HTML
- PDF
26
website/src/content/docs/tools/session-search.md
Normal file
@@ -0,0 +1,26 @@
---
title: Session Search
description: Search prior Feynman session transcripts
section: Tools
order: 3
---

## Overview

The `session_search` tool recovers prior Feynman work from stored session transcripts. Useful for picking up previous research threads or finding past findings.

## Usage

Inside the REPL:

```
/search
```

Or use the tool directly — Feynman will invoke `session_search` automatically when you reference prior work.

## What it searches

- Full session transcripts
- Tool outputs and agent results
- Generated artifacts and their content
34
website/src/content/docs/tools/web-search.md
Normal file
@@ -0,0 +1,34 @@
---
title: Web Search
description: Web search routing and configuration
section: Tools
order: 2
---

## Routing modes

Feynman supports three web search backends:

| Mode | Description |
|------|-------------|
| `auto` | Prefer Perplexity when configured, fall back to Gemini |
| `perplexity` | Force Perplexity Sonar |
| `gemini` | Force Gemini (default) |

## Default behavior

The default path is zero-config Gemini Browser via a signed-in Chromium profile. No API keys required.

## Check current config

```bash
feynman search status
```

## Advanced configuration

Edit `~/.feynman/web-search.json` directly to set:

- Gemini API keys
- Perplexity API keys
- Custom routing preferences
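For illustration only, a hand-edited file might look like the following. The field names here are assumptions for the sketch, not documented schema; check the existing file on disk for the real keys before editing.

```json
{
  "route": "auto",
  "perplexityApiKey": "pplx-...",
  "geminiApiKey": "..."
}
```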
39
website/src/content/docs/workflows/audit.md
Normal file
@@ -0,0 +1,39 @@
---
title: Code Audit
description: Compare paper claims against public codebases
section: Workflows
order: 4
---

## Usage

```
/audit <item>
```

## What it does

Compares claims made in a paper against its public codebase. Surfaces mismatches, missing experiments, and reproducibility risks.

## What it checks

- Do the reported hyperparameters match the code?
- Are all claimed experiments present in the repository?
- Does the training loop match the described methodology?
- Are there undocumented preprocessing steps?
- Do evaluation metrics match the paper's claims?

## Example

```
/audit 2401.12345
```

## Output

An audit report with:

- Claim-by-claim verification
- Identified mismatches
- Missing components
- Reproducibility risk assessment
44
website/src/content/docs/workflows/autoresearch.md
Normal file
@@ -0,0 +1,44 @@
---
title: Autoresearch
description: Autonomous experiment optimization loop
section: Workflows
order: 8
---

## Usage

```
/autoresearch <idea>
```

## What it does

Runs an autonomous experiment loop:

1. **Edit** — Modify code or configuration
2. **Commit** — Save the change
3. **Benchmark** — Run evaluation
4. **Evaluate** — Compare against baseline
5. **Keep or revert** — Persist improvements, roll back regressions
6. **Repeat** — Continue until the target is hit
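The six steps can be sketched as a simple hill-climbing loop. Everything here is illustrative: `propose_edit`, `commit`, `revert`, and `benchmark` stand in for Feynman's actual editing, git, and evaluation machinery, and the keep-if-better rule is an assumption about the accept criterion.

```python
import json

def autoresearch(propose_edit, commit, revert, benchmark,
                 baseline, target, log_path="autoresearch.jsonl", max_iters=20):
    """Keep changes that beat the best score so far; revert the rest."""
    best = baseline
    for step in range(max_iters):
        propose_edit()            # 1. Edit: modify code or configuration
        commit(step)              # 2. Commit: e.g. a `git commit` of the change
        score = benchmark()       # 3. Benchmark: run the evaluation
        kept = score > best       # 4. Evaluate: compare against the best so far
        if kept:
            best = score          # 5a. Keep the improvement
        else:
            revert()              # 5b. Roll back the regression
        with open(log_path, "a") as log:
            # Machine-readable trail, in the spirit of autoresearch.jsonl
            log.write(json.dumps({"step": step, "score": score, "kept": kept}) + "\n")
        if best >= target:        # 6. Repeat until the target is hit
            break
    return best
```

The one-line-per-iteration JSONL log is what makes the run auditable after the fact: every attempt is recorded, including the ones that were reverted.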
## Tracking

Metrics are tracked in:

- `autoresearch.md` — Human-readable progress log
- `autoresearch.jsonl` — Machine-readable metrics over time

## Controls

```
/autoresearch <idea>   # start or resume
/autoresearch off      # stop, keep data
/autoresearch clear    # delete all state, start fresh
```

## Example

```
/autoresearch optimize the learning rate schedule for better convergence
```
29
website/src/content/docs/workflows/compare.md
Normal file
@@ -0,0 +1,29 @@
---
title: Source Comparison
description: Compare multiple sources with agreement/disagreement matrix
section: Workflows
order: 6
---

## Usage

```
/compare <topic>
```

## What it does

Compares multiple sources on a topic. Builds an agreement/disagreement matrix showing where sources align and where they conflict.
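As a made-up illustration of the shape of that matrix (claims and sources invented for the example):

```markdown
| Claim                           | Source A | Source B | Source C |
|---------------------------------|----------|----------|----------|
| Method X scales past 10B params | agree    | agree    | disagree |
| Benchmark Y is saturated        | agree    | silent   | agree    |
```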
## Example

```
/compare approaches to constitutional AI training
```

## Output

- Source-by-source breakdown
- Agreement/disagreement matrix
- Synthesis of key differences
- Assessment of which positions have stronger evidence
40
website/src/content/docs/workflows/deep-research.md
Normal file
@@ -0,0 +1,40 @@
---
title: Deep Research
description: Thorough source-heavy investigation with parallel agents
section: Workflows
order: 1
---

## Usage

```
/deepresearch <topic>
```

## What it does

Deep research runs a thorough, source-heavy investigation. It plans the research scope, delegates to parallel researcher agents, synthesizes findings, and adds inline citations.

The workflow follows these steps:

1. **Plan** — Clarify the research question and identify a search strategy
2. **Delegate** — Spawn parallel researcher agents to gather evidence from different source types (papers, web, repos)
3. **Synthesize** — Merge findings, resolve contradictions, identify gaps
4. **Cite** — Add inline citations and verify all source URLs
5. **Deliver** — Write a durable research brief to `outputs/`

## Example

```
/deepresearch transformer scaling laws and their implications for compute-optimal training
```

## Output

Produces a structured research brief with:

- Executive summary
- Key findings organized by theme
- Evidence tables with source links
- Open questions and suggested next steps
- Numbered sources section with direct URLs
37
website/src/content/docs/workflows/draft.md
Normal file
@@ -0,0 +1,37 @@
---
title: Draft Writing
description: Paper-style draft generation from research findings
section: Workflows
order: 7
---

## Usage

```
/draft <topic>
```

## What it does

Produces a paper-style draft with structured sections. Writes to `papers/`.

## Structure

The generated draft includes:

- Title
- Abstract
- Introduction / Background
- Method or Approach
- Evidence and Analysis
- Limitations
- Conclusion
- Sources

## Example

```
/draft survey of differentiable physics simulators
```

The writer agent works only from supplied evidence — it never fabricates content. If evidence is insufficient, it explicitly notes the gaps.
31
website/src/content/docs/workflows/literature-review.md
Normal file
@@ -0,0 +1,31 @@
---
title: Literature Review
description: Map consensus, disagreements, and open questions
section: Workflows
order: 2
---

## Usage

```
/lit <topic>
```

## What it does

Runs a structured literature review that searches across academic papers and web sources. Explicitly separates consensus findings from disagreements and open questions.

## Example

```
/lit multimodal reasoning benchmarks for large language models
```

## Output

A structured review covering:

- **Consensus** — What the field agrees on
- **Disagreements** — Where sources conflict
- **Open questions** — What remains unresolved
- **Sources** — Direct links to all referenced papers and articles
42
website/src/content/docs/workflows/replication.md
Normal file
@@ -0,0 +1,42 @@
|
|||||||
|
---
|
||||||
|
title: Replication
|
||||||
|
description: Plan replications of papers and claims
|
||||||
|
section: Workflows
|
||||||
|
order: 5
|
||||||
|
---
|
||||||
|
|
||||||
|
## Usage
|
||||||
|
|
||||||
|
```
|
||||||
|
/replicate <paper or claim>
|
||||||
|
```
|
||||||
|
|
||||||
|
## What it does
|
||||||
|
|
||||||
|
Extracts key implementation details from a paper, identifies what's needed to replicate the results, and asks where to run before executing anything.
|
||||||
|
|
||||||
|
Before running code, Feynman asks you to choose an execution environment:
|
||||||
|
|
||||||
|
- **Local** — run in the current working directory
|
||||||
|
- **Virtual environment** — create an isolated venv/conda env first
|
||||||
|
- **Cloud** — delegate to a remote Agent Computer machine
|
||||||
|
- **Plan only** — produce the replication plan without executing
|
||||||
|
|
||||||
|
## Example
|
||||||
|
|
||||||
|
```
|
||||||
|
/replicate "chain-of-thought prompting improves math reasoning"
|
||||||
|
```
|
||||||
|
|
||||||
|
## Output
|
||||||
|
|
||||||
|
A replication plan covering:
|
||||||
|
|
||||||
|
- Key claims to verify
|
||||||
|
- Required resources (compute, data, models)
|
||||||
|
- Implementation details extracted from the paper
|
||||||
|
- Potential pitfalls and underspecified details
|
||||||
|
- Step-by-step replication procedure
|
||||||
|
- Success criteria
|
||||||
|
|
||||||
|
If an execution environment is selected, also produces runnable scripts and captured results.
|
||||||
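The "Virtual environment" option above can be sketched in code. A minimal, hypothetical illustration using only the standard library (the directory name `.replication-env` is an assumption for illustration, not what Feynman actually creates):

```python
import pathlib
import sys
import venv

# Hypothetical sketch: create an isolated env before running replication code.
# The directory name is an assumption, not the actual implementation.
env_dir = pathlib.Path(".replication-env")
venv.create(env_dir, with_pip=False)  # with_pip=False keeps this offline-friendly

# The env's interpreter lives under bin/ (Scripts/ on Windows)
bin_dir = env_dir / ("Scripts" if sys.platform == "win32" else "bin")
python = bin_dir / ("python.exe" if sys.platform == "win32" else "python")
print(python.exists())
```

On POSIX systems the env would then be activated with `. .replication-env/bin/activate`; passing `with_pip=True` would also bootstrap pip so the paper's dependencies can be installed into the isolated interpreter.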
49	website/src/content/docs/workflows/review.md	Normal file
@@ -0,0 +1,49 @@
---
title: Peer Review
description: Simulated peer review with severity-graded feedback
section: Workflows
order: 3
---

## Usage

```
/review <artifact>
```

## What it does

Simulates a tough-but-fair peer review for AI research artifacts. Evaluates novelty, empirical rigor, baselines, ablations, and reproducibility.

The reviewer agent identifies:

- Weak baselines
- Missing ablations
- Evaluation mismatches
- Benchmark leakage
- Under-specified implementation details

## Severity levels

Feedback is graded by severity:

- **FATAL** — Fundamental issues that invalidate the claims
- **MAJOR** — Significant problems that need addressing
- **MINOR** — Small improvements or clarifications

## Example

```
/review outputs/scaling-laws-brief.md
```

## Output

Structured review with:

- Summary of the work
- Strengths
- Weaknesses (severity-graded)
- Questions for the authors
- Verdict (accept / revise / reject)
- Revision plan
29	website/src/content/docs/workflows/watch.md	Normal file
@@ -0,0 +1,29 @@
---
title: Watch
description: Recurring research monitoring
section: Workflows
order: 9
---

## Usage

```
/watch <topic>
```

## What it does

Schedules a recurring research watch. Sets a baseline of current knowledge and defines what constitutes a meaningful change worth reporting.

## Example

```
/watch new papers on test-time compute scaling
```

## How it works

1. Feynman establishes a baseline by surveying current sources
2. Defines change signals (new papers, updated results, new repos)
3. Schedules periodic checks via `pi-schedule-prompt`
4. Reports only when meaningful changes are detected
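The four steps above amount to a baseline-and-diff loop. A minimal sketch of the change-detection core (function names and the on-disk state format are illustrative assumptions, not the actual implementation):

```python
import hashlib
import json
import pathlib

def baseline_key(sources):
    """Fingerprint a set of sources (e.g. paper titles or URLs)."""
    canonical = json.dumps(sorted(sources), separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def check_watch(state_file, sources):
    """Return True only when the source set differs from the stored baseline.

    The first run establishes the baseline and reports nothing.
    """
    path = pathlib.Path(state_file)
    previous = path.read_text().strip() if path.exists() else None
    key = baseline_key(sources)
    path.write_text(key)
    return previous is not None and previous != key
```

A scheduled check would gather the current source list, call `check_watch`, and report only when it returns `True`, matching the "report only on meaningful change" behavior described above.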
55	website/src/layouts/Base.astro	Normal file
@@ -0,0 +1,55 @@
---
import { ViewTransitions } from 'astro:transitions';
import Nav from '../components/Nav.astro';
import Footer from '../components/Footer.astro';
import '../styles/global.css';

interface Props {
  title: string;
  description?: string;
  active?: 'home' | 'docs';
}

const { title, description = 'Research-first AI agent', active = 'home' } = Astro.props;
---

<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <meta name="description" content={description} />
    <title>{title}</title>
    <ViewTransitions fallback="none" />
    <script is:inline>
      (function () {
        var stored = localStorage.getItem('theme');
        var prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
        if (stored === 'dark' || (!stored && prefersDark)) {
          document.documentElement.classList.add('dark');
        }
      })();
    </script>
    <script is:inline>
      document.addEventListener('astro:after-swap', function () {
        var stored = localStorage.getItem('theme');
        var prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
        if (stored === 'dark' || (!stored && prefersDark)) {
          document.documentElement.classList.add('dark');
        }
        var isDark = document.documentElement.classList.contains('dark');
        var sun = document.getElementById('sun-icon');
        var moon = document.getElementById('moon-icon');
        if (sun) sun.style.display = isDark ? 'block' : 'none';
        if (moon) moon.style.display = isDark ? 'none' : 'block';
      });
    </script>
  </head>
  <body class="min-h-screen flex flex-col antialiased">
    <Nav active={active} />
    <main class="flex-1">
      <slot />
    </main>
    <Footer />
  </body>
</html>
79	website/src/layouts/Docs.astro	Normal file
@@ -0,0 +1,79 @@
---
import Base from './Base.astro';
import Sidebar from '../components/Sidebar.astro';

interface Props {
  title: string;
  description?: string;
  currentSlug: string;
}

const { title, description, currentSlug } = Astro.props;
---

<Base title={`${title} — Feynman Docs`} description={description} active="docs">
  <div class="max-w-6xl mx-auto px-6">
    <div class="flex gap-8">
      <Sidebar currentSlug={currentSlug} />

      <button id="mobile-menu-btn" class="lg:hidden fixed bottom-6 right-6 z-40 p-3 rounded-full bg-accent text-bg shadow-lg" aria-label="Toggle sidebar">
        <svg class="w-5 h-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
          <path d="M4 6h16M4 12h16M4 18h16" />
        </svg>
      </button>

      <div id="mobile-overlay" class="hidden fixed inset-0 bg-black/50 z-30 lg:hidden"></div>

      <article class="flex-1 min-w-0 py-8 max-w-3xl">
        <h1 class="text-3xl font-bold mb-8 tracking-tight">{title}</h1>
        <div class="prose">
          <slot />
        </div>
      </article>
    </div>
  </div>

  <script is:inline>
    (function () {
      function init() {
        var btn = document.getElementById('mobile-menu-btn');
        var sidebar = document.getElementById('sidebar');
        var overlay = document.getElementById('mobile-overlay');
        if (btn && sidebar && overlay) {
          function toggle() {
            sidebar.classList.toggle('hidden');
            sidebar.classList.toggle('fixed');
            sidebar.classList.toggle('inset-0');
            sidebar.classList.toggle('z-40');
            sidebar.classList.toggle('bg-bg');
            sidebar.classList.toggle('w-full');
            sidebar.classList.toggle('p-6');
            overlay.classList.toggle('hidden');
          }
          btn.addEventListener('click', toggle);
          overlay.addEventListener('click', toggle);
        }

        document.querySelectorAll('.prose pre').forEach(function (pre) {
          if (pre.querySelector('.copy-code')) return;
          var copyBtn = document.createElement('button');
          copyBtn.className = 'copy-code';
          copyBtn.setAttribute('aria-label', 'Copy code');
          copyBtn.innerHTML = '<svg width="14" height="14" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><rect x="9" y="9" width="13" height="13" rx="2"/><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"/></svg>';
          pre.appendChild(copyBtn);
          copyBtn.addEventListener('click', function () {
            var code = pre.querySelector('code');
            var text = code ? code.textContent : pre.textContent;
            navigator.clipboard.writeText(text);
            copyBtn.innerHTML = '<svg width="14" height="14" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><path d="M20 6L9 17l-5-5"/></svg>';
            setTimeout(function () {
              copyBtn.innerHTML = '<svg width="14" height="14" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><rect x="9" y="9" width="13" height="13" rx="2"/><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"/></svg>';
            }, 2000);
          });
        });
      }
      document.addEventListener('DOMContentLoaded', init);
      document.addEventListener('astro:after-swap', init);
    })();
  </script>
</Base>
19	website/src/pages/docs/[...slug].astro	Normal file
@@ -0,0 +1,19 @@
---
import { getCollection } from 'astro:content';
import Docs from '../../layouts/Docs.astro';

export async function getStaticPaths() {
  const docs = await getCollection('docs');
  return docs.map((entry) => ({
    params: { slug: entry.slug },
    props: { entry },
  }));
}

const { entry } = Astro.props;
const { Content } = await entry.render();
---

<Docs title={entry.data.title} description={entry.data.description} currentSlug={entry.slug}>
  <Content />
</Docs>
155	website/src/pages/index.astro	Normal file
@@ -0,0 +1,155 @@
---
import Base from '../layouts/Base.astro';
---

<Base title="Feynman — The open source AI research agent" active="home">
  <section class="text-center pt-24 pb-20 px-6">
    <div class="max-w-2xl mx-auto">
      <h1 class="text-5xl sm:text-6xl font-bold tracking-tight mb-6" style="text-wrap: balance">The open source AI research agent</h1>
      <p class="text-lg text-text-muted mb-10 leading-relaxed" style="text-wrap: pretty">Investigate topics, write papers, run experiments, review research, audit codebases — every output cited and source-grounded</p>
      <div class="inline-flex items-center gap-3 bg-surface rounded-lg px-5 py-3 mb-8 font-mono text-sm">
        <code class="text-accent">npm install -g @companion-ai/feynman</code>
        <button id="copy-btn" class="text-text-dim hover:text-accent transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-accent rounded" aria-label="Copy install command">
          <svg class="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><rect x="9" y="9" width="13" height="13" rx="2" /><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1" /></svg>
        </button>
      </div>
      <div class="flex gap-4 justify-center flex-wrap">
        <a href="/docs/getting-started/installation" class="px-6 py-2.5 rounded-lg bg-accent text-bg font-semibold text-sm hover:bg-accent-hover transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-accent focus-visible:ring-offset-2 focus-visible:ring-offset-bg">Get started</a>
        <a href="https://github.com/getcompanion-ai/feynman" target="_blank" rel="noopener" class="px-6 py-2.5 rounded-lg border border-border text-text-muted font-semibold text-sm hover:border-text-dim hover:text-text-primary transition-colors focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-accent focus-visible:ring-offset-2 focus-visible:ring-offset-bg">GitHub</a>
      </div>
    </div>
  </section>

  <section class="py-20 px-6">
    <div class="max-w-5xl mx-auto">
      <h2 class="text-2xl font-bold text-center mb-12">What you type → what happens</h2>
      <div class="bg-surface rounded-xl p-6 font-mono text-sm leading-loose max-w-2xl mx-auto">
        <div class="flex gap-4"><span class="text-text-dim shrink-0">$</span><span><span class="text-accent">feynman</span> "what do we know about scaling laws"</span></div>
        <div class="text-text-dim mt-1 ml-6 text-xs">Searches papers and web, produces a cited research brief</div>
        <div class="mt-4 flex gap-4"><span class="text-text-dim shrink-0">$</span><span><span class="text-accent">feynman</span> deepresearch "mechanistic interpretability"</span></div>
        <div class="text-text-dim mt-1 ml-6 text-xs">Multi-agent investigation with parallel researchers, synthesis, verification</div>
        <div class="mt-4 flex gap-4"><span class="text-text-dim shrink-0">$</span><span><span class="text-accent">feynman</span> lit "RLHF alternatives"</span></div>
        <div class="text-text-dim mt-1 ml-6 text-xs">Literature review with consensus, disagreements, open questions</div>
        <div class="mt-4 flex gap-4"><span class="text-text-dim shrink-0">$</span><span><span class="text-accent">feynman</span> audit 2401.12345</span></div>
        <div class="text-text-dim mt-1 ml-6 text-xs">Compares paper claims against the public codebase</div>
        <div class="mt-4 flex gap-4"><span class="text-text-dim shrink-0">$</span><span><span class="text-accent">feynman</span> replicate "chain-of-thought improves math"</span></div>
        <div class="text-text-dim mt-1 ml-6 text-xs">Asks where to run, then builds a replication plan</div>
      </div>
    </div>
  </section>

  <section class="py-20 px-6">
    <div class="max-w-5xl mx-auto">
      <h2 class="text-2xl font-bold text-center mb-12">Workflows</h2>
      <p class="text-center text-text-muted mb-10">Ask naturally or use slash commands as shortcuts.</p>
      <div class="grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-3 gap-4 max-w-4xl mx-auto">
        <div class="bg-surface rounded-xl p-5">
          <div class="font-mono text-sm text-accent mb-2">/deepresearch</div>
          <p class="text-sm text-text-muted">Source-heavy multi-agent investigation</p>
        </div>
        <div class="bg-surface rounded-xl p-5">
          <div class="font-mono text-sm text-accent mb-2">/lit</div>
          <p class="text-sm text-text-muted">Literature review from paper search and primary sources</p>
        </div>
        <div class="bg-surface rounded-xl p-5">
          <div class="font-mono text-sm text-accent mb-2">/review</div>
          <p class="text-sm text-text-muted">Simulated peer review with severity and revision plan</p>
        </div>
        <div class="bg-surface rounded-xl p-5">
          <div class="font-mono text-sm text-accent mb-2">/audit</div>
          <p class="text-sm text-text-muted">Paper vs. codebase mismatch audit</p>
        </div>
        <div class="bg-surface rounded-xl p-5">
          <div class="font-mono text-sm text-accent mb-2">/replicate</div>
          <p class="text-sm text-text-muted">Replication plan with environment selection</p>
        </div>
        <div class="bg-surface rounded-xl p-5">
          <div class="font-mono text-sm text-accent mb-2">/compare</div>
          <p class="text-sm text-text-muted">Source comparison matrix</p>
        </div>
        <div class="bg-surface rounded-xl p-5">
          <div class="font-mono text-sm text-accent mb-2">/draft</div>
          <p class="text-sm text-text-muted">Paper-style draft from research findings</p>
        </div>
        <div class="bg-surface rounded-xl p-5">
          <div class="font-mono text-sm text-accent mb-2">/autoresearch</div>
          <p class="text-sm text-text-muted">Autonomous experiment loop</p>
        </div>
        <div class="bg-surface rounded-xl p-5">
          <div class="font-mono text-sm text-accent mb-2">/watch</div>
          <p class="text-sm text-text-muted">Recurring research watch</p>
        </div>
      </div>
    </div>
  </section>

  <section class="py-20 px-6">
    <div class="max-w-5xl mx-auto">
      <h2 class="text-2xl font-bold text-center mb-12">Agents</h2>
      <p class="text-center text-text-muted mb-10">Four bundled research agents, dispatched automatically.</p>
      <div class="grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-4 gap-4">
        <div class="bg-surface rounded-xl p-6 text-center">
          <div class="font-semibold text-accent mb-2">Researcher</div>
          <p class="text-sm text-text-muted">Gathers evidence across papers, web, repos, and docs</p>
        </div>
        <div class="bg-surface rounded-xl p-6 text-center">
          <div class="font-semibold text-accent mb-2">Reviewer</div>
          <p class="text-sm text-text-muted">Simulated peer review with severity-graded feedback</p>
        </div>
        <div class="bg-surface rounded-xl p-6 text-center">
          <div class="font-semibold text-accent mb-2">Writer</div>
          <p class="text-sm text-text-muted">Structured briefs and drafts from research notes</p>
        </div>
        <div class="bg-surface rounded-xl p-6 text-center">
          <div class="font-semibold text-accent mb-2">Verifier</div>
          <p class="text-sm text-text-muted">Inline citations and source URL verification</p>
        </div>
      </div>
    </div>
  </section>

  <section class="py-20 px-6">
    <div class="max-w-5xl mx-auto">
      <h2 class="text-2xl font-bold text-center mb-12">Tools</h2>
      <div class="grid grid-cols-1 sm:grid-cols-2 gap-4 max-w-2xl mx-auto">
        <div class="bg-surface rounded-xl p-5">
          <div class="font-semibold mb-1">AlphaXiv</div>
          <p class="text-sm text-text-muted">Paper search, Q&A, code reading, persistent annotations</p>
        </div>
        <div class="bg-surface rounded-xl p-5">
          <div class="font-semibold mb-1">Web search</div>
          <p class="text-sm text-text-muted">Gemini or Perplexity, zero-config default</p>
        </div>
        <div class="bg-surface rounded-xl p-5">
          <div class="font-semibold mb-1">Session search</div>
          <p class="text-sm text-text-muted">Indexed recall across prior research sessions</p>
        </div>
        <div class="bg-surface rounded-xl p-5">
          <div class="font-semibold mb-1">Preview</div>
          <p class="text-sm text-text-muted">Browser and PDF export of generated artifacts</p>
        </div>
      </div>
    </div>
  </section>

  <section class="py-20 px-6 text-center">
    <div class="max-w-xl mx-auto">
      <p class="text-text-muted mb-6">Built on <a href="https://github.com/mariozechner/pi-coding-agent" class="text-accent hover:underline">Pi</a> and <a href="https://github.com/getcompanion-ai/alpha-hub" class="text-accent hover:underline">Alpha Hub</a>. MIT licensed. Open source.</p>
      <div class="flex gap-4 justify-center flex-wrap">
        <a href="/docs/getting-started/installation" class="px-6 py-2.5 rounded-lg bg-accent text-bg font-semibold text-sm hover:bg-accent-hover transition-colors">Get started</a>
        <a href="https://github.com/getcompanion-ai/feynman" target="_blank" rel="noopener" class="px-6 py-2.5 rounded-lg border border-border text-text-muted font-semibold text-sm hover:border-text-dim hover:text-text-primary transition-colors">GitHub</a>
      </div>
    </div>
  </section>

  <script is:inline>
    document.getElementById('copy-btn').addEventListener('click', function () {
      navigator.clipboard.writeText('npm install -g @companion-ai/feynman');
      this.innerHTML = '<svg class="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><path d="M20 6L9 17l-5-5"/></svg>';
      var btn = this;
      setTimeout(function () {
        btn.innerHTML = '<svg class="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><rect x="9" y="9" width="13" height="13" rx="2"/><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"/></svg>';
      }, 2000);
    });
  </script>
</Base>
209	website/src/styles/global.css	Normal file
@@ -0,0 +1,209 @@
@tailwind base;
@tailwind components;
@tailwind utilities;

:root {
  --color-bg: #f0f5f1;
  --color-surface: #e4ece6;
  --color-surface-2: #d8e3db;
  --color-border: #c2d1c6;
  --color-text: #1a2e22;
  --color-text-muted: #3d5c4a;
  --color-text-dim: #6b8f7a;
  --color-accent: #0d9668;
  --color-accent-hover: #077a54;
  --color-accent-subtle: #c6e4d4;
  --color-teal: #0e8a7d;
}

.dark {
  --color-bg: #050a08;
  --color-surface: #0c1410;
  --color-surface-2: #131f1a;
  --color-border: #1b2f26;
  --color-text: #f0f5f2;
  --color-text-muted: #8aaa9a;
  --color-text-dim: #4d7565;
  --color-accent: #34d399;
  --color-accent-hover: #10b981;
  --color-accent-subtle: #064e3b;
  --color-teal: #2dd4bf;
}

html {
  scroll-behavior: smooth;
}

::view-transition-old(root),
::view-transition-new(root) {
  animation: none !important;
}

body {
  background-color: var(--color-bg);
  color: var(--color-text);
}

.prose h2 {
  font-size: 1.5rem;
  font-weight: 700;
  margin-top: 2.5rem;
  margin-bottom: 1rem;
  color: var(--color-text);
}

.prose h3 {
  font-size: 1.2rem;
  font-weight: 600;
  margin-top: 2rem;
  margin-bottom: 0.75rem;
  color: var(--color-teal);
}

.prose p {
  margin-bottom: 1rem;
  line-height: 1.75;
  color: var(--color-text-muted);
}

.prose ul {
  margin-bottom: 1rem;
  padding-left: 1.5rem;
  list-style-type: disc;
}

.prose ol {
  margin-bottom: 1rem;
  padding-left: 1.5rem;
  list-style-type: decimal;
}

.prose li {
  margin-bottom: 0.375rem;
  line-height: 1.65;
  color: var(--color-text-muted);
}

.prose code {
  font-family: 'SF Mono', 'Fira Code', 'JetBrains Mono', monospace;
  font-size: 0.875rem;
  background-color: var(--color-surface);
  padding: 0.125rem 0.375rem;
  border-radius: 0.25rem;
  color: var(--color-text);
}

.prose pre {
  position: relative;
  background-color: var(--color-surface) !important;
  border-radius: 0.5rem;
  padding: 1rem 1.25rem;
  overflow-x: auto;
  margin-bottom: 1.25rem;
  font-family: 'SF Mono', 'Fira Code', 'JetBrains Mono', monospace;
  font-size: 0.875rem;
  line-height: 1.7;
}

.prose pre code {
  background: none !important;
  border: none;
  padding: 0;
  color: var(--color-text);
}

.copy-code {
  all: unset;
  position: absolute;
  top: 0.75rem;
  right: 0.75rem;
  display: grid;
  place-items: center;
  width: 28px;
  height: 28px;
  border-radius: 0.25rem;
  color: var(--color-text-dim);
  background: var(--color-surface-2);
  opacity: 0;
  transition: opacity 0.15s, color 0.15s;
  cursor: pointer;
}

pre:hover .copy-code {
  opacity: 1;
}

.copy-code:hover {
  color: var(--color-accent);
}

.prose pre code span {
  color: inherit !important;
}

.prose table {
  width: 100%;
  border-collapse: collapse;
  margin-bottom: 1.5rem;
  font-size: 0.9rem;
}

.prose th {
  background-color: var(--color-surface);
  padding: 0.625rem 0.875rem;
  text-align: left;
  font-weight: 600;
  color: var(--color-text);
  border-bottom: 1px solid var(--color-border);
}

.prose td {
  padding: 0.625rem 0.875rem;
  border-bottom: 1px solid var(--color-border);
}

.prose td code {
  background-color: var(--color-surface-2);
  padding: 0.125rem 0.375rem;
  border-radius: 0.25rem;
  font-size: 0.85rem;
}

.prose tr:nth-child(even) {
  background-color: var(--color-surface);
}

.prose a {
  color: var(--color-accent);
  text-decoration: underline;
  text-underline-offset: 2px;
}

.prose a:hover {
  color: var(--color-accent-hover);
}

.prose strong {
  color: var(--color-text);
  font-weight: 600;
}

.prose hr {
  border-color: var(--color-border);
  margin: 2rem 0;
}

.prose blockquote {
  border-left: 2px solid var(--color-text-dim);
  padding-left: 1rem;
  color: var(--color-text-dim);
  font-style: italic;
  margin-bottom: 1rem;
}

.agent-entry {
  background-color: var(--color-surface);
  border-radius: 0.75rem;
  padding: 1.25rem 1.5rem;
  margin-bottom: 1rem;
}
25	website/tailwind.config.mjs	Normal file
@@ -0,0 +1,25 @@
export default {
  content: ['./src/**/*.{astro,html,js,jsx,md,mdx,svelte,ts,tsx,vue}'],
  darkMode: 'class',
  theme: {
    extend: {
      colors: {
        bg: 'var(--color-bg)',
        surface: 'var(--color-surface)',
        'surface-2': 'var(--color-surface-2)',
        border: 'var(--color-border)',
        'text-primary': 'var(--color-text)',
        'text-muted': 'var(--color-text-muted)',
        'text-dim': 'var(--color-text-dim)',
        accent: 'var(--color-accent)',
        'accent-hover': 'var(--color-accent-hover)',
        'accent-subtle': 'var(--color-accent-subtle)',
        teal: 'var(--color-teal)',
      },
      fontFamily: {
        mono: ['"SF Mono"', '"Fira Code"', '"JetBrains Mono"', 'monospace'],
      },
    },
  },
  plugins: [],
};
3	website/tsconfig.json	Normal file
@@ -0,0 +1,3 @@
{
  "extends": "astro/tsconfigs/strict"
}