30 Commits

Author SHA1 Message Date
Advait Paliwal
4ac668c50a Update edge installer and release flow 2026-03-25 01:06:11 -07:00
Advait Paliwal
8178173ff7 Use shared theme constants in help output
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 00:42:04 -07:00
Advait Paliwal
4eeccafed0 Match help output colors to Pi TUI theme
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 00:40:43 -07:00
Advait Paliwal
7024a86024 Replace Pi tool registrations with skills and CLI integration
- Remove all manually registered Pi tools (alpha_search, alpha_get_paper,
  alpha_ask_paper, alpha_annotate_paper, alpha_list_annotations,
  alpha_read_code, session_search, preview_file) and their wrappers
  (alpha.ts, preview.ts, session-search.ts, alpha-tools.test.ts)
- Add Pi skill files for alpha-research, session-search, preview,
  modal-compute, and runpod-compute in skills/
- Sync skills to ~/.feynman/agent/skills/ on startup via syncBundledAssets
- Add node_modules/.bin to Pi subprocess PATH so alpha CLI is accessible
- Add /outputs extension command to browse research artifacts via dialog
- Add Modal and RunPod as execution environments in /replicate and
  /autoresearch prompts
- Remove redundant /alpha-login /alpha-logout /alpha-status REPL commands
  (feynman alpha CLI still works)
- Update README, researcher agent, metadata, and website docs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 00:38:45 -07:00
Advait Paliwal
5fab329ad1 Fix homepage install controls and split docs install sections
Move inline script inside Layout for proper View Transitions support,
redesign install pills as connected tabs above the command bar, and
split the combined pnpm/bun docs section into separate headings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 20:50:33 -07:00
Advait Paliwal
563068180f Tighten homepage install controls 2026-03-24 20:00:31 -07:00
Advait Paliwal
8dd20935ad Simplify homepage install toggle 2026-03-24 19:44:25 -07:00
Advait Paliwal
aaa0f63bc7 Release 0.2.13 2026-03-24 19:33:02 -07:00
Advait Paliwal
79e14dd79d Fix packaged runtime startup and version flag 2026-03-24 19:32:10 -07:00
Advait Paliwal
cd85e875df Streamline install paths and runtime bootstrap 2026-03-24 19:24:04 -07:00
Advait Paliwal
3ee6ff4199 Fix release installers and package manager fallbacks 2026-03-24 19:10:21 -07:00
Advait Paliwal
762ca66a68 Add install scripts to website public dir
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 18:19:45 -07:00
Advait Paliwal
2aa4c84ce5 Re-fetch GitHub stars after ViewTransitions navigation
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 16:32:38 -07:00
Advait Paliwal
3d84624011 Add 'writes drafts' to homepage subtitle
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 16:29:52 -07:00
Advait Paliwal
6445c20e02 Fix copy button surviving ViewTransitions with astro:after-swap rebind
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 16:20:41 -07:00
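The copy-button fixes in these commits share one pattern: run the init function immediately (not on `DOMContentLoaded`, which may already have fired) and rebind after every View Transitions swap, since the swapped-in DOM has no listeners. A minimal sketch, assuming a hypothetical `data-copy` attribute holding the text to copy:

```javascript
// Bind click handlers on every element carrying a data-copy attribute.
// (data-copy is an assumed attribute name for illustration.)
function initCopyButtons(root) {
  for (const btn of root.querySelectorAll("[data-copy]")) {
    btn.addEventListener("click", () => navigator.clipboard.writeText(btn.dataset.copy ?? ""));
  }
}

// Guarded so the sketch also loads outside a browser.
if (typeof document !== "undefined") {
  initCopyButtons(document); // run now; do not wait for DOMContentLoaded
  // astro:after-swap fires after each client-side navigation replaces the DOM.
  document.addEventListener("astro:after-swap", () => initCopyButtons(document));
}
```

Rebinding on `astro:after-swap` is idempotent here because each navigation produces fresh elements, so no duplicate listeners accumulate.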
Advait Paliwal
4c0a417232 Fix homepage copy: run immediately instead of waiting for DOMContentLoaded
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 16:18:24 -07:00
Advait Paliwal
42cedd3137 Fix docs copy button: run init immediately, add astro-code styles, visible by default
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 16:17:56 -07:00
Advait Paliwal
b07b0f4197 Fix copy buttons: swap to standard copy icon, fix docs copy visibility, DOMContentLoaded guard
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 16:13:44 -07:00
Advait Paliwal
323faf56ee Fix homepage copy: consistent grammar, remove redundancy, swap Shiki to Vitesse
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 16:10:52 -07:00
Advait Paliwal
1e333ba490 Update favicon to green f on dark background
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 16:04:59 -07:00
Advait Paliwal
1dd7f30a37 Swap Shiki to Vitesse themes, fix scrollbar gutter alignment
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 16:02:06 -07:00
Advait Paliwal
17c48be4b5 Fix 404 centering, footer copyright, remove MIT text
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 15:57:46 -07:00
Advait Paliwal
8f8cf2a4a9 Rebuild website from scratch on Tailwind v4 + shadcn/ui
- Fresh Astro 5 project with Tailwind v4 and shadcn/ui olive preset
- All shadcn components installed (Card, Button, Badge, Separator, etc.)
- Homepage with hero, terminal demo, workflows, agents, sources, compute
- Full docs system with 24 markdown pages across 5 sections
- Sidebar navigation with active state highlighting
- Prose styles for markdown content using shadcn color tokens
- Dark/light theme toggle with localStorage persistence
- Shiki everforest syntax themes for code blocks
- 404 page with VT323 font
- /docs redirect to installation page
- GitHub star count fetch
- Earthy green/cream oklch color palette matching TUI theme

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 15:57:03 -07:00
Advait Paliwal
7d3fbc3f6b Rework website palette, copy, and theme support
- Align color palette with TUI Everforest theme (feynman.json)
- Rewrite homepage copy with sharper section headings and descriptions
- Fix dark/light theme toggle persistence across page navigation
- Add Shiki Everforest syntax themes for code blocks
- Fix copy-code button z-index and pointer events
- Add styled scrollbars and text selection colors
- Tighten hero image padding, remove unused public/hero.png
- Remove Modal/RunPod from site (Docker only for now)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 14:42:03 -07:00
Advait Paliwal
e651cb1f9b Fix Windows zip layout and smoke path 2026-03-24 14:37:18 -07:00
Advait Paliwal
21b8bcd4c4 Finalize remaining repo updates 2026-03-24 14:30:09 -07:00
Advait Paliwal
771b39cbba Prune packaged runtime safely 2026-03-24 14:25:50 -07:00
Advait Paliwal
b624921bad Prune packaged runtime dependencies 2026-03-24 14:19:04 -07:00
Advait Paliwal
b7d430ee15 Add spinner to Unix installer extraction 2026-03-24 13:13:09 -07:00
Advait Paliwal
54efae78e1 Show installer download and extract progress 2026-03-24 13:11:14 -07:00
94 changed files with 10540 additions and 3164 deletions


@@ -6,3 +6,7 @@ FEYNMAN_THINKING=medium
 OPENAI_API_KEY=
 ANTHROPIC_API_KEY=
+RUNPOD_API_KEY=
+MODAL_TOKEN_ID=
+MODAL_TOKEN_SECRET=


@@ -9,7 +9,7 @@ Operating rules:
 - State uncertainty explicitly.
 - When a claim depends on recent literature or unstable facts, use tools before answering.
 - When discussing papers, cite title, year, and identifier or URL when possible.
-- Use the alpha-backed research tools for academic paper search, paper reading, paper Q&A, repository inspection, and persistent annotations.
+- Use the alpha-research skill for academic paper search, paper reading, paper Q&A, repository inspection, and persistent annotations.
 - Use `web_search`, `fetch_content`, and `get_search_content` first for current topics: products, companies, markets, regulations, software releases, model availability, model pricing, benchmarks, docs, or anything phrased as latest/current/recent/today.
 - For mixed topics, combine both: use web sources for current reality and paper sources for background literature.
 - Never answer a latest/current question from arXiv or alpha-backed paper search alone.
@@ -30,7 +30,6 @@ Operating rules:
 - Use the visualization packages when a chart, diagram, or interactive widget would materially improve understanding. Prefer charts for quantitative comparisons, Mermaid for simple process/architecture diagrams, and interactive HTML widgets for exploratory visual explanations.
 - Persistent memory is package-backed. Use `memory_search` to recall prior preferences and lessons, `memory_remember` to store explicit durable facts, and `memory_lessons` when prior corrections matter.
 - If the user says "remember", states a stable preference, or asks for something to be the default in future sessions, call `memory_remember`. Do not just say you will remember it.
-- Session recall is package-backed. Use `session_search` when the user references prior work, asks what has been done before, or when you suspect relevant past context exists.
 - Feynman is intended to support always-on research work. Use the scheduling package when recurring or deferred work is appropriate instead of telling the user to remember manually.
 - Use `schedule_prompt` for recurring scans, delayed follow-ups, reminders, and periodic research jobs.
 - If the user asks you to remind, check later, run something nightly, or keep watching something over time, call `schedule_prompt`. Do not just promise to do it later.
@@ -38,11 +37,9 @@ Operating rules:
 - Prefer the smallest investigation or experiment that can materially reduce uncertainty before escalating to broader work.
 - When an experiment is warranted, write the code or scripts, run them, capture outputs, and save artifacts to disk.
 - Before pausing long-running work, update the durable state on disk first: plan artifact, `CHANGELOG.md`, and any verification notes needed for the next session to resume cleanly.
-- Before recommending an execution environment, consider the system resources shown in the header (CPU, RAM, GPU, Docker availability). Recommend Docker when isolation on the current machine helps, and say explicitly when the workload exceeds local capacity. Do not suggest GPU workloads locally if no GPU is detected.
 - Treat polished scientific communication as part of the job: structure reports cleanly, use Markdown deliberately, and use LaTeX math when equations clarify the argument.
 - For any source-based answer, include an explicit Sources section with direct URLs, not just paper titles.
 - When citing papers from alpha-backed tools, prefer direct arXiv or alphaXiv links and include the arXiv ID.
-- After writing a polished artifact, use `preview_file` only when the user wants review or export. Prefer browser preview by default; use PDF only when explicitly requested.
 - Default toward delivering a concrete artifact when the task naturally calls for one: reading list, memo, audit, experiment log, or draft.
 - For user-facing workflows, produce exactly one canonical durable Markdown artifact unless the user explicitly asks for multiple deliverables.
 - Do not create extra user-facing intermediate markdown files just because the workflow has multiple reasoning stages.


@@ -21,7 +21,7 @@ You are Feynman's evidence-gathering subagent.
 1. **Start wide.** Begin with short, broad queries to map the landscape. Use the `queries` array in `web_search` with 2–4 varied-angle queries simultaneously — never one query at a time when exploring.
 2. **Evaluate availability.** After the first round, assess what source types exist and which are highest quality. Adjust strategy accordingly.
 3. **Progressively narrow.** Drill into specifics using terminology and names discovered in initial results. Refine queries, don't repeat them.
-4. **Cross-source.** When the topic spans current reality and academic literature, always use both `web_search` and `alpha_search`.
+4. **Cross-source.** When the topic spans current reality and academic literature, always use both `web_search` and the `alpha` CLI (`alpha search`).
 Use `recencyFilter` on `web_search` for fast-moving topics. Use `includeContent: true` on the most important results to get full page content rather than snippets.


@@ -14,7 +14,6 @@ jobs:
     outputs:
       version: ${{ steps.version.outputs.version }}
       should_publish: ${{ steps.version.outputs.should_publish }}
-      should_build_release: ${{ steps.version.outputs.should_build_release }}
     steps:
       - uses: actions/checkout@v6
       - uses: actions/setup-node@v5
@@ -28,13 +27,8 @@ jobs:
           echo "version=$LOCAL" >> "$GITHUB_OUTPUT"
           if [ "$CURRENT" != "$LOCAL" ]; then
             echo "should_publish=true" >> "$GITHUB_OUTPUT"
-            echo "should_build_release=true" >> "$GITHUB_OUTPUT"
-          elif [ "${GITHUB_EVENT_NAME}" = "workflow_dispatch" ]; then
-            echo "should_publish=false" >> "$GITHUB_OUTPUT"
-            echo "should_build_release=true" >> "$GITHUB_OUTPUT"
           else
             echo "should_publish=false" >> "$GITHUB_OUTPUT"
-            echo "should_build_release=false" >> "$GITHUB_OUTPUT"
           fi
   publish-npm:
@@ -58,13 +52,12 @@ jobs:
   build-native-bundles:
     needs: version-check
-    if: needs.version-check.outputs.should_build_release == 'true'
     strategy:
       fail-fast: false
       matrix:
         include:
           - id: linux-x64
-            os: ubuntu-latest
+            os: blacksmith-4vcpu-ubuntu-2404
           - id: darwin-x64
             os: macos-15-intel
           - id: darwin-arm64
@@ -97,18 +90,59 @@ jobs:
           $tmp = Join-Path $env:RUNNER_TEMP ("feynman-smoke-" + [guid]::NewGuid().ToString("N"))
           New-Item -ItemType Directory -Path $tmp | Out-Null
           Expand-Archive -LiteralPath "dist/release/feynman-$version-win32-x64.zip" -DestinationPath $tmp -Force
-          & "$tmp/feynman-$version-win32-x64/feynman.cmd" --help | Select-Object -First 20
+          $bundleRoot = Join-Path $tmp "feynman-$version-win32-x64"
+          & (Join-Path $bundleRoot "feynman.cmd") --help | Select-Object -First 20
       - uses: actions/upload-artifact@v4
         with:
           name: native-${{ matrix.id }}
           path: dist/release/*
+  release-edge:
+    needs:
+      - version-check
+      - build-native-bundles
+    if: needs.build-native-bundles.result == 'success'
+    runs-on: blacksmith-4vcpu-ubuntu-2404
+    permissions:
+      contents: write
+    steps:
+      - uses: actions/download-artifact@v4
+        with:
+          path: release-assets
+          merge-multiple: true
+      - shell: bash
+        env:
+          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          VERSION: ${{ needs.version-check.outputs.version }}
+        run: |
+          NOTES="Rolling Feynman bundles from main for the curl/PowerShell installer."
+          if gh release view edge >/dev/null 2>&1; then
+            gh release view edge --json assets --jq '.assets[].name' | while IFS= read -r asset; do
+              [ -n "$asset" ] || continue
+              gh release delete-asset edge "$asset" --yes
+            done
+            gh release upload edge release-assets/*
+            gh release edit edge \
+              --title "edge" \
+              --notes "$NOTES" \
+              --prerelease \
+              --draft=false \
+              --target "$GITHUB_SHA"
+          else
+            gh release create edge release-assets/* \
+              --title "edge" \
+              --notes "$NOTES" \
+              --prerelease \
+              --latest=false \
+              --target "$GITHUB_SHA"
+          fi
   release-github:
     needs:
       - version-check
       - publish-npm
       - build-native-bundles
-    if: needs.version-check.outputs.should_build_release == 'true' && needs.build-native-bundles.result == 'success' && (needs.publish-npm.result == 'success' || needs.publish-npm.result == 'skipped')
+    if: needs.version-check.outputs.should_publish == 'true' && needs.build-native-bundles.result == 'success' && needs.publish-npm.result == 'success'
     runs-on: blacksmith-4vcpu-ubuntu-2404
     permissions:
       contents: write
@@ -127,7 +161,8 @@ jobs:
             gh release edit "v$VERSION" \
               --title "v$VERSION" \
               --notes "Standalone Feynman bundles for native installation." \
-              --draft=false
+              --draft=false \
+              --target "$GITHUB_SHA"
           else
             gh release create "v$VERSION" release-assets/* \
               --title "v$VERSION" \

README.md

@@ -1,44 +1,56 @@
-# Feynman
-The open source AI research agent
+<p align="center">
+  <a href="https://feynman.is">
+    <img src="assets/hero.png" alt="Feynman CLI" width="800" />
+  </a>
+</p>
+<p align="center">The open source AI research agent.</p>
+<p align="center">
+  <a href="https://feynman.is/docs"><img alt="Docs" src="https://img.shields.io/badge/docs-feynman.is-0d9668?style=flat-square" /></a>
+  <a href="https://github.com/getcompanion-ai/feynman/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/github/license/getcompanion-ai/feynman?style=flat-square" /></a>
+</p>
+---
+### Installation
 ```bash
 curl -fsSL https://feynman.is/install | bash
+# stable release channel
+curl -fsSL https://feynman.is/install | bash -s -- stable
+# package manager fallback
+pnpm add -g @companion-ai/feynman
+bun add -g @companion-ai/feynman
 ```
-```powershell
-irm https://feynman.is/install.ps1 | iex
-```
-Or install the npm fallback:
-```bash
-npm install -g @companion-ai/feynman
-```
-```bash
-feynman setup
-feynman
-```
-Feynman works directly inside your folder or repo. For long-running work, keep the stable repo contract in `AGENTS.md`, the current task brief in `outputs/.plans/`, and the chronological lab notebook in `CHANGELOG.md`.
+The one-line installer tracks the latest `main` build. Use `stable` or an exact version to pin a release. Then run `feynman setup` to configure your model and get started.
 ---
-## What you type → what happens
+### What you type → what happens
-| Prompt | Result |
-| --- | --- |
-| `feynman "what do we know about scaling laws"` | Searches papers and web, produces a cited research brief |
-| `feynman deepresearch "mechanistic interpretability"` | Multi-agent investigation with parallel researchers, synthesis, verification |
-| `feynman lit "RLHF alternatives"` | Literature review with consensus, disagreements, open questions |
-| `feynman audit 2401.12345` | Compares paper claims against the public codebase |
-| `feynman replicate "chain-of-thought improves math"` | Asks where to run, then builds a replication plan |
-| `feynman "summarize this PDF" --prompt paper.pdf` | One-shot mode, no REPL |
+```
+$ feynman "what do we know about scaling laws"
+→ Searches papers and web, produces a cited research brief
+$ feynman deepresearch "mechanistic interpretability"
+→ Multi-agent investigation with parallel researchers, synthesis, verification
+$ feynman lit "RLHF alternatives"
+→ Literature review with consensus, disagreements, open questions
+$ feynman audit 2401.12345
+→ Compares paper claims against the public codebase
+$ feynman replicate "chain-of-thought improves math"
+→ Asks where to run, then builds a replication plan
+```
 ---
-## Workflows
+### Workflows
 Ask naturally or use slash commands as shortcuts.
@@ -53,12 +65,13 @@ Ask naturally or use slash commands as shortcuts.
 | `/draft <topic>` | Paper-style draft from research findings |
 | `/autoresearch <idea>` | Autonomous experiment loop |
 | `/watch <topic>` | Recurring research watch |
+| `/outputs` | Browse all research artifacts |
 ---
-## Agents
+### Agents
-Four bundled research agents, dispatched automatically or via subagent commands.
+Four bundled research agents, dispatched automatically.
 - **Researcher** — gather evidence across papers, web, repos, docs
 - **Reviewer** — simulated peer review with severity-graded feedback
@@ -67,46 +80,29 @@ Four bundled research agents, dispatched automatically or via subagent commands.
 ---
-## Tools
+### Skills & Tools
-- **[AlphaXiv](https://www.alphaxiv.org/)** — paper search, Q&A, code reading, persistent annotations
+- **[AlphaXiv](https://www.alphaxiv.org/)** — paper search, Q&A, code reading, annotations (via `alpha` CLI)
 - **Docker** — isolated container execution for safe experiments on your machine
-- **Web search** — Gemini or Perplexity, zero-config default via signed-in Chromium
+- **Web search** — Gemini or Perplexity, zero-config default
-- **Session search** — optional indexed recall across prior research sessions
+- **Session search** — indexed recall across prior research sessions
 - **Preview** — browser and PDF export of generated artifacts
+- **Modal** — serverless GPU compute for burst training and inference
+- **RunPod** — persistent GPU pods with SSH access for long-running experiments
 ---
-## CLI
+### How it works
-```bash
-feynman                             # REPL
-feynman setup                       # guided setup
-feynman doctor                      # diagnose everything
-feynman status                      # current config summary
-feynman model login [provider]      # model auth
-feynman model set <provider/model>  # set default model
-feynman alpha login                 # alphaXiv auth
-feynman packages list               # core vs optional packages
-feynman packages install memory     # opt into heavier packages on demand
-feynman search status               # web search config
-```
+Built on [Pi](https://github.com/badlogic/pi-mono) for the agent runtime, [alphaXiv](https://www.alphaxiv.org/) for paper search and analysis, and CLI tools for compute and execution. Capabilities are delivered as [Pi skills](https://github.com/badlogic/pi-skills) — Markdown instruction files synced to `~/.feynman/agent/skills/` on startup. Every output is source-grounded — claims link to papers, docs, or repos with direct URLs.
 ---
-## How it works
+### Contributing
-Built on [Pi](https://github.com/badlogic/pi-mono) for the agent runtime, [alphaXiv](https://www.alphaxiv.org/) for paper search and analysis, and [Docker](https://www.docker.com/) for isolated local execution
-Every output is source-grounded — claims link to papers, docs, or repos with direct URLs
----
-## Contributing
 ```bash
 git clone https://github.com/getcompanion-ai/feynman.git
-cd feynman && npm install && npm run start
+cd feynman && pnpm install && pnpm start
 ```
 [Docs](https://feynman.is/docs) · [MIT License](LICENSE)

assets/hero-raw.png: new binary file, 884 KiB (not shown)

assets/hero.png: new binary file, 2.7 MiB (not shown)


@@ -5,4 +5,5 @@ if (v[0] < 20) {
   console.error("upgrade: https://nodejs.org or nvm install 20");
   process.exit(1);
 }
-import("../dist/index.js");
+await import("../scripts/patch-embedded-pi.mjs");
+await import("../dist/index.js");


@@ -1,9 +1,8 @@
 import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
-import { registerAlphaCommands, registerAlphaTools } from "./research-tools/alpha.js";
 import { installFeynmanHeader } from "./research-tools/header.js";
 import { registerHelpCommand } from "./research-tools/help.js";
-import { registerInitCommand, registerPreviewTool, registerSessionSearchTool } from "./research-tools/project.js";
+import { registerInitCommand, registerOutputsCommand } from "./research-tools/project.js";
 export default function researchTools(pi: ExtensionAPI): void {
   const cache: { agentSummaryPromise?: Promise<{ agents: string[]; chains: string[] }> } = {};
@@ -16,10 +15,7 @@ export default function researchTools(pi: ExtensionAPI): void {
     await installFeynmanHeader(pi, ctx, cache);
   });
-  registerAlphaCommands(pi);
   registerHelpCommand(pi);
   registerInitCommand(pi);
-  registerSessionSearchTool(pi);
+  registerOutputsCommand(pi);
-  registerAlphaTools(pi);
-  registerPreviewTool(pi);
 }


@@ -1,212 +0,0 @@
import {
  annotatePaper,
  askPaper,
  clearPaperAnnotation,
  disconnect,
  getPaper,
  getUserName as getAlphaUserName,
  isLoggedIn as isAlphaLoggedIn,
  listPaperAnnotations,
  login as loginAlpha,
  logout as logoutAlpha,
  readPaperCode,
  searchPapers,
} from "@companion-ai/alpha-hub/lib";
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
import { Type } from "@sinclair/typebox";
import { getExtensionCommandSpec } from "../../metadata/commands.mjs";
import { formatToolText } from "./shared.js";

export function registerAlphaCommands(pi: ExtensionAPI): void {
  pi.registerCommand("alpha-login", {
    description: getExtensionCommandSpec("alpha-login")?.description ?? "Sign in to alphaXiv from inside Feynman.",
    handler: async (_args, ctx) => {
      if (isAlphaLoggedIn()) {
        const name = getAlphaUserName();
        ctx.ui.notify(name ? `alphaXiv already connected as ${name}` : "alphaXiv already connected", "info");
        return;
      }
      await loginAlpha();
      const name = getAlphaUserName();
      ctx.ui.notify(name ? `alphaXiv connected as ${name}` : "alphaXiv login complete", "info");
    },
  });
  pi.registerCommand("alpha-logout", {
    description: getExtensionCommandSpec("alpha-logout")?.description ?? "Clear alphaXiv auth from inside Feynman.",
    handler: async (_args, ctx) => {
      logoutAlpha();
      ctx.ui.notify("alphaXiv auth cleared", "info");
    },
  });
  pi.registerCommand("alpha-status", {
    description: getExtensionCommandSpec("alpha-status")?.description ?? "Show alphaXiv authentication status.",
    handler: async (_args, ctx) => {
      if (!isAlphaLoggedIn()) {
        ctx.ui.notify("alphaXiv not connected", "warning");
        return;
      }
      const name = getAlphaUserName();
      ctx.ui.notify(name ? `alphaXiv connected as ${name}` : "alphaXiv connected", "info");
    },
  });
}

export function registerAlphaTools(pi: ExtensionAPI): void {
  pi.registerTool({
    name: "alpha_search",
    label: "Alpha Search",
    description: "Search papers through alphaXiv using semantic, keyword, both, agentic, or all retrieval modes.",
    parameters: Type.Object({
      query: Type.String({ description: "Paper search query." }),
      mode: Type.Optional(
        Type.String({
          description: "Search mode: semantic, keyword, both, agentic, or all.",
        }),
      ),
    }),
    async execute(_toolCallId, params) {
      try {
        const result = await searchPapers(params.query, params.mode?.trim() || "all");
        return {
          content: [{ type: "text", text: formatToolText(result) }],
          details: result,
        };
      } finally {
        await disconnect();
      }
    },
  });
  pi.registerTool({
    name: "alpha_get_paper",
    label: "Alpha Get Paper",
    description: "Fetch a paper report or full text, plus any local annotation, using alphaXiv.",
    parameters: Type.Object({
      paper: Type.String({
        description: "arXiv ID, arXiv URL, or alphaXiv URL.",
      }),
      fullText: Type.Optional(
        Type.Boolean({
          description: "Return raw full text instead of the AI report.",
        }),
      ),
    }),
    async execute(_toolCallId, params) {
      try {
        const result = await getPaper(params.paper, { fullText: params.fullText });
        return {
          content: [{ type: "text", text: formatToolText(result) }],
          details: result,
        };
      } finally {
        await disconnect();
      }
    },
  });
  pi.registerTool({
    name: "alpha_ask_paper",
    label: "Alpha Ask Paper",
    description: "Ask a targeted question about a paper using alphaXiv's PDF analysis.",
    parameters: Type.Object({
      paper: Type.String({
        description: "arXiv ID, arXiv URL, or alphaXiv URL.",
      }),
      question: Type.String({
        description: "Question to ask about the paper.",
      }),
    }),
    async execute(_toolCallId, params) {
      try {
        const result = await askPaper(params.paper, params.question);
        return {
          content: [{ type: "text", text: formatToolText(result) }],
          details: result,
        };
      } finally {
        await disconnect();
      }
    },
  });
  pi.registerTool({
    name: "alpha_annotate_paper",
    label: "Alpha Annotate Paper",
    description: "Write or clear a persistent local annotation for a paper.",
    parameters: Type.Object({
      paper: Type.String({
        description: "Paper ID to annotate.",
      }),
      note: Type.Optional(
        Type.String({
          description: "Annotation text. Omit when clear=true.",
        }),
      ),
      clear: Type.Optional(
        Type.Boolean({
          description: "Clear the existing annotation instead of writing one.",
        }),
      ),
    }),
    async execute(_toolCallId, params) {
      const result = params.clear
        ? await clearPaperAnnotation(params.paper)
        : params.note
          ? await annotatePaper(params.paper, params.note)
          : (() => {
              throw new Error("Provide either note or clear=true.");
            })();
      return {
        content: [{ type: "text", text: formatToolText(result) }],
        details: result,
      };
    },
  });
  pi.registerTool({
    name: "alpha_list_annotations",
    label: "Alpha List Annotations",
    description: "List all persistent local paper annotations.",
    parameters: Type.Object({}),
    async execute() {
      const result = await listPaperAnnotations();
      return {
        content: [{ type: "text", text: formatToolText(result) }],
        details: result,
      };
    },
  });
  pi.registerTool({
    name: "alpha_read_code",
    label: "Alpha Read Code",
    description: "Read files from a paper's GitHub repository through alphaXiv.",
    parameters: Type.Object({
      githubUrl: Type.String({
        description: "GitHub repository URL for the paper implementation.",
      }),
      path: Type.Optional(
        Type.String({
          description: "Repository path to inspect. Use / for the repo overview.",
        }),
      ),
    }),
    async execute(_toolCallId, params) {
      try {
        const result = await readPaperCode(params.githubUrl, params.path?.trim() || "/");
        return {
          content: [{ type: "text", text: formatToolText(result) }],
          details: result,
        };
      } finally {
        await disconnect();
      }
    },
  });
}

View File

@@ -1,183 +0,0 @@
import { execFile, spawn } from "node:child_process";
import { mkdir, mkdtemp, readFile, stat, writeFile } from "node:fs/promises";
import { tmpdir } from "node:os";
import { basename, dirname, extname, join } from "node:path";
import { pathToFileURL } from "node:url";
import { promisify } from "node:util";
const execFileAsync = promisify(execFile);
function isMarkdownPath(path: string): boolean {
return [".md", ".markdown", ".txt"].includes(extname(path).toLowerCase());
}
function isLatexPath(path: string): boolean {
return extname(path).toLowerCase() === ".tex";
}
function wrapCodeAsMarkdown(source: string, filePath: string): string {
const language = extname(filePath).replace(/^\./, "") || "text";
return `# ${basename(filePath)}\n\n\`\`\`${language}\n${source}\n\`\`\`\n`;
}
export async function openWithDefaultApp(targetPath: string): Promise<void> {
const target = pathToFileURL(targetPath).href;
if (process.platform === "darwin") {
await execFileAsync("open", [target]);
return;
}
if (process.platform === "win32") {
await execFileAsync("cmd", ["/c", "start", "", target]);
return;
}
await execFileAsync("xdg-open", [target]);
}
async function runCommandWithInput(
command: string,
args: string[],
input: string,
): Promise<{ stdout: string; stderr: string }> {
return await new Promise((resolve, reject) => {
const child = spawn(command, args, { stdio: ["pipe", "pipe", "pipe"] });
const stdoutChunks: Buffer[] = [];
const stderrChunks: Buffer[] = [];
child.stdout.on("data", (chunk: Buffer | string) => {
stdoutChunks.push(typeof chunk === "string" ? Buffer.from(chunk) : chunk);
});
child.stderr.on("data", (chunk: Buffer | string) => {
stderrChunks.push(typeof chunk === "string" ? Buffer.from(chunk) : chunk);
});
child.once("error", reject);
child.once("close", (code) => {
const stdout = Buffer.concat(stdoutChunks).toString("utf8");
const stderr = Buffer.concat(stderrChunks).toString("utf8");
if (code === 0) {
resolve({ stdout, stderr });
return;
}
reject(new Error(`${command} failed with exit code ${code}${stderr ? `: ${stderr.trim()}` : ""}`));
});
child.stdin.end(input);
});
}
export async function renderHtmlPreview(filePath: string): Promise<string> {
const source = await readFile(filePath, "utf8");
const pandocCommand = process.env.PANDOC_PATH?.trim() || "pandoc";
const inputFormat = isLatexPath(filePath)
? "latex"
: "markdown+lists_without_preceding_blankline+tex_math_dollars+autolink_bare_uris-raw_html";
const markdown = isLatexPath(filePath) || isMarkdownPath(filePath) ? source : wrapCodeAsMarkdown(source, filePath);
const args = ["-f", inputFormat, "-t", "html5", "--mathml", "--wrap=none", `--resource-path=${dirname(filePath)}`];
const { stdout } = await runCommandWithInput(pandocCommand, args, markdown);
const html = `<!doctype html><html><head><meta charset="utf-8" /><base href="${pathToFileURL(dirname(filePath) + "/").href}" /><title>${basename(filePath)}</title><style>
:root{
--bg:#faf7f2;
--paper:#fffdf9;
--border:#d7cec1;
--text:#1f1c18;
--muted:#6c645a;
--code:#f3eee6;
--link:#0f6d8c;
--quote:#8b7f70;
}
@media (prefers-color-scheme: dark){
:root{
--bg:#161311;
--paper:#1d1916;
--border:#3b342d;
--text:#ebe3d6;
--muted:#b4ab9f;
--code:#221d19;
--link:#8ac6d6;
--quote:#a89d8f;
}
}
body{
font-family:Charter,"Iowan Old Style","Palatino Linotype","Book Antiqua",Palatino,Georgia,serif;
margin:0;
background:var(--bg);
color:var(--text);
line-height:1.7;
}
main{
max-width:900px;
margin:2rem auto 4rem;
padding:2.5rem 3rem;
background:var(--paper);
border:1px solid var(--border);
border-radius:18px;
box-shadow:0 12px 40px rgba(0,0,0,.06);
}
h1,h2,h3,h4,h5,h6{
font-family:"Helvetica Neue",Helvetica,Arial,sans-serif;
line-height:1.2;
margin-top:1.5em;
}
h1{font-size:2.2rem;border-bottom:1px solid var(--border);padding-bottom:.35rem;}
h2{font-size:1.6rem;border-bottom:1px solid var(--border);padding-bottom:.25rem;}
p,ul,ol,blockquote,table{margin:1rem 0;}
pre,code{font-family:ui-monospace,SFMono-Regular,Menlo,monospace}
pre{
background:var(--code);
border:1px solid var(--border);
border-radius:12px;
padding:1rem 1.1rem;
overflow:auto;
}
code{
background:var(--code);
padding:.12rem .28rem;
border-radius:6px;
}
a{color:var(--link);text-decoration:none}
a:hover{text-decoration:underline}
img{max-width:100%}
blockquote{
border-left:4px solid var(--border);
padding-left:1rem;
color:var(--quote);
}
table{border-collapse:collapse;width:100%}
th,td{border:1px solid var(--border);padding:.55rem .7rem;text-align:left}
</style></head><body><main>${stdout}</main></body></html>`;
const tempDir = await mkdtemp(join(tmpdir(), "feynman-preview-"));
const htmlPath = join(tempDir, `${basename(filePath)}.html`);
await writeFile(htmlPath, html, "utf8");
return htmlPath;
}
export async function renderPdfPreview(filePath: string): Promise<string> {
const source = await readFile(filePath, "utf8");
const pandocCommand = process.env.PANDOC_PATH?.trim() || "pandoc";
const pdfEngine = process.env.PANDOC_PDF_ENGINE?.trim() || "xelatex";
const inputFormat = isLatexPath(filePath)
? "latex"
: "markdown+lists_without_preceding_blankline+tex_math_dollars+autolink_bare_uris-raw_html";
const markdown = isLatexPath(filePath) || isMarkdownPath(filePath) ? source : wrapCodeAsMarkdown(source, filePath);
const tempDir = await mkdtemp(join(tmpdir(), "feynman-preview-"));
const pdfPath = join(tempDir, `${basename(filePath)}.pdf`);
const args = [
"-f",
inputFormat,
"-o",
pdfPath,
`--pdf-engine=${pdfEngine}`,
`--resource-path=${dirname(filePath)}`,
];
await runCommandWithInput(pandocCommand, args, markdown);
return pdfPath;
}
export async function pathExists(path: string): Promise<boolean> {
try {
await stat(path);
return true;
} catch {
return false;
}
}
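The preview module above routes non-markdown, non-LaTeX sources through `wrapCodeAsMarkdown` before handing them to pandoc, so arbitrary code files render as a titled fenced block. A small self-contained sketch (function body copied from the module above; the example file names are made up):

```typescript
import { basename, extname } from "node:path";

// Wrap an arbitrary source file as a markdown document: H1 title from the
// file name, then a fenced code block tagged with the file extension
// (falling back to "text" for extensionless files).
function wrapCodeAsMarkdown(source: string, filePath: string): string {
  const language = extname(filePath).replace(/^\./, "") || "text";
  return `# ${basename(filePath)}\n\n\`\`\`${language}\n${source}\n\`\`\`\n`;
}

console.log(wrapCodeAsMarkdown("print(1)", "demo.py"));
```

With this wrapping in place, the same pandoc markdown pipeline handles `.py`, `.json`, or extensionless files without a separate code path.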

View File

@@ -1,14 +1,70 @@
-import { mkdir, stat, writeFile } from "node:fs/promises";
-import { dirname, resolve as resolvePath } from "node:path";
+import { mkdir, readdir, readFile, stat, writeFile } from "node:fs/promises";
+import { join, relative, resolve as resolvePath } from "node:path";
 import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
-import { Type } from "@sinclair/typebox";
 import { getExtensionCommandSpec } from "../../metadata/commands.mjs";
-import { renderHtmlPreview, renderPdfPreview, openWithDefaultApp, pathExists } from "./preview.js";
 import { buildProjectAgentsTemplate, buildSessionLogsReadme } from "./project-scaffold.js";
-import { formatToolText } from "./shared.js";
-import { searchSessionTranscripts } from "./session-search.js";
+async function pathExists(path: string): Promise<boolean> {
+  try {
+    await stat(path);
+    return true;
+  } catch {
+    return false;
+  }
+}
+const ARTIFACT_DIRS = ["papers", "outputs", "experiments", "notes"];
+const ARTIFACT_EXTS = new Set([".md", ".tex", ".pdf", ".py", ".csv", ".json", ".html", ".txt", ".log"]);
+async function collectArtifacts(cwd: string): Promise<{ label: string; path: string }[]> {
+  const items: { label: string; path: string; mtime: number }[] = [];
+  for (const dir of ARTIFACT_DIRS) {
+    const dirPath = resolvePath(cwd, dir);
+    if (!(await pathExists(dirPath))) continue;
+    const walk = async (current: string): Promise<void> => {
+      let entries;
+      try {
+        entries = await readdir(current, { withFileTypes: true });
+      } catch {
+        return;
+      }
+      for (const entry of entries) {
+        const full = join(current, entry.name);
+        if (entry.isDirectory()) {
+          await walk(full);
+        } else if (ARTIFACT_EXTS.has(entry.name.slice(entry.name.lastIndexOf(".")))) {
+          const rel = relative(cwd, full);
+          let title = "";
+          try {
+            const head = await readFile(full, "utf8").then((c) => c.slice(0, 200));
+            const match = head.match(/^#\s+(.+)/m);
+            if (match) title = match[1]!.trim();
+          } catch {}
+          const info = await stat(full).catch(() => null);
+          const mtime = info?.mtimeMs ?? 0;
+          const size = info ? formatSize(info.size) : "";
+          const titlePart = title ? `${title}` : "";
+          items.push({ label: `${rel}${titlePart} (${size})`, path: rel, mtime });
+        }
+      }
+    };
+    await walk(dirPath);
+  }
+  items.sort((a, b) => b.mtime - a.mtime);
+  return items;
+}
+function formatSize(bytes: number): string {
+  if (bytes < 1024) return `${bytes}B`;
+  if (bytes < 1024 * 1024) return `${Math.round(bytes / 1024)}KB`;
+  return `${(bytes / (1024 * 1024)).toFixed(1)}MB`;
+}
 export function registerInitCommand(pi: ExtensionAPI): void {
   pi.registerCommand("init", {
@@ -45,73 +101,23 @@ export function registerInitCommand(pi: ExtensionAPI): void {
   });
 }
-export function registerSessionSearchTool(pi: ExtensionAPI): void {
-  pi.registerTool({
-    name: "session_search",
-    label: "Session Search",
-    description: "Search prior Feynman session transcripts to recover what was done, said, or written before.",
-    parameters: Type.Object({
-      query: Type.String({
-        description: "Search query to look for in past sessions.",
-      }),
-      limit: Type.Optional(
-        Type.Number({
-          description: "Maximum number of sessions to return. Defaults to 3.",
-        }),
-      ),
-    }),
-    async execute(_toolCallId, params) {
-      const result = await searchSessionTranscripts(params.query, Math.max(1, Math.min(params.limit ?? 3, 8)));
-      return {
-        content: [{ type: "text", text: formatToolText(result) }],
-        details: result,
-      };
-    },
-  });
-}
-export function registerPreviewTool(pi: ExtensionAPI): void {
-  pi.registerTool({
-    name: "preview_file",
-    label: "Preview File",
-    description: "Open a markdown, LaTeX, PDF, or code artifact in the browser or a PDF viewer for human review. Rendered HTML/PDF previews are temporary and do not replace the source artifact.",
-    parameters: Type.Object({
-      path: Type.String({
-        description: "Path to the file to preview.",
-      }),
-      target: Type.Optional(
-        Type.String({
-          description: "Preview target: browser or pdf. Defaults to browser.",
-        }),
-      ),
-    }),
-    async execute(_toolCallId, params, _signal, _onUpdate, ctx) {
-      const target = (params.target?.trim().toLowerCase() || "browser");
-      if (target !== "browser" && target !== "pdf") {
-        throw new Error("target must be browser or pdf");
-      }
-      const resolvedPath = resolvePath(ctx.cwd, params.path);
-      const openedPath =
-        resolvePath(resolvedPath).toLowerCase().endsWith(".pdf") && target === "pdf"
-          ? resolvedPath
-          : target === "pdf"
-            ? await renderPdfPreview(resolvedPath)
-            : await renderHtmlPreview(resolvedPath);
-      await mkdir(dirname(openedPath), { recursive: true }).catch(() => {});
-      await openWithDefaultApp(openedPath);
-      const result = {
-        sourcePath: resolvedPath,
-        target,
-        openedPath,
-        temporaryPreview: openedPath !== resolvedPath,
-      };
-      return {
-        content: [{ type: "text", text: formatToolText(result) }],
-        details: result,
-      };
-    },
-  });
-}
+export function registerOutputsCommand(pi: ExtensionAPI): void {
+  pi.registerCommand("outputs", {
+    description: "Browse all research artifacts (papers, outputs, experiments, notes).",
+    handler: async (_args, ctx) => {
+      const items = await collectArtifacts(ctx.cwd);
+      if (items.length === 0) {
+        ctx.ui.notify("No artifacts found. Use /lit, /draft, /review, or /deepresearch to create some.", "info");
+        return;
+      }
+      const selected = await ctx.ui.select(`Artifacts (${items.length})`, items.map((i) => i.label));
+      if (!selected) return;
+      const match = items.find((i) => i.label === selected);
+      if (match) {
+        ctx.ui.setEditorText(`read ${match.path}`);
+      }
     },
   });
 }
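The new `/outputs` command labels each artifact as `path (size)`, where the size string comes from the `formatSize` helper added in this diff. A minimal standalone sketch (the function body is copied from the diff above; the surrounding script is illustrative only):

```typescript
// Sketch of the formatSize helper used in /outputs artifact labels.
// Bytes below 1 KiB print as-is, below 1 MiB as rounded KB, else as MB
// with one decimal place.
function formatSize(bytes: number): string {
  if (bytes < 1024) return `${bytes}B`;
  if (bytes < 1024 * 1024) return `${Math.round(bytes / 1024)}KB`;
  return `${(bytes / (1024 * 1024)).toFixed(1)}MB`;
}

console.log(formatSize(512)); // 512B
console.log(formatSize(2048)); // 2KB
console.log(formatSize(1536 * 1024)); // 1.5MB
```

Note the KB branch rounds rather than truncates, so a 1535-byte file shows as `1KB` while a 1536-byte file shows as `2KB`.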

View File

@@ -1,223 +0,0 @@
import { readdir, readFile, stat } from "node:fs/promises";
import { basename, join } from "node:path";
import { pathToFileURL } from "node:url";
import { getFeynmanHome } from "./shared.js";
function extractMessageText(message: unknown): string {
if (!message || typeof message !== "object") {
return "";
}
const content = (message as { content?: unknown }).content;
if (typeof content === "string") {
return content;
}
if (!Array.isArray(content)) {
return "";
}
return content
.map((item) => {
if (!item || typeof item !== "object") {
return "";
}
const record = item as { type?: string; text?: unknown; arguments?: unknown; name?: unknown };
if (record.type === "text" && typeof record.text === "string") {
return record.text;
}
if (record.type === "toolCall") {
const name = typeof record.name === "string" ? record.name : "tool";
const args =
typeof record.arguments === "string"
? record.arguments
: record.arguments
? JSON.stringify(record.arguments)
: "";
return `[tool:${name}] ${args}`;
}
return "";
})
.filter(Boolean)
.join("\n");
}
function buildExcerpt(text: string, query: string, radius = 180): string {
const normalizedText = text.replace(/\s+/g, " ").trim();
if (!normalizedText) {
return "";
}
const lower = normalizedText.toLowerCase();
const q = query.toLowerCase();
const index = lower.indexOf(q);
if (index === -1) {
return normalizedText.slice(0, radius * 2) + (normalizedText.length > radius * 2 ? "..." : "");
}
const start = Math.max(0, index - radius);
const end = Math.min(normalizedText.length, index + q.length + radius);
const prefix = start > 0 ? "..." : "";
const suffix = end < normalizedText.length ? "..." : "";
return `${prefix}${normalizedText.slice(start, end)}${suffix}`;
}
export async function searchSessionTranscripts(query: string, limit: number): Promise<{
query: string;
results: Array<{
sessionId: string;
sessionFile: string;
startedAt?: string;
cwd?: string;
matchCount: number;
topMatches: Array<{ role: string; timestamp?: string; excerpt: string }>;
}>;
}> {
const packageRoot = process.env.FEYNMAN_PI_NPM_ROOT;
if (packageRoot) {
try {
const indexerPath = pathToFileURL(
join(packageRoot, "@kaiserlich-dev", "pi-session-search", "extensions", "indexer.ts"),
).href;
const indexer = await import(indexerPath) as {
updateIndex?: (onProgress?: (msg: string) => void) => Promise<number>;
search?: (query: string, limit?: number) => Array<{
sessionPath: string;
project: string;
timestamp: string;
snippet: string;
rank: number;
title: string | null;
}>;
getSessionSnippets?: (sessionPath: string, query: string, limit?: number) => string[];
};
await indexer.updateIndex?.();
const results = indexer.search?.(query, limit) ?? [];
if (results.length > 0) {
return {
query,
results: results.map((result) => ({
sessionId: basename(result.sessionPath),
sessionFile: result.sessionPath,
startedAt: result.timestamp,
cwd: result.project,
matchCount: 1,
topMatches: (indexer.getSessionSnippets?.(result.sessionPath, query, 4) ?? [result.snippet])
.filter(Boolean)
.map((excerpt) => ({
role: "match",
excerpt,
})),
})),
};
}
} catch {
// Fall back to direct JSONL scanning below.
}
}
const sessionDir = join(getFeynmanHome(), "sessions");
const terms = query
.toLowerCase()
.split(/\s+/)
.map((term) => term.trim())
.filter((term) => term.length >= 2);
const needle = query.toLowerCase();
let files: string[] = [];
try {
files = (await readdir(sessionDir))
.filter((entry) => entry.endsWith(".jsonl"))
.map((entry) => join(sessionDir, entry));
} catch {
return { query, results: [] };
}
const sessions = [];
for (const file of files) {
const raw = await readFile(file, "utf8").catch(() => "");
if (!raw) {
continue;
}
let sessionId = basename(file);
let startedAt: string | undefined;
let cwd: string | undefined;
const matches: Array<{ role: string; timestamp?: string; excerpt: string }> = [];
for (const line of raw.split("\n")) {
if (!line.trim()) {
continue;
}
try {
const record = JSON.parse(line) as {
type?: string;
id?: string;
timestamp?: string;
cwd?: string;
message?: { role?: string; content?: unknown };
};
if (record.type === "session") {
sessionId = record.id ?? sessionId;
startedAt = record.timestamp;
cwd = record.cwd;
continue;
}
if (record.type !== "message" || !record.message) {
continue;
}
const text = extractMessageText(record.message);
if (!text) {
continue;
}
const lower = text.toLowerCase();
const matched = lower.includes(needle) || terms.some((term) => lower.includes(term));
if (!matched) {
continue;
}
matches.push({
role: record.message.role ?? "unknown",
timestamp: record.timestamp,
excerpt: buildExcerpt(text, query),
});
} catch {
continue;
}
}
if (matches.length === 0) {
continue;
}
let mtime = 0;
try {
mtime = (await stat(file)).mtimeMs;
} catch {
mtime = 0;
}
sessions.push({
sessionId,
sessionFile: file,
startedAt,
cwd,
matchCount: matches.length,
topMatches: matches.slice(0, 4),
mtime,
});
}
sessions.sort((a, b) => {
if (b.matchCount !== a.matchCount) {
return b.matchCount - a.matchCount;
}
return b.mtime - a.mtime;
});
return {
query,
results: sessions.slice(0, limit).map(({ mtime: _mtime, ...session }) => session),
};
}
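The fallback JSONL scanner above reports each hit as an excerpt centered on the first occurrence of the query. A standalone sketch of that logic (the `buildExcerpt` body is copied from the removed module; the sample input is invented for illustration):

```typescript
// Sketch of buildExcerpt from the removed session-search.ts fallback scanner.
// Collapses whitespace, then returns roughly `radius` characters on either
// side of the first case-insensitive match, with "..." where text was cut;
// if the query is absent, it returns the head of the text instead.
function buildExcerpt(text: string, query: string, radius = 180): string {
  const normalizedText = text.replace(/\s+/g, " ").trim();
  if (!normalizedText) {
    return "";
  }
  const lower = normalizedText.toLowerCase();
  const q = query.toLowerCase();
  const index = lower.indexOf(q);
  if (index === -1) {
    return normalizedText.slice(0, radius * 2) + (normalizedText.length > radius * 2 ? "..." : "");
  }
  const start = Math.max(0, index - radius);
  const end = Math.min(normalizedText.length, index + q.length + radius);
  const prefix = start > 0 ? "..." : "";
  const suffix = end < normalizedText.length ? "..." : "";
  return `${prefix}${normalizedText.slice(start, end)}${suffix}`;
}

console.log(buildExcerpt("ran  the\nablation   sweep", "ablation", 4)); // ...the ablation swe...
```

The whitespace collapse means transcripts with hard line wraps still produce single-line excerpts, which keeps the tool output compact.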

View File

@@ -1,5 +1,4 @@
 import { readFileSync } from "node:fs";
-import { homedir } from "node:os";
 import { dirname, resolve as resolvePath } from "node:path";
 import { fileURLToPath } from "node:url";
@@ -15,25 +14,3 @@ export const FEYNMAN_VERSION = (() => {
 })();
 export { FEYNMAN_ASCII_LOGO as FEYNMAN_AGENT_LOGO } from "../../logo.mjs";
-export const FEYNMAN_RESEARCH_TOOLS = [
-  "alpha_search",
-  "alpha_get_paper",
-  "alpha_ask_paper",
-  "alpha_annotate_paper",
-  "alpha_list_annotations",
-  "alpha_read_code",
-  "session_search",
-  "preview_file",
-];
-export function formatToolText(result: unknown): string {
-  return typeof result === "string" ? result : JSON.stringify(result, null, 2);
-}
-export function getFeynmanHome(): string {
-  const agentDir = process.env.FEYNMAN_CODING_AGENT_DIR ??
-    process.env.PI_CODING_AGENT_DIR ??
-    resolvePath(homedir(), ".feynman", "agent");
-  return dirname(agentDir);
-}

View File

@@ -37,9 +37,7 @@ export function readPromptSpecs(appRoot) {
 export const extensionCommandSpecs = [
   { name: "help", args: "", section: "Project & Session", description: "Show grouped Feynman commands and prefill the editor with a selected command.", publicDocs: true },
   { name: "init", args: "", section: "Project & Session", description: "Bootstrap AGENTS.md and session-log folders for a research project.", publicDocs: true },
-  { name: "alpha-login", args: "", section: "Setup", description: "Sign in to alphaXiv from inside Feynman.", publicDocs: true },
-  { name: "alpha-status", args: "", section: "Setup", description: "Show alphaXiv authentication status.", publicDocs: true },
-  { name: "alpha-logout", args: "", section: "Setup", description: "Clear alphaXiv auth from inside Feynman.", publicDocs: true },
+  { name: "outputs", args: "", section: "Project & Session", description: "Browse all research artifacts (papers, outputs, experiments, notes).", publicDocs: true },
 ];
 export const livePackageCommandGroups = [

package-lock.json generated
View File

@@ -1,13 +1,13 @@
 {
   "name": "@companion-ai/feynman",
-  "version": "0.2.12",
+  "version": "0.2.13",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "@companion-ai/feynman",
-      "version": "0.2.12",
-      "hasInstallScript": true,
+      "version": "0.2.13",
+      "license": "MIT",
       "dependencies": {
         "@companion-ai/alpha-hub": "^0.1.2",
         "@mariozechner/pi-ai": "^0.62.0",

View File

@@ -1,6 +1,6 @@
 {
   "name": "@companion-ai/feynman",
-  "version": "0.2.12",
+  "version": "0.2.13",
   "description": "Research-first CLI agent built on Pi and alphaXiv",
   "license": "MIT",
   "type": "module",
@@ -33,8 +33,7 @@
     "build": "tsc -p tsconfig.build.json",
     "build:native-bundle": "node ./scripts/build-native-bundle.mjs",
     "dev": "tsx src/index.ts",
-    "prepack": "node ./scripts/prepare-runtime-workspace.mjs",
-    "postinstall": "node ./scripts/patch-embedded-pi.mjs",
+    "prepack": "npm run build && node ./scripts/prepare-runtime-workspace.mjs",
     "start": "tsx src/index.ts",
     "start:dist": "node ./bin/feynman.js",
     "test": "node --import tsx --test --test-concurrency=1 tests/*.test.ts",
@@ -52,6 +51,9 @@
   ],
   "prompts": [
     "./prompts"
+  ],
+  "skills": [
+    "./skills"
   ]
 },
 "dependencies": {

View File

@@ -27,6 +27,8 @@ Ask the user where to run:
 - **New git branch** — create a branch so main stays clean
 - **Virtual environment** — create an isolated venv/conda env first
 - **Docker** — run experiment code inside an isolated Docker container
+- **Modal** — run on Modal's serverless GPU infrastructure. Write Modal-decorated scripts and execute with `modal run`. Best for GPU-heavy benchmarks with no persistent state between iterations. Requires `modal` CLI.
+- **RunPod** — provision a GPU pod via `runpodctl` and run iterations there over SSH. Best for experiments needing persistent state, large datasets, or SSH access between iterations. Requires `runpodctl` CLI.
 Do not proceed without a clear answer.

View File

@@ -14,6 +14,8 @@ Design a replication plan for: $@
 - **Local** — run in the current working directory
 - **Virtual environment** — create an isolated venv/conda env first
 - **Docker** — run experiment code inside an isolated Docker container
+- **Modal** — run on Modal's serverless GPU infrastructure. Write a Modal-decorated Python script and execute with `modal run <script.py>`. Best for burst GPU jobs that don't need persistent state. Requires `modal` CLI (`pip install modal && modal setup`).
+- **RunPod** — provision a GPU pod on RunPod and SSH in for execution. Use `runpodctl` to create pods, transfer files, and manage lifecycle. Best for long-running experiments or when you need SSH access and persistent storage. Requires `runpodctl` CLI and `RUNPOD_API_KEY`.
 - **Plan only** — produce the replication plan without executing
 4. **Execute** — If the user chose an execution environment, implement and run the replication steps there. Save notes, scripts, raw outputs, and results to disk in a reproducible layout. Do not call the outcome replicated unless the planned checks actually passed.
 5. **Log** — For multi-step or resumable replication work, append concise entries to `CHANGELOG.md` after meaningful progress, failed attempts, major verification outcomes, and before stopping. Record the active objective, what changed, what was checked, and the next step.

View File

@@ -136,6 +136,7 @@ function ensureBundledWorkspace() {
 }
 function copyPackageFiles(appDir) {
+  const releaseDir = resolve(appRoot, "dist", "release");
   cpSync(resolve(appRoot, "package.json"), resolve(appDir, "package.json"));
   for (const entry of packageJson.files) {
     const normalized = entry.endsWith("/") ? entry.slice(0, -1) : entry;
@@ -143,7 +144,10 @@ function copyPackageFiles(appDir) {
     if (!existsSync(source)) continue;
     const destination = resolve(appDir, normalized);
     mkdirSync(dirname(destination), { recursive: true });
-    cpSync(source, destination, { recursive: true });
+    cpSync(source, destination, {
+      recursive: true,
+      filter: (path) => path !== releaseDir && !path.startsWith(`${releaseDir}/`),
+    });
   }
   cpSync(packageLockPath, resolve(appDir, "package-lock.json"));
@@ -160,6 +164,9 @@ function installAppDependencies(appDir, stagingRoot) {
   run("npm", ["ci", "--omit=dev", "--ignore-scripts", "--no-audit", "--no-fund", "--loglevel", "error"], {
     cwd: depsDir,
   });
+  run(process.execPath, [resolve(appRoot, "scripts", "prune-runtime-deps.mjs"), depsDir], {
+    cwd: appRoot,
+  });
   cpSync(resolve(depsDir, "node_modules"), resolve(appDir, "node_modules"), { recursive: true });
 }
@@ -270,10 +277,12 @@ function packBundle(bundleRoot, target, outDir) {
   if (target.bundleExtension === "zip") {
     if (process.platform === "win32") {
+      const bundleDir = dirname(bundleRoot).replace(/'/g, "''");
+      const bundleName = basename(bundleRoot).replace(/'/g, "''");
       run("powershell", [
         "-NoProfile",
         "-Command",
-        `Compress-Archive -Path '${bundleRoot.replace(/'/g, "''")}\\*' -DestinationPath '${archivePath.replace(/'/g, "''")}' -Force`,
+        `Push-Location '${bundleDir}'; Compress-Archive -Path '${bundleName}' -DestinationPath '${archivePath.replace(/'/g, "''")}' -Force; Pop-Location`,
       ]);
     } else {
       run("zip", ["-qr", archivePath, basename(bundleRoot)], { cwd: resolve(bundleRoot, "..") });

View File

@@ -1,22 +1,75 @@
param( param(
[string]$Version = "latest" [string]$Version = "edge"
) )
$ErrorActionPreference = "Stop" $ErrorActionPreference = "Stop"
function Resolve-Version { function Normalize-Version {
param([string]$RequestedVersion) param([string]$RequestedVersion)
if ($RequestedVersion -and $RequestedVersion -ne "latest") { if (-not $RequestedVersion) {
return $RequestedVersion.TrimStart("v") return "edge"
} }
switch ($RequestedVersion.ToLowerInvariant()) {
"edge" { return "edge" }
"latest" { return "latest" }
"stable" { return "latest" }
default { return $RequestedVersion.TrimStart("v") }
}
}
function Resolve-ReleaseMetadata {
param(
[string]$RequestedVersion,
[string]$AssetTarget,
[string]$BundleExtension
)
$normalizedVersion = Normalize-Version -RequestedVersion $RequestedVersion
if ($normalizedVersion -eq "edge") {
$release = Invoke-RestMethod -Uri "https://api.github.com/repos/getcompanion-ai/feynman/releases/tags/edge"
$asset = $release.assets | Where-Object { $_.name -like "feynman-*-$AssetTarget.$BundleExtension" } | Select-Object -First 1
if (-not $asset) {
throw "Failed to resolve the latest Feynman edge bundle."
}
$archiveName = $asset.name
$suffix = ".$BundleExtension"
$bundleName = $archiveName.Substring(0, $archiveName.Length - $suffix.Length)
$resolvedVersion = $bundleName.Substring("feynman-".Length)
$resolvedVersion = $resolvedVersion.Substring(0, $resolvedVersion.Length - ("-$AssetTarget").Length)
return [PSCustomObject]@{
ResolvedVersion = $resolvedVersion
BundleName = $bundleName
ArchiveName = $archiveName
DownloadUrl = $asset.browser_download_url
}
}
if ($normalizedVersion -eq "latest") {
$release = Invoke-RestMethod -Uri "https://api.github.com/repos/getcompanion-ai/feynman/releases/latest" $release = Invoke-RestMethod -Uri "https://api.github.com/repos/getcompanion-ai/feynman/releases/latest"
if (-not $release.tag_name) { if (-not $release.tag_name) {
throw "Failed to resolve the latest Feynman release version." throw "Failed to resolve the latest Feynman release version."
} }
return $release.tag_name.TrimStart("v") $resolvedVersion = $release.tag_name.TrimStart("v")
} else {
$resolvedVersion = $normalizedVersion
}
$bundleName = "feynman-$resolvedVersion-$AssetTarget"
$archiveName = "$bundleName.$BundleExtension"
$baseUrl = if ($env:FEYNMAN_INSTALL_BASE_URL) { $env:FEYNMAN_INSTALL_BASE_URL } else { "https://github.com/getcompanion-ai/feynman/releases/download/v$resolvedVersion" }
return [PSCustomObject]@{
ResolvedVersion = $resolvedVersion
BundleName = $bundleName
ArchiveName = $archiveName
DownloadUrl = "$baseUrl/$archiveName"
}
} }
function Get-ArchSuffix { function Get-ArchSuffix {
@@ -28,12 +81,13 @@ function Get-ArchSuffix {
} }
} }
$archSuffix = Get-ArchSuffix
$assetTarget = "win32-$archSuffix"
$release = Resolve-ReleaseMetadata -RequestedVersion $Version -AssetTarget $assetTarget -BundleExtension "zip"
$resolvedVersion = $release.ResolvedVersion
$bundleName = $release.BundleName
$archiveName = $release.ArchiveName
$downloadUrl = $release.DownloadUrl
$installRoot = Join-Path $env:LOCALAPPDATA "Programs\feynman"
$installBinDir = Join-Path $installRoot "bin"
@@ -44,18 +98,36 @@ New-Item -ItemType Directory -Path $tmpDir | Out-Null
try {
$archivePath = Join-Path $tmpDir $archiveName
Write-Host "==> Downloading $archiveName"
try {
Invoke-WebRequest -Uri $downloadUrl -OutFile $archivePath
} catch {
throw @"
Failed to download $archiveName from:
$downloadUrl
The win32-$archSuffix bundle is missing from the GitHub release.
This usually means the release exists, but not all platform bundles were uploaded.
Workarounds:
- try again after the release finishes publishing
- install via pnpm instead: pnpm add -g @companion-ai/feynman
- install via bun instead: bun add -g @companion-ai/feynman
"@
}
New-Item -ItemType Directory -Path $installRoot -Force | Out-Null
if (Test-Path $bundleDir) {
Remove-Item -Recurse -Force $bundleDir
}
Write-Host "==> Extracting $archiveName"
Expand-Archive -LiteralPath $archivePath -DestinationPath $installRoot -Force
New-Item -ItemType Directory -Path $installBinDir -Force | Out-Null
$shimPath = Join-Path $installBinDir "feynman.cmd"
Write-Host "==> Linking feynman into $installBinDir"
@" @"
@echo off @echo off
"$bundleDir\feynman.cmd" %* "$bundleDir\feynman.cmd" %*


@@ -2,7 +2,7 @@
set -eu
VERSION="${1:-edge}"
INSTALL_BIN_DIR="${FEYNMAN_INSTALL_BIN_DIR:-$HOME/.local/bin}"
INSTALL_APP_DIR="${FEYNMAN_INSTALL_APP_DIR:-$HOME/.local/share/feynman}"
SKIP_PATH_UPDATE="${FEYNMAN_INSTALL_SKIP_PATH_UPDATE:-0}"
@@ -13,9 +13,51 @@ step() {
printf '==> %s\n' "$1"
}
run_with_spinner() {
label="$1"
shift
if [ ! -t 2 ]; then
step "$label"
"$@"
return
fi
"$@" &
pid=$!
frame=0
set +e
while kill -0 "$pid" 2>/dev/null; do
case "$frame" in
0) spinner='|' ;;
1) spinner='/' ;;
2) spinner='-' ;;
*) spinner='\\' ;;
esac
printf '\r==> %s %s' "$label" "$spinner" >&2
frame=$(( (frame + 1) % 4 ))
sleep 0.1
done
wait "$pid"
status=$?
set -e
printf '\r\033[2K' >&2
if [ "$status" -ne 0 ]; then
printf '==> %s failed\n' "$label" >&2
return "$status"
fi
step "$label"
}
normalize_version() {
case "$1" in
"" | edge)
printf 'edge\n'
;;
latest | stable)
printf 'latest\n'
;;
v*)
@@ -32,12 +74,20 @@ download_file() {
output="$2" output="$2"
if command -v curl >/dev/null 2>&1; then if command -v curl >/dev/null 2>&1; then
if [ -t 2 ]; then
curl -fL --progress-bar "$url" -o "$output"
else
curl -fsSL "$url" -o "$output"
fi
return
fi
if command -v wget >/dev/null 2>&1; then
if [ -t 2 ]; then
wget --show-progress -O "$output" "$url"
else
wget -q -O "$output" "$url"
fi
return
fi
@@ -110,23 +160,53 @@ require_command() {
fi
}
resolve_release_metadata() {
normalized_version="$(normalize_version "$VERSION")"
if [ "$normalized_version" = "edge" ]; then
release_json="$(download_text "https://api.github.com/repos/getcompanion-ai/feynman/releases/tags/edge")"
asset_url=""
for candidate in $(printf '%s\n' "$release_json" | sed -n 's/.*"browser_download_url":[[:space:]]*"\([^"]*\)".*/\1/p'); do
case "$candidate" in
*/feynman-*-${asset_target}.${archive_extension})
asset_url="$candidate"
break
;;
esac
done
if [ -z "$resolved" ]; then if [ -z "$asset_url" ]; then
echo "Failed to resolve the latest Feynman release version." >&2 echo "Failed to resolve the latest Feynman edge bundle." >&2
exit 1 exit 1
fi fi
printf '%s\n' "$resolved" archive_name="${asset_url##*/}"
bundle_name="${archive_name%.$archive_extension}"
resolved_version="${bundle_name#feynman-}"
resolved_version="${resolved_version%-${asset_target}}"
printf '%s\n%s\n%s\n%s\n' "$resolved_version" "$bundle_name" "$archive_name" "$asset_url"
return
fi
if [ "$normalized_version" = "latest" ]; then
release_json="$(download_text "https://api.github.com/repos/getcompanion-ai/feynman/releases/latest")"
resolved_version="$(printf '%s\n' "$release_json" | sed -n 's/.*"tag_name":[[:space:]]*"v\([^"]*\)".*/\1/p' | head -n 1)"
if [ -z "$resolved_version" ]; then
echo "Failed to resolve the latest Feynman release version." >&2
exit 1
fi
else
resolved_version="$normalized_version"
fi
bundle_name="feynman-${resolved_version}-${asset_target}"
archive_name="${bundle_name}.${archive_extension}"
download_url="${FEYNMAN_INSTALL_BASE_URL:-https://github.com/getcompanion-ai/feynman/releases/download/v${resolved_version}}/${archive_name}"
printf '%s\n%s\n%s\n%s\n' "$resolved_version" "$bundle_name" "$archive_name" "$download_url"
}
case "$(uname -s)" in case "$(uname -s)" in
@@ -158,12 +238,13 @@ esac
require_command mktemp
require_command tar
asset_target="$os-$arch"
archive_extension="tar.gz"
release_metadata="$(resolve_release_metadata)"
resolved_version="$(printf '%s\n' "$release_metadata" | sed -n '1p')"
bundle_name="$(printf '%s\n' "$release_metadata" | sed -n '2p')"
archive_name="$(printf '%s\n' "$release_metadata" | sed -n '3p')"
download_url="$(printf '%s\n' "$release_metadata" | sed -n '4p')"
step "Installing Feynman ${resolved_version} for ${asset_target}" step "Installing Feynman ${resolved_version} for ${asset_target}"
@@ -174,13 +255,29 @@ cleanup() {
trap cleanup EXIT INT TERM
archive_path="$tmp_dir/$archive_name"
step "Downloading ${archive_name}"
if ! download_file "$download_url" "$archive_path"; then
cat >&2 <<EOF
Failed to download ${archive_name} from:
${download_url}
The ${asset_target} bundle is missing from the GitHub release.
This usually means the release exists, but not all platform bundles were uploaded.
Workarounds:
- try again after the release finishes publishing
- install via pnpm instead: pnpm add -g @companion-ai/feynman
- install via bun instead: bun add -g @companion-ai/feynman
EOF
exit 1
fi
mkdir -p "$INSTALL_APP_DIR" mkdir -p "$INSTALL_APP_DIR"
rm -rf "$INSTALL_APP_DIR/$bundle_name" rm -rf "$INSTALL_APP_DIR/$bundle_name"
tar -xzf "$archive_path" -C "$INSTALL_APP_DIR" run_with_spinner "Extracting ${archive_name}" tar -xzf "$archive_path" -C "$INSTALL_APP_DIR"
mkdir -p "$INSTALL_BIN_DIR" mkdir -p "$INSTALL_BIN_DIR"
step "Linking feynman into $INSTALL_BIN_DIR"
cat >"$INSTALL_BIN_DIR/feynman" <<EOF cat >"$INSTALL_BIN_DIR/feynman" <<EOF
#!/bin/sh #!/bin/sh
set -eu set -eu


@@ -1,28 +1,40 @@
import { spawnSync } from "node:child_process";
import { existsSync, mkdirSync, readFileSync, rmSync, writeFileSync } from "node:fs";
import { createRequire } from "node:module";
import { dirname, resolve } from "node:path";
import { fileURLToPath } from "node:url";
import { FEYNMAN_LOGO_HTML } from "../logo.mjs";
const here = dirname(fileURLToPath(import.meta.url));
const appRoot = resolve(here, "..");
const appRequire = createRequire(resolve(appRoot, "package.json"));
const isGlobalInstall = process.env.npm_config_global === "true" || process.env.npm_config_location === "global";
function findPackageRoot(packageName) {
const segments = packageName.split("/");
let current = appRoot;
while (current !== dirname(current)) {
for (const candidate of [resolve(current, "node_modules", ...segments), resolve(current, ...segments)]) {
if (existsSync(resolve(candidate, "package.json"))) {
return candidate;
}
}
current = dirname(current);
}
for (const spec of [`${packageName}/dist/index.js`, `${packageName}/dist/cli.js`, packageName]) {
try {
let current = dirname(appRequire.resolve(spec));
while (current !== dirname(current)) {
if (existsSync(resolve(current, "package.json"))) {
return current;
}
current = dirname(current);
}
} catch {
continue;
}
}
return null;
}
@@ -31,15 +43,14 @@ const piTuiRoot = findPackageRoot("@mariozechner/pi-tui");
const piAiRoot = findPackageRoot("@mariozechner/pi-ai");
if (!piPackageRoot) {
console.warn("[feynman] pi-coding-agent not found, skipping Pi patches");
}
const packageJsonPath = piPackageRoot ? resolve(piPackageRoot, "package.json") : null;
const cliPath = piPackageRoot ? resolve(piPackageRoot, "dist", "cli.js") : null;
const bunCliPath = piPackageRoot ? resolve(piPackageRoot, "dist", "bun", "cli.js") : null;
const interactiveModePath = piPackageRoot ? resolve(piPackageRoot, "dist", "modes", "interactive", "interactive-mode.js") : null;
const interactiveThemePath = piPackageRoot ? resolve(piPackageRoot, "dist", "modes", "interactive", "theme", "theme.js") : null;
const editorPath = piTuiRoot ? resolve(piTuiRoot, "dist", "components", "editor.js") : null;
const workspaceRoot = resolve(appRoot, ".feynman", "npm", "node_modules");
const webAccessPath = resolve(workspaceRoot, "pi-web-access", "index.ts");
@@ -56,6 +67,61 @@ const workspaceDir = resolve(appRoot, ".feynman", "npm");
const workspacePackageJsonPath = resolve(workspaceDir, "package.json");
const workspaceArchivePath = resolve(appRoot, ".feynman", "runtime-workspace.tgz");
function createInstallCommand(packageManager, packageSpecs) {
switch (packageManager) {
case "npm":
return ["install", "--prefer-offline", "--no-audit", "--no-fund", "--loglevel", "error", ...packageSpecs];
case "pnpm":
return ["add", "--prefer-offline", "--reporter", "silent", ...packageSpecs];
case "bun":
return ["add", "--silent", ...packageSpecs];
default:
throw new Error(`Unsupported package manager: ${packageManager}`);
}
}
let cachedPackageManager = undefined;
function resolvePackageManager() {
if (cachedPackageManager !== undefined) return cachedPackageManager;
const requested = process.env.FEYNMAN_PACKAGE_MANAGER?.trim();
const candidates = requested ? [requested] : ["npm", "pnpm", "bun"];
for (const candidate of candidates) {
if (resolveExecutable(candidate)) {
cachedPackageManager = candidate;
return candidate;
}
}
cachedPackageManager = null;
return null;
}
function installWorkspacePackages(packageSpecs) {
const packageManager = resolvePackageManager();
if (!packageManager) {
process.stderr.write(
"[feynman] no supported package manager found; install npm, pnpm, or bun, or set FEYNMAN_PACKAGE_MANAGER.\n",
);
return false;
}
const result = spawnSync(packageManager, createInstallCommand(packageManager, packageSpecs), {
cwd: workspaceDir,
stdio: ["ignore", "ignore", "pipe"],
timeout: 300000,
});
if (result.status !== 0) {
if (result.stderr?.length) process.stderr.write(result.stderr);
process.stderr.write(`[feynman] ${packageManager} failed while setting up bundled packages.\n`);
return false;
}
return true;
}
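The `parsePackageName` helper patched in below strips an optional trailing `@version` from an npm package spec while keeping the `@` that opens a scoped name. A standalone sketch of that regex (the sample specs are illustrative, not from the repo):

```javascript
// Same regex as the patch: capture "@scope/name" or "name",
// dropping an optional trailing "@version" qualifier.
function parsePackageName(spec) {
  const match = spec.match(/^(@?[^@]+(?:\/[^@]+)?)(?:@.+)?$/);
  return match?.[1] ?? spec;
}

console.log(parsePackageName("@mariozechner/pi-tui@0.4.1")); // → "@mariozechner/pi-tui"
console.log(parsePackageName("left-pad@1.3.0"));             // → "left-pad"
console.log(parsePackageName("pi-web-access"));              // → "pi-web-access"
```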
function parsePackageName(spec) {
const match = spec.match(/^(@?[^@]+(?:\/[^@]+)?)(?:@.+)?$/);
return match?.[1] ?? spec;
@@ -81,17 +147,7 @@ function restorePackagedWorkspace(packageSpecs) {
}
function refreshPackagedWorkspace(packageSpecs) {
return installWorkspacePackages(packageSpecs);
}
function resolveExecutable(name, fallbackPaths = []) {
@@ -139,17 +195,13 @@ function ensurePackageWorkspace() {
process.stderr.write(`\r${frames[frame++ % frames.length]} setting up feynman... ${elapsed}s`);
}, 80);
const result = spawnSync("npm", ["install", "--prefer-offline", "--no-audit", "--no-fund", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs], { const result = installWorkspacePackages(packageSpecs);
stdio: ["ignore", "ignore", "pipe"],
timeout: 300000,
});
clearInterval(spinner); clearInterval(spinner);
const elapsed = Math.round((Date.now() - start) / 1000); const elapsed = Math.round((Date.now() - start) / 1000);
if (result.status !== 0) { if (!result) {
process.stderr.write(`\r✗ setup failed (${elapsed}s)\n`); process.stderr.write(`\r✗ setup failed (${elapsed}s)\n`);
if (result.stderr?.length) process.stderr.write(result.stderr);
} else { } else {
process.stderr.write(`\r✓ feynman ready (${elapsed}s)\n`); process.stderr.write(`\r✓ feynman ready (${elapsed}s)\n`);
} }
@@ -178,7 +230,7 @@ function ensurePandoc() {
ensurePandoc();
if (packageJsonPath && existsSync(packageJsonPath)) {
const pkg = JSON.parse(readFileSync(packageJsonPath, "utf8"));
if (pkg.piConfig?.name !== "feynman" || pkg.piConfig?.configDir !== ".feynman") {
pkg.piConfig = {
@@ -190,7 +242,7 @@ if (existsSync(packageJsonPath)) {
}
}
for (const entryPath of [cliPath, bunCliPath].filter(Boolean)) {
if (!existsSync(entryPath)) {
continue;
}
@@ -201,7 +253,7 @@ for (const entryPath of [cliPath, bunCliPath]) {
}
}
if (interactiveModePath && existsSync(interactiveModePath)) {
const interactiveModeSource = readFileSync(interactiveModePath, "utf8");
if (interactiveModeSource.includes("`π - ${sessionName} - ${cwdBasename}`")) {
writeFileSync(
@@ -214,7 +266,7 @@ if (existsSync(interactiveModePath)) {
}
}
if (interactiveThemePath && existsSync(interactiveThemePath)) {
let themeSource = readFileSync(interactiveThemePath, "utf8");
const desiredGetEditorTheme = [
"export function getEditorTheme() {",


@@ -10,6 +10,7 @@ const workspaceNodeModulesDir = resolve(workspaceDir, "node_modules");
const manifestPath = resolve(workspaceDir, ".runtime-manifest.json");
const workspacePackageJsonPath = resolve(workspaceDir, "package.json");
const workspaceArchivePath = resolve(feynmanDir, "runtime-workspace.tgz");
const PRUNE_VERSION = 3;
function readPackageSpecs() {
const settings = JSON.parse(readFileSync(settingsPath, "utf8"));
@@ -44,7 +45,8 @@ function workspaceIsCurrent(packageSpecs) {
if (
manifest.nodeAbi !== process.versions.modules ||
manifest.platform !== process.platform ||
manifest.arch !== process.arch ||
manifest.pruneVersion !== PRUNE_VERSION
) {
return false;
}
@@ -102,6 +104,7 @@ function writeManifest(packageSpecs) {
nodeVersion: process.version,
platform: process.platform,
arch: process.arch,
pruneVersion: PRUNE_VERSION,
},
null,
2,
@@ -110,6 +113,15 @@ function writeManifest(packageSpecs) {
);
}
function pruneWorkspace() {
const result = spawnSync(process.execPath, [resolve(appRoot, "scripts", "prune-runtime-deps.mjs"), workspaceDir], {
stdio: "inherit",
});
if (result.status !== 0) {
process.exit(result.status ?? 1);
}
}
function archiveIsCurrent() {
if (!existsSync(workspaceArchivePath) || !existsSync(manifestPath)) {
return false;
@@ -144,6 +156,7 @@ if (workspaceIsCurrent(packageSpecs)) {
console.log("[feynman] preparing vendored runtime workspace..."); console.log("[feynman] preparing vendored runtime workspace...");
prepareWorkspace(packageSpecs); prepareWorkspace(packageSpecs);
pruneWorkspace();
writeManifest(packageSpecs);
createWorkspaceArchive();
console.log("[feynman] vendored runtime workspace ready");


@@ -0,0 +1,131 @@
import { existsSync, readdirSync, rmSync, statSync } from "node:fs";
import { basename, join, resolve } from "node:path";
const root = resolve(process.argv[2] ?? ".");
const nodeModulesDir = resolve(root, "node_modules");
const STRIP_FILE_PATTERNS = [
/\.map$/i,
/\.d\.cts$/i,
/\.d\.ts$/i,
/^README(\..+)?\.md$/i,
/^CHANGELOG(\..+)?\.md$/i,
];
function safeStat(path) {
try {
return statSync(path);
} catch {
return null;
}
}
function removePath(path) {
rmSync(path, { recursive: true, force: true });
}
function walkAndPrune(dir) {
if (!existsSync(dir)) return;
for (const entry of readdirSync(dir, { withFileTypes: true })) {
const path = join(dir, entry.name);
const stats = entry.isSymbolicLink() ? safeStat(path) : null;
const isDirectory = entry.isDirectory() || stats?.isDirectory();
const isFile = entry.isFile() || stats?.isFile();
if (isDirectory) {
walkAndPrune(path);
continue;
}
if (isFile && STRIP_FILE_PATTERNS.some((pattern) => pattern.test(entry.name))) {
removePath(path);
}
}
}
function currentKoffiVariant() {
if (process.platform === "darwin" && process.arch === "arm64") return "darwin_arm64";
if (process.platform === "darwin" && process.arch === "x64") return "darwin_x64";
if (process.platform === "linux" && process.arch === "arm64") return "linux_arm64";
if (process.platform === "linux" && process.arch === "x64") return "linux_x64";
if (process.platform === "win32" && process.arch === "arm64") return "win32_arm64";
if (process.platform === "win32" && process.arch === "x64") return "win32_x64";
return null;
}
function pruneKoffi(nodeModulesRoot) {
const koffiRoot = join(nodeModulesRoot, "koffi");
if (!existsSync(koffiRoot)) return;
for (const dirName of ["doc", "src", "vendor"]) {
removePath(join(koffiRoot, dirName));
}
const buildRoot = join(koffiRoot, "build", "koffi");
if (!existsSync(buildRoot)) return;
const keep = currentKoffiVariant();
for (const entry of readdirSync(buildRoot, { withFileTypes: true })) {
if (entry.name === keep) continue;
removePath(join(buildRoot, entry.name));
}
}
function pruneBetterSqlite3(nodeModulesRoot) {
const pkgRoot = join(nodeModulesRoot, "better-sqlite3");
if (!existsSync(pkgRoot)) return;
removePath(join(pkgRoot, "deps"));
removePath(join(pkgRoot, "src"));
removePath(join(pkgRoot, "binding.gyp"));
const buildRoot = join(pkgRoot, "build");
const releaseRoot = join(buildRoot, "Release");
if (existsSync(releaseRoot)) {
for (const entry of readdirSync(releaseRoot, { withFileTypes: true })) {
if (entry.name === "better_sqlite3.node") continue;
removePath(join(releaseRoot, entry.name));
}
}
for (const entry of ["Makefile", "binding.Makefile", "config.gypi", "deps", "gyp-mac-tool", "test_extension.target.mk", "better_sqlite3.target.mk"]) {
removePath(join(buildRoot, entry));
}
}
function pruneLiteparse(nodeModulesRoot) {
const pkgRoot = join(nodeModulesRoot, "@llamaindex", "liteparse");
if (!existsSync(pkgRoot)) return;
if (existsSync(join(pkgRoot, "dist"))) {
removePath(join(pkgRoot, "src"));
}
}
function prunePiCodingAgent(nodeModulesRoot) {
const pkgRoot = join(nodeModulesRoot, "@mariozechner", "pi-coding-agent");
if (!existsSync(pkgRoot)) return;
removePath(join(pkgRoot, "docs"));
removePath(join(pkgRoot, "examples"));
}
function pruneMermaid(nodeModulesRoot) {
const pkgRoot = join(nodeModulesRoot, "mermaid", "dist");
if (!existsSync(pkgRoot)) return;
removePath(join(pkgRoot, "docs"));
removePath(join(pkgRoot, "tests"));
removePath(join(pkgRoot, "__mocks__"));
}
if (!existsSync(nodeModulesDir)) {
process.exit(0);
}
walkAndPrune(nodeModulesDir);
pruneKoffi(nodeModulesDir);
pruneBetterSqlite3(nodeModulesDir);
pruneLiteparse(nodeModulesDir);
prunePiCodingAgent(nodeModulesDir);
pruneMermaid(nodeModulesDir);
console.log(`[feynman] pruned runtime deps in ${basename(root)}`);


@@ -0,0 +1,42 @@
---
name: alpha-research
description: Search, read, and query research papers via the `alpha` CLI (alphaXiv-backed). Use when the user asks about academic papers, wants to find research on a topic, needs to read a specific paper, ask questions about a paper, inspect a paper's code repository, or manage paper annotations.
---
# Alpha Research CLI
Use the `alpha` CLI via bash for all paper research operations.
## Commands
| Command | Description |
|---------|-------------|
| `alpha search "<query>"` | Search papers. Modes: `--mode semantic`, `--mode keyword`, `--mode agentic` |
| `alpha get <arxiv-id-or-url>` | Fetch paper content and any local annotation |
| `alpha get --full-text <arxiv-id>` | Get raw full text instead of AI report |
| `alpha ask <arxiv-id> "<question>"` | Ask a question about a paper's PDF |
| `alpha code <github-url> [path]` | Read files from a paper's GitHub repo. Use `/` for overview |
| `alpha annotate <paper-id> "<note>"` | Save a persistent annotation on a paper |
| `alpha annotate --clear <paper-id>` | Remove an annotation |
| `alpha annotate --list` | List all annotations |
## Auth
Run `alpha login` to authenticate with alphaXiv. Check status with `alpha status`.
## Examples
```bash
alpha search "transformer scaling laws"
alpha search --mode agentic "efficient attention mechanisms for long context"
alpha get 2106.09685
alpha ask 2106.09685 "What optimizer did they use?"
alpha code https://github.com/karpathy/nanoGPT src/model.py
alpha annotate 2106.09685 "Key paper on LoRA - revisit for adapter comparison"
```
## When to use
- Academic paper search, reading, Q&A → `alpha`
- Current topics (products, releases, docs) → web search tools
- Mixed topics → combine both


@@ -0,0 +1,56 @@
---
name: modal-compute
description: Run GPU workloads on Modal's serverless infrastructure. Use when the user needs remote GPU compute for training, inference, benchmarks, or batch processing and Modal CLI is available.
---
# Modal Compute
Use the `modal` CLI for serverless GPU workloads. No pod lifecycle to manage — write a decorated Python script and run it.
## Setup
```bash
pip install modal
modal setup
```
## Commands
| Command | Description |
|---------|-------------|
| `modal run script.py` | Run a script on Modal (ephemeral) |
| `modal run --detach script.py` | Run detached (background) |
| `modal deploy script.py` | Deploy persistently |
| `modal serve script.py` | Serve with hot-reload (dev) |
| `modal shell --gpu a100` | Interactive shell with GPU |
| `modal app list` | List deployed apps |
## GPU types
`T4`, `L4`, `A10G`, `L40S`, `A100`, `A100-80GB`, `H100`, `H200`, `B200`
Multi-GPU: `"H100:4"` for 4x H100s.
## Script pattern
```python
import modal
app = modal.App("experiment")
image = modal.Image.debian_slim(python_version="3.11").pip_install("torch==2.8.0")
@app.function(gpu="A100", image=image, timeout=600)
def train():
import torch
# training code here
@app.local_entrypoint()
def main():
train.remote()
```
## When to use
- Stateless burst GPU jobs (training, inference, benchmarks)
- No persistent state needed between runs
- Check availability: `command -v modal`

skills/preview/SKILL.md Normal file

@@ -0,0 +1,27 @@
---
name: preview
description: Preview Markdown, LaTeX, PDF, or code artifacts in the browser or as PDF. Use when the user wants to review a written artifact, export a report, or view a rendered document.
---
# Preview
Use the `/preview` command to render and open artifacts.
## Commands
| Command | Description |
|---------|-------------|
| `/preview` | Preview the most recent artifact in the browser |
| `/preview --file <path>` | Preview a specific file |
| `/preview-browser` | Force browser preview |
| `/preview-pdf` | Export to PDF via pandoc + LaTeX |
| `/preview-clear-cache` | Clear rendered preview cache |
## Fallback
If the preview commands are not available, use bash:
```bash
open <file.md>      # macOS — opens in default app
open <file.pdf>     # macOS — opens in Preview
xdg-open <file>     # Linux — opens in default app
```


@@ -0,0 +1,48 @@
---
name: runpod-compute
description: Provision and manage GPU pods on RunPod for long-running experiments. Use when the user needs persistent GPU compute with SSH access, large datasets, or multi-step experiments.
---
# RunPod Compute
Use `runpodctl` CLI for persistent GPU pods with SSH access.
## Setup
```bash
brew install runpod/runpodctl/runpodctl # macOS
runpodctl config --apiKey=YOUR_KEY
```
## Commands
| Command | Description |
|---------|-------------|
| `runpodctl create pod --gpuType "NVIDIA A100 80GB PCIe" --imageName "runpod/pytorch:2.4.0-py3.11-cuda12.4.1-devel-ubuntu22.04" --name experiment` | Create a pod |
| `runpodctl get pod` | List all pods |
| `runpodctl stop pod <id>` | Stop (preserves volume) |
| `runpodctl start pod <id>` | Resume a stopped pod |
| `runpodctl remove pod <id>` | Terminate and delete |
| `runpodctl gpu list` | List available GPU types and prices |
| `runpodctl send <file>` | Transfer files to/from pods |
| `runpodctl receive <code>` | Receive transferred files |
## SSH access
```bash
ssh root@<IP> -p <PORT> -i ~/.ssh/id_ed25519
```
Get connection details from `runpodctl get pod <id>`. Pods must expose port `22/tcp`.
## GPU types
`NVIDIA GeForce RTX 4090`, `NVIDIA RTX A6000`, `NVIDIA A40`, `NVIDIA A100 80GB PCIe`, `NVIDIA H100 80GB HBM3`
## When to use
- Long-running experiments needing persistent state
- Large dataset processing
- Multi-step work with SSH access between iterations
- Always stop or remove pods after experiments
- Check availability: `command -v runpodctl`


@@ -0,0 +1,26 @@
---
name: session-search
description: Search past Feynman session transcripts to recover prior work, conversations, and research context. Use when the user references something from a previous session, asks "what did we do before", or when you suspect relevant past context exists.
---
# Session Search
Use the `/search` command to search prior Feynman sessions interactively, or search session JSONL files directly via bash.
## Interactive search
```
/search <query>
```
Opens the session search UI. Supports `resume <sessionPath>` to continue a found session.
## Direct file search
Session transcripts are stored as JSONL files in `~/.feynman/sessions/`. Each line is a JSON record with `type` (session, message, model_change) and `message.content` fields.
```bash
grep -ril "scaling laws" ~/.feynman/sessions/
```
For structured search across sessions, use the interactive `/search` command.
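When `/search` is unavailable, the grep fallback above can be tightened into a small script. This is a hypothetical sketch, not a documented API: it assumes only the JSONL record shape described above (a `type` field plus `message.content`), and the field handling is illustrative.

```javascript
// Scan a directory of session JSONL transcripts for a query string.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

function searchSessions(dir, query) {
  const needle = query.toLowerCase();
  const hits = [];
  for (const name of readdirSync(dir).filter((n) => n.endsWith(".jsonl"))) {
    for (const line of readFileSync(join(dir, name), "utf8").split("\n")) {
      if (!line.trim()) continue;
      let record;
      try {
        record = JSON.parse(line);
      } catch {
        continue; // skip malformed lines instead of aborting the scan
      }
      // message.content may be a string or structured blocks; stringify both
      const text = JSON.stringify(record.message?.content ?? "");
      if (text.toLowerCase().includes(needle)) {
        hits.push({ file: name, type: record.type });
      }
    }
  }
  return hits;
}
```

Unlike `grep -ril`, this matches only inside `message.content`, so session metadata lines don't produce false positives.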


@@ -130,6 +130,7 @@ export function syncBundledAssets(appRoot: string, agentDir: string): BootstrapS
syncManagedFiles(resolve(appRoot, ".feynman", "themes"), resolve(agentDir, "themes"), state, result);
syncManagedFiles(resolve(appRoot, ".feynman", "agents"), resolve(agentDir, "agents"), state, result);
syncManagedFiles(resolve(appRoot, "skills"), resolve(agentDir, "skills"), state, result);
writeBootstrapState(statePath, state);
return result;


@@ -29,7 +29,7 @@ import { printSearchStatus } from "./search/commands.js";
import { runDoctor, runStatus } from "./setup/doctor.js";
import { setupPreviewDependencies } from "./setup/preview.js";
import { runSetup } from "./setup/setup.js";
import { ASH, printAsciiHeader, printInfo, printPanel, printSection, RESET, SAGE } from "./ui/terminal.js";
import {
cliCommandSections,
formatCliWorkflowUsage,
@@ -43,7 +43,7 @@ const TOP_LEVEL_COMMANDS = new Set(topLevelCommandNames);
function printHelpLine(usage: string, description: string): void { function printHelpLine(usage: string, description: string): void {
const width = 30; const width = 30;
const padding = Math.max(1, width - usage.length); const padding = Math.max(1, width - usage.length);
printInfo(`${usage}${" ".repeat(padding)}${description}`); console.log(` ${SAGE}${usage}${RESET}${" ".repeat(padding)}${ASH}${description}${RESET}`);
} }
function printHelp(appRoot: string): void { function printHelp(appRoot: string): void {
@@ -293,6 +293,7 @@ export async function main(): Promise<void> {
cwd: { type: "string" }, cwd: { type: "string" },
doctor: { type: "boolean" }, doctor: { type: "boolean" },
help: { type: "boolean" }, help: { type: "boolean" },
version: { type: "boolean" },
"alpha-login": { type: "boolean" }, "alpha-login": { type: "boolean" },
"alpha-logout": { type: "boolean" }, "alpha-logout": { type: "boolean" },
"alpha-status": { type: "boolean" }, "alpha-status": { type: "boolean" },
@@ -310,6 +311,14 @@ export async function main(): Promise<void> {
return; return;
} }
if (values.version) {
if (feynmanVersion) {
console.log(feynmanVersion);
return;
}
throw new Error("Unable to determine the installed Feynman version.");
}
const workingDir = resolve(values.cwd ?? process.cwd()); const workingDir = resolve(values.cwd ?? process.cwd());
const sessionDir = resolve(values["session-dir"] ?? getDefaultSessionDir(feynmanHome)); const sessionDir = resolve(values["session-dir"] ?? getDefaultSessionDir(feynmanHome));
const feynmanSettingsPath = resolve(feynmanAgentDir, "settings.json"); const feynmanSettingsPath = resolve(feynmanAgentDir, "settings.json");


@@ -23,13 +23,13 @@ export const OPTIONAL_PACKAGE_PRESETS = {
 },
 } as const;

-export type OptionalPackagePresetName = keyof typeof OPTIONAL_PACKAGE_PRESETS;
-
 const LEGACY_DEFAULT_PACKAGE_SOURCES = [
   ...CORE_PACKAGE_SOURCES,
   "npm:pi-generative-ui",
 ] as const;

+export type OptionalPackagePresetName = keyof typeof OPTIONAL_PACKAGE_PRESETS;
+
 function arraysMatchAsSets(left: readonly string[], right: readonly string[]): boolean {
   if (left.length !== right.length) {
     return false;


@@ -29,6 +29,7 @@ export function resolvePiPaths(appRoot: string) {
   promptTemplatePath: resolve(appRoot, "prompts"),
   systemPromptPath: resolve(appRoot, ".feynman", "SYSTEM.md"),
   piWorkspaceNodeModulesPath: resolve(appRoot, ".feynman", "npm", "node_modules"),
+  nodeModulesBinPath: resolve(appRoot, "node_modules", ".bin"),
 };
 }
@@ -77,8 +78,12 @@ export function buildPiArgs(options: PiRuntimeOptions): string[] {
 export function buildPiEnv(options: PiRuntimeOptions): NodeJS.ProcessEnv {
   const paths = resolvePiPaths(options.appRoot);
+  const currentPath = process.env.PATH ?? "";
+  const binPath = paths.nodeModulesBinPath;
   return {
     ...process.env,
+    PATH: `${binPath}:${currentPath}`,
     FEYNMAN_VERSION: options.feynmanVersion,
     FEYNMAN_SESSION_DIR: options.sessionDir,
     FEYNMAN_MEMORY_DIR: resolve(dirname(options.feynmanAgentDir), "memory"),


@@ -1,6 +1,6 @@
 import { FEYNMAN_ASCII_LOGO } from "../../logo.mjs";

-const RESET = "\x1b[0m";
+export const RESET = "\x1b[0m";
 const BOLD = "\x1b[1m";
 const DIM = "\x1b[2m";
@@ -11,9 +11,9 @@ function rgb(red: number, green: number, blue: number): string {
 // Match the outer CLI to the bundled Feynman Pi theme instead of generic magenta panels.
 const INK = rgb(211, 198, 170);
 const STONE = rgb(157, 169, 160);
-const ASH = rgb(133, 146, 137);
+export const ASH = rgb(133, 146, 137);
 const DARK_ASH = rgb(92, 106, 114);
-const SAGE = rgb(167, 192, 128);
+export const SAGE = rgb(167, 192, 128);
 const TEAL = rgb(127, 187, 179);
 const ROSE = rgb(230, 126, 128);

File diff suppressed because one or more lines are too long


@@ -1,5 +1,5 @@
 {
   "_variables": {
-    "lastUpdateCheck": 1774305535217
+    "lastUpdateCheck": 1774391908508
   }
 }

website/.gitignore vendored Normal file

@@ -0,0 +1,23 @@
# build output
dist/
# generated types
.astro/
# dependencies
node_modules/
# logs
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
# environment variables
.env
.env.production
# macOS-specific files
.DS_Store
# jetbrains setting folder
.idea/

website/.prettierignore Normal file

@@ -0,0 +1,6 @@
node_modules/
coverage/
.pnpm-store/
pnpm-lock.yaml
package-lock.json
yarn.lock

website/.prettierrc Normal file

@@ -0,0 +1,19 @@
{
"endOfLine": "lf",
"semi": false,
"singleQuote": false,
"tabWidth": 2,
"trailingComma": "es5",
"printWidth": 80,
"plugins": ["prettier-plugin-astro", "prettier-plugin-tailwindcss"],
"tailwindStylesheet": "src/styles/global.css",
"tailwindFunctions": ["cn", "cva"],
"overrides": [
{
"files": "*.astro",
"options": {
"parser": "astro"
}
}
]
}

website/README.md Normal file

@@ -0,0 +1,36 @@
# Astro + React + TypeScript + shadcn/ui
This is a template for a new Astro project with React, TypeScript, and shadcn/ui.
## Adding components
To add components to your app, run the following command:
```bash
npx shadcn@latest add button
```
This will place the ui components in the `src/components` directory.
## Using components
To use the components in your app, import them in an `.astro` file:
```astro
---
import { Button } from "@/components/ui/button"
---
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width" />
<title>Astro App</title>
</head>
<body>
<div class="grid h-screen place-items-center content-center">
<Button>Button</Button>
</div>
</body>
</html>
```


@@ -1,15 +1,22 @@
-import { defineConfig } from 'astro/config';
-import tailwind from '@astrojs/tailwind';
+// @ts-check
+import tailwindcss from "@tailwindcss/vite"
+import { defineConfig } from "astro/config"
+import react from "@astrojs/react"

+// https://astro.build/config
 export default defineConfig({
-  integrations: [tailwind()],
+  vite: {
+    plugins: [tailwindcss()],
+  },
+  integrations: [react()],
   site: 'https://feynman.is',
   markdown: {
     shikiConfig: {
       themes: {
-        light: 'github-light',
-        dark: 'github-dark',
+        light: 'vitesse-light',
+        dark: 'vitesse-dark',
       },
     },
   },
-});
+})

website/components.json Normal file

@@ -0,0 +1,25 @@
{
"$schema": "https://ui.shadcn.com/schema.json",
"style": "radix-vega",
"rsc": false,
"tsx": true,
"tailwind": {
"config": "",
"css": "src/styles/global.css",
"baseColor": "olive",
"cssVariables": true,
"prefix": ""
},
"iconLibrary": "lucide",
"rtl": false,
"aliases": {
"components": "@/components",
"utils": "@/lib/utils",
"ui": "@/components/ui",
"lib": "@/lib",
"hooks": "@/hooks"
},
"menuColor": "default",
"menuAccent": "subtle",
"registries": {}
}

website/eslint.config.js Normal file

@@ -0,0 +1,23 @@
import js from "@eslint/js"
import globals from "globals"
import reactHooks from "eslint-plugin-react-hooks"
import reactRefresh from "eslint-plugin-react-refresh"
import tseslint from "typescript-eslint"
import { defineConfig, globalIgnores } from "eslint/config"
export default defineConfig([
globalIgnores(["dist", ".astro"]),
{
files: ["**/*.{ts,tsx}"],
extends: [
js.configs.recommended,
tseslint.configs.recommended,
reactHooks.configs.flat.recommended,
reactRefresh.configs.vite,
],
languageOptions: {
ecmaVersion: 2020,
globals: globals.browser,
},
},
])

website/package-lock.json generated

File diff suppressed because it is too large

@@ -1,17 +1,45 @@
 {
-  "name": "feynman-website",
+  "name": "website",
   "type": "module",
   "version": "0.0.1",
   "private": true,
   "scripts": {
     "dev": "astro dev",
     "build": "node ../scripts/sync-website-installers.mjs && astro build",
-    "preview": "astro preview"
+    "preview": "astro preview",
+    "astro": "astro",
+    "lint": "eslint .",
+    "format": "prettier --write \"**/*.{ts,tsx,astro}\"",
+    "typecheck": "astro check"
   },
   "dependencies": {
-    "astro": "^5.7.0",
-    "@astrojs/tailwind": "^6.0.2",
-    "tailwindcss": "^3.4.0",
-    "sharp": "^0.33.0"
+    "@astrojs/react": "^4.4.2",
+    "@fontsource-variable/ibm-plex-sans": "^5.2.8",
+    "@tailwindcss/vite": "^4.2.1",
+    "@types/react": "^19.2.14",
+    "@types/react-dom": "^19.2.3",
+    "astro": "^5.18.1",
+    "class-variance-authority": "^0.7.1",
+    "clsx": "^2.1.1",
+    "lucide-react": "^1.6.0",
+    "radix-ui": "^1.4.3",
+    "react": "^19.2.4",
+    "react-dom": "^19.2.4",
+    "shadcn": "^4.1.0",
+    "tailwind-merge": "^3.5.0",
+    "tailwindcss": "^4.2.1",
+    "tw-animate-css": "^1.4.0"
+  },
+  "devDependencies": {
+    "@eslint/js": "^9.39.4",
+    "eslint": "^9.39.4",
+    "eslint-plugin-react-hooks": "^7.0.1",
+    "eslint-plugin-react-refresh": "^0.5.2",
+    "globals": "^16.5.0",
+    "prettier": "^3.8.1",
+    "prettier-plugin-astro": "^0.14.1",
+    "prettier-plugin-tailwindcss": "^0.7.2",
+    "typescript": "~5.9.3",
+    "typescript-eslint": "^8.57.1"
   }
 }


@@ -0,0 +1,4 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32">
<rect width="32" height="32" rx="6" fill="#2d353b"/>
<text x="16" y="26" text-anchor="middle" font-family="monospace" font-weight="bold" font-size="26" fill="#a7c080">f</text>
</svg>

website/public/hero.png Normal file (binary, 884 KiB)

@@ -2,7 +2,7 @@
 set -eu

-VERSION="${1:-latest}"
+VERSION="${1:-edge}"
 INSTALL_BIN_DIR="${FEYNMAN_INSTALL_BIN_DIR:-$HOME/.local/bin}"
 INSTALL_APP_DIR="${FEYNMAN_INSTALL_APP_DIR:-$HOME/.local/share/feynman}"
 SKIP_PATH_UPDATE="${FEYNMAN_INSTALL_SKIP_PATH_UPDATE:-0}"
@@ -13,9 +13,51 @@ step() {
   printf '==> %s\n' "$1"
 }

+run_with_spinner() {
+  label="$1"
+  shift
+  if [ ! -t 2 ]; then
+    step "$label"
+    "$@"
+    return
+  fi
+  "$@" &
+  pid=$!
+  frame=0
+  set +e
+  while kill -0 "$pid" 2>/dev/null; do
+    case "$frame" in
+      0) spinner='|' ;;
+      1) spinner='/' ;;
+      2) spinner='-' ;;
+      *) spinner='\\' ;;
+    esac
+    printf '\r==> %s %s' "$label" "$spinner" >&2
+    frame=$(( (frame + 1) % 4 ))
+    sleep 0.1
+  done
+  wait "$pid"
+  status=$?
+  set -e
+  printf '\r\033[2K' >&2
+  if [ "$status" -ne 0 ]; then
+    printf '==> %s failed\n' "$label" >&2
+    return "$status"
+  fi
+  step "$label"
+}
+
 normalize_version() {
   case "$1" in
-    "" | latest)
+    "" | edge)
+      printf 'edge\n'
+      ;;
+    latest | stable)
       printf 'latest\n'
       ;;
     v*)
@@ -32,12 +74,20 @@ download_file() {
   output="$2"

   if command -v curl >/dev/null 2>&1; then
+    if [ -t 2 ]; then
+      curl -fL --progress-bar "$url" -o "$output"
+    else
       curl -fsSL "$url" -o "$output"
+    fi
     return
   fi

   if command -v wget >/dev/null 2>&1; then
+    if [ -t 2 ]; then
+      wget --show-progress -O "$output" "$url"
+    else
       wget -q -O "$output" "$url"
+    fi
     return
   fi
@@ -110,23 +160,53 @@ require_command() {
   fi
 }

-resolve_version() {
+resolve_release_metadata() {
   normalized_version="$(normalize_version "$VERSION")"
-  if [ "$normalized_version" != "latest" ]; then
-    printf '%s\n' "$normalized_version"
-    return
-  fi
-  release_json="$(download_text "https://api.github.com/repos/getcompanion-ai/feynman/releases/latest")"
-  resolved="$(printf '%s\n' "$release_json" | sed -n 's/.*"tag_name":[[:space:]]*"v\([^"]*\)".*/\1/p' | head -n 1)"
-  if [ -z "$resolved" ]; then
-    echo "Failed to resolve the latest Feynman release version." >&2
-    exit 1
-  fi
-  printf '%s\n' "$resolved"
+  if [ "$normalized_version" = "edge" ]; then
+    release_json="$(download_text "https://api.github.com/repos/getcompanion-ai/feynman/releases/tags/edge")"
+    asset_url=""
+    for candidate in $(printf '%s\n' "$release_json" | sed -n 's/.*"browser_download_url":[[:space:]]*"\([^"]*\)".*/\1/p'); do
+      case "$candidate" in
+        */feynman-*-${asset_target}.${archive_extension})
+          asset_url="$candidate"
+          break
+          ;;
+      esac
+    done
+    if [ -z "$asset_url" ]; then
+      echo "Failed to resolve the latest Feynman edge bundle." >&2
+      exit 1
+    fi
+    archive_name="${asset_url##*/}"
+    bundle_name="${archive_name%.$archive_extension}"
+    resolved_version="${bundle_name#feynman-}"
+    resolved_version="${resolved_version%-${asset_target}}"
+    printf '%s\n%s\n%s\n%s\n' "$resolved_version" "$bundle_name" "$archive_name" "$asset_url"
+    return
+  fi
+  if [ "$normalized_version" = "latest" ]; then
+    release_json="$(download_text "https://api.github.com/repos/getcompanion-ai/feynman/releases/latest")"
+    resolved_version="$(printf '%s\n' "$release_json" | sed -n 's/.*"tag_name":[[:space:]]*"v\([^"]*\)".*/\1/p' | head -n 1)"
+    if [ -z "$resolved_version" ]; then
+      echo "Failed to resolve the latest Feynman release version." >&2
+      exit 1
+    fi
+  else
+    resolved_version="$normalized_version"
+  fi
+  bundle_name="feynman-${resolved_version}-${asset_target}"
+  archive_name="${bundle_name}.${archive_extension}"
+  download_url="${FEYNMAN_INSTALL_BASE_URL:-https://github.com/getcompanion-ai/feynman/releases/download/v${resolved_version}}/${archive_name}"
+  printf '%s\n%s\n%s\n%s\n' "$resolved_version" "$bundle_name" "$archive_name" "$download_url"
 }

 case "$(uname -s)" in
@@ -158,12 +238,13 @@ esac
 require_command mktemp
 require_command tar

-resolved_version="$(resolve_version)"
 asset_target="$os-$arch"
-bundle_name="feynman-${resolved_version}-${asset_target}"
-archive_name="${bundle_name}.tar.gz"
-base_url="${FEYNMAN_INSTALL_BASE_URL:-https://github.com/getcompanion-ai/feynman/releases/download/v${resolved_version}}"
-download_url="${base_url}/${archive_name}"
+archive_extension="tar.gz"
+release_metadata="$(resolve_release_metadata)"
+resolved_version="$(printf '%s\n' "$release_metadata" | sed -n '1p')"
+bundle_name="$(printf '%s\n' "$release_metadata" | sed -n '2p')"
+archive_name="$(printf '%s\n' "$release_metadata" | sed -n '3p')"
+download_url="$(printf '%s\n' "$release_metadata" | sed -n '4p')"

 step "Installing Feynman ${resolved_version} for ${asset_target}"
@@ -174,13 +255,29 @@ cleanup() {
 trap cleanup EXIT INT TERM

 archive_path="$tmp_dir/$archive_name"
-download_file "$download_url" "$archive_path"
+step "Downloading ${archive_name}"
+if ! download_file "$download_url" "$archive_path"; then
+  cat >&2 <<EOF
+Failed to download ${archive_name} from:
+  ${download_url}
+
+The ${asset_target} bundle is missing from the GitHub release.
+This usually means the release exists, but not all platform bundles were uploaded.
+
+Workarounds:
+  - try again after the release finishes publishing
+  - install via pnpm instead: pnpm add -g @companion-ai/feynman
+  - install via bun instead: bun add -g @companion-ai/feynman
+EOF
+  exit 1
+fi

 mkdir -p "$INSTALL_APP_DIR"
 rm -rf "$INSTALL_APP_DIR/$bundle_name"
-tar -xzf "$archive_path" -C "$INSTALL_APP_DIR"
+run_with_spinner "Extracting ${archive_name}" tar -xzf "$archive_path" -C "$INSTALL_APP_DIR"

 mkdir -p "$INSTALL_BIN_DIR"
+step "Linking feynman into $INSTALL_BIN_DIR"
 cat >"$INSTALL_BIN_DIR/feynman" <<EOF
 #!/bin/sh
 set -eu


@@ -1,22 +1,75 @@
 param(
-  [string]$Version = "latest"
+  [string]$Version = "edge"
 )

 $ErrorActionPreference = "Stop"

-function Resolve-Version {
+function Normalize-Version {
   param([string]$RequestedVersion)
-  if ($RequestedVersion -and $RequestedVersion -ne "latest") {
-    return $RequestedVersion.TrimStart("v")
+  if (-not $RequestedVersion) {
+    return "edge"
   }
+  switch ($RequestedVersion.ToLowerInvariant()) {
+    "edge" { return "edge" }
+    "latest" { return "latest" }
+    "stable" { return "latest" }
+    default { return $RequestedVersion.TrimStart("v") }
+  }
+}
+
+function Resolve-ReleaseMetadata {
+  param(
+    [string]$RequestedVersion,
+    [string]$AssetTarget,
+    [string]$BundleExtension
+  )
+  $normalizedVersion = Normalize-Version -RequestedVersion $RequestedVersion
+  if ($normalizedVersion -eq "edge") {
+    $release = Invoke-RestMethod -Uri "https://api.github.com/repos/getcompanion-ai/feynman/releases/tags/edge"
+    $asset = $release.assets | Where-Object { $_.name -like "feynman-*-$AssetTarget.$BundleExtension" } | Select-Object -First 1
+    if (-not $asset) {
+      throw "Failed to resolve the latest Feynman edge bundle."
+    }
+    $archiveName = $asset.name
+    $suffix = ".$BundleExtension"
+    $bundleName = $archiveName.Substring(0, $archiveName.Length - $suffix.Length)
+    $resolvedVersion = $bundleName.Substring("feynman-".Length)
+    $resolvedVersion = $resolvedVersion.Substring(0, $resolvedVersion.Length - ("-$AssetTarget").Length)
+    return [PSCustomObject]@{
+      ResolvedVersion = $resolvedVersion
+      BundleName = $bundleName
+      ArchiveName = $archiveName
+      DownloadUrl = $asset.browser_download_url
+    }
+  }
+  if ($normalizedVersion -eq "latest") {
     $release = Invoke-RestMethod -Uri "https://api.github.com/repos/getcompanion-ai/feynman/releases/latest"
     if (-not $release.tag_name) {
       throw "Failed to resolve the latest Feynman release version."
     }
-  return $release.tag_name.TrimStart("v")
+    $resolvedVersion = $release.tag_name.TrimStart("v")
+  } else {
+    $resolvedVersion = $normalizedVersion
+  }
+  $bundleName = "feynman-$resolvedVersion-$AssetTarget"
+  $archiveName = "$bundleName.$BundleExtension"
+  $baseUrl = if ($env:FEYNMAN_INSTALL_BASE_URL) { $env:FEYNMAN_INSTALL_BASE_URL } else { "https://github.com/getcompanion-ai/feynman/releases/download/v$resolvedVersion" }
+  return [PSCustomObject]@{
+    ResolvedVersion = $resolvedVersion
+    BundleName = $bundleName
+    ArchiveName = $archiveName
+    DownloadUrl = "$baseUrl/$archiveName"
+  }
 }

 function Get-ArchSuffix {
@@ -28,12 +81,13 @@ function Get-ArchSuffix {
   }
 }

-$resolvedVersion = Resolve-Version -RequestedVersion $Version
 $archSuffix = Get-ArchSuffix
-$bundleName = "feynman-$resolvedVersion-win32-$archSuffix"
-$archiveName = "$bundleName.zip"
-$baseUrl = if ($env:FEYNMAN_INSTALL_BASE_URL) { $env:FEYNMAN_INSTALL_BASE_URL } else { "https://github.com/getcompanion-ai/feynman/releases/download/v$resolvedVersion" }
-$downloadUrl = "$baseUrl/$archiveName"
+$assetTarget = "win32-$archSuffix"
+$release = Resolve-ReleaseMetadata -RequestedVersion $Version -AssetTarget $assetTarget -BundleExtension "zip"
+$resolvedVersion = $release.ResolvedVersion
+$bundleName = $release.BundleName
+$archiveName = $release.ArchiveName
+$downloadUrl = $release.DownloadUrl

 $installRoot = Join-Path $env:LOCALAPPDATA "Programs\feynman"
 $installBinDir = Join-Path $installRoot "bin"
@@ -44,18 +98,36 @@ New-Item -ItemType Directory -Path $tmpDir | Out-Null
 try {
   $archivePath = Join-Path $tmpDir $archiveName
+  Write-Host "==> Downloading $archiveName"
+  try {
     Invoke-WebRequest -Uri $downloadUrl -OutFile $archivePath
+  } catch {
+    throw @"
+Failed to download $archiveName from:
+  $downloadUrl
+
+The win32-$archSuffix bundle is missing from the GitHub release.
+This usually means the release exists, but not all platform bundles were uploaded.
+
+Workarounds:
+  - try again after the release finishes publishing
+  - install via pnpm instead: pnpm add -g @companion-ai/feynman
+  - install via bun instead: bun add -g @companion-ai/feynman
+"@
+  }

   New-Item -ItemType Directory -Path $installRoot -Force | Out-Null
   if (Test-Path $bundleDir) {
     Remove-Item -Recurse -Force $bundleDir
   }
+  Write-Host "==> Extracting $archiveName"
   Expand-Archive -LiteralPath $archivePath -DestinationPath $installRoot -Force

   New-Item -ItemType Directory -Path $installBinDir -Force | Out-Null
   $shimPath = Join-Path $installBinDir "feynman.cmd"
+  Write-Host "==> Linking feynman into $installBinDir"
   @"
 @echo off
 "$bundleDir\feynman.cmd" %*


@@ -1,21 +0,0 @@
---
interface Props {
class?: string;
size?: 'nav' | 'hero';
}
const { class: className = '', size = 'hero' } = Astro.props;
const sizeClasses = size === 'nav'
? 'text-2xl'
: 'text-6xl sm:text-7xl md:text-8xl';
---
<span
class:list={[
"font-['VT323'] text-accent inline-block tracking-tighter",
sizeClasses,
className,
]}
aria-label="Feynman"
>feynman</span>


@@ -1,9 +0,0 @@
<footer class="py-8 mt-16">
<div class="max-w-6xl mx-auto px-6 flex flex-col sm:flex-row items-center justify-between gap-4">
<span class="text-sm text-text-dim">&copy; 2026 Companion Inc.</span>
<div class="flex gap-6">
<a href="https://github.com/getcompanion-ai/feynman" target="_blank" rel="noopener" class="text-sm text-text-dim hover:text-text-primary transition-colors">GitHub</a>
<a href="/docs/getting-started/installation" class="text-sm text-text-dim hover:text-text-primary transition-colors">Docs</a>
</div>
</div>
</footer>


@@ -1,29 +0,0 @@
---
import ThemeToggle from './ThemeToggle.astro';
import AsciiLogo from './AsciiLogo.astro';
interface Props {
active?: 'home' | 'docs';
}
const { active = 'home' } = Astro.props;
---
<nav class="sticky top-0 z-50 bg-bg">
<div class="max-w-6xl mx-auto px-6 h-14 flex items-center justify-between">
<a href="/" class="hover:opacity-80 transition-opacity" aria-label="Feynman">
<AsciiLogo size="nav" />
</a>
<div class="flex items-center gap-6">
<a href="/docs/getting-started/installation"
class:list={["text-sm transition-colors", active === 'docs' ? 'text-text-primary' : 'text-text-muted hover:text-text-primary']}>
Docs
</a>
<a href="https://github.com/getcompanion-ai/feynman" target="_blank" rel="noopener"
class="text-sm text-text-muted hover:text-text-primary transition-colors">
GitHub
</a>
<ThemeToggle />
</div>
</div>
</nav>


@@ -1,80 +0,0 @@
---
interface Props {
currentSlug: string;
}
const { currentSlug } = Astro.props;
const sections = [
{
title: 'Getting Started',
items: [
{ label: 'Installation', slug: 'getting-started/installation' },
{ label: 'Quick Start', slug: 'getting-started/quickstart' },
{ label: 'Setup', slug: 'getting-started/setup' },
{ label: 'Configuration', slug: 'getting-started/configuration' },
],
},
{
title: 'Workflows',
items: [
{ label: 'Deep Research', slug: 'workflows/deep-research' },
{ label: 'Literature Review', slug: 'workflows/literature-review' },
{ label: 'Peer Review', slug: 'workflows/review' },
{ label: 'Code Audit', slug: 'workflows/audit' },
{ label: 'Replication', slug: 'workflows/replication' },
{ label: 'Source Comparison', slug: 'workflows/compare' },
{ label: 'Draft Writing', slug: 'workflows/draft' },
{ label: 'Autoresearch', slug: 'workflows/autoresearch' },
{ label: 'Watch', slug: 'workflows/watch' },
],
},
{
title: 'Agents',
items: [
{ label: 'Researcher', slug: 'agents/researcher' },
{ label: 'Reviewer', slug: 'agents/reviewer' },
{ label: 'Writer', slug: 'agents/writer' },
{ label: 'Verifier', slug: 'agents/verifier' },
],
},
{
title: 'Tools',
items: [
{ label: 'AlphaXiv', slug: 'tools/alphaxiv' },
{ label: 'Web Search', slug: 'tools/web-search' },
{ label: 'Session Search', slug: 'tools/session-search' },
{ label: 'Preview', slug: 'tools/preview' },
],
},
{
title: 'Reference',
items: [
{ label: 'CLI Commands', slug: 'reference/cli-commands' },
{ label: 'Slash Commands', slug: 'reference/slash-commands' },
{ label: 'Package Stack', slug: 'reference/package-stack' },
],
},
];
---
<aside id="sidebar" class="w-64 shrink-0 h-[calc(100vh-3.5rem)] sticky top-14 overflow-y-auto py-6 pr-4 hidden lg:block border-r border-border">
{sections.map((section) => (
<div class="mb-6">
<div class="text-xs font-semibold text-accent uppercase tracking-wider px-3 mb-2">{section.title}</div>
{section.items.map((item) => (
<a
href={`/docs/${item.slug}`}
class:list={[
'block px-3 py-1.5 text-sm border-l-[2px] transition-colors',
currentSlug === item.slug
? 'border-accent text-text-primary'
: 'border-transparent text-text-muted hover:text-text-primary',
]}
>
{item.label}
</a>
))}
</div>
))}
</aside>


@@ -1,33 +0,0 @@
<button id="theme-toggle" class="p-1.5 rounded-md text-text-muted hover:text-text-primary hover:bg-surface transition-colors" aria-label="Toggle theme">
<svg id="sun-icon" class="hidden w-[18px] h-[18px]" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
<circle cx="12" cy="12" r="5" />
<path d="M12 1v2M12 21v2M4.22 4.22l1.42 1.42M18.36 18.36l1.42 1.42M1 12h2M21 12h2M4.22 19.78l1.42-1.42M18.36 5.64l1.42-1.42" />
</svg>
<svg id="moon-icon" class="hidden w-[18px] h-[18px]" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
<path d="M21 12.79A9 9 0 1 1 11.21 3 7 7 0 0 0 21 12.79z" />
</svg>
</button>
<script is:inline>
(function() {
var stored = localStorage.getItem('theme');
var prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
var dark = stored === 'dark' || (!stored && prefersDark);
if (dark) document.documentElement.classList.add('dark');
function update() {
var isDark = document.documentElement.classList.contains('dark');
document.getElementById('sun-icon').style.display = isDark ? 'block' : 'none';
document.getElementById('moon-icon').style.display = isDark ? 'none' : 'block';
}
update();
document.addEventListener('DOMContentLoaded', function() {
update();
document.getElementById('theme-toggle').addEventListener('click', function() {
document.documentElement.classList.toggle('dark');
var isDark = document.documentElement.classList.contains('dark');
localStorage.setItem('theme', isDark ? 'dark' : 'light');
update();
});
});
})();
</script>


@@ -0,0 +1,49 @@
import * as React from "react"
import { cva, type VariantProps } from "class-variance-authority"
import { Slot } from "radix-ui"
import { cn } from "@/lib/utils"
const badgeVariants = cva(
"group/badge inline-flex h-5 w-fit shrink-0 items-center justify-center gap-1 overflow-hidden rounded-4xl border border-transparent px-2 py-0.5 text-xs font-medium whitespace-nowrap transition-all focus-visible:border-ring focus-visible:ring-[3px] focus-visible:ring-ring/50 has-data-[icon=inline-end]:pr-1.5 has-data-[icon=inline-start]:pl-1.5 aria-invalid:border-destructive aria-invalid:ring-destructive/20 dark:aria-invalid:ring-destructive/40 [&>svg]:pointer-events-none [&>svg]:size-3!",
{
variants: {
variant: {
default: "bg-primary text-primary-foreground [a]:hover:bg-primary/80",
secondary:
"bg-secondary text-secondary-foreground [a]:hover:bg-secondary/80",
destructive:
"bg-destructive/10 text-destructive focus-visible:ring-destructive/20 dark:bg-destructive/20 dark:focus-visible:ring-destructive/40 [a]:hover:bg-destructive/20",
outline:
"border-border text-foreground [a]:hover:bg-muted [a]:hover:text-muted-foreground",
ghost:
"hover:bg-muted hover:text-muted-foreground dark:hover:bg-muted/50",
link: "text-primary underline-offset-4 hover:underline",
},
},
defaultVariants: {
variant: "default",
},
}
)
function Badge({
className,
variant = "default",
asChild = false,
...props
}: React.ComponentProps<"span"> &
VariantProps<typeof badgeVariants> & { asChild?: boolean }) {
const Comp = asChild ? Slot.Root : "span"
return (
<Comp
data-slot="badge"
data-variant={variant}
className={cn(badgeVariants({ variant }), className)}
{...props}
/>
)
}
export { Badge, badgeVariants }


@@ -0,0 +1,67 @@
import * as React from "react"
import { cva, type VariantProps } from "class-variance-authority"
import { Slot } from "radix-ui"
import { cn } from "@/lib/utils"
const buttonVariants = cva(
"group/button inline-flex shrink-0 items-center justify-center rounded-md border border-transparent bg-clip-padding text-sm font-medium whitespace-nowrap transition-all outline-none select-none focus-visible:border-ring focus-visible:ring-3 focus-visible:ring-ring/50 active:not-aria-[haspopup]:translate-y-px disabled:pointer-events-none disabled:opacity-50 aria-invalid:border-destructive aria-invalid:ring-3 aria-invalid:ring-destructive/20 dark:aria-invalid:border-destructive/50 dark:aria-invalid:ring-destructive/40 [&_svg]:pointer-events-none [&_svg]:shrink-0 [&_svg:not([class*='size-'])]:size-4",
{
variants: {
variant: {
default: "bg-primary text-primary-foreground hover:bg-primary/80",
outline:
"border-border bg-background shadow-xs hover:bg-muted hover:text-foreground aria-expanded:bg-muted aria-expanded:text-foreground dark:border-input dark:bg-input/30 dark:hover:bg-input/50",
secondary:
"bg-secondary text-secondary-foreground hover:bg-secondary/80 aria-expanded:bg-secondary aria-expanded:text-secondary-foreground",
ghost:
"hover:bg-muted hover:text-foreground aria-expanded:bg-muted aria-expanded:text-foreground dark:hover:bg-muted/50",
destructive:
"bg-destructive/10 text-destructive hover:bg-destructive/20 focus-visible:border-destructive/40 focus-visible:ring-destructive/20 dark:bg-destructive/20 dark:hover:bg-destructive/30 dark:focus-visible:ring-destructive/40",
link: "text-primary underline-offset-4 hover:underline",
},
size: {
default:
"h-9 gap-1.5 px-2.5 in-data-[slot=button-group]:rounded-md has-data-[icon=inline-end]:pr-2 has-data-[icon=inline-start]:pl-2",
xs: "h-6 gap-1 rounded-[min(var(--radius-md),8px)] px-2 text-xs in-data-[slot=button-group]:rounded-md has-data-[icon=inline-end]:pr-1.5 has-data-[icon=inline-start]:pl-1.5 [&_svg:not([class*='size-'])]:size-3",
sm: "h-8 gap-1 rounded-[min(var(--radius-md),10px)] px-2.5 in-data-[slot=button-group]:rounded-md has-data-[icon=inline-end]:pr-1.5 has-data-[icon=inline-start]:pl-1.5",
lg: "h-10 gap-1.5 px-2.5 has-data-[icon=inline-end]:pr-3 has-data-[icon=inline-start]:pl-3",
icon: "size-9",
"icon-xs":
"size-6 rounded-[min(var(--radius-md),8px)] in-data-[slot=button-group]:rounded-md [&_svg:not([class*='size-'])]:size-3",
"icon-sm":
"size-8 rounded-[min(var(--radius-md),10px)] in-data-[slot=button-group]:rounded-md",
"icon-lg": "size-10",
},
},
defaultVariants: {
variant: "default",
size: "default",
},
}
)
function Button({
  className,
  variant = "default",
  size = "default",
  asChild = false,
  ...props
}: React.ComponentProps<"button"> &
  VariantProps<typeof buttonVariants> & {
    asChild?: boolean
  }) {
  const Comp = asChild ? Slot.Root : "button"
  return (
    <Comp
      data-slot="button"
      data-variant={variant}
      data-size={size}
      className={cn(buttonVariants({ variant, size, className }))}
      {...props}
    />
  )
}
export { Button, buttonVariants }


@@ -0,0 +1,103 @@
import * as React from "react"
import { cn } from "@/lib/utils"
function Card({
className,
size = "default",
...props
}: React.ComponentProps<"div"> & { size?: "default" | "sm" }) {
return (
<div
data-slot="card"
data-size={size}
className={cn(
"group/card flex flex-col gap-6 overflow-hidden rounded-xl bg-card py-6 text-sm text-card-foreground shadow-xs ring-1 ring-foreground/10 has-[>img:first-child]:pt-0 data-[size=sm]:gap-4 data-[size=sm]:py-4 *:[img:first-child]:rounded-t-xl *:[img:last-child]:rounded-b-xl",
className
)}
{...props}
/>
)
}
function CardHeader({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="card-header"
className={cn(
"group/card-header @container/card-header grid auto-rows-min items-start gap-1 rounded-t-xl px-6 group-data-[size=sm]/card:px-4 has-data-[slot=card-action]:grid-cols-[1fr_auto] has-data-[slot=card-description]:grid-rows-[auto_auto] [.border-b]:pb-6 group-data-[size=sm]/card:[.border-b]:pb-4",
className
)}
{...props}
/>
)
}
function CardTitle({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="card-title"
className={cn(
"font-heading text-base leading-normal font-medium group-data-[size=sm]/card:text-sm",
className
)}
{...props}
/>
)
}
function CardDescription({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="card-description"
className={cn("text-sm text-muted-foreground", className)}
{...props}
/>
)
}
function CardAction({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="card-action"
className={cn(
"col-start-2 row-span-2 row-start-1 self-start justify-self-end",
className
)}
{...props}
/>
)
}
function CardContent({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="card-content"
className={cn("px-6 group-data-[size=sm]/card:px-4", className)}
{...props}
/>
)
}
function CardFooter({ className, ...props }: React.ComponentProps<"div">) {
return (
<div
data-slot="card-footer"
className={cn(
"flex items-center rounded-b-xl px-6 group-data-[size=sm]/card:px-4 [.border-t]:pt-6 group-data-[size=sm]/card:[.border-t]:pt-4",
className
)}
{...props}
/>
)
}
export {
Card,
CardHeader,
CardFooter,
CardTitle,
CardAction,
CardDescription,
CardContent,
}


@@ -0,0 +1,26 @@
import * as React from "react"
import { Separator as SeparatorPrimitive } from "radix-ui"
import { cn } from "@/lib/utils"
function Separator({
  className,
  orientation = "horizontal",
  decorative = true,
  ...props
}: React.ComponentProps<typeof SeparatorPrimitive.Root>) {
  return (
    <SeparatorPrimitive.Root
      data-slot="separator"
      decorative={decorative}
      orientation={orientation}
      className={cn(
        "shrink-0 bg-border data-horizontal:h-px data-horizontal:w-full data-vertical:w-px data-vertical:self-stretch",
        className
      )}
      {...props}
    />
  )
}
export { Separator }


@@ -1,75 +1,32 @@
---
title: Researcher
-description: Gather primary evidence across papers, web sources, repos, docs, and local artifacts.
+description: The researcher agent searches, reads, and extracts findings from papers and web sources.
section: Agents
order: 1
---
-## Source
-Generated from `.feynman/agents/researcher.md`. Edit that prompt file, not this docs page.
-## Role
-Gather primary evidence across papers, web sources, repos, docs, and local artifacts.
+The researcher is the primary information-gathering agent in Feynman. It searches academic databases and the web, reads papers and articles, extracts key findings, and organizes source material for other agents to synthesize. Most workflows start with the researcher.
+## What it does
+The researcher agent handles the entire source discovery and extraction pipeline. It formulates search queries based on your topic, evaluates results for relevance, reads full documents, and extracts structured information including claims, methodology, results, and limitations.
+When multiple researcher agents are spawned in parallel (which is the default for deep research and literature review), each agent tackles a different angle of the topic. One might search for foundational papers while another looks for recent work that challenges the established view. This parallel approach produces broader coverage than a single sequential search.
## Tools
`read`, `bash`, `grep`, `find`, `ls`
## Default Output
`research.md`
## Integrity commandments
1. **Never fabricate a source.** Every named tool, project, paper, product, or dataset must have a verifiable URL. If you cannot find a URL, do not mention it.
2. **Never claim a project exists without checking.** Before citing a GitHub repo, search for it. Before citing a paper, find it. If a search returns zero results, the thing does not exist — do not invent it.
3. **Never extrapolate details you haven't read.** If you haven't fetched and inspected a source, you may note its existence but must not describe its contents, metrics, or claims.
4. **URL or it didn't happen.** Every entry in your evidence table must include a direct, checkable URL. No URL = not included.
## Search strategy
1. **Start wide.** Begin with short, broad queries to map the landscape. Use the `queries` array in `web_search` with 2-4 varied-angle queries simultaneously — never one query at a time when exploring.
2. **Evaluate availability.** After the first round, assess what source types exist and which are highest quality. Adjust strategy accordingly.
3. **Progressively narrow.** Drill into specifics using terminology and names discovered in initial results. Refine queries, don't repeat them.
4. **Cross-source.** When the topic spans current reality and academic literature, always use both `web_search` and `alpha_search`.
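The wide-first pass in step 1 can be sketched as a single multi-query call. The parameter names (`queries`, `recencyFilter`, `includeContent`) come from this prompt's own text; the TypeScript types and helper below are illustrative assumptions, not the actual tool schema:

```typescript
// Hypothetical shape of a web_search call following the wide-then-narrow strategy.
interface WebSearchCall {
  queries: string[];          // 2-4 varied-angle queries, issued simultaneously
  recencyFilter?: "day" | "week" | "month" | "year";
  includeContent?: boolean;   // full page content for top results, snippets otherwise
}

function buildExploratorySearch(topic: string): WebSearchCall {
  // Start wide: approach the same topic from several angles at once.
  return {
    queries: [
      `${topic} survey`,
      `${topic} benchmark results`,
      `${topic} open problems`,
    ],
    recencyFilter: "year",   // cap recency for fast-moving topics
    includeContent: false,   // snippets only on the first, wide pass
  };
}

const call = buildExploratorySearch("transformer scaling laws");
console.log(call.queries.length); // 3
```

Later, narrowed passes would swap in terminology discovered in the first round and set `includeContent: true` for the top candidates only.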
-Use `recencyFilter` on `web_search` for fast-moving topics. Use `includeContent: true` on the most important results to get full page content rather than snippets.
+The researcher uses a multi-source search strategy. For academic topics, it queries AlphaXiv for papers and uses citation chains to discover related work. For applied topics, it searches the web for documentation, blog posts, and code repositories. For most topics, it uses both channels and cross-references findings.
+Search queries are diversified automatically. Rather than running the same query multiple times, the researcher generates 2-4 varied queries that approach the topic from different angles. This catches papers that use different terminology for the same concept and surfaces sources that a single query would miss.
-## Source quality
- **Prefer:** academic papers, official documentation, primary datasets, verified benchmarks, government filings, reputable journalism, expert technical blogs, official vendor pages
- **Accept with caveats:** well-cited secondary sources, established trade publications
- **Deprioritize:** SEO-optimized listicles, undated blog posts, content aggregators, social media without primary links
- **Reject:** sources with no author and no date, content that appears AI-generated with no primary backing
-When initial results skew toward low-quality sources, re-search with `domainFilter` targeting authoritative domains.
+## Source evaluation
+Not every search result is worth reading in full. The researcher evaluates results by scanning abstracts and summaries first, then selects the most relevant and authoritative sources for deep reading. It considers publication venue, citation count, recency, and topical relevance when prioritizing sources.
+## Extraction
+When reading a source in depth, the researcher extracts structured data: the main claims and their supporting evidence, methodology details, experimental results, stated limitations, and connections to other work. Each extracted item is tagged with its source location for traceability.
+## Used by
-## Output format
-Assign each source a stable numeric ID. Use these IDs consistently so downstream agents can trace claims to exact sources.
-### Evidence table
-| # | Source | URL | Key claim | Type | Confidence |
|---|--------|-----|-----------|------|------------|
| 1 | ... | ... | ... | primary / secondary / self-reported | high / medium / low |
+The researcher agent is used by the `/deepresearch`, `/lit`, `/review`, `/audit`, `/replicate`, `/compare`, and `/draft` workflows. It is the most frequently invoked agent in the system. You do not invoke it directly -- it is dispatched automatically by the workflow orchestrator.
-### Findings
Write findings using inline source references: `[1]`, `[2]`, etc. Every factual claim must cite at least one source by number.
### Sources
Numbered list matching the evidence table:
1. Author/Title — URL
2. Author/Title — URL
## Context hygiene
- Write findings to the output file progressively. Do not accumulate full page contents in your working memory — extract what you need, write it to file, move on.
- When `includeContent: true` returns large pages, extract relevant quotes and discard the rest immediately.
- If your search produces 10+ results, triage by title/snippet first. Only fetch full content for the top candidates.
- Return a one-line summary to the parent, not full findings. The parent reads the output file.
## Output contract
- Save to the output file (default: `research.md`).
- Minimum viable output: evidence table with ≥5 numbered entries, findings with inline references, and a numbered Sources section.
- Write to the file and pass a lightweight reference back — do not dump full content into the parent context.


@@ -1,93 +1,33 @@
---
title: Reviewer
-description: Simulate a tough but constructive AI research peer reviewer with inline annotations.
+description: The reviewer agent evaluates documents with severity-graded academic feedback.
section: Agents
order: 2
---
-## Source
-Generated from `.feynman/agents/reviewer.md`. Edit that prompt file, not this docs page.
-## Role
-Simulate a tough but constructive AI research peer reviewer with inline annotations.
-## Default Output
-`review.md`
-Your job is to act like a skeptical but fair peer reviewer for AI/ML systems work.
+The reviewer agent evaluates documents, papers, and research artifacts with the rigor of an academic peer reviewer. It produces severity-graded feedback covering methodology, claims, writing quality, and reproducibility.
+## What it does
+The reviewer reads a document end-to-end and evaluates it against standard academic criteria. It checks whether claims are supported by the presented evidence, whether the methodology is sound and described in sufficient detail, whether the experimental design controls for confounds, and whether the writing is clear and complete.
+Each piece of feedback is assigned a severity level. **Critical** issues are fundamental problems that undermine the document's validity, such as a statistical test applied incorrectly or a conclusion not supported by the data. **Major** issues are significant problems that should be addressed, like missing baselines or inadequate ablation studies. **Minor** issues are suggestions for improvement, and **nits** are stylistic or formatting comments.
+## Evaluation criteria
+The reviewer evaluates documents across several dimensions:
+- **Claims vs. Evidence** -- Does the evidence presented actually support the claims made?
- **Methodology** -- Is the approach sound? Are there confounds or biases?
- **Experimental Design** -- Are baselines appropriate? Are ablations sufficient?
- **Reproducibility** -- Could someone replicate this work from the description alone?
- **Writing Quality** -- Is the paper clear, well-organized, and free of ambiguity?
- **Completeness** -- Are limitations discussed? Is related work adequately covered?
+## Confidence scoring
-## Review checklist
- Evaluate novelty, clarity, empirical rigor, reproducibility, and likely reviewer pushback.
- Do not praise vaguely. Every positive claim should be tied to specific evidence.
- Look for:
- missing or weak baselines
- missing ablations
- evaluation mismatches
- unclear claims of novelty
- weak related-work positioning
- insufficient statistical evidence
- benchmark leakage or contamination risks
- under-specified implementation details
- claims that outrun the experiments
- Distinguish between fatal issues, strong concerns, and polish issues.
- Preserve uncertainty. If the draft might pass depending on venue norms, say so explicitly.
+The reviewer provides a confidence score for each finding, indicating how certain it is about the assessment. High-confidence findings are clear-cut issues (a statistical error, a missing citation). Lower-confidence findings are judgment calls (whether a baseline is sufficient, whether more ablations are needed) where reasonable reviewers might disagree.
+## Used by
+The reviewer agent is the primary agent in the `/review` workflow. It also contributes to `/audit` (evaluating paper claims against code) and `/compare` (assessing the strength of evidence across sources). Like all agents, it is dispatched automatically by the workflow orchestrator.
-## Output format
-Produce two sections: a structured review and inline annotations.
-### Part 1: Structured Review
```markdown
## Summary
1-2 paragraph summary of the paper's contributions and approach.
## Strengths
- [S1] ...
- [S2] ...
## Weaknesses
- [W1] **FATAL:** ...
- [W2] **MAJOR:** ...
- [W3] **MINOR:** ...
## Questions for Authors
- [Q1] ...
## Verdict
Overall assessment and confidence score. Would this pass at [venue]?
## Revision Plan
Prioritized, concrete steps to address each weakness.
```
### Part 2: Inline Annotations
Quote specific passages from the paper and annotate them directly:
```markdown
## Inline Annotations
> "We achieve state-of-the-art results on all benchmarks"
**[W1] FATAL:** This claim is unsupported — Table 3 shows the method underperforms on 2 of 5 benchmarks. Revise to accurately reflect results.
> "Our approach is novel in combining X with Y"
**[W3] MINOR:** Z et al. (2024) combined X with Y in a different domain. Acknowledge this and clarify the distinction.
> "We use a learning rate of 1e-4"
**[Q1]:** Was this tuned? What range was searched? This matters for reproducibility.
```
Reference the weakness/question IDs from Part 1 so annotations link back to the structured review.
## Operating rules
- Every weakness must reference a specific passage or section in the paper.
- Inline annotations must quote the exact text being critiqued.
- End with a `Sources` section containing direct URLs for anything additionally inspected during review.
## Output contract
- Save the main artifact to `review.md`.
- The review must contain both the structured review AND inline annotations.


@@ -1,50 +1,36 @@
---
title: Verifier
-description: Post-process a draft to add inline citations and verify every source URL.
+description: The verifier agent cross-checks claims against their cited sources.
section: Agents
order: 4
---
-## Source
-Generated from `.feynman/agents/verifier.md`. Edit that prompt file, not this docs page.
-## Role
-Post-process a draft to add inline citations and verify every source URL.
-## Tools
-`read`, `bash`, `grep`, `find`, `ls`, `write`, `edit`
-## Default Output
+The verifier agent is responsible for fact-checking and validation. It cross-references claims against their cited sources, checks code implementations against paper descriptions, and flags unsupported or misattributed assertions.
+## What it does
+The verifier performs targeted checks on specific claims rather than reading documents end-to-end like the reviewer. It takes a claim and its cited source, retrieves the source, and determines whether the source actually supports the claim as stated. This catches misattributions (citing a paper that says something different), overstatements (claiming a stronger result than the source reports), and fabrications (claims with no basis in the cited source).
+When checking code against papers, the verifier examines specific implementation details: hyperparameters, architecture configurations, training procedures, and evaluation metrics. It compares the paper's description to the code's actual behavior, noting discrepancies with exact file paths and line numbers.
+## Verification process
+The verifier follows a systematic process for each claim it checks:
+1. **Retrieve the source** -- Fetch the cited paper, article, or code file
2. **Locate the relevant section** -- Find where the source addresses the claim
3. **Compare** -- Check whether the source supports the claim as stated
4. **Classify** -- Mark the claim as verified, unsupported, overstated, or contradicted
5. **Document** -- Record the evidence with exact quotes and locations
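A toy version of steps 3-5 can be sketched in code. The four verdict labels come from the process description above; the string-matching logic is an illustrative stand-in, not the real verifier:

```typescript
type Verdict = "verified" | "unsupported" | "overstated" | "contradicted";

interface VerificationResult {
  verdict: Verdict;
  evidence: string; // the exact passage checked, kept for auditability (step 5)
}

// Toy classifier: explicit negation -> contradicted; direct statement -> verified;
// a hedged version of an absolute claim -> overstated; otherwise unsupported.
function classify(claim: string, passage: string): VerificationResult {
  const norm = (s: string) => s.toLowerCase().replace(/\s+/g, " ").trim();
  const c = norm(claim);
  const p = norm(passage);
  let verdict: Verdict = "unsupported";
  if (p.includes(`not ${c}`)) {
    verdict = "contradicted";
  } else if (p.includes(c)) {
    verdict = "verified";
  } else if (c.startsWith("always ") && p.includes(c.replace("always ", "sometimes "))) {
    verdict = "overstated";
  }
  return { verdict, evidence: passage };
}

console.log(classify("improves accuracy", "The method improves accuracy on all splits.").verdict);
// "verified"
```

In practice the comparison runs over full source text rather than single passages, but the output shape (a verdict plus the exact evidence checked) matches the contract described above.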
+This process is deterministic and traceable. Every verification result includes the specific passage or code that was checked, making it easy to audit the verifier's work.
+## Confidence and limitations
+The verifier assigns a confidence level to each verification. Claims that directly quote a source are verified with high confidence. Claims that paraphrase or interpret results are verified with moderate confidence, since reasonable interpretations can differ. Claims about the implications or significance of results are verified with lower confidence, since these involve judgment.
-`cited.md`
-You receive a draft document and the research files it was built from. Your job is to:
-1. **Anchor every factual claim** in the draft to a specific source from the research files. Insert inline citations `[1]`, `[2]`, etc. directly after each claim.
2. **Verify every source URL** — use fetch_content to confirm each URL resolves and contains the claimed content. Flag dead links.
3. **Build the final Sources section** — a numbered list at the end where every number matches at least one inline citation in the body.
4. **Remove unsourced claims** — if a factual claim in the draft cannot be traced to any source in the research files, either find a source for it or remove it. Do not leave unsourced factual claims.
+The verifier is honest about its limitations. When a claim cannot be verified because the source is behind a paywall, the code is not available, or the claim requires domain expertise beyond what the verifier can assess, it says so explicitly rather than guessing.
+## Used by
-## Citation rules
-- Every factual claim gets at least one citation: "Transformers achieve 94.2% on MMLU [3]."
- Multiple sources for one claim: "Recent work questions benchmark validity [7, 12]."
- No orphan citations — every `[N]` in the body must appear in Sources.
- No orphan sources — every entry in Sources must be cited at least once.
- Hedged or opinion statements do not need citations.
- When multiple research files use different numbering, merge into a single unified sequence starting from [1]. Deduplicate sources that appear in multiple files.
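The no-orphans rules can be checked mechanically. A minimal sketch, assuming `[N]` or `[N, M]` citation markers in the body and source lines that begin `N. Title — URL` as in the examples above:

```typescript
// Find citations with no Sources entry, and Sources entries never cited.
function findOrphans(body: string, sources: string) {
  const cited = new Set<number>();
  for (const m of body.matchAll(/\[(\d+(?:,\s*\d+)*)\]/g)) {
    for (const part of m[1].split(",")) cited.add(Number(part.trim()));
  }
  const listed = new Set<number>();
  for (const m of sources.matchAll(/^(\d+)\./gm)) {
    listed.add(Number(m[1]));
  }
  return {
    orphanCitations: [...cited].filter((n) => !listed.has(n)), // cited, never listed
    orphanSources: [...listed].filter((n) => !cited.has(n)),   // listed, never cited
  };
}

const body =
  "Transformers achieve 94.2% on MMLU [3]. Recent work questions benchmark validity [7, 12].";
const sources = "3. Paper A — https://example.org/a\n7. Paper B — https://example.org/b";
console.log(findOrphans(body, sources).orphanCitations); // [ 12 ]
```

A non-empty result in either list means the merged numbering still has gaps to repair before the document satisfies the contract.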
+The verifier agent is used by `/deepresearch` (final fact-checking pass), `/audit` (comparing paper claims to code), and `/replicate` (verifying that the replication plan captures all necessary details). It serves as the quality control step that runs after the researcher and writer have produced their output.
-## Source verification
For each source URL:
- **Live:** keep as-is.
- **Dead/404:** search for an alternative URL (archived version, mirror, updated link). If none found, remove the source and all claims that depended solely on it.
- **Redirects to unrelated content:** treat as dead.
## Output contract
- Save to the output file (default: `cited.md`).
- The output is the complete final document — same structure as the input draft, but with inline citations added throughout and a verified Sources section.
- Do not change the substance or structure of the draft. Only add citations and fix dead sources.


@@ -1,56 +1,36 @@
---
title: Writer
-description: Turn research notes into clear, structured briefs and drafts.
+description: The writer agent produces structured academic prose from research findings.
section: Agents
order: 3
---
-## Source
-Generated from `.feynman/agents/writer.md`. Edit that prompt file, not this docs page.
-## Role
-Turn research notes into clear, structured briefs and drafts.
-## Tools
-`read`, `bash`, `grep`, `find`, `ls`, `write`, `edit`
-## Default Output
+The writer agent transforms raw research findings into structured, well-organized documents. It specializes in academic prose, producing papers, briefs, surveys, and reports with proper citations, section structure, and narrative flow.
+## What it does
+The writer takes source material -- findings from researcher agents, review feedback, comparison matrices -- and synthesizes it into a coherent document. It handles the difficult task of turning a collection of extracted claims and citations into prose that tells a clear story.
+The writer understands academic conventions. Claims are attributed to their sources with inline citations. Methodology sections describe procedures with sufficient detail for reproduction. Results are presented with appropriate qualifiers. Limitations are discussed honestly rather than buried or omitted.
+## Writing capabilities
+The writer agent handles several document types:
+- **Research Briefs** -- Concise summaries of a topic with key findings and citations, produced by the deep research workflow
- **Literature Reviews** -- Survey-style documents that map consensus, disagreement, and open questions across the field
- **Paper Drafts** -- Full academic papers with abstract, introduction, body sections, discussion, and references
- **Comparison Reports** -- Structured analyses of how multiple sources agree and differ
- **Summaries** -- Condensed versions of longer documents or multi-source findings
+## Citation handling
+The writer maintains citation integrity throughout the document. Every factual claim is linked back to its source. When multiple sources support the same claim, all are cited. When a claim comes from a single source, the writer notes this to help the reader assess confidence. The final reference list includes only works actually cited in the text.
-`draft.md`
-## Integrity commandments
1. **Write only from supplied evidence.** Do not introduce claims, tools, or sources that are not in the input research files.
2. **Preserve caveats and disagreements.** Never smooth away uncertainty.
3. **Be explicit about gaps.** If the research files have unresolved questions or conflicting evidence, surface them — do not paper over them.
+## Iteration
+The writer supports iterative refinement. After producing an initial draft, you can ask Feynman to revise specific sections, add more detail on a subtopic, restructure the argument, or adjust the tone and level of technical detail. Each revision preserves the citation links and document structure.
-## Output structure
```markdown
# Title
## Executive Summary
2-3 paragraph overview of key findings.
## Section 1: ...
Detailed findings organized by theme or question.
## Section N: ...
...
## Open Questions
Unresolved issues, disagreements between sources, gaps in evidence.
```
+## Used by
+The writer agent is used by `/deepresearch` (for the final brief), `/lit` (for the review document), `/draft` (as the primary agent), and `/compare` (for the comparison report). It is always the last agent to run in a workflow, producing the final output from the material gathered and evaluated by the researcher and reviewer agents.
## Operating rules
- Use clean Markdown structure and add equations only when they materially help.
- Keep the narrative readable, but never outrun the evidence.
- Produce artifacts that are ready to review in a browser or PDF preview.
- Do NOT add inline citations — the verifier agent handles that as a separate post-processing step.
- Do NOT add a Sources section — the verifier agent builds that.
## Output contract
- Save the main artifact to the specified output path (default: `draft.md`).
- Focus on clarity, structure, and evidence traceability.


@@ -1,66 +1,83 @@
---
title: Configuration
-description: Configure models, search, and runtime options
+description: Understand Feynman's configuration files and environment variables.
section: Getting Started
order: 4
---
-## Model
-Set the default model:
-```bash
-feynman model set <provider:model>
-```
-Override at runtime:
-```bash
-feynman --model anthropic:claude-opus-4-6
-```
-List available models:
-```bash
-feynman model list
-```
-## Thinking level
-Control the reasoning depth:
-```bash
-feynman --thinking high
-```
-Levels: `off`, `minimal`, `low`, `medium`, `high`, `xhigh`.
-## Web search
-Check the current search configuration:
-```bash
-feynman search status
-```
-For advanced configuration, edit `~/.feynman/web-search.json` directly to set Gemini API keys, Perplexity keys, or a different route.
-## Working directory
-```bash
-feynman --cwd /path/to/project
-```
-## Session storage
-```bash
-feynman --session-dir /path/to/sessions
-```
-## One-shot mode
-Run a single prompt and exit:
-```bash
-feynman --prompt "summarize the key findings of 2401.12345"
-```
+Feynman stores all configuration and state under `~/.feynman/`. This directory is created on first run and contains settings, authentication tokens, session history, and installed packages.
+## Directory structure
+```
+~/.feynman/
+├── settings.json # Core configuration
+├── web-search.json # Web search routing config
+├── auth/ # OAuth tokens and API keys
+├── sessions/ # Persisted conversation history
+└── packages/ # Installed optional packages
+```
+The `settings.json` file is the primary configuration file. It is created by `feynman setup` and can be edited manually. A typical configuration looks like:
+```json
+{
+  "defaultModel": "anthropic:claude-sonnet-4-20250514",
+  "thinkingLevel": "medium"
+}
+```
+## Model configuration
+The `defaultModel` field sets which model is used when you launch Feynman without the `--model` flag. The format is `provider:model-name`. You can change it via the CLI:
+```bash
+feynman model set anthropic:claude-opus-4-20250514
+```
+To see all models you have configured:
+```bash
+feynman model list
+```
+## Thinking levels
+The `thinkingLevel` field controls how much reasoning the model does before responding. Available levels are `off`, `minimal`, `low`, `medium`, `high`, and `xhigh`. Higher levels produce more thorough analysis at the cost of latency and token usage. You can override per-session:
+```bash
+feynman --thinking high
+```
+## Environment variables
+Feynman respects the following environment variables, which take precedence over `settings.json`:
+| Variable | Description |
+| --- | --- |
+| `FEYNMAN_MODEL` | Override the default model |
+| `FEYNMAN_HOME` | Override the config directory (default: `~/.feynman`) |
+| `FEYNMAN_THINKING` | Override the thinking level |
+| `ANTHROPIC_API_KEY` | Anthropic API key |
+| `OPENAI_API_KEY` | OpenAI API key |
+| `GOOGLE_API_KEY` | Google AI API key |
+| `TAVILY_API_KEY` | Tavily web search API key |
+| `SERPER_API_KEY` | Serper web search API key |
+## Session storage
+Each conversation is persisted as a JSON file in `~/.feynman/sessions/`. To start a fresh session:
+```bash
+feynman --new-session
+```
+To point sessions at a different directory (useful for per-project session isolation):
+```bash
+feynman --session-dir ~/myproject/.feynman/sessions
+```
+## Diagnostics
+Run `feynman doctor` to verify your configuration is valid, check authentication status for all configured providers, and detect missing optional dependencies. The doctor command outputs a checklist showing what is working and what needs attention.
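The stated precedence (environment variable over `settings.json` field) can be sketched as a small resolver. Only the variable name and the `defaultModel` field come from the docs; the `resolveModel` helper itself is a hypothetical illustration:

```typescript
// Mirror of the example settings.json shape.
interface Settings {
  defaultModel?: string;
  thinkingLevel?: string;
}

// FEYNMAN_MODEL takes precedence when set; otherwise fall back to settings.json.
function resolveModel(
  env: Record<string, string | undefined>,
  settings: Settings
): string | undefined {
  return env["FEYNMAN_MODEL"] ?? settings.defaultModel;
}

const settings: Settings = { defaultModel: "anthropic:claude-sonnet-4-20250514" };
console.log(resolveModel({}, settings));
// "anthropic:claude-sonnet-4-20250514"
console.log(resolveModel({ FEYNMAN_MODEL: "provider:model-name" }, settings));
// "provider:model-name"
```

The same env-then-file ordering would apply to `FEYNMAN_THINKING` and `FEYNMAN_HOME`.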


@@ -1,48 +1,103 @@
---
title: Installation
-description: Install Feynman and get started
+description: Install Feynman on macOS, Linux, or Windows using curl, pnpm, or bun.
section: Getting Started
order: 1
---
-## Requirements
-- macOS, Linux, or WSL
-- `curl` or `wget`
-## Recommended install
-```bash
-curl -fsSL https://feynman.is/install | bash
-```
-## Verify
-```bash
-feynman --version
-```
-## Windows PowerShell
-```powershell
-irm https://feynman.is/install.ps1 | iex
-```
-## npm fallback
-If you already manage Node yourself:
-```bash
-npm install -g @companion-ai/feynman
-```
-## Local Development
-For contributing or local development:
-```bash
-git clone https://github.com/getcompanion-ai/feynman.git
-cd feynman
-npm install
-npm run start
-```
+Feynman ships as a standalone runtime bundle for macOS, Linux, and Windows, and as a package-manager install for environments where Node.js is already installed. The recommended approach is the one-line installer, which downloads a prebuilt native bundle with zero external runtime dependencies.
+## One-line installer (recommended)
+On **macOS or Linux**, open a terminal and run:
+```bash
+curl -fsSL https://feynman.is/install | bash
+```
+The installer detects your OS and architecture automatically. On macOS it supports both Intel and Apple Silicon. On Linux it supports x64 and arm64. The launcher is installed to `~/.local/bin`, the bundled runtime is unpacked into `~/.local/share/feynman`, and your `PATH` is updated when needed.
+By default, the one-line installer tracks the rolling `edge` channel from `main`.
+On **Windows**, open PowerShell as Administrator and run:
+```powershell
+irm https://feynman.is/install.ps1 | iex
+```
+This installs the Windows runtime bundle under `%LOCALAPPDATA%\Programs\feynman`, adds its launcher to your user `PATH`, and lets you re-run the installer at any time to update.
+## Stable or pinned releases
+If you want the latest tagged release instead of the rolling `edge` channel:
+```bash
+curl -fsSL https://feynman.is/install | bash -s -- stable
+```
+On Windows:
+```powershell
+& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version stable
+```
+You can also pin an exact version by replacing `stable` with a version such as `0.2.13`.
+## pnpm
+If you already have Node.js 20.18.1+ installed, you can install Feynman globally via `pnpm`:
+```bash
+pnpm add -g @companion-ai/feynman
+```
+Or run it directly without installing:
+```bash
+pnpm dlx @companion-ai/feynman
+```
+## bun
+```bash
+bun add -g @companion-ai/feynman
+```
+Or run it directly without installing:
+```bash
+bunx @companion-ai/feynman
+```
+Both package-manager distributions ship the same core application but depend on Node.js being present on your system. The standalone installer is preferred because it bundles its own Node runtime and works without a separate Node installation.
+## Post-install setup
+After installation, run the guided setup wizard to configure your model provider and API keys:
+```bash
+feynman setup
+```
+This walks you through selecting a default model, authenticating with your provider, and optionally installing extra packages for features like web search and document preview. See the [Setup guide](/docs/getting-started/setup) for a detailed walkthrough.
+## Verifying the installation
+Confirm Feynman is installed and accessible:
+```bash
+feynman --version
+```
+If you see a version number, you are ready to go. Run `feynman doctor` at any time to diagnose configuration issues, missing dependencies, or authentication problems.
+## Local development
+For contributing or running Feynman from source:
+```bash
+git clone https://github.com/getcompanion-ai/feynman.git
+cd feynman
+pnpm install
+pnpm start
+```


@@ -1,44 +1,60 @@
---
title: Quick Start
description: Get up and running with Feynman in under five minutes.
section: Getting Started
order: 2
---
This guide assumes you have already [installed Feynman](/docs/getting-started/installation) and run `feynman setup`. If not, start there first.
## Launch the REPL
Start an interactive session by running:
```bash
feynman
```
You are dropped into a conversational REPL where you can ask research questions, run workflows, and interact with agents in natural language. Type your question and press Enter.
## Run a one-shot prompt
If you want a quick answer without entering the REPL, use the `--prompt` flag:
```bash
feynman --prompt "Summarize the key findings of Attention Is All You Need"
```
Feynman processes the prompt, prints the response, and exits. This is useful for scripting or piping output into other tools.
## Start a deep research session
Deep research is the flagship workflow. It dispatches multiple agents to search, read, cross-reference, and synthesize information from academic papers and the web:
```bash
feynman
> /deepresearch What are the current approaches to mechanistic interpretability in LLMs?
```
The agents collaborate to produce a structured research report with citations, key findings, and open questions. The full report is saved to your session directory for later reference.
## Work with files
Feynman can read and write files in your working directory. Point it at a paper or codebase for targeted analysis:
```bash
feynman --cwd ~/papers
> /review arxiv:2301.07041
```
You can also ask Feynman to draft documents, audit code, or compare multiple sources by referencing local files directly in your prompts.
## Explore slash commands
Type `/help` inside the REPL to see all available slash commands. Each command maps to a workflow or utility, such as `/deepresearch`, `/review`, `/draft`, `/watch`, and more. You can also run any workflow directly from the CLI:
```bash
feynman deepresearch "transformer architectures for protein folding"
```
See the [Slash Commands reference](/docs/reference/slash-commands) for the complete list.

---
title: Setup
description: Walk through the guided setup wizard to configure Feynman.
section: Getting Started
order: 3
---
The `feynman setup` wizard configures your model provider, API keys, and optional packages. It runs automatically on first launch, but you can re-run it at any time to change your configuration.
## Running setup
```bash
feynman setup
```
The wizard walks you through three stages: model configuration, authentication, and optional package installation.
## Stage 1: Model selection
Feynman supports multiple model providers. The setup wizard presents a list of available providers and models. Select your preferred default model using the arrow keys:
```
? Select your default model:
  anthropic:claude-sonnet-4-20250514
> anthropic:claude-opus-4-20250514
  openai:gpt-4o
  openai:o3
  google:gemini-2.5-pro
```
The model you choose here becomes the default for all sessions. You can override it per-session with the `--model` flag or change it later via `feynman model set <provider:model>`.
## Stage 2: Authentication
Depending on your chosen provider, setup prompts you for an API key or walks you through OAuth login. For providers that support Pi OAuth (like Anthropic and OpenAI), Feynman opens a browser window to complete the sign-in flow. Your credentials are stored securely in the Pi auth storage at `~/.feynman/`.
For API key providers, you are prompted to paste your key directly:
```
? Enter your API key: sk-ant-...
```
Keys are encrypted at rest and never sent anywhere except the provider's API endpoint.
## Stage 3: Optional packages
Feynman's core ships with the essentials, but some features require additional packages. The wizard asks if you want to install optional presets:
- **session-search** -- Enables searching prior session transcripts for past research
- **memory** -- Automatic preference and correction memory across sessions
- **generative-ui** -- Interactive HTML-style widgets for rich output
You can skip this step and install packages later with `feynman packages install <preset>`.
## Re-running setup
Configuration is stored in `~/.feynman/settings.json`. Running `feynman setup` again overwrites previous settings. If you only need to change a specific value, edit the config file directly or use targeted commands like `feynman model set` or `feynman alpha login`.
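Setup reads and writes `~/.feynman/settings.json`. The exact schema is not documented here, but based on the values the wizard manages, a configured file might look roughly like this (the field names are illustrative assumptions, not a guaranteed schema):

```json
{
  "model": "anthropic:claude-opus-4-20250514",
  "packages": ["session-search", "memory"]
}
```

Prefer `feynman model set` and `feynman packages install` over hand-editing when those commands cover the change you need.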

---
title: CLI Commands
description: Complete reference for all Feynman CLI commands and flags.
section: Reference
order: 1
---
This page covers the dedicated Feynman CLI commands and flags. Workflow commands like `feynman deepresearch` are also documented in the [Slash Commands](/docs/reference/slash-commands) reference since they map directly to REPL slash commands.
## Core commands
| Command | Description |
| --- | --- |
| `feynman` | Launch the interactive REPL |
| `feynman chat [prompt]` | Start chat explicitly, optionally with an initial prompt |
| `feynman help` | Show CLI help |
| `feynman setup` | Run the guided setup wizard |
| `feynman doctor` | Diagnose config, auth, Pi runtime, and preview dependencies |
| `feynman status` | Show the current setup summary (model, auth, packages) |
## Model management
| Command | Description |
| --- | --- |
| `feynman model list` | List available models in Pi auth storage |
| `feynman model login [id]` | Login to a Pi OAuth model provider |
| `feynman model logout [id]` | Logout from a Pi OAuth model provider |
| `feynman model set <provider:model>` | Set the default model for all sessions |
These commands manage your model provider configuration. The `model set` command updates `~/.feynman/settings.json` with the new default. The format is `provider:model-name`, for example `anthropic:claude-sonnet-4-20250514`.
## AlphaXiv commands
| Command | Description |
| --- | --- |
| `feynman alpha login` | Sign in to alphaXiv |
| `feynman alpha logout` | Clear alphaXiv auth |
| `feynman alpha status` | Check alphaXiv auth status |
AlphaXiv authentication enables Feynman to search and retrieve papers, access discussion threads, and pull citation metadata. The `alpha` CLI is also available directly in the agent shell for paper search, Q&A, and code inspection.
## Package management
| Command | Description |
| --- | --- |
| `feynman packages list` | List all available packages and their install status |
| `feynman packages install <preset>` | Install an optional package preset |
| `feynman update [package]` | Update installed packages, or a specific package by name |
Use `feynman packages list` to see which optional packages are available and which are already installed. The `all-extras` preset installs every optional package at once.
## Utility commands
| Command | Description |
| --- | --- |
| `feynman search status` | Show Pi web-access status and config path |
## Workflow commands
All research workflow slash commands can also be invoked directly from the CLI:
```bash
feynman deepresearch "topic"
feynman lit "topic"
feynman review artifact.md
feynman audit 2401.12345
feynman replicate "claim"
feynman compare "topic"
feynman draft "topic"
```
These are equivalent to launching the REPL and typing the corresponding slash command.
## Flags
| Flag | Description |
| --- | --- |
| `--prompt "<text>"` | Run one prompt and exit (one-shot mode) |
| `--model <provider:model>` | Force a specific model for this session |
| `--thinking <level>` | Set thinking level: `off`, `minimal`, `low`, `medium`, `high`, `xhigh` |
| `--cwd <path>` | Set the working directory for all file operations |
| `--session-dir <path>` | Set the session storage directory |
| `--new-session` | Start a new persisted session |
| `--alpha-login` | Sign in to alphaXiv and exit |
| `--alpha-logout` | Clear alphaXiv auth and exit |
| `--alpha-status` | Show alphaXiv auth status and exit |
| `--doctor` | Alias for `feynman doctor` |
| `--setup-preview` | Install preview dependencies (pandoc) |
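The flags compose, so a scripted invocation typically combines a model override, a thinking level, and a one-shot prompt. The sketch below only builds and prints the assembled command line with `printf %q` rather than executing it, so you can inspect the quoting before running it for real (the model, paths, and topic are hypothetical):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Build the argument vector once, then print it shell-quoted for inspection.
cmd=(feynman
  --model anthropic:claude-opus-4-20250514
  --thinking high
  --cwd "$HOME/papers"
  --new-session
  --prompt "Summarize recent work on sparse autoencoders")

printf '%q ' "${cmd[@]}"
echo
```

Drop the `printf` line and invoke `"${cmd[@]}"` directly once the command looks right.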

---
title: Package Stack
description: Core and optional Pi packages bundled with Feynman.
section: Reference
order: 3
---
Feynman is built on the Pi runtime and uses curated Pi packages for its capabilities. Packages are managed through `feynman packages` commands and configured in `~/.feynman/settings.json`.
## Core packages
These are installed by default with every Feynman installation. They provide the foundation for all research workflows.
| Package | Purpose |
| --- | --- |
| `pi-subagents` | Parallel agent spawning for literature gathering and task decomposition. Powers the multi-agent workflows |
| `pi-btw` | Fast side-thread `/btw` conversations without interrupting the main research run |
| `pi-docparser` | Parse PDFs, Office documents, spreadsheets, and images for content extraction |
| `pi-web-access` | Web browsing, GitHub access, PDF fetching, and media retrieval |
| `pi-markdown-preview` | Render Markdown and LaTeX-heavy research documents as polished HTML/PDF |
| `@walterra/pi-charts` | Generate charts and quantitative visualizations from data |
| `pi-mermaid` | Render Mermaid diagrams in the terminal UI |
| `@aliou/pi-processes` | Manage long-running experiments, background tasks, and log tailing |
| `pi-zotero` | Integration with Zotero for citation library management |
| `pi-schedule-prompt` | Schedule recurring and deferred research jobs. Powers the `/watch` workflow |
| `@tmustier/pi-ralph-wiggum` | Long-running agent loops for iterative development. Powers `/autoresearch` |
These packages are updated together when you run `feynman update`. You do not need to install them individually.
## Optional packages
Install on demand with `feynman packages install <preset>`. These extend Feynman with capabilities that not every user needs.
| Package | Preset | Purpose |
| --- | --- | --- |
| `pi-generative-ui` | `generative-ui` | Interactive HTML-style widgets for rich output |
| `@kaiserlich-dev/pi-session-search` | `session-search` | Indexed session recall with summarize and resume UI. Powers `/search` |
| `@samfp/pi-memory` | `memory` | Automatic preference and correction memory across sessions |
## Installing and managing packages
List all available packages and their install status:
```bash
feynman packages list
```
Install a specific optional preset:
```bash
feynman packages install session-search
feynman packages install memory
feynman packages install generative-ui
```
Install all optional packages at once:
```bash
feynman packages install all-extras
```
## Updating packages
Update all installed packages to their latest versions:
```bash
feynman update
```
Update a specific package:
```bash
feynman update pi-subagents
```
Running `feynman update` without arguments updates everything. Pass a specific package name to update just that one. Updates are safe and preserve your configuration.

---
title: Slash Commands
description: Complete reference for REPL slash commands.
section: Reference
order: 2
---
Slash commands are available inside the Feynman REPL. They map to research workflows, project management tools, and setup utilities. Type `/help` inside the REPL for the live command list, which may include additional commands from installed Pi packages.
## Research workflows
| Command | Description |
| --- | --- |
| `/deepresearch <topic>` | Run a thorough, source-heavy investigation and produce a research brief with inline citations |
| `/lit <topic>` | Run a structured literature review with consensus, disagreements, and open questions |
| `/review <artifact>` | Simulate a peer review with severity-graded feedback and inline annotations |
| `/audit <item>` | Compare a paper's claims against its public codebase for mismatches and reproducibility risks |
| `/replicate <paper>` | Plan or execute a replication workflow for a paper, claim, or benchmark |
| `/compare <topic>` | Compare multiple sources and produce an agreement/disagreement matrix |
| `/draft <topic>` | Generate a paper-style draft from research findings |
| `/autoresearch <idea>` | Start an autonomous experiment loop that iteratively optimizes toward a goal |
| `/watch <topic>` | Set up recurring research monitoring on a topic |
These are the primary commands you will use day-to-day. Each workflow dispatches one or more specialized agents (researcher, reviewer, writer, verifier) depending on the task.
## Project and session
| Command | Description |
| --- | --- |
| `/log` | Write a durable session log with completed work, findings, open questions, and next steps |
| `/jobs` | Inspect active background work: running processes, scheduled follow-ups, and active watches |
| `/help` | Show grouped Feynman commands and prefill the editor with a selected command |
| `/init` | Bootstrap `AGENTS.md` and session-log folders for a new research project |
| `/outputs` | Browse all research artifacts (papers, outputs, experiments, notes) |
| `/search` | Search prior session transcripts for past research and findings |
| `/preview` | Preview the current artifact as rendered HTML or PDF |
Session management commands help you organize ongoing work. The `/log` command is particularly useful at the end of a research session to capture what was accomplished and what remains.
## Running workflows from the CLI
All research workflow slash commands can also be run directly from the command line:
```bash
feynman deepresearch "topic"
feynman lit "topic"
feynman review artifact.md
feynman audit 2401.12345
feynman replicate "claim"
feynman compare "topic"
feynman draft "topic"
```
This is equivalent to launching the REPL and typing the slash command. The CLI form is useful for scripting and automation.

---
title: AlphaXiv
description: Search and retrieve academic papers through the AlphaXiv integration.
section: Tools
order: 1
---
AlphaXiv is the primary academic paper search and retrieval tool in Feynman. It provides access to a vast corpus of research papers, discussion threads, citation metadata, and full-text PDFs. The researcher agent uses AlphaXiv as its primary source for academic content.
## Authentication
AlphaXiv requires authentication. Set it up during initial setup or at any time:
```bash
feynman alpha login
```
Check your authentication status:
```bash
feynman alpha status
```
## What it provides
AlphaXiv gives Feynman access to several capabilities that power the research workflows:
- **Paper search** -- Find papers by topic, author, keyword, or arXiv ID (`alpha search`)
- **Full-text retrieval** -- Download and parse complete PDFs for in-depth reading (`alpha get`)
- **Paper Q&A** -- Ask targeted questions about a paper's content (`alpha ask`)
- **Code inspection** -- Read files from a paper's linked GitHub repository (`alpha code`)
- **Annotations** -- Persistent local notes on papers across sessions (`alpha annotate`)
## How it is used
Feynman ships an `alpha-research` skill that teaches the agent to use the `alpha` CLI for paper operations. The researcher agent uses it automatically during workflows like deep research, literature review, and peer review. When you provide an arXiv ID (like `2401.12345`), the agent fetches the paper via `alpha get`.
You can also use the `alpha` CLI directly from the terminal:
```bash
alpha search "scaling laws"
alpha get 2401.12345
alpha ask 2401.12345 "What optimizer did they use?"
alpha code https://github.com/org/repo src/model.py
```
## Configuration
Authentication tokens are stored in `~/.feynman/auth/` and persist across sessions. No additional configuration is needed beyond logging in.
## Without AlphaXiv
If you choose not to authenticate with AlphaXiv, Feynman still functions but with reduced academic search capabilities. It falls back to web search for finding papers, which works for well-known work but misses the citation metadata, discussion threads, and full-text access that AlphaXiv provides. For serious research workflows, AlphaXiv authentication is strongly recommended.

---
title: Preview
description: Preview generated research artifacts as rendered HTML or PDF.
section: Tools
order: 4
---
The preview tool renders generated artifacts as polished HTML or PDF documents and opens them in your browser or PDF viewer. This is particularly useful for research briefs, paper drafts, and any document that contains LaTeX math, tables, or complex formatting that does not render well in a terminal.
## Usage
Inside the REPL, preview the most recent artifact:
```
/preview
```
Feynman suggests previewing automatically when you generate artifacts that benefit from rendered output. You can also preview a specific file:
```
/preview outputs/scaling-laws-brief.md
```
## Requirements
Preview requires `pandoc` for Markdown-to-HTML and Markdown-to-PDF rendering. Install the preview dependencies with:
```bash
feynman --setup-preview
```
On macOS with Homebrew, the setup command attempts to install pandoc automatically. On Linux, it checks for pandoc in your package manager. If the automatic install does not work, install pandoc manually from [pandoc.org](https://pandoc.org/installing.html) and rerun `feynman --setup-preview` to verify.
## Supported formats
The preview tool handles three output formats:
- **Markdown** -- Rendered as HTML with full LaTeX math support via KaTeX, syntax-highlighted code blocks, and clean typography
- **HTML** -- Opened directly in your default browser with no conversion step
- **PDF** -- Generated via pandoc with LaTeX rendering, suitable for sharing or printing
## How it works
The `pi-markdown-preview` package handles the rendering pipeline. For Markdown files, it converts to HTML with a clean stylesheet, proper code highlighting, and rendered math equations. The preview opens in your default browser as a local file.
For documents with heavy math notation (common in research drafts), the preview ensures all LaTeX expressions render correctly. Inline math (`$...$`) and display math (`$$...$$`) are both supported. Tables, citation lists, and nested blockquotes all render with proper formatting.
## Customization
The preview stylesheet is designed for research documents and includes styles for proper heading hierarchy, code blocks with syntax highlighting, tables with clean borders, math equations (inline and display), citation formatting, and blockquotes. The stylesheet is bundled with the package and does not require any configuration.

---
title: Session Search
description: Search prior Feynman session transcripts to recall past research.
section: Tools
order: 3
---
The session search tool recovers prior Feynman work from stored session transcripts. Every Feynman session is persisted to disk, and session search lets you find and reference past research, findings, and generated artifacts without starting over.
## Installation
Session search is an optional package. Install it with:
```bash
feynman packages install session-search
```
Once installed, the `/search` slash command and automatic session recall become available in all future sessions.
## Usage
Inside the REPL, invoke session search directly:
```
/search transformer scaling laws
```
You can also reference prior work naturally in conversation. Feynman invokes session search automatically when you mention previous research or ask to continue earlier work. For example, saying "pick up where I left off on protein folding" triggers a session search behind the scenes.
## What it searches
Session search indexes the full contents of your session history:
- Full session transcripts including your prompts and Feynman's responses
- Tool outputs and agent results from workflows like deep research and literature review
- Generated artifacts such as drafts, reports, and comparison matrices
- Metadata like timestamps, topics, and workflow types
The search uses both keyword matching and semantic similarity to find relevant past work. Results include the session ID, timestamp, and relevant excerpts so you can quickly identify which session contains the information you need.
## When to use it
Session search is valuable when you want to pick up a previous research thread without rerunning an expensive workflow, find specific findings or citations from a past deep research session, reference prior analysis in a new research context, or check what you have already investigated on a topic before launching a new round.
## How it works
The `@kaiserlich-dev/pi-session-search` package provides the underlying search and indexing. Sessions are stored in `~/.feynman/sessions/` by default (configurable with `--session-dir`). The index is built incrementally as new sessions complete, so search stays fast even with hundreds of past sessions.

--- ---
title: Web Search title: Web Search
description: Web search routing and configuration description: Web search routing, configuration, and usage within Feynman.
section: Tools section: Tools
order: 2 order: 2
--- ---
Feynman's web search tool retrieves current information from the web during research workflows. It supports multiple simultaneous queries, domain filtering, recency filtering, and optional full-page content retrieval. The researcher agent uses web search alongside AlphaXiv to gather evidence from non-academic sources like blog posts, documentation, news, and code repositories.
## Routing modes ## Routing modes
Feynman supports three web search backends: Feynman supports three web search backends. You can configure which one to use or let Feynman choose automatically:
| Mode | Description | | Mode | Description |
|------|-------------| | --- | --- |
| `auto` | Prefer Perplexity when configured, fall back to Gemini | | `auto` | Prefer Perplexity when configured, fall back to Gemini |
| `perplexity` | Force Perplexity Sonar | | `perplexity` | Force Perplexity Sonar for all web searches |
| `gemini` | Force Gemini (default) | | `gemini` | Force Gemini grounding (default, zero-config) |
## Default behavior ## Default behavior
The default path is zero-config Gemini Browser via a signed-in Chromium profile. No API keys required. The default path is zero-config Gemini grounding via a signed-in Chromium profile. No API keys are required. This works on macOS and Linux where a Chromium-based browser is installed and signed in to a Google account.
## Check current config For headless environments, CI pipelines, or servers without a browser, configure an explicit API key for either Perplexity or Gemini in `~/.feynman/web-search.json`.
## Configuration
Check the current search configuration:
```bash ```bash
feynman search status feynman search status
``` ```
## Advanced configuration Edit `~/.feynman/web-search.json` to configure the backend:
Edit `~/.feynman/web-search.json` directly to set: ```json
{
"route": "auto",
"perplexityApiKey": "pplx-...",
"geminiApiKey": "AIza..."
}
```
- Gemini API keys Set `route` to `auto`, `perplexity`, or `gemini`. When using `auto`, Feynman prefers Perplexity if a key is present, then falls back to Gemini.
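The `auto` preference order amounts to a short lookup. A minimal sketch of that resolution logic (a hypothetical helper, not Feynman's actual source):

```python
def resolve_backend(config: dict) -> str:
    # Explicit routes are honored as-is.
    route = config.get("route", "auto")
    if route in ("perplexity", "gemini"):
        return route
    # "auto": Perplexity wins only when a key is configured...
    if config.get("perplexityApiKey"):
        return "perplexity"
    # ...otherwise fall back to Gemini, which needs no key.
    return "gemini"
```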
## Search features
The web search tool supports several capabilities that the researcher agent leverages automatically:
- **Multiple queries** -- Send 2-4 varied-angle queries simultaneously for broader coverage of a topic
- **Domain filtering** -- Restrict results to specific domains like `arxiv.org`, `github.com`, or `nature.com`
- **Recency filtering** -- Filter results by date, useful for fast-moving topics where only recent work matters
- **Full content retrieval** -- Fetch complete page content for the most important results rather than relying on snippets
## When it runs
Web search is used automatically by researcher agents during workflows. You do not need to invoke it directly. The researcher decides when to use web search versus paper search based on the topic and source availability. Academic topics lean toward AlphaXiv; engineering and applied topics lean toward web search.

---
title: Code Audit
description: Compare a paper's claims against its public codebase for reproducibility.
section: Workflows
order: 4
---
The code audit workflow compares a paper's claims against its public codebase to identify mismatches, undocumented deviations, and reproducibility risks. It bridges the gap between what a paper says and what the code actually does.
## Usage
From the REPL:
```
/audit arxiv:2401.12345
```
```
/audit https://github.com/org/repo --paper arxiv:2401.12345
```
From the CLI:
```bash
feynman audit 2401.12345
```
When given an arXiv ID, Feynman locates the associated code repository from the paper's links, Papers With Code, or GitHub search. You can also provide the repository URL directly.
## How it works
The audit workflow operates in two passes. First, the researcher agent reads the paper and extracts all concrete claims: hyperparameters, architecture details, training procedures, dataset splits, evaluation metrics, and reported results. Each claim is tagged with its location in the paper for traceability.
Second, the verifier agent examines the codebase to find the corresponding implementation for each claim. It checks configuration files, training scripts, model definitions, and evaluation code to verify that the code matches the paper's description. When it finds a discrepancy -- a hyperparameter that differs, a training step that was described but not implemented, or an evaluation procedure that deviates from the paper -- it documents the mismatch with exact file paths and line numbers.
The audit also checks for common reproducibility issues like missing random seeds, non-deterministic operations without pinned versions, hardcoded paths, and absent environment specifications.
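The claim-verification pass can be sketched as a comparison between values extracted from the paper and values found in the repository's configuration. This is a hypothetical illustration of the pattern, not the verifier agent's implementation; the data shapes and field names are assumptions:

```python
def find_mismatches(paper_claims: dict, code_config: dict) -> list[dict]:
    """Compare claimed hyperparameters against values found in the code."""
    findings = []
    for name, claimed in paper_claims.items():
        if name not in code_config:
            # Claim in the paper with no corresponding code.
            findings.append({"claim": name, "status": "missing"})
        elif code_config[name] != claimed:
            # Discrepancy documented with evidence from both sides.
            findings.append({"claim": name, "status": "mismatch",
                             "paper": claimed, "code": code_config[name]})
    return findings
```

Claims that match silently pass; only missing implementations and mismatches surface in the report.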
## Output format
The audit report contains:
- **Match Summary** -- Percentage of claims that match the code
- **Confirmed Claims** -- Claims that are accurately reflected in the codebase
- **Mismatches** -- Discrepancies between paper and code with evidence from both
- **Missing Implementations** -- Claims in the paper with no corresponding code
- **Reproducibility Risks** -- Issues like missing seeds, unpinned dependencies, or hardcoded paths
## When to use it
Use `/audit` when you are deciding whether to build on a paper's results, when replicating an experiment, or when reviewing a paper for a venue and want to verify its claims against the code. It is also useful for auditing your own papers before submission to catch inconsistencies between your writeup and implementation.

---
title: Autoresearch
description: Start an autonomous experiment loop that iteratively optimizes toward a goal.
section: Workflows
order: 8
---
The autoresearch workflow launches an autonomous research loop that iteratively designs experiments, runs them, analyzes results, and proposes next steps. It is designed for open-ended exploration where the goal is optimization or discovery rather than a specific answer.
## Usage
From the REPL:
```
/autoresearch Optimize prompt engineering strategies for math reasoning on GSM8K
```
From the CLI:
```bash
feynman autoresearch "Optimize prompt engineering strategies for math reasoning on GSM8K"
```
Autoresearch runs as a long-lived background process. You can monitor its progress, pause it, or redirect its focus at any time.
## How it works
The autoresearch workflow is powered by `@tmustier/pi-ralph-wiggum`, which provides long-running agent loops. The workflow begins by analyzing the research goal and designing an initial experiment plan. It then enters an iterative loop:
1. **Hypothesis** -- The agent proposes a hypothesis or modification based on current results
2. **Experiment** -- It designs and executes an experiment to test the hypothesis
3. **Analysis** -- Results are analyzed and compared against prior iterations
4. **Decision** -- The agent decides whether to continue the current direction, try a variation, or pivot to a new approach
Each iteration builds on the previous ones. The agent maintains a running log of what has been tried, what worked, what failed, and what the current best result is. This prevents repeating failed approaches and ensures the search progresses efficiently.
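The hypothesis/experiment/analysis/decision loop above can be sketched as a keep-or-revert iteration over a running history. This is illustrative only; the `propose` and `run_experiment` hooks are hypothetical stand-ins for the agent's planning and execution steps:

```python
def autoresearch_loop(propose, run_experiment, iterations: int):
    history, best = [], None
    for i in range(iterations):
        hypothesis = propose(history)           # 1. propose based on prior results
        score = run_experiment(hypothesis)      # 2. run the experiment
        kept = best is None or score > best[1]  # 3-4. analyze and decide
        if kept:
            best = (hypothesis, score)          # keep improvements
        # The running log prevents repeating failed approaches.
        history.append({"iter": i, "hypothesis": hypothesis,
                        "score": score, "kept": kept})
    return best, history
```

Because `propose` sees the full history, each iteration can avoid directions that already failed and refine the current best.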
## Monitoring and control
Check active autoresearch jobs:
```
/jobs
```
Autoresearch runs in the background, so you can continue using Feynman for other tasks while it works. The `/jobs` command shows the current status, iteration count, and best result so far. You can interrupt the loop at any time to provide guidance or redirect the search.
## Output format
Autoresearch produces a running experiment log that includes:
- **Experiment History** -- What was tried in each iteration with parameters and results
- **Best Configuration** -- The best-performing setup found so far
- **Ablation Results** -- Which factors mattered most based on the experiments run
- **Recommendations** -- Suggested next steps based on observed trends
## When to use it
Use `/autoresearch` for tasks that benefit from iterative exploration: hyperparameter optimization, prompt engineering, architecture search, or any problem where the search space is large and the feedback signal is clear. It is not the right tool for answering a specific question (use `/deepresearch` for that) but excels at finding what works best through systematic experimentation.

---
title: Source Comparison
description: Compare multiple sources and produce an agreement/disagreement matrix.
section: Workflows
order: 6
---
The source comparison workflow analyzes multiple papers, articles, or documents side by side and produces a structured matrix showing where they agree, disagree, and differ in methodology. It is useful for understanding conflicting results, evaluating competing approaches, and identifying which claims have broad support versus limited evidence.
## Usage
From the REPL:
```
/compare "GPT-4 vs Claude vs Gemini on reasoning benchmarks"
```
```
/compare arxiv:2401.12345 arxiv:2402.67890 arxiv:2403.11111
```
From the CLI:
```bash
feynman compare "topic or list of sources"
```
You can provide a topic and let Feynman find the sources, or list specific papers and documents for a targeted comparison.
## How it works
The comparison workflow begins by identifying or retrieving the sources to compare. If you provide a topic, the researcher agents find the most relevant and contrasting papers. If you provide specific IDs or files, they are used directly.
Each source is analyzed independently first: the researcher agents extract claims, results, methodology, and limitations from each document. Then the comparison engine aligns claims across sources -- identifying where two papers make the same claim (agreement), where they report contradictory results (disagreement), and where they measure different things entirely (non-overlapping scope).
The alignment step handles the nuance that papers often measure slightly different quantities or use different evaluation protocols. The comparison explicitly notes when an apparent disagreement might be explained by methodological differences rather than genuine conflicting results.
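The agreement/disagreement split described above can be sketched as bucketing per-source stances on each aligned claim. A hypothetical illustration, not the comparison engine itself; the boolean-stance model is a simplification of the real claim alignment:

```python
def build_matrix(claims_by_source: dict[str, dict[str, bool]]):
    agreements, disagreements = {}, {}
    all_claims = {c for claims in claims_by_source.values() for c in claims}
    for claim in all_claims:
        stances = {src: claims[claim]
                   for src, claims in claims_by_source.items() if claim in claims}
        if len(stances) < 2:
            continue  # non-overlapping scope: only one source addresses it
        # Same stance everywhere -> agreement; otherwise -> disagreement.
        bucket = agreements if len(set(stances.values())) == 1 else disagreements
        bucket[claim] = stances
    return agreements, disagreements
```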
## Output format
The comparison produces:
- **Source Summaries** -- One-paragraph summary of each source's key contributions
- **Agreement Matrix** -- Claims supported by multiple sources with citation evidence
- **Disagreement Matrix** -- Conflicting claims with analysis of why sources diverge
- **Methodology Differences** -- How the sources differ in approach, data, and evaluation
- **Synthesis** -- An overall assessment of which claims are well-supported and which remain contested
## When to use it
Use `/compare` when you encounter contradictory results in the literature, when evaluating competing approaches to the same problem, or when you need to understand how different research groups frame the same topic. It is also useful for writing related work sections where you need to accurately characterize the state of debate.

---
title: Deep Research
description: Run a thorough, multi-agent investigation that produces a cited research brief.
section: Workflows
order: 1
---
Deep research is the flagship Feynman workflow. It dispatches multiple researcher agents in parallel to search academic papers, web sources, and code repositories, then synthesizes everything into a structured research brief with inline citations.
## Usage
From the REPL:
```
/deepresearch What are the current approaches to mechanistic interpretability in LLMs?
```
From the CLI:
```bash
feynman deepresearch "What are the current approaches to mechanistic interpretability in LLMs?"
```
Both forms are equivalent. The workflow begins immediately and streams progress as agents discover and analyze sources.
## How it works
The deep research workflow proceeds through four phases. First, the researcher agents fan out to search AlphaXiv for relevant papers and the web for non-academic sources like blog posts, documentation, and code repositories. Each agent tackles a different angle of the topic to maximize coverage.
Second, the agents read and extract key findings from the most relevant sources. They pull claims, methodology details, results, and limitations from each paper or article. For academic papers, they access the full PDF through AlphaXiv when available.
Third, a synthesis step cross-references findings across sources, identifies areas of consensus and disagreement, and organizes the material into a coherent narrative. The writer agent structures the output as a research brief with sections for background, key findings, open questions, and references.
Finally, the verifier agent spot-checks claims against their cited sources to flag any misattributions or unsupported assertions. The finished report is saved to your session directory and can be previewed as rendered HTML with `/preview`.
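The fan-out phase can be pictured as parallel searches over different angles whose results are merged and deduplicated before synthesis. A minimal sketch under stated assumptions (the `search` callable and the `url`-keyed result shape are hypothetical, not Feynman's agent interface):

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(angles: list[str], search) -> list[dict]:
    # Each "agent" searches one angle of the topic concurrently.
    with ThreadPoolExecutor(max_workers=len(angles)) as pool:
        results_per_angle = list(pool.map(search, angles))
    # Merge, dropping sources that multiple agents found independently.
    merged, seen = [], set()
    for results in results_per_angle:
        for source in results:
            if source["url"] not in seen:
                seen.add(source["url"])
                merged.append(source)
    return merged
```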
## Output format
The research brief follows a consistent structure:
- **Summary** -- A concise overview of the topic and key takeaways
- **Background** -- Context and motivation for the research area
- **Key Findings** -- The main results organized by theme, with inline citations
- **Open Questions** -- Unresolved issues and promising research directions
- **References** -- Full citation list with links to source papers and articles
## Customization
You can steer the research by being specific in your prompt. Narrow topics produce more focused briefs. Broad topics produce survey-style overviews. You can also specify constraints like "focus on papers from 2024" or "only consider empirical results" to guide the agents.

---
title: Draft Writing
description: Generate a paper-style draft from research findings and session context.
section: Workflows
order: 7
---
The draft writing workflow generates structured academic-style documents from your research findings. It uses the writer agent to produce well-organized prose with proper citations, sections, and formatting suitable for papers, reports, or blog posts.
## Usage
From the REPL:
```
/draft A survey of retrieval-augmented generation techniques
```
```
/draft --from-session
```
From the CLI:
```bash
feynman draft "A survey of retrieval-augmented generation techniques"
```
When used with `--from-session`, the writer draws from the current session's research findings, making it a natural follow-up to a deep research or literature review workflow.
## How it works
The draft workflow leverages the writer agent, which specializes in producing structured academic prose. When given a topic, it first consults the researcher agents to gather source material, then organizes the findings into a coherent document with proper narrative flow.
When working from existing session context (after a deep research or literature review), the writer skips the research phase and works directly with the findings already gathered. This produces a more focused draft because the source material has already been vetted and organized.
The writer pays attention to academic conventions: claims are attributed to their sources with inline citations, methodology sections describe procedures precisely, and limitations are discussed honestly. The draft includes placeholder sections for any content the writer cannot generate from available sources, clearly marking what needs human input.
## Output format
The draft follows standard academic structure:
- **Abstract** -- Concise summary of the document's scope and findings
- **Introduction** -- Motivation, context, and contribution statement
- **Body Sections** -- Organized by topic with subsections as needed
- **Discussion** -- Interpretation of findings and implications
- **Limitations** -- Honest assessment of scope and gaps
- **References** -- Complete bibliography in a consistent citation format
## Preview and iteration
After generating the draft, use `/preview` to render it as HTML or PDF with proper formatting, math rendering, and typography. You can iterate on the draft by asking Feynman to revise specific sections, add more detail, or restructure the argument.

---
title: Literature Review
description: Run a structured literature review with consensus mapping and gap analysis.
section: Workflows
order: 2
---
The literature review workflow produces a structured survey of the academic landscape on a given topic. Unlike deep research which aims for a comprehensive brief, the literature review focuses specifically on mapping the state of the field -- what researchers agree on, where they disagree, and what remains unexplored.
## Usage
From the REPL:
```
/lit Scaling laws for language model performance
```
From the CLI:
```bash
feynman lit "Scaling laws for language model performance"
```
## How it works
The literature review workflow begins by having researcher agents search for papers on the topic across AlphaXiv and the web. The agents prioritize survey papers, highly-cited foundational work, and recent publications to capture both established knowledge and the current frontier.
After gathering sources, the agents extract claims, results, and methodology from each paper. The synthesis step then organizes findings into a structured review that maps out where the community has reached consensus, where active debate exists, and where gaps in the literature remain.
The output is organized chronologically and thematically, showing how ideas evolved over time and how different research groups approach the problem differently. Citation counts and publication venues are used as signals for weighting claims, though the review explicitly notes when influential work contradicts the mainstream view.
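Weighting by citation count and recency can be sketched as a simple scoring rule. This is purely illustrative; the log-scaled citation signal, the recency term, and their blend are assumptions, not the actual weighting used in the review:

```python
import math

def source_weight(citations: int, year: int, current_year: int = 2026) -> float:
    # Citations count with diminishing returns; a 10x-cited paper is not 10x stronger.
    citation_signal = math.log1p(citations)
    # Newer work gets a modest boost so the frontier is not drowned out.
    recency_signal = 1.0 / (1 + current_year - year)
    return citation_signal + recency_signal
```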
## Output format
The literature review produces:
- **Scope and Methodology** -- What was searched and how papers were selected
- **Consensus** -- Claims that most papers agree on, with supporting citations
- **Disagreements** -- Active debates where papers present conflicting evidence or interpretations
- **Open Questions** -- Topics that the literature has not adequately addressed
- **Timeline** -- Key milestones and how the field evolved
- **References** -- Complete bibliography organized by relevance
## When to use it
Use `/lit` when you need a map of the research landscape rather than a deep dive into a specific question. It is particularly useful at the start of a new research project when you need to understand what has already been done, or when preparing a related work section for a paper.

---
title: Replication
description: Plan or execute a replication of a paper's experiments and claims.
section: Workflows
order: 5
---
The replication workflow helps you plan and execute reproductions of published experiments, benchmark results, or specific claims. It generates a detailed replication plan, identifies potential pitfalls, and can guide you through the execution step by step.
## Usage
From the REPL:
```
/replicate arxiv:2401.12345
```
```
/replicate "The claim that sparse attention achieves 95% of dense attention quality at 60% compute"
```
From the CLI:
```bash
feynman replicate "paper or claim"
```
You can point the workflow at a full paper for a comprehensive replication plan, or at a specific claim for a focused reproduction.
## How it works
The replication workflow starts with the researcher agent reading the target paper and extracting every detail needed for reproduction: model architecture, hyperparameters, training schedule, dataset preparation, evaluation protocol, and hardware requirements. It cross-references these details against the codebase (if available) using the same machinery as the code audit workflow.
Next, the workflow generates a structured replication plan that breaks the experiment into discrete steps, estimates compute and time requirements, and identifies where the paper is underspecified. For each underspecified detail, it suggests reasonable defaults based on common practices in the field and flags the assumption as a potential source of divergence.
The plan also includes a risk assessment: which parts of the experiment are most likely to cause replication failure, what tolerance to expect for numerical results, and which claims are most sensitive to implementation details.
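The "suggest defaults and flag the assumption" step above can be sketched as filling gaps from a table of common practices while recording each guess as a potential divergence source. The field names and default values here are hypothetical examples, not Feynman's actual defaults:

```python
# Hypothetical common-practice defaults for fields papers often omit.
COMMON_DEFAULTS = {"seed": 42, "warmup_steps": 0, "weight_decay": 0.0}

def fill_underspecified(extracted: dict) -> tuple[dict, list[str]]:
    plan, assumptions = dict(extracted), []
    for field, default in COMMON_DEFAULTS.items():
        if field not in plan:
            plan[field] = default
            # Every filled gap is flagged as a possible source of divergence.
            assumptions.append(f"{field} not stated in paper; assuming {default}")
    return plan, assumptions
```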
## Output format
The replication plan includes:
- **Requirements** -- Hardware, software, data, and estimated compute cost
- **Step-by-step Plan** -- Ordered steps from environment setup through final evaluation
- **Underspecified Details** -- Where the paper leaves out information needed for replication
- **Risk Assessment** -- Which steps are most likely to cause divergence from reported results
- **Success Criteria** -- What results would constitute a successful replication
## Iterative execution
After generating the plan, you can execute the replication interactively. Feynman walks you through each step, helps you write the code, monitors training runs, and compares intermediate results against the paper's reported values. When results diverge, it helps diagnose whether the cause is an implementation difference, a hyperparameter mismatch, or a genuine replication failure.

---
title: Peer Review
description: Simulate a rigorous peer review with severity-graded feedback.
section: Workflows
order: 3
---
The peer review workflow simulates a thorough academic peer review of a paper, draft, or research artifact. It produces severity-graded feedback with inline annotations, covering methodology, claims, writing quality, and reproducibility.
## Usage
From the REPL:
```
/review arxiv:2401.12345
```
```
/review ~/papers/my-draft.pdf
```
From the CLI:
```bash
feynman review arxiv:2401.12345
feynman review my-draft.md
```
You can pass an arXiv ID, a URL, or a local file path. For arXiv papers, Feynman fetches the full PDF through AlphaXiv.
## How it works
The review workflow assigns the reviewer agent to read the document end-to-end and evaluate it against standard academic criteria. The reviewer examines the paper's claims, checks whether the methodology supports the conclusions, evaluates the experimental design for potential confounds, and assesses the clarity and completeness of the writing.
Each piece of feedback is assigned a severity level: **critical** (fundamental issues that undermine the paper's validity), **major** (significant problems that should be addressed), **minor** (suggestions for improvement), or **nit** (stylistic or formatting issues). This grading helps you triage feedback and focus on what matters most.
The reviewer also produces a summary assessment with an overall recommendation and a confidence score indicating how certain it is about each finding. When the reviewer identifies a claim that cannot be verified from the paper alone, it flags it as needing additional evidence.
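The severity levels above give feedback a natural triage order: fundamental issues first, stylistic nits last. A minimal sketch of that ordering (the list-of-dicts feedback shape is an assumption, not the reviewer agent's data model):

```python
SEVERITY_ORDER = {"critical": 0, "major": 1, "minor": 2, "nit": 3}

def triage(feedback: list[dict]) -> list[dict]:
    # Sort so issues that undermine validity come before cosmetic suggestions.
    return sorted(feedback, key=lambda f: SEVERITY_ORDER[f["severity"]])
```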
## Output format
The review output includes:
- **Summary Assessment** -- Overall evaluation and recommendation
- **Strengths** -- What the paper does well
- **Critical Issues** -- Fundamental problems that need to be addressed
- **Major Issues** -- Significant concerns with suggested fixes
- **Minor Issues** -- Smaller improvements and suggestions
- **Inline Annotations** -- Specific comments tied to sections of the document
## Customization
You can focus the review by specifying what to examine: "focus on the statistical methodology" or "check the claims in Section 4 against the experimental results." The reviewer adapts its analysis to your priorities while still performing a baseline check of the full document.

---
title: Watch
description: Set up recurring research monitoring on a topic.
section: Workflows
order: 9
---
The watch workflow sets up recurring research monitoring that periodically checks for new papers, articles, and developments on a topic you care about. It notifies you when something relevant appears and can automatically summarize new findings.
## Usage
From the REPL:
```
/watch New developments in state space models for sequence modeling
```
From the CLI:
```bash
feynman watch "New developments in state space models for sequence modeling"
```
After setting up a watch, Feynman periodically runs searches on the topic and alerts you when it finds new relevant material.
## How it works
The watch workflow is built on `pi-schedule-prompt`, which manages scheduled and recurring tasks. When you create a watch, Feynman stores the topic and search parameters, then runs a lightweight search at regular intervals (default: daily).
Each check searches AlphaXiv for new papers and the web for new articles matching your topic. Results are compared against what was found in previous checks to surface only genuinely new material. When new items are found, Feynman produces a brief summary of each and stores it in your session history.
The watch is smart about relevance. It does not just keyword-match -- it uses the same researcher agent that powers deep research to evaluate whether new papers are genuinely relevant to your topic or just superficially related. This keeps the signal-to-noise ratio high even for broad topics.
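The "only genuinely new material" comparison can be sketched as a set of previously seen identifiers checked on each run. A hypothetical illustration, not Feynman's storage layer; the `id`-keyed result shape is an assumption:

```python
def new_items(results: list[dict], seen_ids: set[str]) -> list[dict]:
    # Keep only results not reported by any earlier check.
    fresh = [r for r in results if r["id"] not in seen_ids]
    # Remember them so the next check stays quiet about the same items.
    seen_ids.update(r["id"] for r in fresh)
    return fresh
```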
## Managing watches
List active watches:
```
/jobs
```
The `/jobs` command shows all active watches along with their schedule, last check time, and number of new items found. You can pause, resume, or delete watches from within the REPL.
## Output format
Each watch check produces:
- **New Papers** -- Titles, authors, and one-paragraph summaries of newly discovered papers
- **New Articles** -- Relevant blog posts, documentation updates, or news articles
- **Relevance Notes** -- Why each item was flagged as relevant to your watch topic
## When to use it
Use `/watch` to stay current on a research area without manually searching every day. It is particularly useful for fast-moving fields where new papers appear frequently, for tracking specific research groups or topics related to your own work, and for monitoring the literature while you focus on other tasks.

View File

@@ -1,58 +0,0 @@
---
import { ViewTransitions } from 'astro:transitions';
import Nav from '../components/Nav.astro';
import Footer from '../components/Footer.astro';
import '../styles/global.css';
interface Props {
title: string;
description?: string;
active?: 'home' | 'docs';
}
const { title, description = 'Research-first AI agent', active = 'home' } = Astro.props;
---
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="description" content={description} />
<title>{title}</title>
<link rel="preconnect" href="https://fonts.googleapis.com" />
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
<link href="https://fonts.googleapis.com/css2?family=VT323&display=swap" rel="stylesheet" />
<ViewTransitions fallback="none" />
<script is:inline>
(function() {
var stored = localStorage.getItem('theme');
var prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
if (stored === 'dark' || (!stored && prefersDark)) {
document.documentElement.classList.add('dark');
}
})();
</script>
<script is:inline>
document.addEventListener('astro:after-swap', function() {
var stored = localStorage.getItem('theme');
var prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
if (stored === 'dark' || (!stored && prefersDark)) {
document.documentElement.classList.add('dark');
}
var isDark = document.documentElement.classList.contains('dark');
var sun = document.getElementById('sun-icon');
var moon = document.getElementById('moon-icon');
if (sun) sun.style.display = isDark ? 'block' : 'none';
if (moon) moon.style.display = isDark ? 'none' : 'block';
});
</script>
</head>
<body class="min-h-screen flex flex-col antialiased">
<Nav active={active} />
<main class="flex-1">
<slot />
</main>
<Footer />
</body>
</html>

View File

@@ -1,79 +0,0 @@
---
import Base from './Base.astro';
import Sidebar from '../components/Sidebar.astro';
interface Props {
title: string;
description?: string;
currentSlug: string;
}
const { title, description, currentSlug } = Astro.props;
---
<Base title={`${title} — Feynman Docs`} description={description} active="docs">
<div class="max-w-6xl mx-auto px-6">
<div class="flex gap-8">
<Sidebar currentSlug={currentSlug} />
<button id="mobile-menu-btn" class="lg:hidden fixed bottom-6 right-6 z-40 p-3 rounded-full bg-accent text-bg shadow-lg" aria-label="Toggle sidebar">
<svg class="w-5 h-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
<path d="M4 6h16M4 12h16M4 18h16" />
</svg>
</button>
<div id="mobile-overlay" class="hidden fixed inset-0 bg-black/50 z-30 lg:hidden"></div>
<article class="flex-1 min-w-0 py-8 max-w-3xl">
<h1 class="text-3xl font-bold mb-8 tracking-tight">{title}</h1>
<div class="prose">
<slot />
</div>
</article>
</div>
</div>
<script is:inline>
(function() {
function init() {
var btn = document.getElementById('mobile-menu-btn');
var sidebar = document.getElementById('sidebar');
var overlay = document.getElementById('mobile-overlay');
if (btn && sidebar && overlay) {
function toggle() {
sidebar.classList.toggle('hidden');
sidebar.classList.toggle('fixed');
sidebar.classList.toggle('inset-0');
sidebar.classList.toggle('z-40');
sidebar.classList.toggle('bg-bg');
sidebar.classList.toggle('w-full');
sidebar.classList.toggle('p-6');
overlay.classList.toggle('hidden');
}
btn.addEventListener('click', toggle);
overlay.addEventListener('click', toggle);
}
document.querySelectorAll('.prose pre').forEach(function(pre) {
if (pre.querySelector('.copy-code')) return;
var copyBtn = document.createElement('button');
copyBtn.className = 'copy-code';
copyBtn.setAttribute('aria-label', 'Copy code');
copyBtn.innerHTML = '<svg width="14" height="14" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><rect x="9" y="9" width="13" height="13" rx="2"/><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"/></svg>';
pre.appendChild(copyBtn);
copyBtn.addEventListener('click', function() {
var code = pre.querySelector('code');
var text = code ? code.textContent : pre.textContent;
navigator.clipboard.writeText(text);
copyBtn.innerHTML = '<svg width="14" height="14" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><path d="M20 6L9 17l-5-5"/></svg>';
setTimeout(function() {
copyBtn.innerHTML = '<svg width="14" height="14" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><rect x="9" y="9" width="13" height="13" rx="2"/><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"/></svg>';
}, 2000);
});
});
}
document.addEventListener('DOMContentLoaded', init);
document.addEventListener('astro:after-swap', init);
})();
</script>
</Base>

View File

@@ -0,0 +1,149 @@
---
import "@/styles/global.css"
import { ViewTransitions } from "astro:transitions"
interface Props {
title?: string
description?: string
active?: "home" | "docs"
}
const {
title = "Feynman - The open source AI research agent",
description = "An AI-powered research agent that helps you discover, analyze, and synthesize scientific literature.",
active = "home",
} = Astro.props
---
<html lang="en" class="dark">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link rel="icon" type="image/svg+xml" href="/favicon.svg" />
<meta name="description" content={description} />
<title>{title}</title>
<link rel="preconnect" href="https://fonts.googleapis.com" />
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
<link href="https://fonts.googleapis.com/css2?family=VT323&display=swap" rel="stylesheet" />
<ViewTransitions />
<script is:inline>
;(function () {
const theme = localStorage.getItem("theme")
if (theme === "dark" || (!theme && window.matchMedia("(prefers-color-scheme: dark)").matches)) {
document.documentElement.classList.add("dark")
} else {
document.documentElement.classList.remove("dark")
}
})()
</script>
</head>
<body class="flex min-h-screen flex-col bg-background text-foreground antialiased">
<nav class="sticky top-0 z-50 bg-background">
<div class="mx-auto flex h-14 max-w-6xl items-center justify-between px-6">
<a href="/" class="flex items-center gap-2">
<span class="font-['VT323'] text-2xl text-primary">feynman</span>
</a>
<div class="flex items-center gap-6">
<a
href="/docs/getting-started/installation"
class:list={[
"text-sm transition-colors hover:text-foreground",
active === "docs" ? "text-foreground" : "text-muted-foreground",
]}
>
Docs
</a>
<a
href="https://github.com/getcompanion-ai/feynman"
target="_blank"
rel="noopener noreferrer"
class="text-sm text-muted-foreground transition-colors hover:text-foreground"
>
GitHub
</a>
<button
id="theme-toggle"
type="button"
class="inline-flex size-9 items-center justify-center rounded-md text-muted-foreground transition-colors hover:bg-muted hover:text-foreground"
aria-label="Toggle theme"
>
<svg
id="sun-icon"
class="hidden size-4"
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
stroke-width="2"
stroke="currentColor"
>
<path stroke-linecap="round" stroke-linejoin="round" d="M12 3v2.25m6.364.386l-1.591 1.591M21 12h-2.25m-.386 6.364l-1.591-1.591M12 18.75V21m-4.773-4.227l-1.591 1.591M5.25 12H3m4.227-4.773L5.636 5.636M15.75 12a3.75 3.75 0 11-7.5 0 3.75 3.75 0 017.5 0z" />
</svg>
<svg
id="moon-icon"
class="hidden size-4"
xmlns="http://www.w3.org/2000/svg"
fill="none"
viewBox="0 0 24 24"
stroke-width="2"
stroke="currentColor"
>
<path stroke-linecap="round" stroke-linejoin="round" d="M21.752 15.002A9.718 9.718 0 0118 15.75c-5.385 0-9.75-4.365-9.75-9.75 0-1.33.266-2.597.748-3.752A9.753 9.753 0 003 11.25C3 16.635 7.365 21 12.75 21a9.753 9.753 0 009.002-5.998z" />
</svg>
</button>
</div>
</nav>
<main class="flex flex-1 flex-col">
<slot />
</main>
<footer>
<div class="mx-auto flex max-w-6xl flex-col items-center justify-between gap-4 px-6 py-8 sm:flex-row">
<p class="text-sm text-muted-foreground">
&copy; {new Date().getFullYear()} Companion, Inc.
</p>
<div class="flex items-center gap-4 text-sm">
<a href="/docs/getting-started/installation" class="text-muted-foreground transition-colors hover:text-foreground">Docs</a>
<a
href="https://github.com/getcompanion-ai/feynman"
target="_blank"
rel="noopener noreferrer"
class="text-muted-foreground transition-colors hover:text-foreground"
>
GitHub
</a>
</div>
</div>
</footer>
<script is:inline>
function updateThemeIcons() {
const isDark = document.documentElement.classList.contains("dark")
document.getElementById("sun-icon").classList.toggle("hidden", !isDark)
document.getElementById("moon-icon").classList.toggle("hidden", isDark)
}
function setupThemeToggle() {
updateThemeIcons()
document.getElementById("theme-toggle").addEventListener("click", function () {
document.documentElement.classList.toggle("dark")
const isDark = document.documentElement.classList.contains("dark")
localStorage.setItem("theme", isDark ? "dark" : "light")
updateThemeIcons()
})
}
setupThemeToggle()
document.addEventListener("astro:after-swap", function () {
const theme = localStorage.getItem("theme")
if (theme === "dark" || (!theme && window.matchMedia("(prefers-color-scheme: dark)").matches)) {
document.documentElement.classList.add("dark")
} else {
document.documentElement.classList.remove("dark")
}
setupThemeToggle()
})
</script>
</body>
</html>

website/src/lib/utils.ts Normal file
View File

@@ -0,0 +1,6 @@
import { clsx, type ClassValue } from "clsx"
import { twMerge } from "tailwind-merge"
export function cn(...inputs: ClassValue[]) {
return twMerge(clsx(inputs))
}

View File

@@ -0,0 +1,13 @@
---
import Layout from "@/layouts/main.astro"
---
<Layout title="404 — Feynman">
<section class="flex flex-1 items-center justify-center">
<div class="flex flex-col items-center gap-4 text-center">
<h1 class="font-['VT323'] text-9xl text-primary">404</h1>
<p class="text-lg text-muted-foreground">Page not found.</p>
<a href="/" class="text-sm text-primary hover:underline">Back to home</a>
</div>
</section>
</Layout>

View File

@@ -1,6 +1,6 @@
---
import { getCollection } from 'astro:content';
import Layout from '@/layouts/main.astro';
export async function getStaticPaths() {
const docs = await getCollection('docs');
@@ -12,8 +12,143 @@ export async function getStaticPaths() {
const { entry } = Astro.props;
const { Content } = await entry.render();
const currentSlug = entry.slug;
const sections = [
{
title: 'Getting Started',
items: [
{ label: 'Installation', slug: 'getting-started/installation' },
{ label: 'Quick Start', slug: 'getting-started/quickstart' },
{ label: 'Setup', slug: 'getting-started/setup' },
{ label: 'Configuration', slug: 'getting-started/configuration' },
],
},
{
title: 'Workflows',
items: [
{ label: 'Deep Research', slug: 'workflows/deep-research' },
{ label: 'Literature Review', slug: 'workflows/literature-review' },
{ label: 'Peer Review', slug: 'workflows/review' },
{ label: 'Code Audit', slug: 'workflows/audit' },
{ label: 'Replication', slug: 'workflows/replication' },
{ label: 'Source Comparison', slug: 'workflows/compare' },
{ label: 'Draft Writing', slug: 'workflows/draft' },
{ label: 'Autoresearch', slug: 'workflows/autoresearch' },
{ label: 'Watch', slug: 'workflows/watch' },
],
},
{
title: 'Agents',
items: [
{ label: 'Researcher', slug: 'agents/researcher' },
{ label: 'Reviewer', slug: 'agents/reviewer' },
{ label: 'Writer', slug: 'agents/writer' },
{ label: 'Verifier', slug: 'agents/verifier' },
],
},
{
title: 'Tools',
items: [
{ label: 'AlphaXiv', slug: 'tools/alphaxiv' },
{ label: 'Web Search', slug: 'tools/web-search' },
{ label: 'Session Search', slug: 'tools/session-search' },
{ label: 'Preview', slug: 'tools/preview' },
],
},
{
title: 'Reference',
items: [
{ label: 'CLI Commands', slug: 'reference/cli-commands' },
{ label: 'Slash Commands', slug: 'reference/slash-commands' },
{ label: 'Package Stack', slug: 'reference/package-stack' },
],
},
];
---
<Layout title={`${entry.data.title} — Feynman Docs`} description={entry.data.description} active="docs">
<div class="max-w-6xl mx-auto px-6">
<div class="flex gap-8">
<aside id="sidebar" class="w-64 shrink-0 h-[calc(100vh-3.5rem)] sticky top-14 overflow-y-auto py-6 pr-4 hidden lg:block border-r border-border">
{sections.map((section) => (
<div class="mb-6">
<div class="text-xs font-semibold text-primary uppercase tracking-wider px-3 mb-2">{section.title}</div>
{section.items.map((item) => (
<a
href={`/docs/${item.slug}`}
class:list={[
'block px-3 py-1.5 text-sm border-l-[2px] transition-colors',
currentSlug === item.slug
? 'border-primary text-foreground'
: 'border-transparent text-muted-foreground hover:text-foreground',
]}
>
{item.label}
</a>
))}
</div>
))}
</aside>
<button id="mobile-menu-btn" class="lg:hidden fixed bottom-6 right-6 z-40 p-3 rounded-full bg-primary text-primary-foreground shadow-lg" aria-label="Toggle sidebar">
<svg class="w-5 h-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
<path d="M4 6h16M4 12h16M4 18h16" />
</svg>
</button>
<div id="mobile-overlay" class="hidden fixed inset-0 bg-black/50 z-30 lg:hidden"></div>
<article class="flex-1 min-w-0 py-8 max-w-3xl">
<h1 class="text-3xl font-bold mb-8 tracking-tight">{entry.data.title}</h1>
<div class="prose">
<Content />
</div>
</article>
</div>
</div>
<script is:inline>
(function() {
function init() {
var btn = document.getElementById('mobile-menu-btn');
var sidebar = document.getElementById('sidebar');
var overlay = document.getElementById('mobile-overlay');
if (btn && sidebar && overlay) {
function toggle() {
sidebar.classList.toggle('hidden');
sidebar.classList.toggle('fixed');
sidebar.classList.toggle('inset-0');
sidebar.classList.toggle('z-40');
sidebar.classList.toggle('bg-background');
sidebar.classList.toggle('w-full');
sidebar.classList.toggle('p-6');
overlay.classList.toggle('hidden');
}
btn.addEventListener('click', toggle);
overlay.addEventListener('click', toggle);
}
document.querySelectorAll('.prose pre').forEach(function(pre) {
if (pre.querySelector('.copy-code')) return;
var copyBtn = document.createElement('button');
copyBtn.className = 'copy-code';
copyBtn.setAttribute('aria-label', 'Copy code');
copyBtn.innerHTML = '<svg width="14" height="14" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><rect x="9" y="9" width="13" height="13" rx="2"/><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"/></svg>';
pre.appendChild(copyBtn);
copyBtn.addEventListener('click', function() {
var code = pre.querySelector('code');
var text = code ? code.textContent : pre.textContent;
navigator.clipboard.writeText(text);
copyBtn.innerHTML = '<svg width="14" height="14" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><path d="M20 6L9 17l-5-5"/></svg>';
setTimeout(function() {
copyBtn.innerHTML = '<svg width="14" height="14" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><rect x="9" y="9" width="13" height="13" rx="2"/><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"/></svg>';
}, 2000);
});
});
}
init();
document.addEventListener('astro:after-swap', init);
})();
</script>
</Layout>

View File

@@ -0,0 +1,3 @@
---
return Astro.redirect('/docs/getting-started/installation')
---

View File

@@ -1,161 +1,284 @@
---
import Layout from "@/layouts/main.astro"
import { Button } from "@/components/ui/button"
import { Card, CardHeader, CardTitle, CardDescription, CardContent } from "@/components/ui/card"
const workflows = [
{ command: "/deepresearch", description: "Multi-agent investigation across papers, web, and code" },
{ command: "/lit", description: "Literature review from primary sources with consensus mapping" },
{ command: "/review", description: "Simulated peer review with severity scores and a revision plan" },
{ command: "/audit", description: "Paper-to-code mismatch audit for reproducibility claims" },
{ command: "/replicate", description: "Replication plan and execution in a sandboxed Docker container" },
{ command: "/compare", description: "Side-by-side source comparison with agreement and conflict matrix" },
{ command: "/draft", description: "Polished paper-style draft with inline citations from findings" },
{ command: "/autoresearch", description: "Autonomous loop: hypothesize, experiment, measure, repeat" },
{ command: "/watch", description: "Recurring monitor for new papers, code, or product updates" },
{ command: "/outputs", description: "Browse all research artifacts, papers, notes, and experiments" },
]
const agents = [
{ name: "Researcher", description: "Hunts for evidence across papers, the web, repos, and docs" },
{ name: "Reviewer", description: "Grades claims by severity, flags gaps, and suggests revisions" },
{ name: "Writer", description: "Structures notes into briefs, drafts, and paper-style output" },
{ name: "Verifier", description: "Checks every citation, verifies URLs, removes dead links" },
]
const sources = [
{ name: "AlphaXiv", description: "Paper search, Q&A, code reading, and annotations via the alpha CLI", href: "https://alphaxiv.org" },
{ name: "Web search", description: "Searches via Gemini or Perplexity" },
{ name: "Session search", description: "Indexed recall across prior research sessions" },
{ name: "Preview", description: "Browser and PDF export of generated artifacts" },
]
const compute = [
{ name: "Docker", description: "Isolated local containers for safe experiments", href: "https://www.docker.com/" },
{ name: "Modal", description: "Serverless GPU compute for burst training and inference", href: "https://modal.com/" },
{ name: "RunPod", description: "Persistent GPU pods with SSH access for long-running runs", href: "https://www.runpod.io/" },
]
const terminalCommands = [
{ command: 'feynman "what do we know about scaling laws"', description: "Cited research brief from papers and web" },
{ command: 'feynman deepresearch "mechanistic interpretability"', description: "Multi-agent deep dive with synthesis and verification" },
{ command: 'feynman lit "RLHF alternatives"', description: "Literature review with consensus and open questions" },
{ command: "feynman audit 2401.12345", description: "Paper claims vs. what the code actually does" },
{ command: 'feynman replicate "chain-of-thought improves math"', description: "Replication plan, compute target, experiment execution" },
]
const installCommands = [
{ label: "curl", command: "curl -fsSL https://feynman.is/install | bash" },
{ label: "pnpm", command: "pnpm add -g @companion-ai/feynman" },
{ label: "bun", command: "bun add -g @companion-ai/feynman" },
]
---
<Layout title="Feynman — The open source AI research agent" active="home">
<div class="mx-auto max-w-5xl px-6">
<section class="flex flex-col items-center gap-8 pb-16 pt-20 text-center">
<div class="flex max-w-3xl flex-col gap-4">
<h1 class="text-4xl font-bold tracking-tight sm:text-5xl lg:text-6xl">
The open source AI<br />research agent
</h1>
<p class="mx-auto max-w-2xl text-lg text-muted-foreground">
Reads papers, searches the web, writes drafts, runs experiments, and cites every claim. All locally on your computer.
</p>
</div>
<div class="flex w-full max-w-3xl flex-col items-center gap-4">
<div class="flex w-full flex-col">
<div class="flex self-start">
{installCommands.map((entry, i) => (
<button
class:list={[
"install-toggle px-4 py-2 text-sm font-medium transition-colors cursor-pointer",
i === 0 ? "rounded-tl-lg" : "",
i === installCommands.length - 1 ? "rounded-tr-lg" : "",
entry.label === installCommands[0].label
? "bg-muted text-foreground"
: "bg-muted/30 text-muted-foreground hover:text-foreground hover:bg-muted/50",
]}
data-command={entry.command}
aria-label={`Show ${entry.label} install command`}
>
{entry.label}
</button>
))}
</div>
<button
id="install-cmd"
class="group flex w-full items-center justify-between gap-3 rounded-b-lg rounded-tr-lg bg-muted px-4 py-3 text-left font-mono text-sm transition-colors hover:bg-muted/80 cursor-pointer"
data-command={installCommands[0].command}
aria-label="Copy install command"
>
<span id="install-command" class="min-w-0 truncate">{installCommands[0].command}</span>
<svg id="install-copy" class="size-4 shrink-0 text-muted-foreground transition-colors group-hover:text-foreground" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><rect x="9" y="9" width="13" height="13" rx="2"/><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"/></svg>
<svg id="install-check" class="hidden size-4 shrink-0 text-primary" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><path d="M20 6L9 17l-5-5"/></svg>
</button>
</div>
<div class="flex items-center gap-3">
<a href="/docs/getting-started/installation">
<Button client:load size="lg">Get Started</Button>
</a>
<a href="https://github.com/getcompanion-ai/feynman" target="_blank" rel="noopener noreferrer" class="inline-flex h-10 items-center justify-center gap-2 rounded-md border border-input bg-background px-4 text-sm font-medium text-foreground transition-colors hover:bg-accent hover:text-accent-foreground">
GitHub
</a>
</div>
</div>
<img src="/hero.png" class="w-full" alt="Feynman CLI" />
</section>
<section class="py-16">
<div class="flex flex-col items-center gap-8 text-center">
<div class="flex flex-col gap-2">
<h2 class="text-2xl font-bold tracking-tight sm:text-3xl">What you type &rarr; what happens</h2>
<p class="text-muted-foreground">Ask a question or run a workflow. Every answer is cited.</p>
</div>
<Card client:load className="w-full text-left">
<CardContent client:load>
<div class="flex flex-col gap-3 font-mono text-sm">
{terminalCommands.map((cmd) => (
<div class="flex flex-col gap-0.5">
<span class="text-primary">{cmd.command}</span>
<span class="text-xs text-muted-foreground">{cmd.description}</span>
</div>
))}
</div>
</CardContent>
</Card>
</div>
</section>
<section class="py-16">
<div class="flex flex-col items-center gap-8 text-center">
<div class="flex flex-col gap-2">
<h2 class="text-2xl font-bold tracking-tight sm:text-3xl">Workflows</h2>
<p class="text-muted-foreground">Slash commands or natural language. Your call.</p>
</div>
<div class="grid w-full gap-4 sm:grid-cols-2 lg:grid-cols-3">
{workflows.map((wf) => (
<Card client:load size="sm">
<CardHeader client:load>
<CardTitle client:load className="font-mono text-sm text-primary">{wf.command}</CardTitle>
<CardDescription client:load>{wf.description}</CardDescription>
</CardHeader>
</Card>
))}
</div>
</div>
</section>
<section class="py-16">
<div class="flex flex-col items-center gap-8 text-center">
<div class="flex flex-col gap-2">
<h2 class="text-2xl font-bold tracking-tight sm:text-3xl">Agents</h2>
<p class="text-muted-foreground">You ask a question. The right team assembles.</p>
</div>
<div class="grid w-full gap-4 sm:grid-cols-2 lg:grid-cols-4">
{agents.map((agent) => (
<Card client:load size="sm" className="text-center">
<CardHeader client:load className="items-center">
<CardTitle client:load>{agent.name}</CardTitle>
<CardDescription client:load>{agent.description}</CardDescription>
</CardHeader>
</Card>
))}
</div>
</div>
</section>
<section class="py-16">
<div class="flex flex-col items-center gap-8 text-center">
<div class="flex flex-col gap-2">
<h2 class="text-2xl font-bold tracking-tight sm:text-3xl">Skills &amp; Tools</h2>
<p class="text-muted-foreground">How Feynman searches, remembers, and exports work.</p>
</div>
<div class="grid w-full gap-4 sm:grid-cols-2 lg:grid-cols-4">
{sources.map((source) => (
<Card client:load size="sm" className="text-center">
<CardHeader client:load className="items-center">
<CardTitle client:load>
{source.href ? (
<a href={source.href} target="_blank" rel="noopener noreferrer" class="text-primary hover:underline">
{source.name}
</a>
) : (
source.name
)}
</CardTitle>
<CardDescription client:load>{source.description}</CardDescription>
</CardHeader>
</Card>
))}
</div>
</div>
</section>
<section class="py-16">
<div class="flex flex-col items-center gap-8 text-center">
<div class="flex flex-col gap-2">
<h2 class="text-2xl font-bold tracking-tight sm:text-3xl">Compute</h2>
<p class="text-muted-foreground">Run experiments locally or burst onto managed GPU infrastructure when needed.</p>
</div>
<div class="grid w-full gap-4 sm:grid-cols-3">
{compute.map((provider) => (
<Card client:load size="sm" className="text-center">
<CardHeader client:load className="items-center">
<CardTitle client:load>
<a href={provider.href} target="_blank" rel="noopener noreferrer" class="text-primary hover:underline">
{provider.name}
</a>
</CardTitle>
<CardDescription client:load>{provider.description}</CardDescription>
</CardHeader>
</Card>
))}
</div> </div>
</div> </div>
</section> </section>
-  <section class="py-20 px-6">
-    <div class="max-w-5xl mx-auto">
-      <h2 class="text-2xl font-bold text-center mb-12">Tools</h2>
-      <div class="grid grid-cols-1 sm:grid-cols-2 gap-4 max-w-3xl mx-auto">
-        <div class="bg-surface rounded-xl p-5">
-          <div class="font-semibold mb-1"><a href="https://www.alphaxiv.org/" class="text-accent hover:underline">AlphaXiv</a></div>
-          <p class="text-sm text-text-muted">Paper search, Q&A, code reading, persistent annotations</p>
-        </div>
-        <div class="bg-surface rounded-xl p-5">
-          <div class="font-semibold mb-1"><a href="https://www.docker.com/" class="text-accent hover:underline">Docker</a></div>
-          <p class="text-sm text-text-muted">Isolated container execution for safe local experiments</p>
-        </div>
-        <div class="bg-surface rounded-xl p-5">
-          <div class="font-semibold mb-1">Web search</div>
-          <p class="text-sm text-text-muted">Gemini or Perplexity, zero-config default</p>
-        </div>
-        <div class="bg-surface rounded-xl p-5">
-          <div class="font-semibold mb-1">Session search</div>
-          <p class="text-sm text-text-muted">Indexed recall across prior research sessions</p>
-        </div>
-        <div class="bg-surface rounded-xl p-5">
-          <div class="font-semibold mb-1">Preview</div>
-          <p class="text-sm text-text-muted">Browser and PDF export of generated artifacts</p>
-        </div>
-      </div>
-    </div>
-  </section>
-  <section class="py-20 px-6 text-center">
-    <div class="max-w-xl mx-auto">
-      <p class="text-text-muted mb-6">Built on <a href="https://github.com/badlogic/pi-mono" class="text-accent hover:underline">Pi</a> and <a href="https://www.alphaxiv.org/" class="text-accent hover:underline">alphaXiv</a>. MIT licensed. Open source.</p>
-      <div class="flex gap-4 justify-center flex-wrap">
-        <a href="/docs/getting-started/installation" class="px-6 py-2.5 rounded-lg bg-accent text-bg font-semibold text-sm hover:bg-accent-hover transition-colors">Get started</a>
-        <a href="https://github.com/getcompanion-ai/feynman" target="_blank" rel="noopener" class="px-6 py-2.5 rounded-lg border border-border text-text-muted font-semibold text-sm hover:border-text-dim hover:text-text-primary transition-colors">GitHub</a>
-      </div>
-    </div>
-  </div>
-  </section>
+  <section class="flex flex-col items-center gap-6 py-20 text-center">
+    <p class="text-muted-foreground">
+      Built on <a href="https://github.com/badlogic/pi-mono" class="text-primary hover:underline">Pi</a> and <a href="https://www.alphaxiv.org/" class="text-primary hover:underline">alphaXiv</a>. Capabilities ship as Pi skills and every output stays source-grounded.
+    </p>
+    <div class="flex items-center gap-3">
+      <a href="/docs/getting-started/installation">
+        <Button client:load size="lg">Get Started</Button>
+      </a>
+      <a href="https://github.com/getcompanion-ai/feynman" target="_blank" rel="noopener noreferrer">
+        <Button client:load variant="outline" size="lg">View on GitHub</Button>
+      </a>
+    </div>
+  </section>
-  <script is:inline>
-    document.getElementById('copy-btn').addEventListener('click', function() {
-      navigator.clipboard.writeText('curl -fsSL https://feynman.is/install | bash');
-      var icon = document.getElementById('copy-icon');
-      icon.innerHTML = '<svg class="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><path d="M20 6L9 17l-5-5"/></svg>';
-      setTimeout(function() {
-        icon.innerHTML = '<svg class="w-4 h-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2"><rect x="9" y="9" width="13" height="13" rx="2"/><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"/></svg>';
      }, 2000);
-    });
-  </script>
-</Base>
+  <script is:inline>
+    (function () {
+      function init() {
+        var toggles = Array.from(document.querySelectorAll(".install-toggle"))
+        var btn = document.getElementById("install-cmd")
+        var text = document.getElementById("install-command")
+        var copyIcon = document.getElementById("install-copy")
+        var checkIcon = document.getElementById("install-check")
+        if (!btn || !text || !copyIcon || !checkIcon) return
+        toggles.forEach(function (toggle) {
+          if (toggle._b) return
+          toggle._b = true
+          toggle.addEventListener("click", function () {
+            var cmd = toggle.getAttribute("data-command")
+            if (!cmd) return
+            btn.setAttribute("data-command", cmd)
+            text.textContent = cmd
+            toggles.forEach(function (t) {
+              t.classList.remove("bg-muted", "text-foreground")
+              t.classList.add("bg-muted/30", "text-muted-foreground", "hover:text-foreground", "hover:bg-muted/50")
+            })
+            toggle.classList.remove("bg-muted/30", "text-muted-foreground", "hover:text-foreground", "hover:bg-muted/50")
+            toggle.classList.add("bg-muted", "text-foreground")
+          })
+        })
+        if (!btn._b) {
+          btn._b = true
+          btn.addEventListener("click", function () {
+            var cmd = btn.getAttribute("data-command")
+            if (!cmd) return
+            navigator.clipboard.writeText(cmd).then(function () {
+              copyIcon.classList.add("hidden")
+              checkIcon.classList.remove("hidden")
+              setTimeout(function () {
+                copyIcon.classList.remove("hidden")
+                checkIcon.classList.add("hidden")
+              }, 2000)
+            })
+          })
+        }
+      }
+      init()
+      document.addEventListener("astro:after-swap", init)
+    })()
+  </script>
+</Layout>


@@ -1,47 +1,133 @@
-@tailwind base;
-@tailwind components;
-@tailwind utilities;
+@import "tailwindcss";
+@import "tw-animate-css";
+@import "shadcn/tailwind.css";
+@import "@fontsource-variable/ibm-plex-sans";
+@custom-variant dark (&:is(.dark *));
+@theme inline {
+  --font-heading: var(--font-sans);
+  --font-sans: 'IBM Plex Sans Variable', sans-serif;
+  --color-sidebar-ring: var(--sidebar-ring);
+  --color-sidebar-border: var(--sidebar-border);
+  --color-sidebar-accent-foreground: var(--sidebar-accent-foreground);
+  --color-sidebar-accent: var(--sidebar-accent);
+  --color-sidebar-primary-foreground: var(--sidebar-primary-foreground);
+  --color-sidebar-primary: var(--sidebar-primary);
+  --color-sidebar-foreground: var(--sidebar-foreground);
+  --color-sidebar: var(--sidebar);
+  --color-chart-5: var(--chart-5);
+  --color-chart-4: var(--chart-4);
+  --color-chart-3: var(--chart-3);
+  --color-chart-2: var(--chart-2);
+  --color-chart-1: var(--chart-1);
+  --color-ring: var(--ring);
+  --color-input: var(--input);
+  --color-border: var(--border);
+  --color-destructive: var(--destructive);
+  --color-accent-foreground: var(--accent-foreground);
+  --color-accent: var(--accent);
+  --color-muted-foreground: var(--muted-foreground);
+  --color-muted: var(--muted);
+  --color-secondary-foreground: var(--secondary-foreground);
+  --color-secondary: var(--secondary);
+  --color-primary-foreground: var(--primary-foreground);
+  --color-primary: var(--primary);
+  --color-popover-foreground: var(--popover-foreground);
+  --color-popover: var(--popover);
+  --color-card-foreground: var(--card-foreground);
+  --color-card: var(--card);
+  --color-foreground: var(--foreground);
+  --color-background: var(--background);
+  --radius-sm: calc(var(--radius) * 0.6);
+  --radius-md: calc(var(--radius) * 0.8);
+  --radius-lg: var(--radius);
+  --radius-xl: calc(var(--radius) * 1.4);
+  --radius-2xl: calc(var(--radius) * 1.8);
+  --radius-3xl: calc(var(--radius) * 2.2);
+  --radius-4xl: calc(var(--radius) * 2.6);
+}
-:root {
-  --color-bg: #f0f5f1;
-  --color-surface: #e4ece6;
-  --color-surface-2: #d8e3db;
-  --color-border: #c2d1c6;
-  --color-text: #1a2e22;
-  --color-text-muted: #3d5c4a;
-  --color-text-dim: #6b8f7a;
-  --color-accent: #0d9668;
-  --color-accent-hover: #077a54;
-  --color-accent-subtle: #c6e4d4;
-  --color-teal: #0e8a7d;
-}
+:root {
+  --background: oklch(0.974 0.026 90.1);
+  --foreground: oklch(0.30 0.02 150);
+  --card: oklch(0.952 0.031 98.9);
+  --card-foreground: oklch(0.30 0.02 150);
+  --popover: oklch(0.952 0.031 98.9);
+  --popover-foreground: oklch(0.30 0.02 150);
+  --primary: oklch(0.45 0.12 145);
+  --primary-foreground: oklch(0.97 0.02 90);
+  --secondary: oklch(0.937 0.031 98.9);
+  --secondary-foreground: oklch(0.30 0.02 150);
+  --muted: oklch(0.937 0.031 98.9);
+  --muted-foreground: oklch(0.55 0.02 150);
+  --accent: oklch(0.937 0.031 98.9);
+  --accent-foreground: oklch(0.30 0.02 150);
+  --destructive: oklch(0.709 0.128 19.6);
+  --border: oklch(0.892 0.028 98.1);
+  --input: oklch(0.892 0.028 98.1);
+  --ring: oklch(0.45 0.12 145);
+  --chart-1: oklch(0.45 0.12 145);
+  --chart-2: oklch(0.749 0.063 185.5);
+  --chart-3: oklch(0.750 0.082 349.2);
+  --chart-4: oklch(0.709 0.128 19.6);
+  --chart-5: oklch(0.30 0.02 150);
+  --radius: 0.625rem;
+  --sidebar: oklch(0.952 0.031 98.9);
+  --sidebar-foreground: oklch(0.30 0.02 150);
+  --sidebar-primary: oklch(0.45 0.12 145);
+  --sidebar-primary-foreground: oklch(0.97 0.02 90);
+  --sidebar-accent: oklch(0.937 0.031 98.9);
+  --sidebar-accent-foreground: oklch(0.30 0.02 150);
+  --sidebar-border: oklch(0.892 0.028 98.1);
+  --sidebar-ring: oklch(0.45 0.12 145);
+}
-.dark {
-  --color-bg: #050a08;
-  --color-surface: #0c1410;
-  --color-surface-2: #131f1a;
-  --color-border: #1b2f26;
-  --color-text: #f0f5f2;
-  --color-text-muted: #8aaa9a;
-  --color-text-dim: #4d7565;
-  --color-accent: #34d399;
-  --color-accent-hover: #10b981;
-  --color-accent-subtle: #064e3b;
-  --color-teal: #2dd4bf;
-}
+.dark {
+  --background: oklch(0.324 0.015 240.4);
+  --foreground: oklch(0.830 0.041 86.1);
+  --card: oklch(0.360 0.017 227.1);
+  --card-foreground: oklch(0.830 0.041 86.1);
+  --popover: oklch(0.360 0.017 227.1);
+  --popover-foreground: oklch(0.830 0.041 86.1);
+  --primary: oklch(0.773 0.091 125.8);
+  --primary-foreground: oklch(0.324 0.015 240.4);
+  --secondary: oklch(0.386 0.019 229.5);
+  --secondary-foreground: oklch(0.830 0.041 86.1);
+  --muted: oklch(0.386 0.019 229.5);
+  --muted-foreground: oklch(0.723 0.019 153.4);
+  --accent: oklch(0.386 0.019 229.5);
+  --accent-foreground: oklch(0.830 0.041 86.1);
+  --destructive: oklch(0.709 0.128 19.6);
+  --border: oklch(0.515 0.021 232.9);
+  --input: oklch(0.515 0.021 232.9);
+  --ring: oklch(0.773 0.091 125.8);
+  --chart-1: oklch(0.773 0.091 125.8);
+  --chart-2: oklch(0.749 0.063 185.5);
+  --chart-3: oklch(0.750 0.082 349.2);
+  --chart-4: oklch(0.709 0.128 19.6);
+  --chart-5: oklch(0.647 0.020 155.6);
+  --sidebar: oklch(0.360 0.017 227.1);
+  --sidebar-foreground: oklch(0.830 0.041 86.1);
+  --sidebar-primary: oklch(0.773 0.091 125.8);
+  --sidebar-primary-foreground: oklch(0.324 0.015 240.4);
+  --sidebar-accent: oklch(0.416 0.023 157.1);
+  --sidebar-accent-foreground: oklch(0.830 0.041 86.1);
+  --sidebar-border: oklch(0.515 0.021 232.9);
+  --sidebar-ring: oklch(0.773 0.091 125.8);
+}
-html {
-  scroll-behavior: smooth;
-}
-body {
-  background-color: var(--color-bg);
-  color: var(--color-text);
-}
+@layer base {
+  * {
+    @apply border-border outline-ring/50;
+  }
+  body {
+    @apply bg-background text-foreground;
+  }
+  html {
+    @apply font-sans;
+    scroll-behavior: smooth;
+  }
+  ::view-transition-old(root),
+  ::view-transition-new(root) {
+    animation: none !important;
+  }
+}
.prose h2 {
@@ -49,7 +135,7 @@ body {
  font-weight: 700;
  margin-top: 2.5rem;
  margin-bottom: 1rem;
-  color: var(--color-text);
+  color: var(--foreground);
}
.prose h3 {
@@ -57,13 +143,13 @@ body {
  font-weight: 600;
  margin-top: 2rem;
  margin-bottom: 0.75rem;
-  color: var(--color-teal);
+  color: var(--primary);
}
.prose p {
  margin-bottom: 1rem;
  line-height: 1.75;
-  color: var(--color-text-muted);
+  color: var(--muted-foreground);
}
.prose ul {
@@ -81,64 +167,101 @@ body {
.prose li {
  margin-bottom: 0.375rem;
  line-height: 1.65;
-  color: var(--color-text-muted);
+  color: var(--muted-foreground);
}
.prose code {
  font-family: 'SF Mono', 'Fira Code', 'JetBrains Mono', monospace;
  font-size: 0.875rem;
-  background-color: var(--color-surface);
+  background-color: var(--muted);
  padding: 0.125rem 0.375rem;
  border-radius: 0.25rem;
-  color: var(--color-text);
+  color: var(--foreground);
}
.prose pre {
  position: relative;
-  background-color: var(--color-surface) !important;
  border-radius: 0.5rem;
  padding: 1rem 1.25rem;
  overflow-x: auto;
-  overflow-y: visible;
  margin-bottom: 1.25rem;
  font-family: 'SF Mono', 'Fira Code', 'JetBrains Mono', monospace;
  font-size: 0.875rem;
  line-height: 1.7;
+  background-color: var(--card) !important;
+  color: var(--card-foreground);
+}
+.prose .astro-code {
+  position: relative !important;
+  background-color: var(--card) !important;
+  color: var(--card-foreground) !important;
}
.prose pre code {
  background: none !important;
  border: none;
  padding: 0;
-  color: var(--color-text);
+  color: var(--card-foreground);
}
-.copy-code {
+.prose pre .copy-code {
+  all: unset;
  position: absolute;
  top: 0.75rem;
  right: 0.75rem;
+  z-index: 10;
  display: grid;
  place-items: center;
  width: 28px;
  height: 28px;
+  padding: 0;
+  margin: 0;
+  border: 1px solid var(--border);
  border-radius: 0.25rem;
-  color: var(--color-text-dim);
-  background: var(--color-surface-2);
-  opacity: 0;
+  color: var(--muted-foreground);
+  background: var(--background);
+  opacity: 0.6;
  transition: opacity 0.15s, color 0.15s;
  cursor: pointer;
+  pointer-events: auto;
}
-pre:hover .copy-code {
+.prose pre:hover .copy-code {
  opacity: 1;
}
-.copy-code:hover {
-  color: var(--color-accent);
-}
+.prose .astro-code .copy-code {
+  position: absolute;
+  top: 0.75rem;
+  right: 0.75rem;
+  z-index: 10;
+  display: grid;
+  place-items: center;
+  width: 28px;
+  height: 28px;
+  padding: 0;
+  margin: 0;
+  border: 1px solid var(--border);
+  border-radius: 0.25rem;
+  color: var(--muted-foreground);
+  background: var(--background);
+  opacity: 0.6;
+  transition: opacity 0.15s, color 0.15s;
+  cursor: pointer;
+  pointer-events: auto;
+}
-.prose pre code span {
-  color: inherit !important;
-}
+.prose .astro-code:hover .copy-code {
+  opacity: 1;
+}
+.prose .astro-code .copy-code:hover {
+  color: var(--primary);
+}
+.prose pre .copy-code:hover {
+  color: var(--primary);
+}
.prose table {
@@ -149,61 +272,98 @@ pre:hover .copy-code {
}
.prose th {
-  background-color: var(--color-surface);
+  background-color: var(--card);
  padding: 0.625rem 0.875rem;
  text-align: left;
  font-weight: 600;
-  color: var(--color-text);
-  border-bottom: 1px solid var(--color-border);
+  color: var(--foreground);
+  border-bottom: 1px solid var(--border);
}
.prose td {
  padding: 0.625rem 0.875rem;
-  border-bottom: 1px solid var(--color-border);
+  border-bottom: 1px solid var(--border);
}
.prose td code {
-  background-color: var(--color-surface-2);
+  background-color: var(--muted);
  padding: 0.125rem 0.375rem;
  border-radius: 0.25rem;
  font-size: 0.85rem;
}
.prose tr:nth-child(even) {
-  background-color: var(--color-surface);
+  background-color: var(--card);
}
.prose a {
-  color: var(--color-accent);
+  color: var(--primary);
  text-decoration: underline;
  text-underline-offset: 2px;
}
.prose a:hover {
-  color: var(--color-accent-hover);
+  opacity: 0.8;
}
.prose strong {
-  color: var(--color-text);
+  color: var(--foreground);
  font-weight: 600;
}
.prose hr {
-  border-color: var(--color-border);
+  border-color: var(--border);
  margin: 2rem 0;
}
.prose blockquote {
-  border-left: 2px solid var(--color-text-dim);
+  border-left: 2px solid var(--muted-foreground);
  padding-left: 1rem;
-  color: var(--color-text-dim);
+  color: var(--muted-foreground);
  font-style: italic;
  margin-bottom: 1rem;
}
-.agent-entry {
-  background-color: var(--color-surface);
-  border-radius: 0.75rem;
-  padding: 1.25rem 1.5rem;
-  margin-bottom: 1rem;
-}
+.dark .astro-code {
+  background-color: var(--card) !important;
+}
+.dark .astro-code code span {
+  color: var(--shiki-dark) !important;
+  background-color: var(--shiki-dark-bg) !important;
+  font-style: var(--shiki-dark-font-style) !important;
+  font-weight: var(--shiki-dark-font-weight) !important;
+  text-decoration: var(--shiki-dark-text-decoration) !important;
+}
+* {
+  scrollbar-width: thin;
+  scrollbar-color: var(--border) transparent;
+}
+::-webkit-scrollbar {
+  width: 6px;
+  height: 6px;
+}
+::-webkit-scrollbar-track {
+  background: transparent;
+}
+::-webkit-scrollbar-thumb {
+  background: var(--border);
+  border-radius: 3px;
+}
+::-webkit-scrollbar-thumb:hover {
+  background: var(--muted-foreground);
+}
+::-webkit-scrollbar-corner {
+  background: transparent;
+}
+::selection {
+  background: var(--primary);
+  color: var(--primary-foreground);
+}


@@ -1,25 +0,0 @@
-export default {
-  content: ['./src/**/*.{astro,html,js,jsx,md,mdx,svelte,ts,tsx,vue}'],
-  darkMode: 'class',
-  theme: {
-    extend: {
-      colors: {
-        bg: 'var(--color-bg)',
-        surface: 'var(--color-surface)',
-        'surface-2': 'var(--color-surface-2)',
-        border: 'var(--color-border)',
-        'text-primary': 'var(--color-text)',
-        'text-muted': 'var(--color-text-muted)',
-        'text-dim': 'var(--color-text-dim)',
-        accent: 'var(--color-accent)',
-        'accent-hover': 'var(--color-accent-hover)',
-        'accent-subtle': 'var(--color-accent-subtle)',
-        teal: 'var(--color-teal)',
-      },
-      fontFamily: {
-        mono: ['"SF Mono"', '"Fira Code"', '"JetBrains Mono"', 'monospace'],
-      },
-    },
-  },
-  plugins: [],
-};


@@ -1,3 +1,13 @@
{
-  "extends": "astro/tsconfigs/strict"
+  "extends": "astro/tsconfigs/strict",
+  "include": [".astro/types.d.ts", "**/*"],
+  "exclude": ["dist"],
+  "compilerOptions": {
+    "jsx": "react-jsx",
+    "jsxImportSource": "react",
+    "baseUrl": ".",
+    "paths": {
+      "@/*": ["./src/*"]
+    }
+  }
}