Replace Pi tool registrations with skills and CLI integration

- Remove all manually registered Pi tools (alpha_search, alpha_get_paper,
  alpha_ask_paper, alpha_annotate_paper, alpha_list_annotations,
  alpha_read_code, session_search, preview_file) and their wrappers
  (alpha.ts, preview.ts, session-search.ts, alpha-tools.test.ts)
- Add Pi skill files for alpha-research, session-search, preview,
  modal-compute, and runpod-compute in skills/
- Sync skills to ~/.feynman/agent/skills/ on startup via syncBundledAssets
- Add node_modules/.bin to Pi subprocess PATH so alpha CLI is accessible
- Add /outputs extension command to browse research artifacts via dialog
- Add Modal and RunPod as execution environments in /replicate and
  /autoresearch prompts
- Remove redundant /alpha-login, /alpha-logout, and /alpha-status REPL commands
  (the feynman alpha CLI still works)
- Update README, researcher agent, metadata, and website docs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Advait Paliwal
2026-03-25 00:38:45 -07:00
parent 5fab329ad1
commit 7024a86024
26 changed files with 320 additions and 1009 deletions

skills/alpha-research/SKILL.md Normal file

@@ -0,0 +1,42 @@
---
name: alpha-research
description: Search, read, and query research papers via the `alpha` CLI (alphaXiv-backed). Use when the user asks about academic papers, wants to find research on a topic, needs to read a specific paper, ask questions about a paper, inspect a paper's code repository, or manage paper annotations.
---
# Alpha Research CLI
Use the `alpha` CLI via bash for all paper research operations.
## Commands
| Command | Description |
|---------|-------------|
| `alpha search "<query>"` | Search papers. Modes: `--mode semantic`, `--mode keyword`, `--mode agentic` |
| `alpha get <arxiv-id-or-url>` | Fetch paper content and any local annotation |
| `alpha get --full-text <arxiv-id>` | Get raw full text instead of AI report |
| `alpha ask <arxiv-id> "<question>"` | Ask a question about a paper's PDF |
| `alpha code <github-url> [path]` | Read files from a paper's GitHub repo. Use `/` for overview |
| `alpha annotate <paper-id> "<note>"` | Save a persistent annotation on a paper |
| `alpha annotate --clear <paper-id>` | Remove an annotation |
| `alpha annotate --list` | List all annotations |
## Auth
Run `alpha login` to authenticate with alphaXiv. Check status with `alpha status`.
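A quick pre-flight check before a research run (assuming `alpha` is on PATH and that `alpha status` reports the login state, per above):
```bash
# confirm the CLI is installed and authenticated before searching
command -v alpha && alpha status
```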
## Examples
```bash
alpha search "transformer scaling laws"
alpha search --mode agentic "efficient attention mechanisms for long context"
alpha get 2106.09685
alpha ask 2106.09685 "What optimizer did they use?"
alpha code https://github.com/karpathy/nanoGPT src/model.py
alpha annotate 2106.09685 "Key paper on LoRA - revisit for adapter comparison"
```
## When to use
- Academic paper search, reading, Q&A → `alpha`
- Current topics (products, releases, docs) → web search tools
- Mixed topics → combine both

skills/modal-compute/SKILL.md Normal file

@@ -0,0 +1,56 @@
---
name: modal-compute
description: Run GPU workloads on Modal's serverless infrastructure. Use when the user needs remote GPU compute for training, inference, benchmarks, or batch processing and Modal CLI is available.
---
# Modal Compute
Use the `modal` CLI for serverless GPU workloads. No pod lifecycle to manage — write a decorated Python script and run it.
## Setup
```bash
pip install modal
modal setup
```
## Commands
| Command | Description |
|---------|-------------|
| `modal run script.py` | Run a script on Modal (ephemeral) |
| `modal run --detach script.py` | Run detached (background) |
| `modal deploy script.py` | Deploy persistently |
| `modal serve script.py` | Serve with hot-reload (dev) |
| `modal shell --gpu a100` | Interactive shell with GPU |
| `modal app list` | List deployed apps |
## GPU types
`T4`, `L4`, `A10G`, `L40S`, `A100`, `A100-80GB`, `H100`, `H200`, `B200`
Multi-GPU: `"H100:4"` for 4x H100s.
## Script pattern
```python
import modal

app = modal.App("experiment")
image = modal.Image.debian_slim(python_version="3.11").pip_install("torch==2.8.0")

@app.function(gpu="A100", image=image, timeout=600)
def train():
    import torch
    # training code here

@app.local_entrypoint()
def main():
    train.remote()
```
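Assuming the pattern above is saved as, say, `experiment.py` (the filename is illustrative), it is launched with the run commands from the table:
```bash
modal run experiment.py            # run in the foreground and stream logs
modal run --detach experiment.py   # run detached; the job keeps going after you disconnect
```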
## When to use
- Stateless burst GPU jobs (training, inference, benchmarks)
- No persistent state needed between runs
- Check availability: `command -v modal`

skills/preview/SKILL.md Normal file

@@ -0,0 +1,27 @@
---
name: preview
description: Preview Markdown, LaTeX, PDF, or code artifacts in the browser or as PDF. Use when the user wants to review a written artifact, export a report, or view a rendered document.
---
# Preview
Use the `/preview` command to render and open artifacts.
## Commands
| Command | Description |
|---------|-------------|
| `/preview` | Preview the most recent artifact in the browser |
| `/preview --file <path>` | Preview a specific file |
| `/preview-browser` | Force browser preview |
| `/preview-pdf` | Export to PDF via pandoc + LaTeX |
| `/preview-clear-cache` | Clear rendered preview cache |
## Fallback
If the preview commands are not available, use bash:
```bash
open <file.md> # macOS — opens in default app
open <file.pdf> # macOS — opens in Preview
```
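If the `/preview` commands are unavailable but pandoc and a LaTeX engine are installed, a rough manual equivalent of `/preview-pdf` (file names illustrative):
```bash
# render a Markdown artifact to PDF by hand, then open it
pandoc report.md -o report.pdf --pdf-engine=xelatex
open report.pdf   # macOS
```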

skills/runpod-compute/SKILL.md Normal file

@@ -0,0 +1,48 @@
---
name: runpod-compute
description: Provision and manage GPU pods on RunPod for long-running experiments. Use when the user needs persistent GPU compute with SSH access, large datasets, or multi-step experiments.
---
# RunPod Compute
Use the `runpodctl` CLI for persistent GPU pods with SSH access.
## Setup
```bash
brew install runpod/runpodctl/runpodctl # macOS
runpodctl config --apiKey=YOUR_KEY
```
## Commands
| Command | Description |
|---------|-------------|
| `runpodctl create pod --gpuType "NVIDIA A100 80GB PCIe" --imageName "runpod/pytorch:2.4.0-py3.11-cuda12.4.1-devel-ubuntu22.04" --name experiment` | Create a pod |
| `runpodctl get pod` | List all pods |
| `runpodctl stop pod <id>` | Stop (preserves volume) |
| `runpodctl start pod <id>` | Resume a stopped pod |
| `runpodctl remove pod <id>` | Terminate and delete |
| `runpodctl gpu list` | List available GPU types and prices |
| `runpodctl send <file>` | Send a file (prints a one-time transfer code) |
| `runpodctl receive <code>` | Receive a file using the transfer code |
## SSH access
```bash
ssh root@<IP> -p <PORT> -i ~/.ssh/id_ed25519
```
Get connection details from `runpodctl get pod <id>`. Pods must expose port `22/tcp`.
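A typical lifecycle, stitched together from the commands above (the `<id>`, `<IP>`, and `<PORT>` placeholders come from `runpodctl get pod`, and direct SSH assumes `22/tcp` is exposed as noted):
```bash
runpodctl gpu list                 # check available GPU types and prices
runpodctl create pod --gpuType "NVIDIA A100 80GB PCIe" \
  --imageName "runpod/pytorch:2.4.0-py3.11-cuda12.4.1-devel-ubuntu22.04" \
  --name experiment
runpodctl get pod                  # note the pod id, IP, and SSH port
ssh root@<IP> -p <PORT> -i ~/.ssh/id_ed25519
runpodctl stop pod <id>            # stop (or remove) the pod when finished
```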
## GPU types
`NVIDIA GeForce RTX 4090`, `NVIDIA RTX A6000`, `NVIDIA A40`, `NVIDIA A100 80GB PCIe`, `NVIDIA H100 80GB HBM3`
## When to use
- Long-running experiments needing persistent state
- Large dataset processing
- Multi-step work with SSH access between iterations
- Always stop or remove pods after experiments
- Check availability: `command -v runpodctl`

skills/session-search/SKILL.md Normal file

@@ -0,0 +1,26 @@
---
name: session-search
description: Search past Feynman session transcripts to recover prior work, conversations, and research context. Use when the user references something from a previous session, asks "what did we do before", or when you suspect relevant past context exists.
---
# Session Search
Use the `/search` command to search prior Feynman sessions interactively, or search session JSONL files directly via bash.
## Interactive search
```
/search <query>
```
Opens the session search UI. Supports `resume <sessionPath>` to continue a found session.
## Direct file search
Session transcripts are stored as JSONL files in `~/.feynman/sessions/`. Each line is a JSON record with `type` (session, message, model_change) and `message.content` fields.
```bash
grep -ril "scaling laws" ~/.feynman/sessions/
```
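If `jq` is available, a rough sketch for pulling message text out of one matching transcript (the record shape beyond `type` and `message.content` is not guaranteed):
```bash
# print the content of every record tagged as a message in one session file
jq -r 'select(.type == "message") | .message.content' ~/.feynman/sessions/<session>.jsonl
```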
For structured search across sessions, use the interactive `/search` command.