Improve Feynman packaging and research prompts

Advait Paliwal
2026-03-24 09:57:25 -07:00
parent 6ff4dde341
commit 0f62901ab0
17 changed files with 253 additions and 36 deletions

@@ -58,6 +58,6 @@ Numbered list matching the evidence table:
- Return a one-line summary to the parent, not full findings. The parent reads the output file.
## Output contract
- - Save to the output file (default: `research.md`).
+ - Save to the output path specified by the parent (default: `research.md`).
- Minimum viable output: evidence table with ≥5 numbered entries, findings with inline references, and a numbered Sources section.
- Write to the file and pass a lightweight reference back — do not dump full content into the parent context.

@@ -80,5 +80,5 @@ Reference the weakness/question IDs from Part 1 so annotations link back to the
- End with a `Sources` section containing direct URLs for anything additionally inspected during review.
## Output contract
- - Save the main artifact to `review.md`.
+ - Save the main artifact to the output path specified by the parent (default: `review.md`).
- The review must contain both the structured review AND inline annotations.

@@ -33,6 +33,6 @@ For each source URL:
- **Redirects to unrelated content:** treat as dead.
## Output contract
- - Save to the output file (default: `cited.md`).
+ - Save to the output path specified by the parent (default: `cited.md`).
- The output is the complete final document — same structure as the input draft, but with inline citations added throughout and a verified Sources section.
- Do not change the substance or structure of the draft. Only add citations and fix dead sources.

@@ -32,7 +32,21 @@ Do **not** restate per-agent prompt text here unless there is a repo-wide constr
- Paper-style drafts go in `papers/`.
- Session logs go in `notes/`.
- Plan artifacts for long-running workflows go in `outputs/.plans/`.
- - Intermediate research artifacts such as `research-web.md` and `research-papers.md` are written to disk by subagents and read by the lead agent. They are not returned inline unless the user explicitly asks for them.
+ - Intermediate research artifacts are written to disk by subagents and read by the lead agent. They are not returned inline unless the user explicitly asks for them.
## File naming
Every workflow that produces artifacts must derive a short **slug** from the topic (lowercase, hyphens, no filler words, ≤5 words — e.g. `cloud-sandbox-pricing`). All files in a single run use that slug as a prefix:
- Plan: `outputs/.plans/<slug>.md`
- Intermediate research: `<slug>-research-web.md`, `<slug>-research-papers.md`, etc.
- Draft: `outputs/.drafts/<slug>-draft.md`
- Cited brief: `<slug>-brief.md`
- Verification: `<slug>-verification.md`
- Final output: `outputs/<slug>.md` or `papers/<slug>.md`
- Provenance: `<slug>.provenance.md` (next to the final output)
Never use generic names like `research.md`, `draft.md`, `brief.md`, or `summary.md`. Concurrent runs must not collide.
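The slug rule above can be sketched as a small helper. This is a hypothetical illustration, not code from this commit — the filler-word list and function name are assumptions:

```javascript
// Hypothetical sketch of the slug rule: lowercase, hyphens,
// no filler words, at most 5 words. Filler list is an assumption.
const FILLER = new Set(["a", "an", "the", "of", "for", "and", "on", "in", "to", "vs"]);

function deriveSlug(topic) {
  return topic
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, "")        // drop punctuation
    .split(/[\s-]+/)                     // split on spaces and hyphens
    .filter((w) => w && !FILLER.has(w))  // drop filler words
    .slice(0, 5)                         // keep at most 5 words
    .join("-");
}

console.log(deriveSlug("Pricing of Cloud Sandboxes")); // "pricing-cloud-sandboxes"
```

A run on "Pricing of Cloud Sandboxes" then prefixes every artifact: `outputs/.plans/pricing-cloud-sandboxes.md`, `pricing-cloud-sandboxes-research-web.md`, and so on.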
## Provenance and verification

@@ -85,7 +85,7 @@ feynman search status # web search config
## How it works
- Built on [Pi](https://github.com/mariozechner/pi-coding-agent) for the agent runtime, [alphaXiv](https://www.alphaxiv.org/) for paper search and analysis, [Docker](https://www.docker.com/) for isolated local execution, and [Agent Computer](https://agentcomputer.ai) for secure cloud workloads
+ Built on [Pi](https://github.com/badlogic/pi-mono) for the agent runtime, [alphaXiv](https://www.alphaxiv.org/) for paper search and analysis, [Docker](https://www.docker.com/) for isolated local execution, and [Agent Computer](https://agentcomputer.ai) for secure cloud workloads
Every output is source-grounded — claims link to papers, docs, or repos with direct URLs

@@ -14,6 +14,7 @@
"dist/",
"metadata/",
".feynman/agents/",
".feynman/runtime-workspace.tgz",
".feynman/settings.json",
".feynman/SYSTEM.md",
".feynman/themes/",
@@ -30,6 +31,7 @@
"scripts": {
"build": "tsc -p tsconfig.build.json",
"dev": "tsx src/index.ts",
"prepack": "node ./scripts/prepare-runtime-workspace.mjs",
"postinstall": "node ./scripts/patch-embedded-pi.mjs",
"start": "tsx src/index.ts",
"start:dist": "node ./bin/feynman.js",

@@ -6,10 +6,12 @@ topLevelCli: true
---
Audit the paper and codebase for: $@
Derive a short slug from the audit target (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
Requirements:
- - Before starting, outline the audit plan: which paper, which repo, which claims to check. Present the plan to the user and confirm before proceeding.
+ - Before starting, outline the audit plan: which paper, which repo, which claims to check. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user and confirm before proceeding.
- Use the `researcher` subagent for evidence gathering and the `verifier` subagent to verify sources and add inline citations when the audit is non-trivial.
- Compare claimed methods, defaults, metrics, and data handling against the actual code.
- Call out missing code, mismatches, ambiguous defaults, and reproduction risks.
- - Save exactly one audit artifact to `outputs/` as markdown.
+ - Save exactly one audit artifact to `outputs/<slug>-audit.md`.
- End with a `Sources` section containing paper and repository URLs.

@@ -6,11 +6,13 @@ topLevelCli: true
---
Compare sources for: $@
Derive a short slug from the comparison topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
Requirements:
- - Before starting, outline the comparison plan: which sources to compare, which dimensions to evaluate, expected output structure. Present the plan to the user and confirm before proceeding.
+ - Before starting, outline the comparison plan: which sources to compare, which dimensions to evaluate, expected output structure. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user and confirm before proceeding.
- Use the `researcher` subagent to gather source material when the comparison set is broad, and the `verifier` subagent to verify sources and add inline citations to the final matrix.
- Build a comparison matrix covering: source, key claim, evidence type, caveats, confidence.
- Generate charts with `pi-charts` when the comparison involves quantitative metrics. Use Mermaid for method or architecture comparisons.
- Distinguish agreement, disagreement, and uncertainty clearly.
- - Save exactly one comparison to `outputs/` as markdown.
+ - Save exactly one comparison to `outputs/<slug>-comparison.md`.
- End with a `Sources` section containing direct URLs for every source used.

@@ -17,7 +17,7 @@ Analyze the research question using extended thinking. Develop a research strate
- Source types and time periods that matter
- Acceptance criteria: what evidence would make the answer "sufficient"
- Write the plan to `outputs/.plans/deepresearch-plan.md` as a self-contained artifact:
+ Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 words — e.g. "cloud-sandbox-pricing" not "deepresearch-plan"). Write the plan to `outputs/.plans/<slug>.md` as a self-contained artifact. Use this same slug for all artifacts in this run.
```markdown
# Research Plan: [topic]
@@ -38,7 +38,7 @@ Write the plan to `outputs/.plans/deepresearch-plan.md` as a self-contained arti
(Updated as the workflow progresses)
```
- Also save the plan with `memory_remember` (type: `fact`, key: `deepresearch.plan`) so it survives context truncation.
+ Also save the plan with `memory_remember` (type: `fact`, key: `deepresearch.<slug>.plan`) so it survives context truncation.
Present the plan to the user and ask them to confirm before proceeding. If the user wants changes, revise the plan first.
@@ -66,8 +66,8 @@ Assign each researcher a clearly disjoint dimension — different source types,
```
{
tasks: [
- { agent: "researcher", task: "...", output: "research-web.md" },
+ { agent: "researcher", task: "...", output: "<slug>-research-web.md" },
- { agent: "researcher", task: "...", output: "research-papers.md" }
+ { agent: "researcher", task: "...", output: "<slug>-research-papers.md" }
],
concurrency: 4,
failFast: false
@@ -86,7 +86,7 @@ After researchers return, read their output files and critically assess:
If gaps are significant, spawn another targeted batch of researchers. No fixed cap on rounds — iterate until evidence is sufficient or sources are exhausted.
- Update the plan artifact (`outputs/.plans/deepresearch-plan.md`) decision log after each round.
+ Update the plan artifact (`outputs/.plans/<slug>.md`) decision log after each round.
Most topics need 1-2 rounds. Stop when additional rounds would not materially change conclusions.
@@ -111,14 +111,14 @@ Unresolved issues, disagreements between sources, gaps in evidence.
When the research includes quantitative data (benchmarks, performance comparisons, trends), generate charts using `pi-charts`. Use Mermaid diagrams for architectures and processes. Every visual must have a caption and reference the underlying data.
- Save this draft to a temp file (e.g., `draft.md` in the chain artifacts dir or a temp path).
+ Save this draft to `outputs/.drafts/<slug>-draft.md`.
## 6. Cite
Spawn the `verifier` agent to post-process YOUR draft. The verifier agent adds inline citations, verifies every source URL, and produces the final output:
```
- { agent: "verifier", task: "Add inline citations to draft.md using the research files as source material. Verify every URL.", output: "brief.md" }
+ { agent: "verifier", task: "Add inline citations to <slug>-draft.md using the research files as source material. Verify every URL.", output: "<slug>-brief.md" }
```
The verifier agent does not rewrite the report — it only anchors claims to sources and builds the numbered Sources section.
@@ -132,7 +132,7 @@ Spawn the `reviewer` agent against the cited draft. The reviewer checks for:
- Overstated confidence relative to evidence quality
```
- { agent: "reviewer", task: "Verify brief.md — flag any claims that lack sufficient source backing, identify logical gaps, and check that confidence levels match evidence strength. This is a verification pass, not a peer review.", output: "verification.md" }
+ { agent: "reviewer", task: "Verify <slug>-brief.md — flag any claims that lack sufficient source backing, identify logical gaps, and check that confidence levels match evidence strength. This is a verification pass, not a peer review.", output: "<slug>-verification.md" }
```
If the reviewer flags FATAL issues, fix them in the brief before delivering. MAJOR issues get noted in the Open Questions section. MINOR issues are accepted.
@@ -143,9 +143,9 @@ Copy the final cited and verified output to the appropriate folder:
- Paper-style drafts → `papers/`
- Everything else → `outputs/`
- Use a descriptive filename based on the topic.
+ Save the final output as `<slug>.md` (in `outputs/` or `papers/` per the rule above).
- Write a provenance record alongside the main artifact as `<filename>.provenance.md`:
+ Write a provenance record alongside it as `<slug>.provenance.md`:
```markdown
# Provenance: [topic]
@@ -156,8 +156,8 @@ Write a provenance record alongside the main artifact as `<filename>.provenance.
- **Sources accepted:** [sources that survived citation verification]
- **Sources rejected:** [dead links, unverifiable, or removed]
- **Verification:** [PASS / PASS WITH NOTES — summary of reviewer findings]
- - **Plan:** outputs/.plans/deepresearch-plan.md
+ - **Plan:** outputs/.plans/<slug>.md
- - **Research files:** [list of intermediate research-*.md files]
+ - **Research files:** [list of intermediate <slug>-research-*.md files]
```
## Background execution

@@ -17,5 +17,5 @@ Delegate the following task to a remote Agent Computer machine: $@
- What artifact to produce when done (summary file)
- Any tools or data sources to use
6. **Monitor** — Use `computer agent watch <machine> --session <session_id>` to stream progress. Report status to the user at meaningful milestones.
- 7. **Retrieve results** — When the remote agent finishes, pull the summary back with `computer agent prompt <machine> "cat /workspace/outputs/summary.md" --session <session_id>`. Present results to the user.
+ 7. **Retrieve results** — When the remote agent finishes, pull the results back with `computer agent prompt <machine> "cat /workspace/outputs/<slug>.md" --session <session_id>` (derive the slug from the task topic). Present results to the user.
8. **Clean up** — Close the session with `computer agent close <machine> --session <session_id>` unless the user wants to continue.

@@ -6,11 +6,13 @@ topLevelCli: true
---
Write a paper-style draft for: $@
Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
Requirements:
- - Before writing, outline the draft structure: proposed title, sections, key claims to make, and source material to draw from. Present the outline to the user and confirm before proceeding.
+ - Before writing, outline the draft structure: proposed title, sections, key claims to make, and source material to draw from. Write the outline to `outputs/.plans/<slug>.md`. Present the outline to the user and confirm before proceeding.
- Use the `writer` subagent when the draft should be produced from already-collected notes, then use the `verifier` subagent to add inline citations and verify sources.
- Include at minimum: title, abstract, problem statement, related work, method or synthesis, evidence or experiments, limitations, conclusion.
- Use clean Markdown with LaTeX where equations materially help.
- Generate charts with `pi-charts` for quantitative data, benchmarks, and comparisons. Use Mermaid for architectures and pipelines. Every figure needs a caption.
- - Save exactly one draft to `papers/` as markdown.
+ - Save exactly one draft to `papers/<slug>.md`.
- End with a `Sources` appendix with direct URLs for all primary references.

@@ -6,11 +6,13 @@ topLevelCli: true
---
Investigate the following topic as a literature review: $@
Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
## Workflow
- 1. **Plan** — Outline the scope: key questions, source types to search (papers, web, repos), time period, and expected sections. Present the plan to the user and confirm before proceeding.
+ 1. **Plan** — Outline the scope: key questions, source types to search (papers, web, repos), time period, and expected sections. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user and confirm before proceeding.
- 2. **Gather** — Use the `researcher` subagent when the sweep is wide enough to benefit from delegated paper triage before synthesis. For narrow topics, search directly.
+ 2. **Gather** — Use the `researcher` subagent when the sweep is wide enough to benefit from delegated paper triage before synthesis. For narrow topics, search directly. Researcher outputs go to `<slug>-research-*.md`.
- 2. **Synthesize** — Separate consensus, disagreements, and open questions. When useful, propose concrete next experiments or follow-up reading. Generate charts with `pi-charts` for quantitative comparisons across papers and Mermaid diagrams for taxonomies or method pipelines.
+ 3. **Synthesize** — Separate consensus, disagreements, and open questions. When useful, propose concrete next experiments or follow-up reading. Generate charts with `pi-charts` for quantitative comparisons across papers and Mermaid diagrams for taxonomies or method pipelines.
4. **Cite** — Spawn the `verifier` agent to add inline citations and verify every source URL in the draft.
5. **Verify** — Spawn the `reviewer` agent to check the cited draft for unsupported claims, logical gaps, and single-source critical findings. Fix FATAL issues before delivering. Note MAJOR issues in Open Questions.
- 6. **Deliver** — Save exactly one literature review to `outputs/` as markdown. Write a provenance record alongside it as `<filename>.provenance.md` listing: date, sources consulted vs. accepted vs. rejected, verification status, and intermediate research files used.
+ 6. **Deliver** — Save the final literature review to `outputs/<slug>.md`. Write a provenance record alongside it as `outputs/<slug>.provenance.md` listing: date, sources consulted vs. accepted vs. rejected, verification status, and intermediate research files used.

@@ -6,10 +6,12 @@ topLevelCli: true
---
Review this AI research artifact: $@
Derive a short slug from the artifact name (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
Requirements:
- Before starting, outline what will be reviewed and the review criteria (novelty, empirical rigor, baselines, reproducibility, etc.). Present the plan to the user and confirm before proceeding.
- - Spawn a `researcher` subagent to gather evidence on the artifact — inspect the paper, code, cited work, and any linked experimental artifacts. Save to `research.md`.
+ - Spawn a `researcher` subagent to gather evidence on the artifact — inspect the paper, code, cited work, and any linked experimental artifacts. Save to `<slug>-research.md`.
- - Spawn a `reviewer` subagent with `research.md` to produce the final peer review with inline annotations.
+ - Spawn a `reviewer` subagent with `<slug>-research.md` to produce the final peer review with inline annotations.
- For small or simple artifacts where evidence gathering is overkill, run the `reviewer` subagent directly instead.
- - Save exactly one review artifact to `outputs/` as markdown.
+ - Save exactly one review artifact to `outputs/<slug>-review.md`.
- End with a `Sources` section containing direct URLs for every inspected external source.

@@ -6,9 +6,11 @@ topLevelCli: true
---
Create a research watch for: $@
Derive a short slug from the watch topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
Requirements:
- - Before starting, outline the watch plan: what to monitor, what signals matter, what counts as a meaningful change, and the check frequency. Present the plan to the user and confirm before proceeding.
+ - Before starting, outline the watch plan: what to monitor, what signals matter, what counts as a meaningful change, and the check frequency. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user and confirm before proceeding.
- Start with a baseline sweep of the topic.
- Use `schedule_prompt` to create the recurring or delayed follow-up instead of merely promising to check later.
- - Save exactly one baseline artifact to `outputs/`.
+ - Save exactly one baseline artifact to `outputs/<slug>-baseline.md`.
- End with a `Sources` section containing direct URLs for every source used.

@@ -1,5 +1,5 @@
import { spawnSync } from "node:child_process";
- import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
+ import { existsSync, mkdirSync, readFileSync, rmSync, writeFileSync } from "node:fs";
import { dirname, resolve } from "node:path";
import { fileURLToPath } from "node:url";
import { FEYNMAN_LOGO_HTML } from "../logo.mjs";
@@ -54,6 +54,45 @@ const piMemoryPath = resolve(workspaceRoot, "@samfp", "pi-memory", "src", "index
const settingsPath = resolve(appRoot, ".feynman", "settings.json");
const workspaceDir = resolve(appRoot, ".feynman", "npm");
const workspacePackageJsonPath = resolve(workspaceDir, "package.json");
const workspaceArchivePath = resolve(appRoot, ".feynman", "runtime-workspace.tgz");
function parsePackageName(spec) {
const match = spec.match(/^(@?[^@]+(?:\/[^@]+)?)(?:@.+)?$/);
return match?.[1] ?? spec;
}
function restorePackagedWorkspace(packageSpecs) {
if (!existsSync(workspaceArchivePath)) return false;
rmSync(workspaceDir, { recursive: true, force: true });
mkdirSync(resolve(appRoot, ".feynman"), { recursive: true });
const result = spawnSync("tar", ["-xzf", workspaceArchivePath, "-C", resolve(appRoot, ".feynman")], {
stdio: ["ignore", "ignore", "pipe"],
timeout: 300000,
});
if (result.status !== 0) {
if (result.stderr?.length) process.stderr.write(result.stderr);
return false;
}
return packageSpecs.every((spec) => existsSync(resolve(workspaceRoot, parsePackageName(spec))));
}
function refreshPackagedWorkspace(packageSpecs) {
const result = spawnSync("npm", ["install", "--prefer-offline", "--no-audit", "--no-fund", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs], {
stdio: ["ignore", "ignore", "pipe"],
timeout: 300000,
});
if (result.status !== 0) {
if (result.stderr?.length) process.stderr.write(result.stderr);
return false;
}
return true;
}
function resolveExecutable(name, fallbackPaths = []) {
for (const candidate of fallbackPaths) {
@@ -82,7 +121,8 @@ function ensurePackageWorkspace() {
: [];
if (packageSpecs.length === 0) return;
- if (existsSync(resolve(workspaceRoot, packageSpecs[0]))) return;
+ if (existsSync(resolve(workspaceRoot, parsePackageName(packageSpecs[0])))) return;
if (restorePackagedWorkspace(packageSpecs) && refreshPackagedWorkspace(packageSpecs)) return;
mkdirSync(workspaceDir, { recursive: true });
writeFileSync(
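The `parsePackageName` helper added in this diff strips an npm version range from a package spec so the restored workspace can be checked by directory name. A quick standalone check of that regex (same pattern as the diff; the example specs are illustrative):

```javascript
// Same regex as the parsePackageName helper in this diff.
// Group 1 captures the (possibly scoped) package name; the
// trailing (?:@.+)? swallows any version range like @^1.2.0.
function parsePackageName(spec) {
  const match = spec.match(/^(@?[^@]+(?:\/[^@]+)?)(?:@.+)?$/);
  return match?.[1] ?? spec;
}

console.log(parsePackageName("@samfp/pi-memory@^1.2.0")); // "@samfp/pi-memory"
console.log(parsePackageName("pi-charts"));               // "pi-charts"
```

This is why the `ensurePackageWorkspace` guard now checks `parsePackageName(packageSpecs[0])` instead of the raw spec: `node_modules` directories are named without the version suffix.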

@@ -0,0 +1,149 @@
import { existsSync, mkdirSync, readFileSync, rmSync, statSync, writeFileSync } from "node:fs";
import { resolve } from "node:path";
import { spawnSync } from "node:child_process";
const appRoot = resolve(import.meta.dirname, "..");
const settingsPath = resolve(appRoot, ".feynman", "settings.json");
const feynmanDir = resolve(appRoot, ".feynman");
const workspaceDir = resolve(appRoot, ".feynman", "npm");
const workspaceNodeModulesDir = resolve(workspaceDir, "node_modules");
const manifestPath = resolve(workspaceDir, ".runtime-manifest.json");
const workspacePackageJsonPath = resolve(workspaceDir, "package.json");
const workspaceArchivePath = resolve(feynmanDir, "runtime-workspace.tgz");
function readPackageSpecs() {
const settings = JSON.parse(readFileSync(settingsPath, "utf8"));
if (!Array.isArray(settings.packages)) {
return [];
}
return settings.packages
.filter((value) => typeof value === "string" && value.startsWith("npm:"))
.map((value) => value.slice(4));
}
function parsePackageName(spec) {
const match = spec.match(/^(@?[^@]+(?:\/[^@]+)?)(?:@.+)?$/);
return match?.[1] ?? spec;
}
function arraysMatch(left, right) {
return left.length === right.length && left.every((value, index) => value === right[index]);
}
function workspaceIsCurrent(packageSpecs) {
if (!existsSync(manifestPath) || !existsSync(workspaceNodeModulesDir)) {
return false;
}
try {
const manifest = JSON.parse(readFileSync(manifestPath, "utf8"));
if (!Array.isArray(manifest.packageSpecs) || !arraysMatch(manifest.packageSpecs, packageSpecs)) {
return false;
}
if (
manifest.nodeAbi !== process.versions.modules ||
manifest.platform !== process.platform ||
manifest.arch !== process.arch
) {
return false;
}
return packageSpecs.every((spec) => existsSync(resolve(workspaceNodeModulesDir, parsePackageName(spec))));
} catch {
return false;
}
}
function writeWorkspacePackageJson() {
writeFileSync(
workspacePackageJsonPath,
JSON.stringify(
{
name: "feynman-runtime",
private: true,
},
null,
2,
) + "\n",
"utf8",
);
}
function prepareWorkspace(packageSpecs) {
rmSync(workspaceDir, { recursive: true, force: true });
mkdirSync(workspaceDir, { recursive: true });
writeWorkspacePackageJson();
if (packageSpecs.length === 0) {
return;
}
const result = spawnSync(
process.env.npm_execpath ? process.execPath : "npm",
process.env.npm_execpath
? [process.env.npm_execpath, "install", "--prefer-offline", "--no-audit", "--no-fund", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs]
: ["install", "--prefer-offline", "--no-audit", "--no-fund", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs],
{ stdio: "inherit" },
);
if (result.status !== 0) {
process.exit(result.status ?? 1);
}
}
function writeManifest(packageSpecs) {
writeFileSync(
manifestPath,
JSON.stringify(
{
packageSpecs,
generatedAt: new Date().toISOString(),
nodeAbi: process.versions.modules,
nodeVersion: process.version,
platform: process.platform,
arch: process.arch,
},
null,
2,
) + "\n",
"utf8",
);
}
function archiveIsCurrent() {
if (!existsSync(workspaceArchivePath) || !existsSync(manifestPath)) {
return false;
}
return statSync(workspaceArchivePath).mtimeMs >= statSync(manifestPath).mtimeMs;
}
function createWorkspaceArchive() {
rmSync(workspaceArchivePath, { force: true });
const result = spawnSync("tar", ["-czf", workspaceArchivePath, "-C", feynmanDir, "npm"], {
stdio: "inherit",
});
if (result.status !== 0) {
process.exit(result.status ?? 1);
}
}
const packageSpecs = readPackageSpecs();
if (workspaceIsCurrent(packageSpecs)) {
console.log("[feynman] vendored runtime workspace already up to date");
if (archiveIsCurrent()) {
process.exit(0);
}
console.log("[feynman] refreshing runtime workspace archive...");
createWorkspaceArchive();
console.log("[feynman] runtime workspace archive ready");
process.exit(0);
}
console.log("[feynman] preparing vendored runtime workspace...");
prepareWorkspace(packageSpecs);
writeManifest(packageSpecs);
createWorkspaceArchive();
console.log("[feynman] vendored runtime workspace ready");
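The `archiveIsCurrent()` staleness check above compares file modification times: the archive is rebuilt whenever the manifest is newer. A standalone illustration of that comparison (temp file names are hypothetical, not part of this script):

```javascript
// Illustration of the archiveIsCurrent() mtime comparison.
import { writeFileSync, statSync, utimesSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

const manifest = join(tmpdir(), "manifest-demo.json");
const archive = join(tmpdir(), "archive-demo.tgz");
writeFileSync(manifest, "{}");   // manifest written just now
writeFileSync(archive, "");
utimesSync(archive, new Date(0), new Date(0)); // force archive to look older

const archiveIsCurrent = statSync(archive).mtimeMs >= statSync(manifest).mtimeMs;
console.log(archiveIsCurrent); // false → prepack would rebuild the archive
```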

@@ -144,7 +144,7 @@ import AsciiLogo from '../components/AsciiLogo.astro';
<section class="py-20 px-6 text-center">
<div class="max-w-xl mx-auto">
- <p class="text-text-muted mb-6">Built on <a href="https://github.com/mariozechner/pi-coding-agent" class="text-accent hover:underline">Pi</a>, <a href="https://www.alphaxiv.org/" class="text-accent hover:underline">alphaXiv</a>, and <a href="https://agentcomputer.ai" class="text-accent hover:underline">Agent Computer</a>. MIT licensed. Open source.</p>
+ <p class="text-text-muted mb-6">Built on <a href="https://github.com/badlogic/pi-mono" class="text-accent hover:underline">Pi</a>, <a href="https://www.alphaxiv.org/" class="text-accent hover:underline">alphaXiv</a>, and <a href="https://agentcomputer.ai" class="text-accent hover:underline">Agent Computer</a>. MIT licensed. Open source.</p>
<div class="flex gap-4 justify-center flex-wrap">
<a href="/docs/getting-started/installation" class="px-6 py-2.5 rounded-lg bg-accent text-bg font-semibold text-sm hover:bg-accent-hover transition-colors">Get started</a>
<a href="https://github.com/getcompanion-ai/feynman" target="_blank" rel="noopener" class="px-6 py-2.5 rounded-lg border border-border text-text-muted font-semibold text-sm hover:border-text-dim hover:text-text-primary transition-colors">GitHub</a>