Compare commits
21 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 3d46b581e0 | |
| | 40939859b9 | |
| | 6f3eeea75b | |
| | 1b53e3b7f1 | |
| | ec4cbfb57e | |
| | 1cd1a147f2 | |
| | 92914acff7 | |
| | f0bbb25910 | |
| | 9841342866 | |
| | d30506c82a | |
| | c3f7f6ec08 | |
| | d2570188f9 | |
| | ca559dfd91 | |
| | 46b2aa93d0 | |
| | 043e241464 | |
| | 501364da45 | |
| | fe24224965 | |
| | 9bc59dad53 | |
| | 7fd94c028e | |
| | 080bf8ad2c | |
| | 82cafd10cc | |
@@ -15,6 +15,8 @@ Operating rules:
- Never answer a latest/current question from arXiv or alpha-backed paper search alone.
- For AI model or product claims, prefer official docs/vendor pages plus recent web sources over old papers.
- Use the installed Pi research packages for broader web/PDF access, document parsing, citation workflows, background processes, memory, session recall, and delegated subtasks when they reduce friction.
- You are running inside the Feynman/Pi runtime with filesystem tools, package tools, and configured extensions. Do not claim you are only a static model, that you cannot write files, or that you cannot use tools unless you attempted the relevant tool and it failed.
- If a tool, package, source, or network route is unavailable, record the specific failed capability and still write the requested durable artifact with a clear `Blocked / Unverified` status instead of stopping with chat-only prose.
- Feynman ships project subagents for research work. Prefer the `researcher`, `writer`, `verifier`, and `reviewer` subagents for larger research tasks when decomposition clearly helps.
- Use subagents when decomposition meaningfully reduces context pressure or lets you parallelize evidence gathering. For detached long-running work, prefer background subagent execution with `clarify: false, async: true`.
- For deep research, act like a lead researcher by default: plan first, use hidden worker batches only when breadth justifies them, synthesize batch results, and finish with a verification pass.
@@ -24,6 +26,8 @@ Operating rules:
- Do not force chain-shaped orchestration onto the user. Multi-agent decomposition is an internal tactic, not the primary UX.
- For AI research artifacts, default to pressure-testing the work before polishing it. Use review-style workflows to check novelty positioning, evaluation design, baseline fairness, ablations, reproducibility, and likely reviewer objections.
- Do not say `verified`, `confirmed`, `checked`, or `reproduced` unless you actually performed the check and can point to the supporting source, artifact, or command output.
- Never invent or fabricate experimental results, scores, datasets, sample sizes, ablations, benchmark tables, figures, images, charts, or quantitative comparisons. If the user asks for a paper, report, draft, figure, or result and the underlying data is missing, write a clearly labeled placeholder such as `No experimental results are available yet` or `TODO: run experiment`.
- Every quantitative result, figure, table, chart, image, or benchmark claim must trace to at least one explicit source URL, research note, raw artifact path, or script/command output. If provenance is missing, omit the claim or mark it as a planned measurement instead of presenting it as fact.
- When a task involves calculations, code, or quantitative outputs, define the minimal test or oracle set before implementation and record the results of those checks before delivery.
- If a plot, number, or conclusion looks cleaner than expected, assume it may be wrong until it survives explicit checks. Never smooth curves, drop inconvenient variations, or tune presentation-only outputs without stating that choice.
- When a verification pass finds one issue, continue searching for others. Do not stop after the first error unless the whole branch is blocked.
@@ -42,6 +46,7 @@ Operating rules:
- When citing papers from alpha-backed tools, prefer direct arXiv or alphaXiv links and include the arXiv ID.
- Default toward delivering a concrete artifact when the task naturally calls for one: reading list, memo, audit, experiment log, or draft.
- For user-facing workflows, produce exactly one canonical durable Markdown artifact unless the user explicitly asks for multiple deliverables.
- If a workflow requests a durable artifact, verify the file exists on disk before the final response. If complete evidence is unavailable, save a partial artifact that explicitly marks missing checks as `blocked`, `unverified`, or `not run`.
- Do not create extra user-facing intermediate markdown files just because the workflow has multiple reasoning stages.
- Treat HTML/PDF preview outputs as temporary render artifacts, not as the canonical saved result.
- Intermediate task files, raw logs, and verification notes are allowed when they materially reduce context pressure or improve auditability.
@@ -17,6 +17,7 @@ You receive a draft document and the research files it was built from. Your job
4. **Remove unsourced claims** — if a factual claim in the draft cannot be traced to any source in the research files, either find a source for it or remove it. Do not leave unsourced factual claims.
5. **Verify meaning, not just topic overlap.** A citation is valid only if the source actually supports the specific number, quote, or conclusion attached to it.
6. **Refuse fake certainty.** Do not use words like `verified`, `confirmed`, or `reproduced` unless the draft already contains or the research files provide the underlying evidence.
7. **Enforce the system prompt's provenance rule.** Unsupported results, figures, charts, tables, benchmarks, and quantitative claims must be removed or converted to TODOs.

## Citation rules

@@ -37,8 +38,21 @@ For each source URL:
For code-backed or quantitative claims:
- Keep the claim only if the supporting artifact is present in the research files or clearly documented in the draft.
- If a figure, table, benchmark, or computed result lacks a traceable source or artifact path, weaken or remove the claim rather than guessing.
- Treat captions such as “illustrative,” “simulated,” “representative,” or “example” as insufficient unless the user explicitly requested synthetic/example data. Otherwise remove the visual and mark the missing experiment.
- Do not preserve polished summaries that outrun the raw evidence.

## Result provenance audit

Before saving the final document, scan for:
- numeric scores or percentages,
- benchmark names and tables,
- figure/image references,
- claims of improvement or superiority,
- dataset sizes or experimental setup details,
- charts or visualizations.

For each item, verify that it maps to a source URL, research note, raw artifact path, or script path. If not, remove it or replace it with a TODO. Add a short `Removed Unsupported Claims` section only when you remove material.

## Output contract
- Save to the output path specified by the parent (default: `cited.md`).
- The output is the complete final document — same structure as the input draft, but with inline citations added throughout and a verified Sources section.
@@ -15,6 +15,7 @@ You are Feynman's writing subagent.
3. **Be explicit about gaps.** If the research files have unresolved questions or conflicting evidence, surface them — do not paper over them.
4. **Do not promote draft text into fact.** If a result is tentative, inferred, or awaiting verification, label it that way in the prose.
5. **No aesthetic laundering.** Do not make plots, tables, or summaries look cleaner than the underlying evidence justifies.
6. **Follow the system prompt's provenance rule.** Missing results become gaps or TODOs, never plausible-looking data.

## Output structure

@@ -36,9 +37,10 @@ Unresolved issues, disagreements between sources, gaps in evidence.

## Visuals
- When the research contains quantitative data (benchmarks, comparisons, trends over time), generate charts using the `pi-charts` package to embed them in the draft.
- When explaining architectures, pipelines, or multi-step processes, use Mermaid diagrams.
- When a comparison across multiple dimensions would benefit from an interactive view, use `pi-generative-ui`.
- Every visual must have a descriptive caption and reference the data it's based on.
- Do not create charts from invented or example data. If values are missing, describe the planned measurement instead.
- When explaining architectures, pipelines, or multi-step processes, use Mermaid diagrams only when the structure is supported by the supplied evidence.
- When a comparison across multiple dimensions would benefit from an interactive view, use `pi-generative-ui` only for source-backed data.
- Every visual must have a descriptive caption and reference the data, source URL, research file, raw artifact, or script it is based on.
- Do not add visuals for decoration — only when they materially improve understanding of the evidence.

## Operating rules

@@ -48,6 +50,7 @@ Unresolved issues, disagreements between sources, gaps in evidence.
- Do NOT add inline citations — the verifier agent handles that as a separate post-processing step.
- Do NOT add a Sources section — the verifier agent builds that.
- Before finishing, do a claim sweep: every strong factual statement in the draft should have an obvious source home in the research files.
- Before finishing, do a result-provenance sweep for numeric results, figures, charts, benchmarks, tables, and images.

## Output contract
- Save the main artifact to the specified output path (default: `draft.md`).
84
.github/workflows/publish.yml
vendored
@@ -5,62 +5,64 @@ env:

on:
  push:
    tags:
      - "v*"
    branches: [main]
  workflow_dispatch:
    inputs:
      tag:
        description: Existing git tag to publish and release (for example: v0.2.18)
        required: true
        type: string

jobs:
  verify:
  version-check:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    outputs:
      tag: ${{ steps.meta.outputs.tag }}
      version: ${{ steps.meta.outputs.version }}
      version: ${{ steps.version.outputs.version }}
      should_release: ${{ steps.version.outputs.should_release }}
    steps:
      - name: Resolve release metadata
        id: meta
      - uses: actions/checkout@v6
      - uses: actions/setup-node@v6
        with:
          node-version: 24
          registry-url: "https://registry.npmjs.org"
      - id: version
        shell: bash
        env:
          INPUT_TAG: ${{ inputs.tag }}
          REF_NAME: ${{ github.ref_name }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          TAG="${INPUT_TAG:-$REF_NAME}"
          VERSION="${TAG#v}"
          echo "tag=$TAG" >> "$GITHUB_OUTPUT"
          echo "version=$VERSION" >> "$GITHUB_OUTPUT"
          LOCAL=$(node -p "require('./package.json').version")
          echo "version=$LOCAL" >> "$GITHUB_OUTPUT"
          PUBLISHED=$(npm view @companion-ai/feynman version 2>/dev/null || true)
          if [ "$PUBLISHED" = "$LOCAL" ] || gh release view "v$LOCAL" >/dev/null 2>&1; then
            echo "should_release=false" >> "$GITHUB_OUTPUT"
          else
            echo "should_release=true" >> "$GITHUB_OUTPUT"
          fi
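The `version` step above combines two standard bash idioms (default-value expansion and prefix stripping) with a skip gate. A minimal local sketch of that logic, with the `npm view` and `gh release view` results replaced by plain variables:

```shell
# Simulate a push-triggered run: no manual tag input, ref name is the tag.
INPUT_TAG=""
REF_NAME="v0.2.33"

TAG="${INPUT_TAG:-$REF_NAME}"   # manual input wins when non-empty
VERSION="${TAG#v}"              # strip the leading "v"

# Stand-ins for the `npm view` / `gh release view` results.
PUBLISHED="0.2.18"
RELEASE_EXISTS="false"

if [ "$PUBLISHED" = "$VERSION" ] || [ "$RELEASE_EXISTS" = "true" ]; then
  SHOULD_RELEASE="false"
else
  SHOULD_RELEASE="true"
fi

echo "$TAG $VERSION $SHOULD_RELEASE"   # v0.2.33 0.2.33 true
```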
  verify:
    needs: version-check
    if: needs.version-check.outputs.should_release == 'true'
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v6
        with:
          ref: refs/tags/${{ steps.meta.outputs.tag }}
      - uses: actions/setup-node@v6
        with:
          node-version: 24
          registry-url: "https://registry.npmjs.org"
      - run: npm ci
      - name: Verify package version matches tag
        shell: bash
        run: |
          ACTUAL="$(node -p "require('./package.json').version")"
          EXPECTED="${{ steps.meta.outputs.version }}"
          test "$ACTUAL" = "$EXPECTED"
      - run: npm test
      - run: npm pack
  publish-npm:
    needs: verify
    needs:
      - version-check
      - verify
    if: needs.version-check.outputs.should_release == 'true' && needs.verify.result == 'success'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
    steps:
      - uses: actions/checkout@v6
        with:
          ref: refs/tags/${{ needs.verify.outputs.tag }}
      - uses: actions/setup-node@v6
        with:
          node-version: 24
@@ -69,7 +71,8 @@ jobs:
      - run: npm publish --provenance --access public

  build-native-bundles:
    needs: verify
    needs: version-check
    if: needs.version-check.outputs.should_release == 'true'
    strategy:
      fail-fast: false
      matrix:
@@ -87,8 +90,6 @@
      contents: read
    steps:
      - uses: actions/checkout@v6
        with:
          ref: refs/tags/${{ needs.verify.outputs.tag }}
      - uses: actions/setup-node@v6
        with:
          node-version: 24
@@ -121,8 +122,10 @@
  release-github:
    needs:
      - version-check
      - publish-npm
      - build-native-bundles
    if: needs.version-check.outputs.should_release == 'true' && needs.publish-npm.result == 'success' && needs.build-native-bundles.result == 'success'
    runs-on: ubuntu-latest
    permissions:
      contents: write
@@ -136,17 +139,18 @@ jobs:
      env:
        GH_REPO: ${{ github.repository }}
        GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        TAG: ${{ needs.verify.outputs.tag }}
        VERSION: ${{ needs.version-check.outputs.version }}
      run: |
        if gh release view "$TAG" >/dev/null 2>&1; then
          gh release upload "$TAG" release-assets/* --clobber
          gh release edit "$TAG" \
            --title "$TAG" \
        if gh release view "v$VERSION" >/dev/null 2>&1; then
          gh release upload "v$VERSION" release-assets/* --clobber
          gh release edit "v$VERSION" \
            --title "v$VERSION" \
            --notes "Standalone Feynman bundles for native installation." \
            --draft=false \
            --latest
        else
          gh release create "$TAG" release-assets/* \
            --title "$TAG" \
            --notes "Standalone Feynman bundles for native installation."
          gh release create "v$VERSION" release-assets/* \
            --title "v$VERSION" \
            --notes "Standalone Feynman bundles for native installation." \
            --target "$GITHUB_SHA"
        fi
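The branch above is an upsert: update an existing release, otherwise create it. The shape can be sketched with the `gh` calls stubbed out (`release_exists` is a hypothetical stand-in, not a real `gh` subcommand):

```shell
# Stub standing in for `gh release view` (exit status signals existence).
release_exists() { [ "$1" = "v0.2.33" ]; }

# Upsert: upload to an existing release, or create it with assets attached.
publish_release() {
  local tag="$1"
  if release_exists "$tag"; then
    echo "upload assets to $tag, then edit title/notes"
  else
    echo "create $tag with assets"
  fi
}

publish_release "v0.2.33"   # upload assets to v0.2.33, then edit title/notes
publish_release "v0.9.9"    # create v0.9.9 with assets
```

Making the step re-runnable this way is what lets the workflow recover from a partially failed release without manual cleanup.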
16
README.md
@@ -25,7 +25,7 @@ curl -fsSL https://feynman.is/install | bash
irm https://feynman.is/install.ps1 | iex
```

The one-line installer fetches the latest tagged release. To pin a version, pass it explicitly, for example `curl -fsSL https://feynman.is/install | bash -s -- 0.2.18`.
The one-line installer fetches the latest tagged release. To pin a version, pass it explicitly, for example `curl -fsSL https://feynman.is/install | bash -s -- 0.2.31`.

The installer downloads a standalone native bundle with its own Node.js runtime.

@@ -33,7 +33,7 @@ To upgrade the standalone app later, rerun the installer. `feynman update` only

To uninstall the standalone app, remove the launcher and runtime bundle, then optionally remove `~/.feynman` if you also want to delete settings, sessions, and installed package state. If you also want to delete alphaXiv login state, remove `~/.ahub`. See the installation guide for platform-specific paths.

Local models are supported through the custom-provider flow. For Ollama, run `feynman setup`, choose `Custom provider (baseUrl + API key)`, use `openai-completions`, and point it at `http://localhost:11434/v1`.
Local models are supported through the setup flow. For LM Studio, run `feynman setup`, choose `LM Studio`, and keep the default `http://localhost:1234/v1` unless you changed the server port. For LiteLLM, choose `LiteLLM Proxy` and keep the default `http://localhost:4000/v1`. For Ollama or vLLM, choose `Custom provider (baseUrl + API key)`, use `openai-completions`, and point it at the local `/v1` endpoint.

### Skills Only

@@ -142,6 +142,18 @@ Built on [Pi](https://github.com/badlogic/pi-mono) for the agent runtime, [alpha

---

### Star History

<a href="https://www.star-history.com/?repos=getcompanion-ai%2Ffeynman&type=date&legend=top-left">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/chart?repos=getcompanion-ai/feynman&type=date&theme=dark&legend=top-left" />
    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/chart?repos=getcompanion-ai/feynman&type=date&legend=top-left" />
    <img alt="Star History Chart" src="https://api.star-history.com/chart?repos=getcompanion-ai/feynman&type=date&legend=top-left" />
  </picture>
</a>

---

### Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for the full contributor guide.
1105
package-lock.json
generated
File diff suppressed because it is too large
21
package.json
@@ -1,6 +1,6 @@
{
  "name": "@companion-ai/feynman",
  "version": "0.2.18",
  "version": "0.2.33",
  "description": "Research-first CLI agent built on Pi and alphaXiv",
  "license": "MIT",
  "type": "module",
@@ -61,16 +61,16 @@
  "dependencies": {
    "@clack/prompts": "^1.2.0",
    "@companion-ai/alpha-hub": "^0.1.3",
    "@mariozechner/pi-ai": "^0.66.1",
    "@mariozechner/pi-coding-agent": "^0.66.1",
    "@sinclair/typebox": "^0.34.48",
    "dotenv": "^17.3.1"
    "@mariozechner/pi-ai": "^0.67.6",
    "@mariozechner/pi-coding-agent": "^0.67.6",
    "@sinclair/typebox": "^0.34.49",
    "dotenv": "^17.4.2"
  },
  "overrides": {
    "basic-ftp": "5.2.2",
    "basic-ftp": "5.3.0",
    "@modelcontextprotocol/sdk": {
      "@hono/node-server": "1.19.13",
      "hono": "4.12.12"
      "@hono/node-server": "1.19.14",
      "hono": "4.12.14"
    },
    "express": {
      "router": {
@@ -80,16 +80,17 @@
    "proxy-agent": {
      "pac-proxy-agent": {
        "get-uri": {
          "basic-ftp": "5.2.2"
          "basic-ftp": "5.3.0"
        }
      }
    },
    "protobufjs": "7.5.5",
    "minimatch": {
      "brace-expansion": "5.0.5"
    }
  },
  "devDependencies": {
    "@types/node": "^25.5.0",
    "@types/node": "^25.6.0",
    "tsx": "^4.21.0",
    "typescript": "^5.9.3"
  },
@@ -9,7 +9,7 @@ Audit the paper and codebase for: $@
Derive a short slug from the audit target (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.

Requirements:
- Before starting, outline the audit plan: which paper, which repo, which claims to check. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
- Before starting, outline the audit plan: which paper, which repo, which claims to check. Write the plan to `outputs/.plans/<slug>.md`. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
- Use the `researcher` subagent for evidence gathering and the `verifier` subagent to verify sources and add inline citations when the audit is non-trivial.
- Compare claimed methods, defaults, metrics, and data handling against the actual code.
- Call out missing code, mismatches, ambiguous defaults, and reproduction risks.
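The slug rule above (lowercase, hyphens, no filler words, at most five words) can be sketched as a small shell helper; the stop-word list here is an assumption for illustration, not something these prompts specify:

```shell
# Hypothetical slugify helper matching the slug rule in these prompts.
# Lowercase, split on non-alphanumerics, drop filler words, keep first 5 words.
slugify() {
  echo "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '\n' \
    | grep -v -E '^(a|an|the|of|for|and|to|in|on)?$' \
    | head -n 5 \
    | paste -sd- -
}

slugify "An Audit of the Cloud Sandbox Pricing Landscape"
# audit-cloud-sandbox-pricing-landscape
```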
@@ -9,7 +9,7 @@ Compare sources for: $@
Derive a short slug from the comparison topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.

Requirements:
- Before starting, outline the comparison plan: which sources to compare, which dimensions to evaluate, expected output structure. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
- Before starting, outline the comparison plan: which sources to compare, which dimensions to evaluate, expected output structure. Write the plan to `outputs/.plans/<slug>.md`. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
- Use the `researcher` subagent to gather source material when the comparison set is broad, and the `verifier` subagent to verify sources and add inline citations to the final matrix.
- Build a comparison matrix covering: source, key claim, evidence type, caveats, confidence.
- Generate charts with `pi-charts` when the comparison involves quantitative metrics. Use Mermaid for method or architecture comparisons.
@@ -4,195 +4,175 @@ args: <topic>
section: Research Workflows
topLevelCli: true
---
Run a deep research workflow for: $@
Run deep research for: $@

You are the Lead Researcher. You plan, delegate, evaluate, verify, write, and cite. Internal orchestration is invisible to the user unless they ask.
This is an execution request, not a request to explain or implement the workflow instructions.
Execute the workflow. Do not answer by describing the protocol, do not explain these instructions, do not restate the protocol, and do not ask for confirmation. Do not stop after planning. Your first actions should be tool calls that create directories and write the plan artifact.

## 1. Plan
## Required Artifacts

Analyze the research question using extended thinking. Develop a research strategy:
- Key questions that must be answered
- Evidence types needed (papers, web, code, data, docs)
- Sub-questions disjoint enough to parallelize
- Source types and time periods that matter
- Acceptance criteria: what evidence would make the answer "sufficient"
Derive a short slug from the topic: lowercase, hyphenated, no filler words, at most 5 words.

Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 words — e.g. "cloud-sandbox-pricing" not "deepresearch-plan"). Write the plan to `outputs/.plans/<slug>.md` as a self-contained artifact. Use this same slug for all artifacts in this run.
If `CHANGELOG.md` exists, read the most recent relevant entries before finalizing the plan. Once the workflow becomes multi-round or spans enough work to merit resume support, append concise entries to `CHANGELOG.md` after meaningful progress and before stopping.
Every run must leave these files on disk:
- `outputs/.plans/<slug>.md`
- `outputs/.drafts/<slug>-draft.md`
- `outputs/.drafts/<slug>-cited.md`
- `outputs/<slug>.md` or `papers/<slug>.md`
- `outputs/<slug>.provenance.md` or `papers/<slug>.provenance.md`

```markdown
# Research Plan: [topic]
If any capability fails, continue in degraded mode and still write a blocked or partial final output and provenance sidecar. Never end with chat-only output. Never end with only an explanation in chat. Use `Verification: BLOCKED` when verification could not be completed.

## Questions
1. ...
## Step 1: Plan

## Strategy
- Researcher allocations and dimensions
- Expected rounds
Create `outputs/.plans/<slug>.md` immediately. The plan must include:
- Key questions
- Evidence needed
- Scale decision
- Task ledger
- Verification log
- Decision log

## Acceptance Criteria
- [ ] All key questions answered with ≥2 independent sources
- [ ] Contradictions identified and addressed
- [ ] No single-source claims on critical findings
Make the scale decision before assigning owners in the plan. If the topic is a narrow "what is X" explainer, the plan must use lead-owned direct search tasks only; do not allocate researcher subagents in the task ledger.

## Task Ledger
| ID | Owner | Task | Status | Output |
|---|---|---|---|---|
| T1 | lead / researcher | ... | todo | ... |
Also save the plan with `memory_remember` using key `deepresearch.<slug>.plan` if that tool is available. If it is not available, continue without it.

## Verification Log
| Item | Method | Status | Evidence |
|---|---|---|---|
| Critical claim / computation / figure | source cross-read / rerun / direct fetch / code check | pending | path or URL |
After writing the plan, continue immediately. Do not pause for approval.

## Decision Log
(Updated as the workflow progresses)
```
## Step 2: Scale

Also save the plan with `memory_remember` (type: `fact`, key: `deepresearch.<slug>.plan`) so it survives context truncation.
Use direct search for:
- Single fact or narrow question, including "what is X" explainers
- Work you can answer with 3-10 tool calls

Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting in the terminal, give them a brief chance to request plan changes before proceeding.
For "what is X" explainer topics, you MUST NOT spawn researcher subagents unless the user explicitly asks for comprehensive coverage, current landscape, benchmarks, or production deployment.
Do not inflate a simple explainer into a multi-agent survey.
## 2. Scale decision
Use subagents only when decomposition clearly helps:
- Direct comparison of 2-3 items: 2 `researcher` subagents
- Broad survey or multi-faceted topic: 3-4 `researcher` subagents
- Complex multi-domain research: 4-6 `researcher` subagents

| Query type | Execution |
|---|---|
| Single fact or narrow question | Search directly yourself, no subagents, 3-10 tool calls |
| Direct comparison (2-3 items) | 2 parallel `researcher` subagents |
| Broad survey or multi-faceted topic | 3-4 parallel `researcher` subagents |
| Complex multi-domain research | 4-6 parallel `researcher` subagents |
## Step 3: Gather Evidence

Never spawn subagents for work you can do in 5 tool calls.
Avoid crash-prone PDF parsing in this workflow. Do not call `alpha_get_paper` and do not fetch `.pdf` URLs unless the user explicitly asks for PDF extraction. Prefer paper metadata, abstracts, HTML pages, official docs, and web snippets. If only a PDF exists, cite the PDF URL from search metadata and mark full-text PDF parsing as blocked instead of fetching it.

## 3. Spawn researchers
If direct search was chosen:
- Skip researcher spawning entirely.
- Search and fetch sources yourself.
- Write notes to `<slug>-research-direct.md`.
- Continue to synthesis.

Launch parallel `researcher` subagents via `subagent`. Each gets a structured brief with:
- **Objective:** what to find
- **Output format:** numbered sources, evidence table, inline source references
- **Tool guidance:** which search tools to prioritize
- **Task boundaries:** what NOT to cover (another researcher handles that)
- **Task IDs:** the specific ledger rows they own and must report back on
If subagents were chosen:
- Write a per-researcher brief first, such as `outputs/.plans/<slug>-T1.md`.
- Keep `subagent` tool-call JSON small and valid.
- Do not place multi-paragraph instructions inside the `subagent` JSON.
- Use only supported `subagent` keys. Do not add extra keys such as `artifacts` unless the tool schema explicitly exposes them.
- Always set `failFast: false`.
- Do not name exact tool commands in subagent tasks unless those tool names are visible in the current tool set.
- Prefer broad guidance such as "use paper search and web search"; if a PDF parser or paper fetch fails, the researcher must continue from metadata, abstracts, and web sources and mark PDF parsing as blocked.

Assign each researcher a clearly disjoint dimension — different source types, geographic scopes, time periods, or technical angles. Never duplicate coverage.
Example shape:

```
```json
{
tasks: [
{ agent: "researcher", task: "...", output: "<slug>-research-web.md" },
{ agent: "researcher", task: "...", output: "<slug>-research-papers.md" }
"tasks": [
{ "agent": "researcher", "task": "Read outputs/.plans/<slug>-T1.md and write <slug>-research-web.md.", "output": "<slug>-research-web.md" },
{ "agent": "researcher", "task": "Read outputs/.plans/<slug>-T2.md and write <slug>-research-papers.md.", "output": "<slug>-research-papers.md" }
],
concurrency: 4,
failFast: false
"concurrency": 4,
"failFast": false
}
```
|
||||
Researchers write full outputs to files and pass references back — do not have them return full content into your context.
|
||||
Researchers must not silently merge or skip assigned tasks. If something is impossible or redundant, mark the ledger row `blocked` or `superseded` with a note.
|
||||
After evidence gathering, update the plan ledger and verification log. If research failed, record exactly what failed and proceed with a blocked or partial draft.
|
||||
|
||||
## 4. Evaluate and loop
|
||||
## Step 4: Draft
|
||||
|
||||
After researchers return, read their output files and critically assess:
- Which plan questions remain unanswered?
- Which answers rest on only one source?
- Are there contradictions needing resolution?
- Is any key angle missing entirely?
- Did every assigned ledger task actually get completed, blocked, or explicitly superseded?

If gaps are significant, spawn another targeted batch of researchers. No fixed cap on rounds — iterate until evidence is sufficient or sources are exhausted. Most topics need 1-2 rounds. Stop when additional rounds would not materially change conclusions.

Update the plan artifact (`outputs/.plans/<slug>.md`) task ledger, verification log, and decision log after each round. When the work spans multiple rounds, also append a concise chronological entry to `CHANGELOG.md` covering what changed, what was verified, what remains blocked, and the next recommended step.

Write the report yourself. Do not delegate synthesis. Save to `outputs/.drafts/<slug>-draft.md`. Include:
- Executive summary
- Findings organized by question/theme
- Evidence-backed caveats and disagreements
- Open questions
- No invented sources, results, figures, benchmarks, images, charts, or tables

Before citation, sweep the draft:
- Every critical claim, number, figure, table, or benchmark must map to a source URL, research note, raw artifact path, or command/script output.
- Remove or downgrade unsupported claims.
- Mark inferences as inferences.

## Step 5: Cite

If direct search/no researcher subagents was chosen:
- Do citation yourself.
- Verify reachable HTML/doc URLs with available fetch/search tools.
- Copy or rewrite `outputs/.drafts/<slug>-draft.md` to `outputs/.drafts/<slug>-cited.md` with inline citations and a Sources section.
- Do not spawn the `verifier` subagent for simple direct-search runs.

If researcher subagents were used, run the `verifier` agent after the draft exists. This step is mandatory and must complete before any reviewer runs. Do not run the `verifier` and `reviewer` in the same parallel `subagent` call.

Use this shape:

```json
{
  "agent": "verifier",
  "task": "Add inline citations to outputs/.drafts/<slug>-draft.md using the research files as source material. Verify every URL. Write the complete cited brief to outputs/.drafts/<slug>-cited.md.",
  "output": "outputs/.drafts/<slug>-cited.md"
}
```

When the research includes quantitative data (benchmarks, performance comparisons, trends), generate charts using `pi-charts`. Use Mermaid diagrams for architectures and processes. Every visual must have a caption and reference the underlying data.

After the verifier returns, verify on disk that `outputs/.drafts/<slug>-cited.md` exists. If the verifier wrote elsewhere, find the cited file and move or copy it to `outputs/.drafts/<slug>-cited.md`.

Before finalizing the draft, do a claim sweep:
- map each critical claim, number, and figure to its supporting source or artifact in the verification log
- downgrade or remove anything that cannot be grounded
- label inferences as inferences
- if code or calculations were involved, record which checks were actually run and which remain unverified

## Step 6: Review

If direct search/no researcher subagents was chosen:
- Review the cited draft yourself.
- Write `<slug>-verification.md` with FATAL / MAJOR / MINOR findings and the checks performed.
- Fix FATAL issues before delivery.
- Do not spawn the `reviewer` subagent for simple direct-search runs.

If researcher subagents were used, only after `outputs/.drafts/<slug>-cited.md` exists, run the `reviewer` agent against it.

Use this shape:

```json
{
  "agent": "reviewer",
  "task": "Verify outputs/.drafts/<slug>-cited.md. Flag unsupported claims, logical gaps, single-source critical claims, and overstated confidence. This is a verification pass, not a peer review.",
  "output": "<slug>-verification.md"
}
```

If the reviewer flags FATAL issues, fix them before delivery and run one more review pass. Note MAJOR issues in Open Questions. Accept MINOR issues.

When applying reviewer fixes, do not issue one giant `edit` tool call with many replacements. Use small localized edits only for 1-3 simple corrections. For section rewrites, table rewrites, or more than 3 substantive fixes, read the cited draft and write a corrected full file to `outputs/.drafts/<slug>-revised.md` instead.

The reviewer checks for:
- Unsupported claims that slipped past citation
- Logical gaps or contradictions between sections
- Single-source claims on critical findings
- Overstated confidence relative to evidence quality

The final candidate is `outputs/.drafts/<slug>-revised.md` if it exists; otherwise it is `outputs/.drafts/<slug>-cited.md`.

## Step 7: Deliver

After fixes, run at least one more review-style verification pass if any FATAL issues were found. Do not assume one fix solved everything.

Copy the final candidate to:
- `papers/<slug>.md` for paper-style drafts
- `outputs/<slug>.md` for everything else

Write provenance next to it as `<slug>.provenance.md`:

```markdown
# Provenance: [topic]

- **Date:** [date]
- **Rounds:** [number of research rounds]
- **Sources consulted:** [count and/or list]
- **Sources accepted:** [count and/or list]
- **Sources rejected:** [dead, unverifiable, or removed]
- **Verification:** [PASS / PASS WITH NOTES / BLOCKED]
- **Plan:** outputs/.plans/<slug>.md
- **Research files:** [files used]
```

Before responding, verify on disk that all of these exist:
- `outputs/.plans/<slug>.md`
- `outputs/.drafts/<slug>-draft.md`
- `outputs/.drafts/<slug>-cited.md` intermediate cited brief
- `outputs/<slug>.md` or `papers/<slug>.md` final promoted deliverable
- `outputs/<slug>.provenance.md` or `papers/<slug>.provenance.md` provenance sidecar

If verification could not be completed, set `Verification: BLOCKED` or `PASS WITH NOTES` and list the missing checks. Do not stop at the cited brief alone: if it exists but the promoted final output or provenance sidecar does not, create them before responding.

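The on-disk check above can be sketched as a few lines of Node.js (the slug and paths are illustrative, not fixed names):

```javascript
import { existsSync } from "node:fs";

// Illustrative slug; real runs derive it from the topic.
const slug = "example-topic";
const required = [
  `outputs/.plans/${slug}.md`,
  `outputs/.drafts/${slug}-draft.md`,
  `outputs/${slug}.md`,
  `outputs/${slug}.provenance.md`,
];
// Any entry left in `missing` must be created before responding.
const missing = required.filter((p) => !existsSync(p));
```

Anything in `missing` is either created on the spot or recorded as a blocked check in the provenance record.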
## Background execution
|
||||
|
||||
If the user wants unattended execution or the sweep will clearly take a while:
|
||||
- Launch the full workflow via `subagent` using `clarify: false, async: true`
|
||||
- Report the async ID and how to check status with `subagent_status`
|
||||
Final response should be brief: link the final file, provenance file, and any blocked checks.
|
||||
|
||||
@@ -9,11 +9,12 @@ Write a paper-style draft for: $@

Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.

Requirements:
- Before writing, outline the draft structure: proposed title, sections, key claims to make, source material to draw from, and a verification log for the critical claims, figures, and calculations. Write the outline to `outputs/.plans/<slug>.md`. Briefly summarize the outline to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested outline review.
- Use the `writer` subagent when the draft should be produced from already-collected notes, then use the `verifier` subagent to add inline citations and verify sources.
- Include at minimum: title, abstract, problem statement, related work, method or synthesis, evidence or experiments, limitations, conclusion.
- Use clean Markdown with LaTeX where equations materially help.
- Follow the system prompt's provenance rules for all results, figures, charts, images, tables, benchmarks, and quantitative comparisons. If evidence is missing, leave a placeholder or proposed experimental plan instead of claiming an outcome.
- Generate charts with `pi-charts` only for source-backed quantitative data, benchmarks, and comparisons. Use Mermaid for architectures and pipelines only when the structure is supported by sources. Every figure needs a provenance-bearing caption.
- Before delivery, sweep the draft for any claim that sounds stronger than its support. Mark tentative results as tentative and remove unsupported numerics instead of letting the verifier discover them later.
- Save exactly one draft to `papers/<slug>.md`.
- End with a `Sources` appendix with direct URLs for all primary references.

@@ -10,7 +10,7 @@ Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 wo

## Workflow

1. **Plan** — Outline the scope: key questions, source types to search (papers, web, repos), time period, expected sections, and a small task ledger plus verification log. Write the plan to `outputs/.plans/<slug>.md`. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
2. **Gather** — Use the `researcher` subagent when the sweep is wide enough to benefit from delegated paper triage before synthesis. For narrow topics, search directly. Researcher outputs go to `<slug>-research-*.md`. Do not silently skip assigned questions; mark them `done`, `blocked`, or `superseded`.
3. **Synthesize** — Separate consensus, disagreements, and open questions. When useful, propose concrete next experiments or follow-up reading. Generate charts with `pi-charts` for quantitative comparisons across papers and Mermaid diagrams for taxonomies or method pipelines. Before finishing the draft, sweep every strong claim against the verification log and downgrade anything that is inferred or single-source critical.
4. **Cite** — Spawn the `verifier` agent to add inline citations and verify every source URL in the draft.

@@ -9,7 +9,7 @@ Review this AI research artifact: $@

Derive a short slug from the artifact name (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.

Requirements:
- Before starting, outline what will be reviewed, the review criteria (novelty, empirical rigor, baselines, reproducibility, etc.), and any verification-specific checks needed for claims, figures, and reported metrics. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
- Spawn a `researcher` subagent to gather evidence on the artifact — inspect the paper, code, cited work, and any linked experimental artifacts. Save to `<slug>-research.md`.
- Spawn a `reviewer` subagent with `<slug>-research.md` to produce the final peer review with inline annotations.
- For small or simple artifacts where evidence gathering is overkill, run the `reviewer` subagent directly instead.

@@ -101,7 +101,7 @@ print(f"[summarize] chunks={len(chunks)} chunk_size={chunk_size} overlap={overla

### 3b. Confirm before spawning

Briefly summarize: "Source is ~<chars> chars -> <N> chunks -> <N> researcher subagents. This may take several minutes." Then continue automatically. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested review before launching.

### 3c. Dispatch researcher subagents

@@ -9,7 +9,7 @@ Create a research watch for: $@

Derive a short slug from the watch topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.

Requirements:
- Before starting, outline the watch plan: what to monitor, what signals matter, what counts as a meaningful change, and the check frequency. Write the plan to `outputs/.plans/<slug>.md`. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
- Start with a baseline sweep of the topic.
- Use `schedule_prompt` to create the recurring or delayed follow-up instead of merely promising to check later.
- Save exactly one baseline artifact to `outputs/<slug>-baseline.md`.

@@ -110,7 +110,7 @@ This usually means the release exists, but not all platform bundles were uploade

Workarounds:
- try again after the release finishes publishing
- pass the latest published version explicitly, e.g.:
  & ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.31
"@
}

@@ -261,7 +261,7 @@ This usually means the release exists, but not all platform bundles were uploade

Workarounds:
- try again after the release finishes publishing
- pass the latest published version explicitly, e.g.:
  curl -fsSL https://feynman.is/install | bash -s -- 0.2.31
EOF
exit 1
fi

@@ -1,2 +1,3 @@

export const PI_SUBAGENTS_PATCH_TARGETS: string[];
export function patchPiSubagentsSource(relativePath: string, source: string): string;
export function stripPiSubagentBuiltinModelSource(source: string): string;

@@ -5,11 +5,13 @@ export const PI_SUBAGENTS_PATCH_TARGETS = [

	"run-history.ts",
	"skills.ts",
	"chain-clarify.ts",
	"subagent-executor.ts",
	"schemas.ts",
];

const RESOLVE_PI_AGENT_DIR_HELPER = [
	"function resolvePiAgentDir(): string {",
	' const configured = process.env.FEYNMAN_CODING_AGENT_DIR?.trim() || process.env.PI_CODING_AGENT_DIR?.trim();',
	' if (!configured) return path.join(os.homedir(), ".pi", "agent");',
	' return configured.startsWith("~/") ? path.join(os.homedir(), configured.slice(2)) : configured;',
	"}",
@@ -66,6 +68,24 @@ function replaceAll(source, from, to) {

	return source.split(from).join(to);
}

export function stripPiSubagentBuiltinModelSource(source) {
	if (!source.startsWith("---\n")) {
		return source;
	}

	const endIndex = source.indexOf("\n---", 4);
	if (endIndex === -1) {
		return source;
	}

	const frontmatter = source.slice(4, endIndex);
	const nextFrontmatter = frontmatter
		.split("\n")
		.filter((line) => !/^\s*model\s*:/.test(line))
		.join("\n");
	return `---\n${nextFrontmatter}${source.slice(endIndex)}`;
}

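For illustration, the frontmatter stripping behaves like this (the agent file content is a made-up example, and the helper below re-states the same logic so the snippet is self-contained):

```javascript
// Same logic as stripPiSubagentBuiltinModelSource: drop any `model:` line
// from the YAML frontmatter, leave the body untouched.
function stripModelLine(source) {
  if (!source.startsWith("---\n")) return source;
  const endIndex = source.indexOf("\n---", 4);
  if (endIndex === -1) return source;
  const next = source
    .slice(4, endIndex)
    .split("\n")
    .filter((line) => !/^\s*model\s*:/.test(line))
    .join("\n");
  return `---\n${next}${source.slice(endIndex)}`;
}

const input = "---\nname: researcher\nmodel: some-builtin-model\n---\nYou are a researcher.";
const output = stripModelLine(input);
// output === "---\nname: researcher\n---\nYou are a researcher."
```

A file without frontmatter, or with an unterminated frontmatter block, passes through unchanged.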
export function patchPiSubagentsSource(relativePath, source) {
	let patched = source;

@@ -76,6 +96,11 @@ export function patchPiSubagentsSource(relativePath, source) {
			'const configPath = path.join(os.homedir(), ".pi", "agent", "extensions", "subagent", "config.json");',
			'const configPath = path.join(resolvePiAgentDir(), "extensions", "subagent", "config.json");',
		);
		patched = replaceAll(
			patched,
			"• PARALLEL: { tasks: [{agent,task,count?}, ...], concurrency?: number, worktree?: true } - concurrent execution (worktree: isolate each task in a git worktree)",
			"• PARALLEL: { tasks: [{agent,task,count?,output?}, ...], concurrency?: number, worktree?: true } - concurrent execution (output: per-task file target, worktree: isolate each task in a git worktree)",
		);
		break;
case "agents.ts":
|
||||
patched = replaceAll(
|
||||
@@ -172,6 +197,138 @@ export function patchPiSubagentsSource(relativePath, source) {
|
||||
'const dir = path.join(resolvePiAgentDir(), "agents");',
|
||||
);
|
||||
break;
|
||||
case "subagent-executor.ts":
|
||||
patched = replaceAll(
|
||||
patched,
|
||||
[
|
||||
"\tcwd?: string;",
|
||||
"\tcount?: number;",
|
||||
"\tmodel?: string;",
|
||||
"\tskill?: string | string[] | boolean;",
|
||||
].join("\n"),
|
||||
[
|
||||
"\tcwd?: string;",
|
||||
"\tcount?: number;",
|
||||
"\tmodel?: string;",
|
||||
"\tskill?: string | string[] | boolean;",
|
||||
"\toutput?: string | false;",
|
||||
].join("\n"),
|
||||
);
|
||||
patched = replaceAll(
|
||||
patched,
|
||||
[
|
||||
"\t\t\tcwd: task.cwd,",
|
||||
"\t\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
|
||||
].join("\n"),
|
||||
[
|
||||
"\t\t\tcwd: task.cwd,",
|
||||
"\t\t\toutput: task.output,",
|
||||
"\t\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
|
||||
].join("\n"),
|
||||
);
|
||||
patched = replaceAll(
|
||||
patched,
|
||||
[
|
||||
"\t\tcwd: task.cwd,",
|
||||
"\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
|
||||
].join("\n"),
|
||||
[
|
||||
"\t\tcwd: task.cwd,",
|
||||
"\t\toutput: task.output,",
|
||||
"\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
|
||||
].join("\n"),
|
||||
);
|
||||
patched = replaceAll(
|
||||
patched,
|
||||
[
|
||||
"\t\t\t\tcwd: t.cwd,",
|
||||
"\t\t\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
|
||||
].join("\n"),
|
||||
[
|
||||
"\t\t\t\tcwd: t.cwd,",
|
||||
"\t\t\t\toutput: t.output,",
|
||||
"\t\t\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
|
||||
].join("\n"),
|
||||
);
|
||||
patched = replaceAll(
|
||||
patched,
|
||||
[
|
||||
"\t\tcwd: t.cwd,",
|
||||
"\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
|
||||
].join("\n"),
|
||||
[
|
||||
"\t\tcwd: t.cwd,",
|
||||
"\t\toutput: t.output,",
|
||||
"\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
|
||||
].join("\n"),
|
||||
);
|
||||
patched = replaceAll(
|
||||
patched,
|
||||
[
|
||||
"\t\tconst behaviors = agentConfigs.map((c, i) =>",
|
||||
"\t\t\tresolveStepBehavior(c, { skills: skillOverrides[i] }),",
|
||||
"\t\t);",
|
||||
].join("\n"),
|
||||
[
|
||||
"\t\tconst behaviors = agentConfigs.map((c, i) =>",
|
||||
"\t\t\tresolveStepBehavior(c, { output: tasks[i]?.output, skills: skillOverrides[i] }),",
|
||||
"\t\t);",
|
||||
].join("\n"),
|
||||
);
|
||||
patched = replaceAll(
|
||||
patched,
|
||||
"\tconst behaviors = agentConfigs.map((config) => resolveStepBehavior(config, {}));",
|
||||
"\tconst behaviors = agentConfigs.map((config, i) => resolveStepBehavior(config, { output: tasks[i]?.output, skills: skillOverrides[i] }));",
|
||||
);
|
||||
patched = replaceAll(
|
||||
patched,
|
||||
[
|
||||
"\t\tconst taskCwd = resolveParallelTaskCwd(task, input.paramsCwd, input.worktreeSetup, index);",
|
||||
"\t\treturn runSync(input.ctx.cwd, input.agents, task.agent, input.taskTexts[index]!, {",
|
||||
].join("\n"),
|
||||
[
|
||||
"\t\tconst taskCwd = resolveParallelTaskCwd(task, input.paramsCwd, input.worktreeSetup, index);",
|
||||
"\t\tconst outputPath = typeof input.behaviors[index]?.output === \"string\"",
|
||||
"\t\t\t? resolveSingleOutputPath(input.behaviors[index]?.output, input.ctx.cwd, taskCwd)",
|
||||
"\t\t\t: undefined;",
|
||||
"\t\tconst taskText = injectSingleOutputInstruction(input.taskTexts[index]!, outputPath);",
|
||||
"\t\treturn runSync(input.ctx.cwd, input.agents, task.agent, taskText, {",
|
||||
].join("\n"),
|
||||
);
|
||||
patched = replaceAll(
|
||||
patched,
|
||||
[
|
||||
"\t\t\tmaxOutput: input.maxOutput,",
|
||||
"\t\t\tmaxSubagentDepth: input.maxSubagentDepths[index],",
|
||||
].join("\n"),
|
||||
[
|
||||
"\t\t\tmaxOutput: input.maxOutput,",
|
||||
"\t\t\toutputPath,",
|
||||
"\t\t\tmaxSubagentDepth: input.maxSubagentDepths[index],",
|
||||
].join("\n"),
|
||||
);
|
||||
break;
|
||||
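All of the patching above relies on exact-string replacement. A minimal illustration of the `replaceAll` helper defined earlier (the inputs are made up, shorter than the real patch targets):

```javascript
// Same helper as in the patch module: replace every occurrence of `from`.
function replaceAll(source, from, to) {
  return source.split(from).join(to);
}

const before = "cwd: task.cwd,\nmodel: task.model,";
const after = replaceAll(before, "cwd: task.cwd,", "cwd: task.cwd,\noutput: task.output,");
// after === "cwd: task.cwd,\noutput: task.output,\nmodel: task.model,"
```

Because the match must be byte-exact (including tabs), the `from` strings in the cases above reproduce the upstream source's indentation verbatim; if upstream reformats, the replacement silently becomes a no-op.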
case "schemas.ts":
|
||||
patched = replaceAll(
|
||||
patched,
|
||||
[
|
||||
"\tcwd: Type.Optional(Type.String()),",
|
||||
'\tcount: Type.Optional(Type.Integer({ minimum: 1, description: "Repeat this parallel task N times with the same settings." })),',
|
||||
'\tmodel: Type.Optional(Type.String({ description: "Override model for this task (e.g. \'google/gemini-3-pro\')" })),',
|
||||
].join("\n"),
|
||||
[
|
||||
"\tcwd: Type.Optional(Type.String()),",
|
||||
'\tcount: Type.Optional(Type.Integer({ minimum: 1, description: "Repeat this parallel task N times with the same settings." })),',
|
||||
'\toutput: Type.Optional(Type.Any({ description: "Output file for this parallel task (string), or false to disable. Relative paths resolve against cwd." })),',
|
||||
'\tmodel: Type.Optional(Type.String({ description: "Override model for this task (e.g. \'google/gemini-3-pro\')" })),',
|
||||
].join("\n"),
|
||||
);
|
||||
patched = replaceAll(
|
||||
patched,
|
||||
'tasks: Type.Optional(Type.Array(TaskItem, { description: "PARALLEL mode: [{agent, task, count?}, ...]" })),',
|
||||
'tasks: Type.Optional(Type.Array(TaskItem, { description: "PARALLEL mode: [{agent, task, count?, output?}, ...]" })),',
|
||||
);
|
||||
break;
|
||||
default:
|
||||
return source;
|
||||
}
|
||||
@@ -180,5 +337,5 @@ export function patchPiSubagentsSource(relativePath, source) {
|
||||
return source;
|
||||
}
|
||||
|
||||
return injectResolvePiAgentDirHelper(patched);
|
||||
return patched.includes("resolvePiAgentDir()") ? injectResolvePiAgentDirHelper(patched) : patched;
|
||||
}
|
||||
|
||||
@@ -1,5 +1,5 @@

import { spawnSync } from "node:child_process";
import { existsSync, lstatSync, mkdirSync, readdirSync, readFileSync, readlinkSync, rmSync, symlinkSync, writeFileSync } from "node:fs";
import { createRequire } from "node:module";
import { homedir } from "node:os";
import { delimiter, dirname, resolve } from "node:path";

@@ -9,7 +9,7 @@ import { patchAlphaHubAuthSource } from "./lib/alpha-hub-auth-patch.mjs";
import { patchPiExtensionLoaderSource } from "./lib/pi-extension-loader-patch.mjs";
import { patchPiGoogleLegacySchemaSource } from "./lib/pi-google-legacy-schema-patch.mjs";
import { PI_WEB_ACCESS_PATCH_TARGETS, patchPiWebAccessSource } from "./lib/pi-web-access-patch.mjs";
import { PI_SUBAGENTS_PATCH_TARGETS, patchPiSubagentsSource, stripPiSubagentBuiltinModelSource } from "./lib/pi-subagents-patch.mjs";

const here = dirname(fileURLToPath(import.meta.url));
const appRoot = resolve(here, "..");
@@ -260,6 +260,23 @@ function ensureParentDir(path) {

	mkdirSync(dirname(path), { recursive: true });
}

function packageDependencyExists(packagePath, globalNodeModulesRoot, dependency) {
	return existsSync(resolve(packagePath, "node_modules", dependency)) ||
		existsSync(resolve(globalNodeModulesRoot, dependency));
}

function installedPackageLooksUsable(packagePath, globalNodeModulesRoot) {
	if (!existsSync(resolve(packagePath, "package.json"))) return false;
	try {
		const pkg = JSON.parse(readFileSync(resolve(packagePath, "package.json"), "utf8"));
		return Object.keys(pkg.dependencies ?? {}).every((dependency) =>
			packageDependencyExists(packagePath, globalNodeModulesRoot, dependency)
		);
	} catch {
		return false;
	}
}

function linkPointsTo(linkPath, targetPath) {
	try {
		if (!lstatSync(linkPath).isSymbolicLink()) return false;

@@ -269,26 +286,53 @@ function linkPointsTo(linkPath, targetPath) {
	}
}

function listWorkspacePackageNames(root) {
	if (!existsSync(root)) return [];
	const names = [];
	for (const entry of readdirSync(root, { withFileTypes: true })) {
		if (!entry.isDirectory() && !entry.isSymbolicLink()) continue;
		if (entry.name.startsWith(".")) continue;
		if (entry.name.startsWith("@")) {
			const scopeRoot = resolve(root, entry.name);
			for (const scopedEntry of readdirSync(scopeRoot, { withFileTypes: true })) {
				if (!scopedEntry.isDirectory() && !scopedEntry.isSymbolicLink()) continue;
				names.push(`${entry.name}/${scopedEntry.name}`);
			}
			continue;
		}
		names.push(entry.name);
	}
	return names;
}

function linkBundledPackage(packageName) {
	const sourcePath = resolve(workspaceRoot, packageName);
	const targetPath = resolve(globalNodeModulesRoot, packageName);
	if (!existsSync(sourcePath)) return false;
	if (linkPointsTo(targetPath, sourcePath)) return false;
	try {
		if (lstatSync(targetPath).isSymbolicLink()) {
			rmSync(targetPath, { force: true });
		} else if (!installedPackageLooksUsable(targetPath, globalNodeModulesRoot)) {
			rmSync(targetPath, { recursive: true, force: true });
		}
	} catch {}
	if (existsSync(targetPath)) return false;

	ensureParentDir(targetPath);
	try {
		symlinkSync(sourcePath, targetPath, process.platform === "win32" ? "junction" : "dir");
		return true;
	} catch {
		return false;
	}
}

function ensureBundledPackageLinks(packageSpecs) {
	if (!workspaceMatchesRuntime(packageSpecs)) return;

	for (const packageName of listWorkspacePackageNames(workspaceRoot)) {
		linkBundledPackage(packageName);
	}
}

@@ -435,6 +479,19 @@ if (existsSync(piSubagentsRoot)) {

			writeFileSync(entryPath, patched, "utf8");
		}
	}

	const builtinAgentsRoot = resolve(piSubagentsRoot, "agents");
	if (existsSync(builtinAgentsRoot)) {
		for (const entry of readdirSync(builtinAgentsRoot, { withFileTypes: true })) {
			if (!entry.isFile() || !entry.name.endsWith(".md")) continue;
			const entryPath = resolve(builtinAgentsRoot, entry.name);
			const source = readFileSync(entryPath, "utf8");
			const patched = stripPiSubagentBuiltinModelSource(source);
			if (patched !== source) {
				writeFileSync(entryPath, patched, "utf8");
			}
		}
	}
}

if (packageJsonPath && existsSync(packageJsonPath)) {

@@ -1,26 +1,44 @@

import { existsSync, mkdirSync, readdirSync, readFileSync, rmSync, statSync, writeFileSync } from "node:fs";
import { createHash } from "node:crypto";
import { resolve } from "node:path";
import { spawnSync } from "node:child_process";

import { stripPiSubagentBuiltinModelSource } from "./lib/pi-subagents-patch.mjs";

const appRoot = resolve(import.meta.dirname, "..");
const settingsPath = resolve(appRoot, ".feynman", "settings.json");
const packageJsonPath = resolve(appRoot, "package.json");
const packageLockPath = resolve(appRoot, "package-lock.json");
const feynmanDir = resolve(appRoot, ".feynman");
const workspaceDir = resolve(appRoot, ".feynman", "npm");
const workspaceNodeModulesDir = resolve(workspaceDir, "node_modules");
const manifestPath = resolve(workspaceDir, ".runtime-manifest.json");
const workspacePackageJsonPath = resolve(workspaceDir, "package.json");
const workspaceArchivePath = resolve(feynmanDir, "runtime-workspace.tgz");
const PRUNE_VERSION = 4;
const PINNED_RUNTIME_PACKAGES = [
	"@mariozechner/pi-agent-core",
	"@mariozechner/pi-ai",
	"@mariozechner/pi-coding-agent",
	"@mariozechner/pi-tui",
];

function readPackageSpecs() {
|
||||
const settings = JSON.parse(readFileSync(settingsPath, "utf8"));
|
||||
if (!Array.isArray(settings.packages)) {
|
||||
return [];
|
||||
const packageSpecs = Array.isArray(settings.packages)
|
||||
? settings.packages
|
||||
.filter((value) => typeof value === "string" && value.startsWith("npm:"))
|
||||
.map((value) => value.slice(4))
|
||||
: [];
|
||||
|
||||
for (const packageName of PINNED_RUNTIME_PACKAGES) {
|
||||
const version = readLockedPackageVersion(packageName);
|
||||
if (version) {
|
||||
packageSpecs.push(`${packageName}@${version}`);
|
||||
}
|
||||
}
|
||||
|
||||
return settings.packages
|
||||
.filter((value) => typeof value === "string" && value.startsWith("npm:"))
|
||||
.map((value) => value.slice(4));
|
||||
return Array.from(new Set(packageSpecs));
|
||||
}
|
||||
|
||||
function parsePackageName(spec) {
|
||||
@@ -28,10 +46,41 @@ function parsePackageName(spec) {
|
||||
return match?.[1] ?? spec;
|
||||
}
|
||||
|
||||
function readLockedPackageVersion(packageName) {
|
||||
if (!existsSync(packageLockPath)) {
|
||||
return undefined;
|
||||
}
|
||||
try {
|
||||
const lockfile = JSON.parse(readFileSync(packageLockPath, "utf8"));
|
||||
const entry = lockfile.packages?.[`node_modules/${packageName}`];
|
||||
return typeof entry?.version === "string" ? entry.version : undefined;
|
||||
} catch {
|
||||
return undefined;
|
||||
}
|
||||
}
|
||||
|
||||
function arraysMatch(left, right) {
|
||||
return left.length === right.length && left.every((value, index) => value === right[index]);
|
||||
}
|
||||
|
||||
function hashFile(path) {
|
||||
if (!existsSync(path)) {
|
||||
return null;
|
||||
}
|
||||
return createHash("sha256").update(readFileSync(path)).digest("hex");
|
||||
}
|
||||
|
||||
function getRuntimeInputHash() {
|
||||
const hash = createHash("sha256");
|
||||
for (const path of [packageJsonPath, packageLockPath, settingsPath]) {
|
||||
hash.update(path);
|
||||
hash.update("\0");
|
||||
hash.update(hashFile(path) ?? "missing");
|
||||
hash.update("\0");
|
||||
}
|
||||
return hash.digest("hex");
|
||||
}
|
||||
|
||||
function workspaceIsCurrent(packageSpecs) {
|
||||
if (!existsSync(manifestPath) || !existsSync(workspaceNodeModulesDir)) {
|
||||
return false;
|
||||
@@ -42,6 +91,9 @@ function workspaceIsCurrent(packageSpecs) {
|
||||
if (!Array.isArray(manifest.packageSpecs) || !arraysMatch(manifest.packageSpecs, packageSpecs)) {
|
||||
return false;
|
||||
}
|
||||
if (manifest.runtimeInputHash !== getRuntimeInputHash()) {
|
||||
return false;
|
||||
}
|
||||
if (
|
||||
manifest.nodeAbi !== process.versions.modules ||
|
||||
manifest.platform !== process.platform ||
|
||||
@@ -72,6 +124,17 @@ function writeWorkspacePackageJson() {
|
||||
);
|
||||
}
|
||||
|
||||
function childNpmInstallEnv() {
|
||||
return {
|
||||
...process.env,
|
||||
// `npm pack --dry-run` exports dry-run config to lifecycle scripts. The
|
||||
// vendored runtime workspace must still install real node_modules so the
|
||||
// publish artifact can be validated without poisoning the archive.
|
||||
npm_config_dry_run: "false",
|
||||
NPM_CONFIG_DRY_RUN: "false",
|
||||
};
|
||||
}
|
||||
|
||||
function prepareWorkspace(packageSpecs) {
|
||||
rmSync(workspaceDir, { recursive: true, force: true });
|
||||
mkdirSync(workspaceDir, { recursive: true });
|
||||
@@ -84,9 +147,9 @@ function prepareWorkspace(packageSpecs) {
|
||||
const result = spawnSync(
|
||||
process.env.npm_execpath ? process.execPath : "npm",
|
||||
process.env.npm_execpath
|
||||
? [process.env.npm_execpath, "install", "--prefer-offline", "--no-audit", "--no-fund", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs]
|
||||
: ["install", "--prefer-offline", "--no-audit", "--no-fund", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs],
|
||||
{ stdio: "inherit" },
|
||||
? [process.env.npm_execpath, "install", "--prefer-online", "--no-audit", "--no-fund", "--no-dry-run", "--legacy-peer-deps", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs]
|
||||
: ["install", "--prefer-online", "--no-audit", "--no-fund", "--no-dry-run", "--legacy-peer-deps", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs],
|
||||
{ stdio: "inherit", env: childNpmInstallEnv() },
|
||||
);
|
||||
if (result.status !== 0) {
|
||||
process.exit(result.status ?? 1);
|
||||
@@ -99,6 +162,7 @@ function writeManifest(packageSpecs) {
|
||||
JSON.stringify(
|
||||
{
|
||||
packageSpecs,
|
||||
runtimeInputHash: getRuntimeInputHash(),
|
||||
generatedAt: new Date().toISOString(),
|
||||
nodeAbi: process.versions.modules,
|
||||
nodeVersion: process.version,
|
||||
@@ -122,6 +186,25 @@ function pruneWorkspace() {
|
||||
}
|
||||
}
|
||||
|
||||
function stripBundledPiSubagentModelPins() {
|
||||
const agentsRoot = resolve(workspaceNodeModulesDir, "pi-subagents", "agents");
|
||||
if (!existsSync(agentsRoot)) {
|
||||
return false;
|
||||
}
|
||||
|
||||
let changed = false;
|
||||
for (const entry of readdirSync(agentsRoot, { withFileTypes: true })) {
|
||||
if (!entry.isFile() || !entry.name.endsWith(".md")) continue;
|
||||
const entryPath = resolve(agentsRoot, entry.name);
|
||||
const source = readFileSync(entryPath, "utf8");
|
||||
const patched = stripPiSubagentBuiltinModelSource(source);
|
||||
if (patched === source) continue;
|
||||
writeFileSync(entryPath, patched, "utf8");
|
||||
changed = true;
|
||||
}
|
||||
return changed;
|
||||
}
|
||||
|
||||
function archiveIsCurrent() {
|
||||
if (!existsSync(workspaceArchivePath) || !existsSync(manifestPath)) {
|
||||
return false;
|
||||
@@ -145,6 +228,10 @@ const packageSpecs = readPackageSpecs();
|
||||
|
||||
if (workspaceIsCurrent(packageSpecs)) {
|
||||
console.log("[feynman] vendored runtime workspace already up to date");
|
||||
if (stripBundledPiSubagentModelPins()) {
|
||||
writeManifest(packageSpecs);
|
||||
console.log("[feynman] stripped bundled pi-subagents model pins");
|
||||
}
|
||||
if (archiveIsCurrent()) {
|
||||
process.exit(0);
|
||||
}
|
||||
@@ -157,6 +244,7 @@ if (workspaceIsCurrent(packageSpecs)) {
|
||||
console.log("[feynman] preparing vendored runtime workspace...");
|
||||
prepareWorkspace(packageSpecs);
|
||||
pruneWorkspace();
|
||||
stripBundledPiSubagentModelPins();
|
||||
writeManifest(packageSpecs);
|
||||
createWorkspaceArchive();
|
||||
console.log("[feynman] vendored runtime workspace ready");
|
||||
|
||||
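The script ends by calling `createWorkspaceArchive()`, which sits outside every hunk shown here. A sketch of what that step plausibly does, assuming it shells out to `tar` (the real function may pack the workspace differently):

```javascript
// Hypothetical sketch; the actual createWorkspaceArchive in this script is
// not shown and may use different flags or a tar library instead.
import { spawnSync } from "node:child_process";

function workspaceArchiveArgs(workspaceDir, archivePath) {
  // -C roots the archive at the workspace so it unpacks relative, not absolute.
  return ["-czf", archivePath, "-C", workspaceDir, "."];
}

function createWorkspaceArchiveSketch(workspaceDir, archivePath) {
  const result = spawnSync("tar", workspaceArchiveArgs(workspaceDir, archivePath), { stdio: "inherit" });
  if (result.status !== 0) {
    throw new Error(`tar exited with status ${result.status}`);
  }
}
```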
@@ -558,6 +558,7 @@ export async function main(): Promise<void> {
     normalizeFeynmanSettings(feynmanSettingsPath, bundledSettingsPath, thinkingLevel, feynmanAuthPath);
   }
 
+  const workflowCommandNames = new Set(readPromptSpecs(appRoot).filter((s) => s.topLevelCli).map((s) => s.name));
   await launchPiChat({
     appRoot,
     workingDir,
@@ -568,6 +569,6 @@ export async function main(): Promise<void> {
     thinkingLevel,
     explicitModelSpec,
     oneShotPrompt: values.prompt,
-    initialPrompt: resolveInitialPrompt(command, rest, values.prompt, new Set(readPromptSpecs(appRoot).filter((s) => s.topLevelCli).map((s) => s.name))),
+    initialPrompt: resolveInitialPrompt(command, rest, values.prompt, workflowCommandNames),
   });
 }
@@ -48,6 +48,7 @@ const PROVIDER_LABELS: Record<string, string> = {
   huggingface: "Hugging Face",
   "amazon-bedrock": "Amazon Bedrock",
   "azure-openai-responses": "Azure OpenAI Responses",
+  litellm: "LiteLLM Proxy",
 };
 
 const RESEARCH_MODEL_PREFERENCES = [
@@ -83,6 +83,8 @@ const API_KEY_PROVIDERS: ApiKeyProviderInfo[] = [
   { id: "openai", label: "OpenAI Platform API", envVar: "OPENAI_API_KEY" },
   { id: "anthropic", label: "Anthropic API", envVar: "ANTHROPIC_API_KEY" },
   { id: "google", label: "Google Gemini API", envVar: "GEMINI_API_KEY" },
+  { id: "lm-studio", label: "LM Studio (local OpenAI-compatible server)" },
+  { id: "litellm", label: "LiteLLM Proxy (OpenAI-compatible gateway)" },
   { id: "__custom__", label: "Custom provider (local/self-hosted/proxy)" },
   { id: "amazon-bedrock", label: "Amazon Bedrock (AWS credential chain)" },
   { id: "openrouter", label: "OpenRouter", envVar: "OPENROUTER_API_KEY" },
@@ -126,13 +128,24 @@ export function resolveModelProviderForCommand(
   return undefined;
 }
 
+function apiKeyProviderHint(provider: ApiKeyProviderInfo): string {
+  if (provider.id === "__custom__") {
+    return "Ollama, vLLM, LM Studio, proxies";
+  }
+  if (provider.id === "lm-studio") {
+    return "http://localhost:1234/v1";
+  }
+  if (provider.id === "litellm") {
+    return "http://localhost:4000/v1";
+  }
+  return provider.envVar ?? provider.id;
+}
+
 async function selectApiKeyProvider(): Promise<ApiKeyProviderInfo | undefined> {
   const options: PromptSelectOption<ApiKeyProviderInfo | "cancel">[] = API_KEY_PROVIDERS.map((provider) => ({
     value: provider,
     label: provider.label,
-    hint: provider.id === "__custom__"
-      ? "Ollama, vLLM, LM Studio, proxies"
-      : provider.envVar ?? provider.id,
+    hint: apiKeyProviderHint(provider),
   }));
   options.push({ value: "cancel", label: "Cancel" });
 
@@ -362,6 +375,103 @@ async function promptCustomProviderSetup(): Promise<CustomProviderSetup | undefi
   return { providerId, modelIds, baseUrl, api, apiKeyConfig, authHeader };
 }
 
+async function promptLmStudioProviderSetup(): Promise<CustomProviderSetup | undefined> {
+  printSection("LM Studio");
+  printInfo("Start the LM Studio local server first, then load a model.");
+
+  const baseUrlRaw = await promptText("Base URL", "http://localhost:1234/v1");
+  const { baseUrl } = normalizeCustomProviderBaseUrl("openai-completions", baseUrlRaw);
+  if (!baseUrl) {
+    printWarning("Base URL is required.");
+    return undefined;
+  }
+
+  const detectedModelIds = await bestEffortFetchOpenAiModelIds(baseUrl, "lm-studio", false);
+  let modelIdsDefault = "local-model";
+  if (detectedModelIds && detectedModelIds.length > 0) {
+    const sample = detectedModelIds.slice(0, 10).join(", ");
+    printInfo(`Detected LM Studio models: ${sample}${detectedModelIds.length > 10 ? ", ..." : ""}`);
+    modelIdsDefault = detectedModelIds[0]!;
+  } else {
+    printInfo("No models detected from /models. Enter the exact model id shown in LM Studio.");
+  }
+
+  const modelIdsRaw = await promptText("Model id(s) (comma-separated)", modelIdsDefault);
+  const modelIds = normalizeModelIds(modelIdsRaw);
+  if (modelIds.length === 0) {
+    printWarning("At least one model id is required.");
+    return undefined;
+  }
+
+  return {
+    providerId: "lm-studio",
+    modelIds,
+    baseUrl,
+    api: "openai-completions",
+    apiKeyConfig: "lm-studio",
+    authHeader: false,
+  };
+}
+
+async function promptLiteLlmProviderSetup(): Promise<CustomProviderSetup | undefined> {
+  printSection("LiteLLM Proxy");
+  printInfo("Start the LiteLLM proxy first. Feynman uses the OpenAI-compatible chat-completions API.");
+
+  const baseUrlRaw = await promptText("Base URL", "http://localhost:4000/v1");
+  const { baseUrl } = normalizeCustomProviderBaseUrl("openai-completions", baseUrlRaw);
+  if (!baseUrl) {
+    printWarning("Base URL is required.");
+    return undefined;
+  }
+
+  const keyChoices = [
+    "Yes (use LITELLM_MASTER_KEY and send Authorization: Bearer <key>)",
+    "No (proxy runs without authentication)",
+    "Cancel",
+  ];
+  const keySelection = await promptChoice("Is the proxy protected by a master key?", keyChoices, 0);
+  if (keySelection >= 2) {
+    return undefined;
+  }
+
+  const hasKey = keySelection === 0;
+  const apiKeyConfig = hasKey ? "LITELLM_MASTER_KEY" : "local";
+  const authHeader = hasKey;
+  if (hasKey) {
+    printInfo("Set LITELLM_MASTER_KEY in your shell or .env before using Feynman.");
+  }
+
+  const resolvedKey = hasKey ? await resolveApiKeyConfig(apiKeyConfig) : apiKeyConfig;
+  const detectedModelIds = resolvedKey
+    ? await bestEffortFetchOpenAiModelIds(baseUrl, resolvedKey, authHeader)
+    : undefined;
+
+  let modelIdsDefault = "gpt-4";
+  if (detectedModelIds && detectedModelIds.length > 0) {
+    const sample = detectedModelIds.slice(0, 10).join(", ");
+    printInfo(`Detected LiteLLM models: ${sample}${detectedModelIds.length > 10 ? ", ..." : ""}`);
+    modelIdsDefault = detectedModelIds[0]!;
+  } else {
+    printInfo("No models detected from /models. Enter the model id(s) from your LiteLLM config.");
+  }
+
+  const modelIdsRaw = await promptText("Model id(s) (comma-separated)", modelIdsDefault);
+  const modelIds = normalizeModelIds(modelIdsRaw);
+  if (modelIds.length === 0) {
+    printWarning("At least one model id is required.");
+    return undefined;
+  }
+
+  return {
+    providerId: "litellm",
+    modelIds,
+    baseUrl,
+    api: "openai-completions",
+    apiKeyConfig,
+    authHeader,
+  };
+}
+
 async function verifyCustomProvider(setup: CustomProviderSetup, authPath: string): Promise<void> {
   const registry = createModelRegistry(authPath);
   const modelsError = registry.getError();
@@ -548,6 +658,56 @@ async function configureApiKeyProvider(authPath: string, providerId?: string): P
     return configureBedrockProvider(authPath);
   }
 
+  if (provider.id === "lm-studio") {
+    const setup = await promptLmStudioProviderSetup();
+    if (!setup) {
+      printInfo("LM Studio setup cancelled.");
+      return false;
+    }
+
+    const modelsJsonPath = getModelsJsonPath(authPath);
+    const result = upsertProviderConfig(modelsJsonPath, setup.providerId, {
+      baseUrl: setup.baseUrl,
+      apiKey: setup.apiKeyConfig,
+      api: setup.api,
+      authHeader: setup.authHeader,
+      models: setup.modelIds.map((id) => ({ id })),
+    });
+    if (!result.ok) {
+      printWarning(result.error);
+      return false;
+    }
+
+    printSuccess("Saved LM Studio provider.");
+    await verifyCustomProvider(setup, authPath);
+    return true;
+  }
+
+  if (provider.id === "litellm") {
+    const setup = await promptLiteLlmProviderSetup();
+    if (!setup) {
+      printInfo("LiteLLM setup cancelled.");
+      return false;
+    }
+
+    const modelsJsonPath = getModelsJsonPath(authPath);
+    const result = upsertProviderConfig(modelsJsonPath, setup.providerId, {
+      baseUrl: setup.baseUrl,
+      apiKey: setup.apiKeyConfig,
+      api: setup.api,
+      authHeader: setup.authHeader,
+      models: setup.modelIds.map((id) => ({ id })),
+    });
+    if (!result.ok) {
+      printWarning(result.error);
+      return false;
+    }
+
+    printSuccess("Saved LiteLLM provider.");
+    await verifyCustomProvider(setup, authPath);
+    return true;
+  }
+
   if (provider.id === "__custom__") {
     const setup = await promptCustomProviderSetup();
     if (!setup) {
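Both setup flows funnel the typed model list through `normalizeModelIds`, which is defined elsewhere in this file. Judging only from its call sites, it turns a comma-separated string into a clean list; a sketch consistent with that usage (the exact normalization rules are an assumption):

```javascript
// Sketch of a comma-separated model-id normalizer; the real normalizeModelIds
// in this file may differ (this version trims, drops empties, and dedupes).
function normalizeModelIdsSketch(raw) {
  return Array.from(
    new Set(
      raw
        .split(",")
        .map((id) => id.trim())
        .filter((id) => id.length > 0),
    ),
  );
}
```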
@@ -1,11 +1,41 @@
 import { dirname, resolve } from "node:path";
 
 import { AuthStorage, ModelRegistry } from "@mariozechner/pi-coding-agent";
+import { getModels } from "@mariozechner/pi-ai";
+import { anthropicOAuthProvider } from "@mariozechner/pi-ai/oauth";
 
 export function getModelsJsonPath(authPath: string): string {
   return resolve(dirname(authPath), "models.json");
 }
 
-export function createModelRegistry(authPath: string): ModelRegistry {
-  return ModelRegistry.create(AuthStorage.create(authPath), getModelsJsonPath(authPath));
+function registerFeynmanModelOverlays(modelRegistry: ModelRegistry): void {
+  const anthropicModels = getModels("anthropic");
+  if (anthropicModels.some((model) => model.id === "claude-opus-4-7")) {
+    return;
+  }
+
+  const opus46 = anthropicModels.find((model) => model.id === "claude-opus-4-6");
+  if (!opus46) {
+    return;
+  }
+
+  modelRegistry.registerProvider("anthropic", {
+    baseUrl: "https://api.anthropic.com",
+    api: "anthropic-messages",
+    oauth: anthropicOAuthProvider,
+    models: [
+      ...anthropicModels,
+      {
+        ...opus46,
+        id: "claude-opus-4-7",
+        name: "Claude Opus 4.7",
+      },
+    ],
+  });
+}
+
+export function createModelRegistry(authPath: string): ModelRegistry {
+  const registry = ModelRegistry.create(AuthStorage.create(authPath), getModelsJsonPath(authPath));
+  registerFeynmanModelOverlays(registry);
+  return registry;
 }
@@ -1,5 +1,5 @@
 import { spawn } from "node:child_process";
-import { cpSync, existsSync, lstatSync, mkdirSync, readlinkSync, rmSync, symlinkSync, writeFileSync } from "node:fs";
+import { cpSync, existsSync, lstatSync, mkdirSync, readdirSync, readFileSync, readlinkSync, rmSync, symlinkSync, writeFileSync } from "node:fs";
 import { fileURLToPath } from "node:url";
 import { dirname, join, resolve } from "node:path";
 
@@ -169,6 +169,15 @@ function resolvePackageManagerCommand(settingsManager: SettingsManager): { comma
   return { command: executable, args };
 }
 
+function childPackageManagerEnv(): NodeJS.ProcessEnv {
+  return {
+    ...process.env,
+    PATH: getPathWithCurrentNode(process.env.PATH),
+    npm_config_dry_run: "false",
+    NPM_CONFIG_DRY_RUN: "false",
+  };
+}
+
 async function runPackageManagerInstall(
   settingsManager: SettingsManager,
   workingDir: string,
@@ -207,10 +216,7 @@ async function runPackageManagerInstall(
   const child = spawn(packageManagerCommand.command, args, {
     cwd: scope === "user" ? agentDir : workingDir,
     stdio: ["ignore", "pipe", "pipe"],
-    env: {
-      ...process.env,
-      PATH: getPathWithCurrentNode(process.env.PATH),
-    },
+    env: childPackageManagerEnv(),
   });
 
   child.stdout?.on("data", (chunk) => relayFilteredOutput(chunk, process.stdout));
@@ -423,6 +429,86 @@ function linkDirectory(linkPath: string, targetPath: string): void {
   }
 }
 
+function packageNameToPath(root: string, packageName: string): string {
+  return resolve(root, packageName);
+}
+
+function listBundledWorkspacePackageNames(root: string): string[] {
+  if (!existsSync(root)) {
+    return [];
+  }
+
+  const names: string[] = [];
+  for (const entry of readdirSync(root, { withFileTypes: true })) {
+    if (!entry.isDirectory() && !entry.isSymbolicLink()) continue;
+    if (entry.name.startsWith(".")) continue;
+    if (entry.name.startsWith("@")) {
+      const scopeRoot = resolve(root, entry.name);
+      for (const scopedEntry of readdirSync(scopeRoot, { withFileTypes: true })) {
+        if (!scopedEntry.isDirectory() && !scopedEntry.isSymbolicLink()) continue;
+        names.push(`${entry.name}/${scopedEntry.name}`);
+      }
+      continue;
+    }
+    names.push(entry.name);
+  }
+  return names;
+}
+
+function packageDependencyExists(packagePath: string, globalNodeModulesRoot: string, dependency: string): boolean {
+  return existsSync(packageNameToPath(resolve(packagePath, "node_modules"), dependency)) ||
+    existsSync(packageNameToPath(globalNodeModulesRoot, dependency));
+}
+
+function installedPackageLooksUsable(packagePath: string, globalNodeModulesRoot: string): boolean {
+  if (!existsSync(resolve(packagePath, "package.json"))) {
+    return false;
+  }
+
+  try {
+    const pkg = JSON.parse(readFileSync(resolve(packagePath, "package.json"), "utf8")) as {
+      dependencies?: Record<string, string>;
+    };
+    const dependencies = Object.keys(pkg.dependencies ?? {});
+    return dependencies.every((dependency) => packageDependencyExists(packagePath, globalNodeModulesRoot, dependency));
+  } catch {
+    return false;
+  }
+}
+
+function replaceBrokenPackageWithBundledCopy(targetPath: string, bundledPackagePath: string, globalNodeModulesRoot: string): boolean {
+  if (!existsSync(targetPath)) {
+    return false;
+  }
+  if (pathsMatchSymlinkTarget(targetPath, bundledPackagePath)) {
+    return false;
+  }
+  if (installedPackageLooksUsable(targetPath, globalNodeModulesRoot)) {
+    return false;
+  }
+
+  rmSync(targetPath, { recursive: true, force: true });
+  linkDirectory(targetPath, bundledPackagePath);
+  return true;
+}
+
+function seedBundledPackage(globalNodeModulesRoot: string, bundledNodeModulesRoot: string, packageName: string): boolean {
+  const bundledPackagePath = resolve(bundledNodeModulesRoot, packageName);
+  if (!existsSync(bundledPackagePath)) {
+    return false;
+  }
+
+  const targetPath = resolve(globalNodeModulesRoot, packageName);
+  if (replaceBrokenPackageWithBundledCopy(targetPath, bundledPackagePath, globalNodeModulesRoot)) {
+    return true;
+  }
+  if (!existsSync(targetPath)) {
+    linkDirectory(targetPath, bundledPackagePath);
+    return true;
+  }
+  return false;
+}
+
 export function seedBundledWorkspacePackages(
   agentDir: string,
   appRoot: string,
@@ -435,6 +521,10 @@ export function seedBundledWorkspacePackages(
 
   const globalNodeModulesRoot = resolve(getFeynmanNpmPrefixPath(agentDir), "lib", "node_modules");
   const seeded: string[] = [];
+  const bundledPackageNames = listBundledWorkspacePackageNames(bundledNodeModulesRoot);
+  for (const packageName of bundledPackageNames) {
+    seedBundledPackage(globalNodeModulesRoot, bundledNodeModulesRoot, packageName);
+  }
 
   for (const source of sources) {
     if (shouldSkipNativeSource(source)) continue;
@@ -442,12 +532,8 @@ export function seedBundledWorkspacePackages(
     const parsed = parseNpmSource(source);
     if (!parsed) continue;
 
-    const bundledPackagePath = resolve(bundledNodeModulesRoot, parsed.name);
-    if (!existsSync(bundledPackagePath)) continue;
-
     const targetPath = resolve(globalNodeModulesRoot, parsed.name);
-    if (!existsSync(targetPath)) {
-      linkDirectory(targetPath, bundledPackagePath);
+    if (pathsMatchSymlinkTarget(targetPath, resolve(bundledNodeModulesRoot, parsed.name))) {
       seeded.push(source);
     }
   }
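`seedBundledPackage` and the rewritten seeding loop both lean on `pathsMatchSymlinkTarget`, defined earlier in this file and not shown in the diff. A sketch of the likely check, assuming it compares a symlink's resolved target against the bundled path:

```javascript
// Hypothetical sketch; the real pathsMatchSymlinkTarget may also treat a
// copied directory (a cpSync fallback) as a "match" via other means.
import { lstatSync, readlinkSync } from "node:fs";
import { dirname, resolve } from "node:path";

function pathsMatchSymlinkTargetSketch(linkPath, targetPath) {
  try {
    if (!lstatSync(linkPath).isSymbolicLink()) return false;
    // Relative symlink targets resolve against the link's own directory.
    return resolve(dirname(linkPath), readlinkSync(linkPath)) === resolve(targetPath);
  } catch {
    return false; // missing path or unreadable link counts as no match
  }
}
```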
@@ -123,6 +123,8 @@ export function buildPiEnv(options: PiRuntimeOptions): NodeJS.ProcessEnv {
     FEYNMAN_BIN_PATH: resolve(options.appRoot, "bin", "feynman.js"),
     FEYNMAN_NPM_PREFIX: feynmanNpmPrefixPath,
+    // Ensure the Pi child process uses Feynman's agent dir for auth/models/settings.
+    // Patched Pi uses FEYNMAN_CODING_AGENT_DIR; upstream Pi uses PI_CODING_AGENT_DIR.
     FEYNMAN_CODING_AGENT_DIR: options.feynmanAgentDir,
     PI_CODING_AGENT_DIR: options.feynmanAgentDir,
     PANDOC_PATH: process.env.PANDOC_PATH ?? resolveExecutable("pandoc", PANDOC_FALLBACK_PATHS),
     PI_HARDWARE_CURSOR: process.env.PI_HARDWARE_CURSOR ?? "1",
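`buildPiEnv` falls back to `resolveExecutable("pandoc", PANDOC_FALLBACK_PATHS)` when `PANDOC_PATH` is unset; that helper and the fallback list are defined outside this hunk. A sketch of the likely lookup, assuming it probes candidate absolute paths and otherwise hands back the bare command name for PATH resolution:

```javascript
// Hypothetical sketch; the real resolveExecutable may scan PATH itself or
// return undefined on a miss instead of the bare name.
import { existsSync } from "node:fs";

function resolveExecutableSketch(name, fallbackPaths) {
  for (const candidate of fallbackPaths) {
    if (existsSync(candidate)) return candidate;
  }
  return name; // let the child process resolve it via PATH
}
```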
@@ -30,3 +30,103 @@ test("bundled prompts and skills do not contain blocked promotional product cont
     }
   }
 });
+
+test("research writing prompts forbid fabricated results and unproven figures", () => {
+  const draftPrompt = readFileSync(join(repoRoot, "prompts", "draft.md"), "utf8");
+  const systemPrompt = readFileSync(join(repoRoot, ".feynman", "SYSTEM.md"), "utf8");
+  const writerPrompt = readFileSync(join(repoRoot, ".feynman", "agents", "writer.md"), "utf8");
+  const verifierPrompt = readFileSync(join(repoRoot, ".feynman", "agents", "verifier.md"), "utf8");
+
+  for (const [label, content] of [
+    ["system prompt", systemPrompt],
+  ] as const) {
+    assert.match(content, /Never (invent|fabricate)/i, `${label} must explicitly forbid invented or fabricated results`);
+    assert.match(content, /(figure|chart|image|table)/i, `${label} must cover visual/table provenance`);
+    assert.match(content, /(provenance|source|artifact|script|raw)/i, `${label} must require traceable support`);
+  }
+
+  for (const [label, content] of [
+    ["writer prompt", writerPrompt],
+    ["verifier prompt", verifierPrompt],
+    ["draft prompt", draftPrompt],
+  ] as const) {
+    assert.match(content, /system prompt.*provenance rule/i, `${label} must point back to the system provenance rule`);
+  }
+
+  assert.match(draftPrompt, /system prompt's provenance rules/i);
+  assert.match(draftPrompt, /placeholder or proposed experimental plan/i);
+  assert.match(draftPrompt, /source-backed quantitative data/i);
+});
+
+test("deepresearch workflow requires durable artifacts even when blocked", () => {
+  const systemPrompt = readFileSync(join(repoRoot, ".feynman", "SYSTEM.md"), "utf8");
+  const deepResearchPrompt = readFileSync(join(repoRoot, "prompts", "deepresearch.md"), "utf8");
+
+  assert.match(systemPrompt, /Do not claim you are only a static model/i);
+  assert.match(systemPrompt, /write the requested durable artifact/i);
+  assert.match(deepResearchPrompt, /Do not stop after planning/i);
+  assert.match(deepResearchPrompt, /not a request to explain or implement/i);
+  assert.match(deepResearchPrompt, /Do not answer by describing the protocol/i);
+  assert.match(deepResearchPrompt, /degraded mode/i);
+  assert.match(deepResearchPrompt, /Verification: BLOCKED/i);
+  assert.match(deepResearchPrompt, /Never end with only an explanation in chat/i);
+});
+
+test("deepresearch citation and review stages are sequential and avoid giant edits", () => {
+  const deepResearchPrompt = readFileSync(join(repoRoot, "prompts", "deepresearch.md"), "utf8");
+
+  assert.match(deepResearchPrompt, /must complete before any reviewer runs/i);
+  assert.match(deepResearchPrompt, /Do not run the `verifier` and `reviewer` in the same parallel `subagent` call/i);
+  assert.match(deepResearchPrompt, /outputs\/\.drafts\/<slug>-cited\.md/i);
+  assert.match(deepResearchPrompt, /do not issue one giant `edit` tool call/i);
+  assert.match(deepResearchPrompt, /outputs\/\.drafts\/<slug>-revised\.md/i);
+  assert.match(deepResearchPrompt, /The final candidate is `outputs\/\.drafts\/<slug>-revised\.md` if it exists/i);
+});
+
+test("deepresearch keeps subagent tool calls small and skips subagents for narrow explainers", () => {
+  const deepResearchPrompt = readFileSync(join(repoRoot, "prompts", "deepresearch.md"), "utf8");
+
+  assert.match(deepResearchPrompt, /including "what is X" explainers/i);
+  assert.match(deepResearchPrompt, /Make the scale decision before assigning owners/i);
+  assert.match(deepResearchPrompt, /lead-owned direct search tasks only/i);
+  assert.match(deepResearchPrompt, /MUST NOT spawn researcher subagents/i);
+  assert.match(deepResearchPrompt, /Do not inflate a simple explainer into a multi-agent survey/i);
+  assert.match(deepResearchPrompt, /Skip researcher spawning entirely/i);
+  assert.match(deepResearchPrompt, /<slug>-research-direct\.md/i);
+  assert.match(deepResearchPrompt, /Do not call `alpha_get_paper`/i);
+  assert.match(deepResearchPrompt, /do not fetch `\.pdf` URLs/i);
+  assert.match(deepResearchPrompt, /Keep `subagent` tool-call JSON small and valid/i);
+  assert.match(deepResearchPrompt, /write a per-researcher brief first/i);
+  assert.match(deepResearchPrompt, /Do not place multi-paragraph instructions inside the `subagent` JSON/i);
+  assert.match(deepResearchPrompt, /Do not add extra keys such as `artifacts`/i);
+  assert.match(deepResearchPrompt, /always set `failFast: false`/i);
+  assert.match(deepResearchPrompt, /if a PDF parser or paper fetch fails/i);
+});
+
+test("workflow prompts do not introduce implicit confirmation gates", () => {
+  const workflowPrompts = [
+    "audit.md",
+    "compare.md",
+    "deepresearch.md",
+    "draft.md",
+    "lit.md",
+    "review.md",
+    "summarize.md",
+    "watch.md",
+  ];
+  const bannedConfirmationGates = [
+    /Do you want to proceed/i,
+    /Wait for confirmation/i,
+    /wait for user confirmation/i,
+    /give them a brief chance/i,
+    /request changes before proceeding/i,
+  ];
+
+  for (const fileName of workflowPrompts) {
+    const content = readFileSync(join(repoRoot, "prompts", fileName), "utf8");
+    assert.match(content, /continue (immediately|automatically)/i, `${fileName} should keep running after planning`);
+    for (const pattern of bannedConfirmationGates) {
+      assert.doesNotMatch(content, pattern, `${fileName} contains confirmation gate ${pattern}`);
+    }
+  }
+});
@@ -7,6 +7,7 @@ import { join } from "node:path";
 import { resolveInitialPrompt, shouldRunInteractiveSetup } from "../src/cli.js";
 import { buildModelStatusSnapshotFromRecords, chooseRecommendedModel } from "../src/model/catalog.js";
 import { resolveModelProviderForCommand, setDefaultModelSpec } from "../src/model/commands.js";
+import { createModelRegistry } from "../src/model/registry.js";
 
 function createAuthPath(contents: Record<string, unknown>): string {
   const root = mkdtempSync(join(tmpdir(), "feynman-auth-"));
@@ -26,6 +27,17 @@ test("chooseRecommendedModel prefers the strongest authenticated research model"
   assert.equal(recommendation?.spec, "anthropic/claude-opus-4-6");
 });
 
+test("createModelRegistry overlays new Anthropic Opus model before upstream Pi updates", () => {
+  const authPath = createAuthPath({
+    anthropic: { type: "api_key", key: "anthropic-test-key" },
+  });
+
+  const registry = createModelRegistry(authPath);
+
+  assert.ok(registry.find("anthropic", "claude-opus-4-7"));
+  assert.equal(registry.getAvailable().some((model) => model.provider === "anthropic" && model.id === "claude-opus-4-7"), true);
+});
+
 test("setDefaultModelSpec accepts a unique bare model id from authenticated models", () => {
   const authPath = createAuthPath({
     openai: { type: "api_key", key: "openai-test-key" },
@@ -67,6 +79,24 @@ test("resolveModelProviderForCommand falls back to API-key providers when OAuth
   assert.equal(resolved?.id, "google");
 });
 
+test("resolveModelProviderForCommand supports LM Studio as a first-class local provider", () => {
+  const authPath = createAuthPath({});
+
+  const resolved = resolveModelProviderForCommand(authPath, "lm-studio");
+
+  assert.equal(resolved?.kind, "api-key");
+  assert.equal(resolved?.id, "lm-studio");
+});
+
+test("resolveModelProviderForCommand supports LiteLLM as a first-class proxy provider", () => {
+  const authPath = createAuthPath({});
+
+  const resolved = resolveModelProviderForCommand(authPath, "litellm");
+
+  assert.equal(resolved?.kind, "api-key");
+  assert.equal(resolved?.id, "litellm");
+});
+
 test("resolveModelProviderForCommand prefers OAuth when a provider supports both auth modes", () => {
   const authPath = createAuthPath({});
 
@@ -30,3 +30,45 @@ test("upsertProviderConfig creates models.json and merges provider config", () =
	assert.equal(parsed.providers.custom.authHeader, true);
	assert.deepEqual(parsed.providers.custom.models, [{ id: "llama3.1:8b" }]);
});

test("upsertProviderConfig writes LiteLLM proxy config with master key", () => {
	const dir = mkdtempSync(join(tmpdir(), "feynman-litellm-"));
	const modelsPath = join(dir, "models.json");

	const result = upsertProviderConfig(modelsPath, "litellm", {
		baseUrl: "http://localhost:4000/v1",
		apiKey: "LITELLM_MASTER_KEY",
		api: "openai-completions",
		authHeader: true,
		models: [{ id: "gpt-4o" }],
	});
	assert.deepEqual(result, { ok: true });

	const parsed = JSON.parse(readFileSync(modelsPath, "utf8")) as any;
	assert.equal(parsed.providers.litellm.baseUrl, "http://localhost:4000/v1");
	assert.equal(parsed.providers.litellm.apiKey, "LITELLM_MASTER_KEY");
	assert.equal(parsed.providers.litellm.api, "openai-completions");
	assert.equal(parsed.providers.litellm.authHeader, true);
	assert.deepEqual(parsed.providers.litellm.models, [{ id: "gpt-4o" }]);
});

test("upsertProviderConfig writes LiteLLM proxy config without master key", () => {
	const dir = mkdtempSync(join(tmpdir(), "feynman-litellm-"));
	const modelsPath = join(dir, "models.json");

	const result = upsertProviderConfig(modelsPath, "litellm", {
		baseUrl: "http://localhost:4000/v1",
		apiKey: "local",
		api: "openai-completions",
		authHeader: false,
		models: [{ id: "llama3" }],
	});
	assert.deepEqual(result, { ok: true });

	const parsed = JSON.parse(readFileSync(modelsPath, "utf8")) as any;
	assert.equal(parsed.providers.litellm.baseUrl, "http://localhost:4000/v1");
	assert.equal(parsed.providers.litellm.apiKey, "local");
	assert.equal(parsed.providers.litellm.api, "openai-completions");
	assert.equal(parsed.providers.litellm.authHeader, false);
	assert.deepEqual(parsed.providers.litellm.models, [{ id: "llama3" }]);
});
@@ -6,13 +6,17 @@ import { join, resolve } from "node:path";

import { installPackageSources, seedBundledWorkspacePackages, updateConfiguredPackages } from "../src/pi/package-ops.js";

function createBundledWorkspace(appRoot: string, packageNames: string[]): void {
function createBundledWorkspace(
	appRoot: string,
	packageNames: string[],
	dependenciesByPackage: Record<string, Record<string, string>> = {},
): void {
	for (const packageName of packageNames) {
		const packageDir = resolve(appRoot, ".feynman", "npm", "node_modules", packageName);
		mkdirSync(packageDir, { recursive: true });
		writeFileSync(
			join(packageDir, "package.json"),
			JSON.stringify({ name: packageName, version: "1.0.0" }, null, 2) + "\n",
			JSON.stringify({ name: packageName, version: "1.0.0", dependencies: dependenciesByPackage[packageName] }, null, 2) + "\n",
			"utf8",
		);
	}

@@ -76,6 +80,34 @@ test("seedBundledWorkspacePackages preserves existing installed packages", () =>
	assert.equal(lstatSync(existingPackageDir).isSymbolicLink(), false);
});

test("seedBundledWorkspacePackages repairs broken existing bundled packages", () => {
	const appRoot = mkdtempSync(join(tmpdir(), "feynman-bundle-"));
	const homeRoot = mkdtempSync(join(tmpdir(), "feynman-home-"));
	const agentDir = resolve(homeRoot, "agent");
	const existingPackageDir = resolve(homeRoot, "npm-global", "lib", "node_modules", "pi-markdown-preview");

	mkdirSync(agentDir, { recursive: true });
	createBundledWorkspace(appRoot, ["pi-markdown-preview", "puppeteer-core"], {
		"pi-markdown-preview": { "puppeteer-core": "^24.0.0" },
	});
	mkdirSync(existingPackageDir, { recursive: true });
	writeFileSync(
		resolve(existingPackageDir, "package.json"),
		JSON.stringify({ name: "pi-markdown-preview", version: "broken", dependencies: { "puppeteer-core": "^24.0.0" } }) + "\n",
		"utf8",
	);

	const seeded = seedBundledWorkspacePackages(agentDir, appRoot, ["npm:pi-markdown-preview"]);

	assert.deepEqual(seeded, ["npm:pi-markdown-preview"]);
	assert.equal(lstatSync(existingPackageDir).isSymbolicLink(), true);
	assert.equal(lstatSync(resolve(homeRoot, "npm-global", "lib", "node_modules", "puppeteer-core")).isSymbolicLink(), true);
	assert.equal(
		readFileSync(resolve(existingPackageDir, "package.json"), "utf8").includes('"version": "1.0.0"'),
		true,
	);
});

test("installPackageSources filters noisy npm chatter but preserves meaningful output", async () => {
	const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
	const workingDir = resolve(root, "project");

@@ -156,6 +188,46 @@ test("installPackageSources skips native packages on unsupported Node majors bef
	}
});

test("installPackageSources disables inherited npm dry-run config for child installs", async () => {
	const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
	const workingDir = resolve(root, "project");
	const agentDir = resolve(root, "agent");
	const markerPath = resolve(root, "install-env-ok.txt");
	mkdirSync(workingDir, { recursive: true });

	const scriptPath = writeFakeNpmScript(root, [
		`import { writeFileSync } from "node:fs";`,
		`if (process.env.npm_config_dry_run !== "false" || process.env.NPM_CONFIG_DRY_RUN !== "false") process.exit(42);`,
		`writeFileSync(${JSON.stringify(markerPath)}, "ok\\n", "utf8");`,
		"process.exit(0);",
	].join("\n"));

	writeSettings(agentDir, {
		npmCommand: [process.execPath, scriptPath],
	});

	const originalLower = process.env.npm_config_dry_run;
	const originalUpper = process.env.NPM_CONFIG_DRY_RUN;
	process.env.npm_config_dry_run = "true";
	process.env.NPM_CONFIG_DRY_RUN = "true";
	try {
		const result = await installPackageSources(workingDir, agentDir, ["npm:test-package"]);
		assert.deepEqual(result.installed, ["npm:test-package"]);
		assert.equal(existsSync(markerPath), true);
	} finally {
		if (originalLower === undefined) {
			delete process.env.npm_config_dry_run;
		} else {
			process.env.npm_config_dry_run = originalLower;
		}
		if (originalUpper === undefined) {
			delete process.env.NPM_CONFIG_DRY_RUN;
		} else {
			process.env.NPM_CONFIG_DRY_RUN = originalUpper;
		}
	}
});

test("updateConfiguredPackages batches multiple npm updates into a single install per scope", async () => {
	const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
	const workingDir = resolve(root, "project");

@@ -171,6 +243,10 @@ test("updateConfiguredPackages batches multiple npm updates into a single instal
	` console.log(resolve(${JSON.stringify(root)}, "npm-global", "lib", "node_modules"));`,
	` process.exit(0);`,
	`}`,
	`if (args.length >= 4 && args[0] === "view" && args[2] === "version" && args[3] === "--json") {`,
	` console.log(JSON.stringify("2.0.0"));`,
	` process.exit(0);`,
	`}`,
	`appendFileSync(${JSON.stringify(logPath)}, JSON.stringify(args) + "\\n", "utf8");`,
	"process.exit(0);",
].join("\n"));

@@ -186,7 +262,7 @@ test("updateConfiguredPackages batches multiple npm updates into a single instal
	globalThis.fetch = (async () => ({
		ok: true,
		json: async () => ({ version: "2.0.0" }),
	})) as typeof fetch;
	})) as unknown as typeof fetch;

	try {
		const result = await updateConfiguredPackages(workingDir, agentDir);

@@ -218,6 +294,10 @@ test("updateConfiguredPackages skips native package updates on unsupported Node
	` console.log(resolve(${JSON.stringify(root)}, "npm-global", "lib", "node_modules"));`,
	` process.exit(0);`,
	`}`,
	`if (args.length >= 4 && args[0] === "view" && args[2] === "version" && args[3] === "--json") {`,
	` console.log(JSON.stringify("2.0.0"));`,
	` process.exit(0);`,
	`}`,
	`appendFileSync(${JSON.stringify(logPath)}, JSON.stringify(args) + "\\n", "utf8");`,
	"process.exit(0);",
].join("\n"));

@@ -234,7 +314,7 @@ test("updateConfiguredPackages skips native package updates on unsupported Node
	globalThis.fetch = (async () => ({
		ok: true,
		json: async () => ({ version: "2.0.0" }),
	})) as typeof fetch;
	})) as unknown as typeof fetch;
	Object.defineProperty(process.versions, "node", { value: "25.0.0", configurable: true });

	try {
@@ -54,6 +54,7 @@ test("buildPiEnv wires Feynman paths into the Pi environment", () => {
	assert.equal(env.FEYNMAN_NPM_PREFIX, "/home/.feynman/npm-global");
	assert.equal(env.NPM_CONFIG_PREFIX, "/home/.feynman/npm-global");
	assert.equal(env.npm_config_prefix, "/home/.feynman/npm-global");
	assert.equal(env.FEYNMAN_CODING_AGENT_DIR, "/home/.feynman/agent");
	assert.equal(env.PI_CODING_AGENT_DIR, "/home/.feynman/agent");
	assert.ok(
		env.PATH?.startsWith(
@@ -1,7 +1,7 @@
import test from "node:test";
import assert from "node:assert/strict";

import { patchPiSubagentsSource } from "../scripts/lib/pi-subagents-patch.mjs";
import { patchPiSubagentsSource, stripPiSubagentBuiltinModelSource } from "../scripts/lib/pi-subagents-patch.mjs";

const CASES = [
	{

@@ -83,7 +83,7 @@ for (const scenario of CASES) {
	const patched = patchPiSubagentsSource(scenario.file, scenario.input);

	assert.match(patched, /function resolvePiAgentDir\(\): string \{/);
	assert.match(patched, /process\.env\.PI_CODING_AGENT_DIR\?\.trim\(\)/);
	assert.match(patched, /process\.env\.FEYNMAN_CODING_AGENT_DIR\?\.trim\(\) \|\| process\.env\.PI_CODING_AGENT_DIR\?\.trim\(\)/);
	assert.ok(patched.includes(scenario.expected));
	assert.ok(!patched.includes(scenario.original));
});

@@ -140,3 +140,155 @@ test("patchPiSubagentsSource rewrites modern agents.ts discovery paths", () => {
	assert.ok(!patched.includes('loadChainsFromDir(userDirNew, "user")'));
	assert.ok(!patched.includes('fs.existsSync(userDirNew) ? userDirNew : userDirOld'));
});

test("patchPiSubagentsSource preserves output on top-level parallel tasks", () => {
	const input = [
		"interface TaskParam {",
		"\tagent: string;",
		"\ttask: string;",
		"\tcwd?: string;",
		"\tcount?: number;",
		"\tmodel?: string;",
		"\tskill?: string | string[] | boolean;",
		"}",
		"function run(params: { tasks: TaskParam[] }) {",
		"\tconst modelOverrides = params.tasks.map(() => undefined);",
		"\tconst skillOverrides = params.tasks.map(() => undefined);",
		"\tconst parallelTasks = params.tasks.map((task, index) => ({",
		"\t\tagent: task.agent,",
		"\t\ttask: params.context === \"fork\" ? wrapForkTask(task.task) : task.task,",
		"\t\tcwd: task.cwd,",
		"\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
		"\t\t...(skillOverrides[index] !== undefined ? { skill: skillOverrides[index] } : {}),",
		"\t}));",
		"}",
	].join("\n");

	const patched = patchPiSubagentsSource("subagent-executor.ts", input);

	assert.match(patched, /output\?: string \| false;/);
	assert.match(patched, /\n\t\toutput: task\.output,/);
	assert.doesNotMatch(patched, /resolvePiAgentDir/);
});

test("patchPiSubagentsSource preserves output in async parallel task handoff", () => {
	const input = [
		"function run(tasks: TaskParam[]) {",
		"\tconst modelOverrides = tasks.map(() => undefined);",
		"\tconst skillOverrides = tasks.map(() => undefined);",
		"\tconst parallelTasks = tasks.map((t, i) => ({",
		"\t\tagent: t.agent,",
		"\t\ttask: params.context === \"fork\" ? wrapForkTask(taskTexts[i]!) : taskTexts[i]!,",
		"\t\tcwd: t.cwd,",
		"\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
		"\t\t...(skillOverrides[i] !== undefined ? { skill: skillOverrides[i] } : {}),",
		"\t}));",
		"}",
	].join("\n");

	const patched = patchPiSubagentsSource("subagent-executor.ts", input);

	assert.match(patched, /\n\t\toutput: t\.output,/);
});

test("patchPiSubagentsSource uses task output when resolving foreground parallel behavior", () => {
	const input = [
		"async function run(tasks: TaskParam[]) {",
		"\tconst skillOverrides = tasks.map((t) => normalizeSkillInput(t.skill));",
		"\tif (params.clarify === true && ctx.hasUI) {",
		"\t\tconst behaviors = agentConfigs.map((c, i) =>",
		"\t\t\tresolveStepBehavior(c, { skills: skillOverrides[i] }),",
		"\t\t);",
		"\t}",
		"\tconst behaviors = agentConfigs.map((config) => resolveStepBehavior(config, {}));",
		"}",
	].join("\n");

	const patched = patchPiSubagentsSource("subagent-executor.ts", input);

	assert.match(patched, /resolveStepBehavior\(c, \{ output: tasks\[i\]\?\.output, skills: skillOverrides\[i\] \}\)/);
	assert.match(patched, /resolveStepBehavior\(config, \{ output: tasks\[i\]\?\.output, skills: skillOverrides\[i\] \}\)/);
	assert.doesNotMatch(patched, /resolveStepBehavior\(config, \{\}\)/);
});

test("patchPiSubagentsSource passes foreground parallel output paths into runSync", () => {
	const input = [
		"async function runForegroundParallelTasks(input: ForegroundParallelRunInput): Promise<SingleResult[]> {",
		"\treturn mapConcurrent(input.tasks, input.concurrencyLimit, async (task, index) => {",
		"\t\tconst overrideSkills = input.skillOverrides[index];",
		"\t\tconst effectiveSkills = overrideSkills === undefined ? input.behaviors[index]?.skills : overrideSkills;",
		"\t\tconst taskCwd = resolveParallelTaskCwd(task, input.paramsCwd, input.worktreeSetup, index);",
		"\t\treturn runSync(input.ctx.cwd, input.agents, task.agent, input.taskTexts[index]!, {",
		"\t\t\tcwd: taskCwd,",
		"\t\t\tsignal: input.signal,",
		"\t\t\tmaxOutput: input.maxOutput,",
		"\t\t\tmaxSubagentDepth: input.maxSubagentDepths[index],",
		"\t\t});",
		"\t});",
		"}",
	].join("\n");

	const patched = patchPiSubagentsSource("subagent-executor.ts", input);

	assert.match(patched, /const outputPath = typeof input\.behaviors\[index\]\?\.output === "string"/);
	assert.match(patched, /const taskText = injectSingleOutputInstruction\(input\.taskTexts\[index\]!, outputPath\)/);
	assert.match(patched, /runSync\(input\.ctx\.cwd, input\.agents, task\.agent, taskText, \{/);
	assert.match(patched, /\n\t\t\toutputPath,/);
});

test("patchPiSubagentsSource documents output in top-level task schema", () => {
	const input = [
		"export const TaskItem = Type.Object({ ",
		"\tagent: Type.String(), ",
		"\ttask: Type.String(), ",
		"\tcwd: Type.Optional(Type.String()),",
		"\tcount: Type.Optional(Type.Integer({ minimum: 1, description: \"Repeat this parallel task N times with the same settings.\" })),",
		"\tmodel: Type.Optional(Type.String({ description: \"Override model for this task (e.g. 'google/gemini-3-pro')\" })),",
		"\tskill: Type.Optional(SkillOverride),",
		"});",
		"export const SubagentParams = Type.Object({",
		"\ttasks: Type.Optional(Type.Array(TaskItem, { description: \"PARALLEL mode: [{agent, task, count?}, ...]\" })),",
		"});",
	].join("\n");

	const patched = patchPiSubagentsSource("schemas.ts", input);

	assert.match(patched, /output: Type\.Optional\(Type\.Any/);
	assert.match(patched, /count\?, output\?/);
	assert.doesNotMatch(patched, /resolvePiAgentDir/);
});

test("patchPiSubagentsSource documents output in top-level parallel help", () => {
	const input = [
		'import * as os from "node:os";',
		'import * as path from "node:path";',
		"const help = `",
		"• PARALLEL: { tasks: [{agent,task,count?}, ...], concurrency?: number, worktree?: true } - concurrent execution (worktree: isolate each task in a git worktree)",
		"`;",
	].join("\n");

	const patched = patchPiSubagentsSource("index.ts", input);

	assert.match(patched, /output\?/);
	assert.match(patched, /per-task file target/);
	assert.doesNotMatch(patched, /function resolvePiAgentDir/);
});

test("stripPiSubagentBuiltinModelSource removes built-in model pins", () => {
	const input = [
		"---",
		"name: researcher",
		"description: Web researcher",
		"model: anthropic/claude-sonnet-4-6",
		"tools: read, web_search",
		"---",
		"",
		"Body",
	].join("\n");

	const patched = stripPiSubagentBuiltinModelSource(input);

	assert.ok(!patched.includes("model: anthropic/claude-sonnet-4-6"));
	assert.match(patched, /name: researcher/);
	assert.match(patched, /tools: read, web_search/);
});
12
website/package-lock.json
generated
@@ -1544,9 +1544,9 @@
      }
    },
    "node_modules/@hono/node-server": {
      "version": "1.19.13",
      "resolved": "https://registry.npmjs.org/@hono/node-server/-/node-server-1.19.13.tgz",
      "integrity": "sha512-TsQLe4i2gvoTtrHje625ngThGBySOgSK3Xo2XRYOdqGN1teR8+I7vchQC46uLJi8OF62YTYA3AhSpumtkhsaKQ==",
      "version": "1.19.14",
      "resolved": "https://registry.npmjs.org/@hono/node-server/-/node-server-1.19.14.tgz",
      "integrity": "sha512-GwtvgtXxnWsucXvbQXkRgqksiH2Qed37H9xHZocE5sA3N8O8O8/8FA3uclQXxXVzc9XBZuEOMK7+r02FmSpHtw==",
      "license": "MIT",
      "engines": {
        "node": ">=18.14.1"

@@ -7998,9 +7998,9 @@
      }
    },
    "node_modules/hono": {
      "version": "4.12.12",
      "resolved": "https://registry.npmjs.org/hono/-/hono-4.12.12.tgz",
      "integrity": "sha512-p1JfQMKaceuCbpJKAPKVqyqviZdS0eUxH9v82oWo1kb9xjQ5wA6iP3FNVAPDFlz5/p7d45lO+BpSk1tuSZMF4Q==",
      "version": "4.12.14",
      "resolved": "https://registry.npmjs.org/hono/-/hono-4.12.14.tgz",
      "integrity": "sha512-am5zfg3yu6sqn5yjKBNqhnTX7Cv+m00ox+7jbaKkrLMRJ4rAdldd1xPd/JzbBWspqaQv6RSTrgFN95EsfhC+7w==",
      "license": "MIT",
      "engines": {
        "node": ">=16.9.0"
@@ -36,8 +36,8 @@
  },
  "overrides": {
    "@modelcontextprotocol/sdk": {
      "@hono/node-server": "1.19.13",
      "hono": "4.12.12"
      "@hono/node-server": "1.19.14",
      "hono": "4.12.14"
    },
    "router": {
      "path-to-regexp": "8.4.2"
@@ -261,7 +261,7 @@ This usually means the release exists, but not all platform bundles were uploade
Workarounds:
- try again after the release finishes publishing
- pass the latest published version explicitly, e.g.:
    curl -fsSL https://feynman.is/install | bash -s -- 0.2.18
    curl -fsSL https://feynman.is/install | bash -s -- 0.2.31
EOF
exit 1
fi
@@ -110,7 +110,7 @@ This usually means the release exists, but not all platform bundles were uploade
Workarounds:
- try again after the release finishes publishing
- pass the latest published version explicitly, e.g.:
    & ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.18
    & ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.31
"@
}
@@ -117,13 +117,13 @@ These installers download the bundled `skills/` and `prompts/` trees plus the re
The one-line installer already targets the latest tagged release. To pin an exact version, pass it explicitly:

```bash
curl -fsSL https://feynman.is/install | bash -s -- 0.2.18
curl -fsSL https://feynman.is/install | bash -s -- 0.2.31
```

On Windows:

```powershell
& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.18
& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.31
```

## Post-install setup
@@ -52,9 +52,41 @@ Amazon Bedrock (AWS credential chain)

Feynman verifies the same AWS credential chain Pi uses at runtime, including `AWS_PROFILE`, `~/.aws` credentials/config, SSO, ECS/IRSA, and EC2 instance roles. Once that check passes, Bedrock models become available in `feynman model list` without needing a traditional API key.

### Local models: Ollama, LM Studio, vLLM
### Local models: LM Studio, LiteLLM, Ollama, vLLM

If you want to use a model running locally, choose the API-key flow and then select:
If you want to use LM Studio, start the LM Studio local server, load a model, choose the API-key flow, and then select:

```text
LM Studio (local OpenAI-compatible server)
```

The default settings are:

```text
Base URL: http://localhost:1234/v1
Authorization header: No
API key: lm-studio
```

Feynman attempts to read LM Studio's `/models` endpoint and prefill the loaded model id.

For LiteLLM, start the proxy, choose the API-key flow, and then select:

```text
LiteLLM Proxy (OpenAI-compatible gateway)
```

The default settings are:

```text
Base URL: http://localhost:4000/v1
API mode: openai-completions
Master key: optional, read from LITELLM_MASTER_KEY
```

Feynman attempts to read LiteLLM's `/models` endpoint and prefill model ids from the proxy config.

For Ollama, vLLM, or another OpenAI-compatible local server, choose:

```text
Custom provider (baseUrl + API key)

@@ -70,7 +102,7 @@ Model ids: llama3.1:8b
API key: local
```

That same custom-provider flow also works for other OpenAI-compatible local servers such as LM Studio or vLLM. After saving the provider, run:
After saving the provider, run:

```bash
feynman model list
@@ -22,7 +22,9 @@ These are installed by default with every Feynman installation. They provide the
| `pi-mermaid` | Render Mermaid diagrams in the terminal UI |
| `@aliou/pi-processes` | Manage long-running experiments, background tasks, and log tailing |
| `pi-zotero` | Integration with Zotero for citation library management |
| `@kaiserlich-dev/pi-session-search` | Indexed session recall with summarize and resume UI. Powers session lookup |
| `pi-schedule-prompt` | Schedule recurring and deferred research jobs. Powers the `/watch` workflow |
| `@samfp/pi-memory` | Pi-managed preference and correction memory across sessions |
| `@tmustier/pi-ralph-wiggum` | Long-running agent loops for iterative development. Powers `/autoresearch` |

These packages are updated together when you run `feynman update`. You do not need to install them individually.

@@ -34,8 +36,6 @@ Install on demand with `feynman packages install <preset>`. These extend Feynman
| Package | Preset | Purpose |
| --- | --- | --- |
| `pi-generative-ui` | `generative-ui` | Interactive HTML-style widgets for rich output |
| `@kaiserlich-dev/pi-session-search` | `session-search` | Indexed session recall with summarize and resume UI. Powers `/search` |
| `@samfp/pi-memory` | `memory` | Automatic preference and correction memory across sessions |

## Installing and managing packages

@@ -48,17 +48,9 @@ feynman packages list
Install a specific optional preset:

```bash
feynman packages install session-search
feynman packages install memory
feynman packages install generative-ui
```

Install all optional packages at once:

```bash
feynman packages install all-extras
```

## Updating packages

Update all installed packages to their latest versions:
@@ -35,6 +35,8 @@ When working from existing session context (after a deep research or literature

The writer pays attention to academic conventions: claims are attributed to their sources with inline citations, methodology sections describe procedures precisely, and limitations are discussed honestly. The draft includes placeholder sections for any content the writer cannot generate from available sources, clearly marking what needs human input.

Drafts follow Feynman's system-wide provenance rules: unsupported results, figures, images, tables, or benchmark data should become clearly labeled gaps or TODOs, not plausible-looking claims.

## Output format

The draft follows standard academic structure: