51 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Advait Paliwal | 66a7978582 | Require multiple search terms in deepresearch | 2026-04-17 21:44:31 -07:00 |
| Advait Paliwal | 3d46b581e0 | Make deepresearch execute reliably over RPC | 2026-04-17 18:52:57 -07:00 |
| Advait Paliwal | 40939859b9 | Fix subagent output paths and deepresearch robustness | 2026-04-17 18:00:24 -07:00 |
| Advait Paliwal | 6f3eeea75b | Fix Feynman runtime auth env | 2026-04-17 15:42:30 -07:00 |
| Advait Paliwal | 1b53e3b7f1 | Fix Pi subagent task outputs | 2026-04-17 14:16:57 -07:00 |
| Advait Paliwal | ec4cbfb57e | Update Pi runtime packages | 2026-04-17 13:45:16 -07:00 |
| Advait Paliwal | 1cd1a147f2 | Remove runtime hygiene extension bloat | 2026-04-17 11:47:18 -07:00 |
| Advait Paliwal | 92914acff7 | Add Pi event guards for workflow state | 2026-04-17 11:13:57 -07:00 |
| Advait Paliwal | f0bbb25910 | Use Pi runtime hooks for research context hygiene | 2026-04-17 10:38:42 -07:00 |
| Advait Paliwal | 9841342866 | Fix workflow continuation and provider setup gaps | 2026-04-17 09:47:38 -07:00 |
| Advait Paliwal | d30506c82a | Link bundled runtime dependencies for core packages | 2026-04-16 15:56:53 -07:00 |
| Advait Paliwal | c3f7f6ec08 | Add LM Studio setup and blocked research artifacts | 2026-04-16 15:39:01 -07:00 |
| Advait Paliwal | d2570188f9 | Add first-class LM Studio setup | 2026-04-16 15:34:32 -07:00 |
| Advait Paliwal | ca559dfd91 | Fix extension repair and add Opus 4.7 overlay | 2026-04-16 14:05:17 -07:00 |
| Advait Paliwal | 46b2aa93d0 | Skip release when npm version already exists | 2026-04-15 23:15:27 -07:00 |
| Advait Paliwal | 043e241464 | Deduplicate fabricated-results guardrails | 2026-04-15 22:53:38 -07:00 |
| Advait Paliwal | 501364da45 | Deduplicate draft guardrails under system prompt | 2026-04-15 22:50:04 -07:00 |
| Advait Paliwal | fe24224965 | Add system-wide guardrails against fabricated results | 2026-04-15 22:45:04 -07:00 |
| Advait Paliwal | 9bc59dad53 | Forbid fabricated draft results | 2026-04-15 22:38:51 -07:00 |
| Advait Paliwal | 7fd94c028e | Add star history chart to README | 2026-04-15 18:40:54 -07:00 |
| Advait Paliwal | 080bf8ad2c | Simplify publish workflow and restore auto release | 2026-04-15 18:17:28 -07:00 |
| Advait Paliwal | 82cafd10cc | Fix publish workflow dispatch context | 2026-04-15 18:15:20 -07:00 |
| Advait Paliwal | 419bcea3d1 | Prepare 0.2.18 release automation | 2026-04-15 18:10:56 -07:00 |
| Advait Paliwal | d5b6f9cd00 | Commit guided setup and clack dependency updates | 2026-04-15 17:58:10 -07:00 |
| Advait Paliwal | 8fade18b98 | Add regression coverage for package update reliability | 2026-04-15 17:48:22 -07:00 |
| Advait Paliwal | 66f1fe5ffc | Filter noisy package updates and skip native installs | 2026-04-15 17:37:04 -07:00 |
| Advait Paliwal | 01c2808606 | Fix package runtime and subagent reliability | 2026-04-15 13:51:06 -07:00 |
| Advait Paliwal | dd3c07633b | Fix Feynman onboarding and local install reliability | 2026-04-15 13:46:12 -07:00 |
| Advait Paliwal | fa259f5cea | Add npm install option to website | 2026-04-15 12:13:48 -07:00 |
| Advait Paliwal | 8fc7c0488c | Add Vercel analytics to website | 2026-04-15 10:53:10 -07:00 |
| Advait Paliwal | 455de783dc | feat: restore summarize workflow | 2026-04-14 13:34:03 -07:00 |
| Advait Paliwal | 01155cadbe | fix: replace deprecated astro transitions component | 2026-04-14 10:41:35 -07:00 |
| Advait Paliwal | 59af81c613 | fix: address review findings and clear root audit | 2026-04-14 09:48:36 -07:00 |
| Advait Paliwal | 0995f5cc22 | fix: tighten workflow prompts and search defaults | 2026-04-14 09:30:15 -07:00 |
| Advait Paliwal | af6486312d | chore: log belgium deepresearch validation | 2026-04-14 09:16:09 -07:00 |
| Advait Paliwal | 8de8054e4f | fix: improve WSL login fallback | 2026-04-14 09:01:44 -07:00 |
| Advait Paliwal | 5d10285372 | fix: require final research artifacts before exit | 2026-04-12 13:17:55 -07:00 |
| Advait Paliwal | 4f6574f233 | fix: unblock unattended research workflows | 2026-04-12 13:15:45 -07:00 |
| Advait Paliwal | aa96b5ee14 | fix: update Pi and model provider flows | 2026-04-12 13:02:16 -07:00 |
| Advait Paliwal | b3a82d4a92 | switch release workflow to binary only | 2026-04-10 11:02:50 -07:00 |
| Advait Paliwal | 790824af20 | verify rpc and website gates | 2026-04-10 10:49:54 -07:00 |
| Advait Paliwal | 4137a29507 | remove stale web access override | 2026-04-10 10:20:31 -07:00 |
| Advait Paliwal | 5b9362918e | document local model setup | 2026-04-09 13:45:19 -07:00 |
| Advait Paliwal | bfa538fa00 | triage remaining tracker fixes | 2026-04-09 10:34:29 -07:00 |
| Advait Paliwal | 96234425ba | harden installers rendering and dependency hygiene | 2026-04-09 10:27:23 -07:00 |
| Advait Paliwal | 3148f2e62b | fix startup packaging and content guardrails | 2026-04-09 10:09:05 -07:00 |
| Advait Paliwal | 554350cc0e | Finish backlog cleanup for Pi integration | 2026-03-31 11:02:07 -07:00 |
| Advait Paliwal | d9812cf4f2 | Fix Pi package updates and merge feynman-model | 2026-03-31 09:18:05 -07:00 |
| Advait Paliwal | aed607ce62 | release: bump to 0.2.16 | 2026-03-28 21:46:57 -07:00 |
| Advait Paliwal | ab8a284c74 | fix: respect feynman agent dir in vendored pi-subagents | 2026-03-28 21:44:50 -07:00 |
| Advait Paliwal | 62d63be1d8 | chore: remove valichord integration | 2026-03-28 13:56:48 -07:00 |
109 changed files with 7238 additions and 1531 deletions

View File

@@ -15,6 +15,8 @@ Operating rules:
- Never answer a latest/current question from arXiv or alpha-backed paper search alone.
- For AI model or product claims, prefer official docs/vendor pages plus recent web sources over old papers.
- Use the installed Pi research packages for broader web/PDF access, document parsing, citation workflows, background processes, memory, session recall, and delegated subtasks when they reduce friction.
- You are running inside the Feynman/Pi runtime with filesystem tools, package tools, and configured extensions. Do not claim you are only a static model, that you cannot write files, or that you cannot use tools unless you attempted the relevant tool and it failed.
- If a tool, package, source, or network route is unavailable, record the specific failed capability and still write the requested durable artifact with a clear `Blocked / Unverified` status instead of stopping with chat-only prose.
- Feynman ships project subagents for research work. Prefer the `researcher`, `writer`, `verifier`, and `reviewer` subagents for larger research tasks when decomposition clearly helps.
- Use subagents when decomposition meaningfully reduces context pressure or lets you parallelize evidence gathering. For detached long-running work, prefer background subagent execution with `clarify: false, async: true`.
- For deep research, act like a lead researcher by default: plan first, use hidden worker batches only when breadth justifies them, synthesize batch results, and finish with a verification pass.
@@ -24,6 +26,8 @@ Operating rules:
- Do not force chain-shaped orchestration onto the user. Multi-agent decomposition is an internal tactic, not the primary UX.
- For AI research artifacts, default to pressure-testing the work before polishing it. Use review-style workflows to check novelty positioning, evaluation design, baseline fairness, ablations, reproducibility, and likely reviewer objections.
- Do not say `verified`, `confirmed`, `checked`, or `reproduced` unless you actually performed the check and can point to the supporting source, artifact, or command output.
- Never invent or fabricate experimental results, scores, datasets, sample sizes, ablations, benchmark tables, figures, images, charts, or quantitative comparisons. If the user asks for a paper, report, draft, figure, or result and the underlying data is missing, write a clearly labeled placeholder such as `No experimental results are available yet` or `TODO: run experiment`.
- Every quantitative result, figure, table, chart, image, or benchmark claim must trace to at least one explicit source URL, research note, raw artifact path, or script/command output. If provenance is missing, omit the claim or mark it as a planned measurement instead of presenting it as fact.
- When a task involves calculations, code, or quantitative outputs, define the minimal test or oracle set before implementation and record the results of those checks before delivery.
- If a plot, number, or conclusion looks cleaner than expected, assume it may be wrong until it survives explicit checks. Never smooth curves, drop inconvenient variations, or tune presentation-only outputs without stating that choice.
- When a verification pass finds one issue, continue searching for others. Do not stop after the first error unless the whole branch is blocked.
@@ -42,6 +46,7 @@ Operating rules:
- When citing papers from alpha-backed tools, prefer direct arXiv or alphaXiv links and include the arXiv ID.
- Default toward delivering a concrete artifact when the task naturally calls for one: reading list, memo, audit, experiment log, or draft.
- For user-facing workflows, produce exactly one canonical durable Markdown artifact unless the user explicitly asks for multiple deliverables.
- If a workflow requests a durable artifact, verify the file exists on disk before the final response. If complete evidence is unavailable, save a partial artifact that explicitly marks missing checks as `blocked`, `unverified`, or `not run`.
- Do not create extra user-facing intermediate markdown files just because the workflow has multiple reasoning stages.
- Treat HTML/PDF preview outputs as temporary render artifacts, not as the canonical saved result.
- Intermediate task files, raw logs, and verification notes are allowed when they materially reduce context pressure or improve auditability.
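For reference, a hypothetical sketch of what the `clarify: false, async: true` background-execution convention above could look like as a task payload. The tool name, call shape, and field set are assumptions for illustration, not the actual Pi subagent API:

```js
// Hypothetical sketch of a detached background subagent invocation.
const task = {
  agent: "researcher", // one of the bundled project subagents
  prompt: "Gather sources on the topic and write notes/topic-research-web.md",
  clarify: false, // do not pause to ask clarifying questions
  async: true, // run detached; the lead agent collects results later
};
// await pi.tools.subagent.run(task); // assumed call shape, not a real API
```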

View File

@@ -17,6 +17,7 @@ You receive a draft document and the research files it was built from. Your job
4. **Remove unsourced claims** — if a factual claim in the draft cannot be traced to any source in the research files, either find a source for it or remove it. Do not leave unsourced factual claims.
5. **Verify meaning, not just topic overlap.** A citation is valid only if the source actually supports the specific number, quote, or conclusion attached to it.
6. **Refuse fake certainty.** Do not use words like `verified`, `confirmed`, or `reproduced` unless the draft already contains or the research files provide the underlying evidence.
7. **Enforce the system prompt's provenance rule.** Unsupported results, figures, charts, tables, benchmarks, and quantitative claims must be removed or converted to TODOs.
## Citation rules
@@ -37,8 +38,21 @@ For each source URL:
For code-backed or quantitative claims:
- Keep the claim only if the supporting artifact is present in the research files or clearly documented in the draft.
- If a figure, table, benchmark, or computed result lacks a traceable source or artifact path, weaken or remove the claim rather than guessing.
- Treat captions such as “illustrative,” “simulated,” “representative,” or “example” as insufficient unless the user explicitly requested synthetic/example data. Otherwise remove the visual and mark the missing experiment.
- Do not preserve polished summaries that outrun the raw evidence.
## Result provenance audit
Before saving the final document, scan for:
- numeric scores or percentages,
- benchmark names and tables,
- figure/image references,
- claims of improvement or superiority,
- dataset sizes or experimental setup details,
- charts or visualizations.
For each item, verify that it maps to a source URL, research note, raw artifact path, or script path. If not, remove it or replace it with a TODO. Add a short `Removed Unsupported Claims` section only when you remove material.
## Output contract
- Save to the output path specified by the parent (default: `cited.md`).
- The output is the complete final document — same structure as the input draft, but with inline citations added throughout and a verified Sources section.

View File

@@ -15,6 +15,7 @@ You are Feynman's writing subagent.
3. **Be explicit about gaps.** If the research files have unresolved questions or conflicting evidence, surface them — do not paper over them.
4. **Do not promote draft text into fact.** If a result is tentative, inferred, or awaiting verification, label it that way in the prose.
5. **No aesthetic laundering.** Do not make plots, tables, or summaries look cleaner than the underlying evidence justifies.
6. **Follow the system prompt's provenance rule.** Missing results become gaps or TODOs, never plausible-looking data.
## Output structure
@@ -36,9 +37,10 @@ Unresolved issues, disagreements between sources, gaps in evidence.
## Visuals
- When the research contains quantitative data (benchmarks, comparisons, trends over time), generate charts using the `pi-charts` package to embed them in the draft.
- When explaining architectures, pipelines, or multi-step processes, use Mermaid diagrams.
- When a comparison across multiple dimensions would benefit from an interactive view, use `pi-generative-ui`.
- Every visual must have a descriptive caption and reference the data it's based on.
- Do not create charts from invented or example data. If values are missing, describe the planned measurement instead.
- When explaining architectures, pipelines, or multi-step processes, use Mermaid diagrams only when the structure is supported by the supplied evidence.
- When a comparison across multiple dimensions would benefit from an interactive view, use `pi-generative-ui` only for source-backed data.
- Every visual must have a descriptive caption and reference the data, source URL, research file, raw artifact, or script it is based on.
- Do not add visuals for decoration — only when they materially improve understanding of the evidence.
## Operating rules
@@ -48,6 +50,7 @@ Unresolved issues, disagreements between sources, gaps in evidence.
- Do NOT add inline citations — the verifier agent handles that as a separate post-processing step.
- Do NOT add a Sources section — the verifier agent builds that.
- Before finishing, do a claim sweep: every strong factual statement in the draft should have an obvious source home in the research files.
- Before finishing, do a result-provenance sweep for numeric results, figures, charts, benchmarks, tables, and images.
## Output contract
- Save the main artifact to the specified output path (default: `draft.md`).

View File

@@ -10,67 +10,89 @@ on:
jobs:
version-check:
runs-on: blacksmith-4vcpu-ubuntu-2404
runs-on: ubuntu-latest
permissions:
contents: read
outputs:
version: ${{ steps.version.outputs.version }}
should_publish: ${{ steps.version.outputs.should_publish }}
should_release: ${{ steps.version.outputs.should_release }}
steps:
- uses: actions/checkout@v6
- uses: actions/setup-node@v5
- uses: actions/setup-node@v6
with:
node-version: 24.14.0
node-version: 24
registry-url: "https://registry.npmjs.org"
- id: version
shell: bash
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
CURRENT=$(npm view @companion-ai/feynman version 2>/dev/null || echo "0.0.0")
LOCAL=$(node -p "require('./package.json').version")
echo "version=$LOCAL" >> "$GITHUB_OUTPUT"
if [ "$CURRENT" != "$LOCAL" ]; then
echo "should_publish=true" >> "$GITHUB_OUTPUT"
PUBLISHED=$(npm view @companion-ai/feynman version 2>/dev/null || true)
if [ "$PUBLISHED" = "$LOCAL" ] || gh release view "v$LOCAL" >/dev/null 2>&1; then
echo "should_release=false" >> "$GITHUB_OUTPUT"
else
echo "should_publish=false" >> "$GITHUB_OUTPUT"
echo "should_release=true" >> "$GITHUB_OUTPUT"
fi
publish-npm:
verify:
needs: version-check
if: needs.version-check.outputs.should_publish == 'true'
runs-on: blacksmith-4vcpu-ubuntu-2404
if: needs.version-check.outputs.should_release == 'true'
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- uses: actions/checkout@v6
- uses: actions/setup-node@v5
- uses: actions/setup-node@v6
with:
node-version: 24.14.0
registry-url: https://registry.npmjs.org
- run: npm ci --ignore-scripts
- run: npm run build
node-version: 24
registry-url: "https://registry.npmjs.org"
- run: npm ci
- run: npm test
- run: npm publish --access public
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
- run: npm pack
publish-npm:
needs:
- version-check
- verify
if: needs.version-check.outputs.should_release == 'true' && needs.verify.result == 'success'
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write
steps:
- uses: actions/checkout@v6
- uses: actions/setup-node@v6
with:
node-version: 24
registry-url: "https://registry.npmjs.org"
- run: npm ci
- run: npm publish --provenance --access public
build-native-bundles:
needs: version-check
if: needs.version-check.outputs.should_publish == 'true'
if: needs.version-check.outputs.should_release == 'true'
strategy:
fail-fast: false
matrix:
include:
- id: linux-x64
os: blacksmith-4vcpu-ubuntu-2404
os: ubuntu-latest
- id: darwin-x64
os: macos-15-intel
- id: darwin-arm64
os: macos-14
- id: win32-x64
os: blacksmith-4vcpu-windows-2025
os: windows-latest
runs-on: ${{ matrix.os }}
permissions:
contents: read
steps:
- uses: actions/checkout@v6
- uses: actions/setup-node@v5
- uses: actions/setup-node@v6
with:
node-version: 24.14.0
node-version: 24
- run: npm ci --ignore-scripts
- run: npm run build
- run: npm run build:native-bundle
@@ -103,8 +125,8 @@ jobs:
- version-check
- publish-npm
- build-native-bundles
if: needs.version-check.outputs.should_publish == 'true' && needs.build-native-bundles.result == 'success' && needs.publish-npm.result == 'success'
runs-on: blacksmith-4vcpu-ubuntu-2404
if: needs.version-check.outputs.should_release == 'true' && needs.publish-npm.result == 'success' && needs.build-native-bundles.result == 'success'
runs-on: ubuntu-latest
permissions:
contents: write
steps:
@@ -112,7 +134,8 @@ jobs:
with:
path: release-assets
merge-multiple: true
- shell: bash
- name: Create GitHub release
shell: bash
env:
GH_REPO: ${{ github.repository }}
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -124,7 +147,7 @@ jobs:
--title "v$VERSION" \
--notes "Standalone Feynman bundles for native installation." \
--draft=false \
--target "$GITHUB_SHA"
--latest
else
gh release create "v$VERSION" release-assets/* \
--title "v$VERSION" \

.nvmrc
View File

@@ -1 +1 @@
20.19.0
22

View File

@@ -15,6 +15,78 @@ Use this file to track chronology, not release notes. Keep entries short, factual
- Blockers: ...
- Next: ...
### 2026-04-12 00:00 local — capital-france
- Objective: Run an unattended deep-research workflow for the question "What is the capital of France?"
- Changed: Created plan artifact at `outputs/.plans/capital-france.md`; scoped the workflow as a narrow fact-verification run with direct lead-agent evidence gathering instead of researcher subagents.
- Verified: Read existing `CHANGELOG.md` and recalled prior saved plan memory for `capital-france` before finalizing the new run plan.
- Failed / learned: None yet.
- Blockers: Need at least two current independent authoritative sources and a quick ambiguity check before drafting.
- Next: Collect current official/public sources, resolve any legal nuance, then draft and verify the brief.
### 2026-04-12 00:20 local — capital-france
- Objective: Complete evidence gathering and ambiguity check for the capital-of-France workflow.
- Changed: Wrote `notes/capital-france-research-web.md` and `notes/capital-france-legal-context.md`; identified Insee (2024) and a Sénat report as the two main corroborating sources.
- Verified: Cross-read current public French sources that explicitly describe Paris as the capital/capital city of France; found no current contradiction.
- Failed / learned: The Presidency homepage was useful contextual support but not explicit enough to carry the core claim alone.
- Blockers: Need citation pass and final review pass before promotion.
- Next: Draft the brief, then run verifier and reviewer passes.
### 2026-04-12 00:35 local — capital-france
- Objective: Move from gathered evidence to a citable draft.
- Changed: Wrote `outputs/.drafts/capital-france-draft.md` and updated the plan ledger to mark drafting complete.
- Verified: Kept the core claim narrowly scoped to what the Insee and Sénat sources explicitly support; treated the Élysée page as contextual only.
- Failed / learned: None.
- Blockers: Need verifier URL/citation pass and reviewer verification pass before final promotion.
- Next: Run verifier on the draft, then review and promote the final brief.
### 2026-04-12 00:50 local — capital-france
- Objective: Complete citation, verification, and final promotion for the capital-of-France workflow.
- Changed: Produced `outputs/capital-france-brief.md`, ran verification into `notes/capital-france-verification.md`, promoted the final brief to `outputs/capital-france.md`, and wrote `outputs/capital-france.provenance.md`.
- Verified: Reviewer found no FATAL or MAJOR issues. Core claim remains backed by two independent French public-institution sources, with Insee as the primary explicit source and the Sénat report as corroboration.
- Failed / learned: The runtime did not expose a named `verifier` subagent, so I used an available worker in a verifier-equivalent role and recorded that deviation in the plan.
- Blockers: None.
- Next: If needed, extend the brief with deeper legal-historical sourcing, but the narrow factual question is sufficiently answered.
### 2026-04-12 10:05 local — capital-france
- Objective: Run the citation-verification pass on the capital-of-France draft and promote a final cited brief.
- Changed: Verified the three draft source URLs were live (HTTP 200 at check time), added numbered inline citations, downgraded unsupported phrasing around the Élysée/context and broad ambiguity claims, and wrote `outputs/capital-france-brief.md`.
- Verified: Confirmed Insee explicitly says Paris is the capital of France; confirmed the Sénat report describes Paris's capital status and the presence of national institutions; confirmed the Élysée homepage is contextual only and not explicit enough to carry the core claim.
- Failed / learned: The draft wording about the Presidency being seated in Paris was not directly supported by the cited homepage, so it was removed rather than carried forward.
- Blockers: Reviewer pass still pending if the workflow requires an adversarial final check.
- Next: If needed, run a final reviewer pass; otherwise use `outputs/capital-france-brief.md` as the canonical brief.
### 2026-04-12 10:20 local — capital-france
- Objective: Close the workflow with final review, final artifact promotion, and provenance.
- Changed: Ran a reviewer pass recorded in `notes/capital-france-verification.md`; promoted the cited brief into `outputs/capital-france.md`; wrote `outputs/capital-france.provenance.md`; updated the run plan to mark all tasks complete.
- Verified: Reviewer verdict was PASS WITH MINOR REVISIONS only; those minor wording fixes were applied before delivery.
- Failed / learned: The runtime did not expose a project-named `verifier` agent, so the citation pass used an available worker agent as a verifier-equivalent step.
- Blockers: None.
- Next: Optional only — produce a legal memorandum on the basis of Paris's capital status if requested.
### 2026-04-14 12:00 local — capital-belgium
- Objective: Run a deep-research workflow for the question "What is the capital of Belgium?"
- Changed: Created plan artifact at `outputs/.plans/capital-belgium.md`; gathered evidence into `notes/capital-belgium-research-web.md` from Belgium.be, FPS Foreign Affairs, Britannica, and a Belgian Senate constitution check.
- Verified: Found two explicit current Belgian government statements that Brussels is the federal capital of Belgium, plus independent Britannica corroboration; no conflicting nuance surfaced in the consulted legal text.
- Failed / learned: This is narrow enough that researcher subagents would add overhead without increasing evidence quality.
- Blockers: Need draft, citation/URL verification pass, final review pass, and promotion.
- Next: Draft the brief, run verifier-equivalent and reviewer passes, then promote final output with provenance.
### 2026-04-14 12:25 local — capital-belgium
- Objective: Complete citation, verification, and final promotion for the capital-of-Belgium workflow.
- Changed: Wrote `outputs/.drafts/capital-belgium-draft.md`; produced cited brief `outputs/capital-belgium-brief.md`; ran verification into `notes/capital-belgium-verification.md`; promoted final output to `outputs/capital-belgium.md`; wrote `outputs/capital-belgium.provenance.md`; updated the plan ledger and verification log.
- Verified: Core claim is now backed by Belgium.be, Belgian Foreign Affairs, Britannica, and direct constitutional text from Senate-hosted Article 194 stating that Brussels is the capital of Belgium and the seat of the federal government.
- Failed / learned: The runtime did not expose a named `verifier` subagent, so a worker performed a verifier-equivalent citation/URL check; reviewer surfaced a stronger constitutional source than the first draft had emphasized.
- Blockers: None.
- Next: Optional only — if requested, expand this into a legal-historical note on Brussels's capital status and the distinction between city, region, and federal institutions.
### 2026-03-25 00:00 local — scaling-laws
- Objective: Set up a deep research workflow for scaling laws.
@@ -77,3 +149,166 @@ Use this file to track chronology, not release notes. Keep entries short, factual
- Failed / learned: The open subagent issue is fixed on `main` but still user-visible on tagged installs until a fresh release is cut.
- Blockers: Need the GitHub publish workflow to finish successfully before the issue can be honestly closed as released.
- Next: Push `0.2.15`, monitor the publish workflow, then update and close the relevant GitHub issue/PR once the release is live.
### 2026-03-28 15:15 PDT — pi-subagents-agent-dir-compat
- Objective: Debug why tagged installs can still fail subagent/auth flows after `0.2.15` when users are not on Anthropic.
- Changed: Added `scripts/lib/pi-subagents-patch.mjs` plus type declarations and wired `scripts/patch-embedded-pi.mjs` to rewrite vendored `pi-subagents` runtime files so they resolve user-scoped paths from `PI_CODING_AGENT_DIR` instead of hardcoded `~/.pi/agent`; added `tests/pi-subagents-patch.test.ts`.
- Verified: Materialized `.feynman/npm`, inspected the shipped `pi-subagents@0.11.11` sources, confirmed the hardcoded `~/.pi/agent` paths in `index.ts`, `agents.ts`, `artifacts.ts`, `run-history.ts`, `skills.ts`, and `chain-clarify.ts`; ran `node scripts/patch-embedded-pi.mjs`; ran `npm test`, `npm run typecheck`, and `npm run build`.
- Failed / learned: The earlier `0.2.15` fix only proved that Feynman exported `PI_CODING_AGENT_DIR` to the top-level Pi child; it did not cover vendored extension code that still hardcoded `.pi` paths internally.
- Blockers: Users still need a release containing this patch before tagged installs benefit from it.
- Next: Cut the next release and verify a tagged install exercises subagents without reading from `~/.pi/agent`.
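As a rough illustration of the patch mechanism this entry describes (the real helper lives in `scripts/lib/pi-subagents-patch.mjs`; the function and the exact replaced expression here are simplified assumptions):

```js
// Sketch: rewrite hardcoded "~/.pi/agent" references in vendored pi-subagents
// sources so they resolve from PI_CODING_AGENT_DIR instead. The shipped
// patcher covers more files and edge cases than this.
import { readFile, writeFile } from "node:fs/promises";

async function patchAgentDir(filePath) {
  const source = await readFile(filePath, "utf8");
  // Replace the hardcoded user-scoped path with an env-driven lookup.
  const patched = source.replaceAll(
    'join(homedir(), ".pi", "agent")',
    'process.env.PI_CODING_AGENT_DIR ?? join(homedir(), ".pi", "agent")',
  );
  if (patched !== source) await writeFile(filePath, patched, "utf8");
}
```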
### 2026-03-28 21:46 PDT — release-0.2.16
- Objective: Ship the vendored `pi-subagents` agent-dir compatibility fix to tagged installs.
- Changed: Bumped the package version from `0.2.15` to `0.2.16` in `package.json` and `package-lock.json`; updated pinned installer examples in `README.md` and `website/src/content/docs/getting-started/installation.md`.
- Verified: Re-ran `npm test`, `npm run typecheck`, and `npm run build`; ran `cd website && npm run build`; ran `npm pack` and confirmed the `0.2.16` tarball includes the new `scripts/lib/pi-subagents-patch.*` files.
- Failed / learned: An initial local `build:native-bundle` check failed because `npm pack` and `build:native-bundle` were run in parallel, and `prepack` intentionally removes `dist/release`; rerunning `npm run build:native-bundle` sequentially succeeded.
- Blockers: None in the repo; publishing still depends on the GitHub workflow running on the bumped version.
- Next: Push the `0.2.16` release bump and monitor npm/GitHub release publication.
### 2026-03-31 10:45 PDT — pi-maintenance-issues-prs
- Objective: Triage open Pi-related issues/PRs, fix the concrete package update regression, and refresh Pi dependencies against current upstream releases.
- Changed: Pinned direct package-manager operations (`feynman update`, `feynman packages install`) to Feynman's npm prefix by exporting `FEYNMAN_NPM_PREFIX`, `NPM_CONFIG_PREFIX`, and `npm_config_prefix` before invoking Pi's `DefaultPackageManager`; bumped `@mariozechner/pi-ai` and `@mariozechner/pi-coding-agent` from `0.62.0` to `0.64.0`; adapted `src/model/registry.ts` to the new `ModelRegistry.create(...)` factory; integrated PR #15's `/feynman-model` command on top of current `main`.
- Verified: Ran `npm test`, `npm run typecheck`, and `npm run build` successfully after the dependency bump and PR integration; confirmed upstream `pi-coding-agent@0.64.0` still uses `npm install -g` for user-scope package updates, so the Feynman-side prefix fix is still required.
- Failed / learned: PR #14 is a stale branch with no clean merge path against current `main`; the only user-facing delta is the ValiChord prompt/skill addition, and the branch also carries unrelated release churn plus demo-style material, so it was not merged in this pass.
- Blockers: None in the local repo state; remote merge/push still depends on repository credentials and branch policy.
- Next: If remote write access is available, commit and push the validated maintenance changes, then close issue #22 and resolve PR #15 as merged while leaving PR #14 unmerged pending a cleaned-up, non-promotional resubmission.
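A minimal sketch of the prefix-pinning idea from this entry (the env variable names come from the entry above; the spawn target and arguments are illustrative):

```js
// Sketch: pin npm operations to Feynman's own prefix before delegating to
// Pi's package manager, so user-scope installs land in Feynman's tree.
import { spawnSync } from "node:child_process";

function npmWithFeynmanPrefix(args, prefix) {
  const env = {
    ...process.env,
    FEYNMAN_NPM_PREFIX: prefix,
    NPM_CONFIG_PREFIX: prefix,
    npm_config_prefix: prefix,
  };
  return spawnSync("npm", args, { env, stdio: "inherit" });
}

// e.g. npmWithFeynmanPrefix(["install", "-g", "some-pi-package"], "/home/me/.feynman/npm");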
### 2026-03-31 12:05 PDT — pi-backlog-cleanup-round-2
- Objective: Finish the remaining high-confidence open tracker items after the Pi 0.64.0 upgrade instead of leaving the issue list half-reconciled.
- Changed: Added a Windows extension-loader patch helper so Feynman rewrites Pi extension imports to `file://` URLs on Windows before interactive startup; added `/commands`, `/tools`, and `/capabilities` discovery commands and surfaced `/hotkeys` plus `/service-tier` in help metadata; added explicit service-tier support via `feynman model tier`, `--service-tier`, status/doctor output, and a provider-payload hook that passes `service_tier` only to supported OpenAI/OpenAI Codex/Anthropic models; added Exa provider recognition to Feynman's web-search status layer and vendored `pi-web-access`.
- Verified: Ran `npm test`, `npm run typecheck`, and `npm run build`; smoke-imported the modified vendored `pi-web-access` modules with `node --import tsx`.
- Failed / learned: The remaining ValiChord PR is still stale and mixes a real prompt/skill update with unrelated branch churn; it is a review/triage item, not a clean merge candidate.
- Blockers: No local build blockers remain; issue/PR closure still depends on the final push landing on `main`.
- Next: Push the verified cleanup commit, then close issues fixed by the dependency bump plus the new discoverability/service-tier/Windows patches, and close the stale ValiChord PR explicitly instead of leaving it open indefinitely.
### 2026-04-09 09:37 PDT — windows-startup-import-specifiers
- Objective: Fix Windows startup failures where `feynman` exits before the Pi child process initializes.
- Changed: Converted the Node preload module paths passed via `node --import` in `src/pi/launch.ts` to `file://` specifiers using a new `toNodeImportSpecifier(...)` helper in `src/pi/runtime.ts`; expanded `scripts/patch-embedded-pi.mjs` so it also patches the bundled workspace copy of Pi's extension loader when present.
- Verified: Added a regression test in `tests/pi-runtime.test.ts` covering absolute-path to `file://` conversion for preload imports; ran `npm test`, `npm run typecheck`, and `npm run build`.
- Failed / learned: The raw Windows `ERR_UNSUPPORTED_ESM_URL_SCHEME` stack is more consistent with Node rejecting the child-process `--import C:\\...` preload before Pi starts than with a normal in-app extension load failure.
- Blockers: Windows runtime execution was not available locally, so the fix is verified by code path inspection and automated tests rather than an actual Windows shell run.
- Next: Ask the affected user to reinstall or update to the next published package once released, and confirm the Windows REPL now starts from a normal PowerShell session.
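The conversion itself is small; a sketch of what `toNodeImportSpecifier(...)` plausibly does (the actual signature in `src/pi/runtime.ts` is not verified here):

```js
// Sketch: Windows paths like C:\...\preload.mjs are rejected by `node --import`,
// which expects a URL-like specifier, so convert absolute paths to file:// URLs.
import { pathToFileURL } from "node:url";
import { isAbsolute } from "node:path";

function toNodeImportSpecifier(modulePath) {
  return isAbsolute(modulePath) ? pathToFileURL(modulePath).href : modulePath;
}

// toNodeImportSpecifier("C:\\feynman\\preload.mjs") yields "file:///C:/feynman/preload.mjs"
```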
### 2026-04-09 11:02 PDT — tracker-hardening-pass
- Objective: Triage the open repo backlog, land the highest-signal fixes locally, and add guardrails against stale promotional workflow content.
- Changed: Hardened Windows launch paths in `bin/feynman.js`, `scripts/build-native-bundle.mjs`, and `scripts/install/install.ps1`; set npm prefix overrides earlier in `scripts/patch-embedded-pi.mjs`; added a `pi-web-access` runtime patch helper plus `FEYNMAN_WEB_SEARCH_CONFIG` env wiring so bundled web search reads the same `~/.feynman/web-search.json` that doctor/status report; taught `src/pi/web-access.ts` to honor the legacy `route` key; fixed bundled skill references and expanded the skills-only installers/docs to ship the prompt and guidance files those skills reference; added regression tests for config paths, catalog snapshot edges, skill-path packaging, `pi-web-access` patching, and blocked promotional content.
- Verified: Ran `npm test`, `npm run typecheck`, and `npm run build` successfully after the full maintenance pass.
- Failed / learned: The skills-only install issue was not just docs drift; the shipped `SKILL.md` files referenced prompt paths that only made sense after installation, so the repo needed both path normalization and packaging changes.
- Blockers: Remote issue/PR closure and merge actions still depend on the final reviewed branch state being pushed.
- Next: Push the validated fixes, close the duplicate Windows/reporting issues they supersede, reject the promotional ValiChord PR explicitly, and then review whether the remaining docs-only or feature PRs should be merged separately.
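A sketch of the config-path wiring described in this entry (the env name is from the entry; the resolution order and `FEYNMAN_HOME` fallback are assumptions):

```js
// Sketch: let bundled web search read the same config file that doctor/status
// report, with an env override for patched vendored code.
import { homedir } from "node:os";
import { join } from "node:path";

function webSearchConfigPath() {
  return (
    process.env.FEYNMAN_WEB_SEARCH_CONFIG ??
    join(process.env.FEYNMAN_HOME ?? join(homedir(), ".feynman"), "web-search.json")
  );
}
```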
### 2026-04-09 10:28 PDT — verification-and-security-pass
- Objective: Run a deeper install/security verification pass against the post-cleanup `0.2.17` tree instead of assuming the earlier targeted fixes covered the shipped artifacts.
- Changed: Reworked `extensions/research-tools/header.ts` to use `@mariozechner/pi-tui` width-aware helpers for truncation/wrapping so wide Unicode text does not overflow custom header rows; changed `src/pi/launch.ts` to stop mirroring child crash signals back onto the parent process and instead emit a conventional exit code; added `FEYNMAN_INSTALL_SKILLS_ARCHIVE_URL` overrides to the skills installers for pre-release smoke testing; aligned root and website dependency trees with patched transitive versions using npm `overrides`; fixed `src/pi/web-access.ts` so `search status` respects `FEYNMAN_HOME` semantics instead of hardcoding the current shell home directory; added `tests/pi-launch.test.ts`.
- Verified: Ran `npm test`, `npm run typecheck`, `npm run build`, `cd website && npm run build`, `npm run build:native-bundle`; smoke-tested `scripts/install/install.sh` against a locally served `dist/release/feynman-0.2.17-darwin-arm64.tar.gz`; smoke-tested `scripts/install/install-skills.sh` against a local source archive; confirmed installed `feynman --version`, `feynman --help`, `feynman doctor`, and packaged `feynman search status` work from the installed bundle; `npm audit --omit=dev` is clean in the root app and website after overrides.
- Failed / learned: The first packaged `search status` smoke test still showed the user home path because the native bundle had been built before the `FEYNMAN_HOME` path fix; rebuilding the native bundle resolved that mismatch.
- Blockers: PowerShell runtime was unavailable locally, so Windows installer execution remained code-path validated rather than actually executed.
- Next: Push the second-pass hardening commit, then keep issue `#46` and issue `#47` open until users on the affected Linux/CJK environments confirm whether the launcher/header fixes fully resolve them.
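A sketch of the launch-exit change mentioned above (the conventional 128+signal mapping is an assumption; the real logic lives in `src/pi/launch.ts`):

```js
// Sketch: instead of re-raising the child's crash signal on the parent
// process, exit with a conventional numeric code.
import { constants } from "node:os";

function exitCodeFor(code, signal) {
  if (signal) return 128 + (constants.signals[signal] ?? 1);
  return code ?? 1;
}

// child.on("exit", (code, signal) => process.exit(exitCodeFor(code, signal)));
```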
### 2026-04-09 10:36 PDT — remaining-tracker-triage-pass
- Objective: Reduce the remaining open tracker items by landing the lowest-risk missing docs/catalog updates and a targeted Cloud Code Assist compatibility patch instead of only hand-triaging them.
- Changed: Added MiniMax M2.7 recommendation preferences in `src/model/catalog.ts`; documented model switching, authenticated-provider visibility, and `/feynman-model` subagent overrides in `website/src/content/docs/getting-started/configuration.md` and `website/src/content/docs/reference/slash-commands.md`; added a runtime patch helper in `scripts/lib/pi-google-legacy-schema-patch.mjs` and wired `scripts/patch-embedded-pi.mjs` to normalize JSON Schema `const` into `enum` for the legacy `parameters` field used by Cloud Code Assist Claude models.
- Verified: Ran `npm test`, `npm run typecheck`, `npm run build`, and `cd website && npm run build` after the patch/helper/docs changes.
- Failed / learned: The MiniMax provider catalog in Pi already uses canonical IDs like `MiniMax-M2.7`, so the only failure during validation was a test assertion using the wrong casing rather than a runtime bug.
- Blockers: The Cloud Code Assist fix is validated by targeted patch tests and code-path review rather than an end-to-end Google account repro in this environment.
- Next: Push the tracker-triage commit, close the docs/MiniMax PRs as superseded by main, close the support-style model issues against the new docs, and decide whether the remaining feature requests should be left open or closed as not planned/upstream-dependent.
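The schema normalization is mechanical; a sketch under the assumption that only object- and array-valued nodes need rewriting (the shipped helper in `scripts/lib/pi-google-legacy-schema-patch.mjs` may differ):

```js
// Sketch: the legacy `parameters` field used by Cloud Code Assist rejects
// JSON Schema `const`, so rewrite `{ const: x }` into the equivalent
// `{ enum: [x] }` throughout the schema tree.
function normalizeConstToEnum(schema) {
  if (Array.isArray(schema)) return schema.map(normalizeConstToEnum);
  if (schema === null || typeof schema !== "object") return schema;
  const out = {};
  for (const [key, value] of Object.entries(schema)) {
    if (key === "const") out.enum = [value];
    else out[key] = normalizeConstToEnum(value);
  }
  return out;
}
```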
### 2026-04-10 10:22 PDT — web-access-stale-override-fix
- Objective: Fix the new `ctx.modelRegistry.getApiKeyAndHeaders is not a function` / stale `search-filter.js` report without reintroducing broad vendor drift.
- Changed: Removed the stale `.feynman/vendor-overrides/pi-web-access/*` files and removed `syncVendorOverride` from `scripts/patch-embedded-pi.mjs`; kept the targeted `pi-web-access` runtime config-path patch; added `feynman search set <provider> [api-key]` and `feynman search clear` commands with a shared save path in `src/pi/web-access.ts`.
- Verified: Ran `npm test`, `npm run typecheck`, `npm run build`; ran `node scripts/patch-embedded-pi.mjs`, confirmed the installed `pi-web-access/index.ts` has no `search-filter` / condense helper references, and smoke-imported `./.feynman/npm/node_modules/pi-web-access/index.ts`; ran `npm pack --dry-run` and confirmed stale `vendor-overrides` files are no longer in the package tarball.
- Failed / learned: The public Linux installer Docker test was attempted but Docker Desktop became unresponsive even for simple `docker run node:22-bookworm node -v` commands; the earlier Linux npm-artifact container smoke remains valid, but this specific public-installer run is blocked by the local Docker daemon.
- Blockers: Issue `#54` is too underspecified to fix directly without logs; public Linux installer behavior still needs a stable Docker daemon or a real Linux shell to reproduce the user's exact npm errors.
- Next: Push the stale-override fix, close PR `#52` and PR `#53` as superseded/merged-by-main once pushed, and ask for logs on issue `#54` instead of guessing.
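A sketch of the shared save path behind `feynman search set` and `feynman search clear` (the file shape is inferred from this changelog's verification notes; field names are assumptions):

```js
// Sketch: persist the chosen provider and key to web-search.json through one
// shared writer, so set/clear cannot drift apart.
import { mkdir, writeFile } from "node:fs/promises";
import { dirname } from "node:path";

async function saveWebSearchConfig(path, config) {
  await mkdir(dirname(path), { recursive: true });
  await writeFile(path, JSON.stringify(config, null, 2) + "\n", "utf8");
}

// e.g. saveWebSearchConfig("~/.feynman/web-search.json",
//   { provider: "exa", apiKey: "exa_test_key", workflow: "none" });
```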
### 2026-04-10 10:49 PDT — rpc-and-website-verification-pass
- Objective: Exercise the Feynman wrapper's RPC mode and the website quality gates that were not fully covered by the prior passes.
- Changed: Added `--mode <text|json|rpc>` pass-through support in the Feynman wrapper and skipped terminal clearing in RPC mode; added `@astrojs/check` to the website dev dependencies, fixed React Refresh lint violations in the generated UI components by exporting only components, and added safe website dependency overrides for dev-audit findings.
- Verified: Ran a JSONL RPC smoke test through `node bin/feynman.js --mode rpc` with `get_state`; ran `npm test`, `npm run typecheck`, `npm run build`, `cd website && npm run lint`, `cd website && npm run typecheck`, `cd website && npm run build`, full root `npm audit`, full website `npm audit`, and `npm run build:native-bundle`.
- Failed / learned: Website typecheck was previously a no-op prompt because `@astrojs/check` was missing; installing it exposed dev-audit findings that needed explicit overrides before the full website audit was clean.
- Blockers: Docker Desktop remained unreliable after restart attempts, so this pass still does not include a second successful public-installer Linux Docker run.
- Next: Push the RPC/website verification commit and keep future Docker/public-installer validation separate from repo correctness unless Docker is stable.
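The RPC smoke test is a one-request JSONL exchange; a sketch of driving it programmatically (the request shape beyond the `get_state` name is an assumption):

```js
// Sketch: drive `feynman --mode rpc` over stdio with one JSONL request and
// print whatever JSONL the wrapper sends back.
import { spawn } from "node:child_process";

const child = spawn("node", ["bin/feynman.js", "--mode", "rpc"], {
  stdio: ["pipe", "pipe", "inherit"],
});
child.stdout.on("data", (chunk) => process.stdout.write(chunk));
child.stdin.write(JSON.stringify({ type: "get_state" }) + "\n");
setTimeout(() => child.kill(), 5000); // smoke test only: stop after one exchange
```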
### 2026-04-12 09:32 PDT — pi-0.66.1-upgrade-pass
- Objective: Update Feynman from Pi `0.64.0` to the current `0.66.1` packages and absorb any downstream SDK/runtime compatibility changes instead of leaving the repo pinned behind upstream.
- Changed: Bumped `@mariozechner/pi-ai` and `@mariozechner/pi-coding-agent` to `0.66.1` plus `@companion-ai/alpha-hub` to `0.1.3` in `package.json` and `package-lock.json`; updated `extensions/research-tools.ts` to stop listening for the removed `session_switch` extension event and rely on `session_start`, which now carries startup/reload/new/resume/fork reasons in Pi `0.66.x`.
- Verified: Ran `npm test`, `npm run typecheck`, and `npm run build` successfully after the upgrade; smoke-ran `node bin/feynman.js --version`, `node bin/feynman.js doctor`, and `node bin/feynman.js status` successfully; checked upstream package diffs and confirmed the breaking change that affected this repo was the typed extension lifecycle change in `pi-coding-agent`, while `pi-ai` mainly brought refreshed provider/model catalog code including Bedrock/OpenAI provider updates and new generated model entries.
- Failed / learned: `ctx7` resolved Pi correctly to `/badlogic/pi-mono`, but its docs snapshot was not release-note oriented; the concrete downstream-impact analysis came from the actual `0.64.0` → `0.66.1` package diffs and local validation, not from prose docs alone.
- Failed / learned: The first post-upgrade CLI smoke test failed before Feynman startup because `@companion-ai/alpha-hub@0.1.2` shipped a zero-byte `src/lib/auth.js`; bumping to `0.1.3` fixed that adjacent runtime blocker.
- Blockers: `npm install` reports two high-severity vulnerabilities remain in the dependency tree; this pass focused on the Pi upgrade and did not remediate unrelated audit findings.
- Next: Push the Pi upgrade, then decide whether to layer the pending model-command fixes on top of this branch or land them separately to keep the dependency bump easy to review.
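A sketch of the lifecycle migration this entry describes (the event payload field names and registration API are assumptions based on the entry's description of Pi `0.66.x`):

```js
// Sketch: Pi 0.66.x removed `session_switch`; `session_start` now fires with
// a reason, so one handler covers startup, reload, new, resume, and fork.
function resetResearchContext() {
  // hypothetical helper: clear cached research notes for the new session
}

function registerSessionHygiene(pi) {
  pi.on("session_start", (event) => {
    // Previously a separate `session_switch` listener handled resume/fork.
    if (event.reason === "resume" || event.reason === "fork") {
      resetResearchContext();
    }
  });
}
```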
### 2026-04-12 13:00 PDT — model-command-and-bedrock-fix-pass
- Objective: Finish the remaining user-facing model-management regressions instead of stopping at the Pi dependency bump.
- Changed: Updated `src/model/commands.ts` so `feynman model login <provider>` resolves both OAuth and API-key providers; `feynman model logout <provider>` clears either auth mode; `feynman model set` accepts both `provider/model` and `provider:model`; ambiguous bare model IDs now prefer explicitly configured providers from auth storage; added an `amazon-bedrock` setup path that validates the AWS credential chain with the AWS SDK and stores Pi's `<authenticated>` sentinel so Bedrock models appear in `model list`; synced `src/cli.ts`, `metadata/commands.mjs`, `README.md`, and the website docs to the new behavior.
- Verified: Added regression tests in `tests/model-harness.test.ts` for `provider:model`, API-key provider resolution, and ambiguous bare-ID handling; ran `npm test`, `npm run typecheck`, `npm run build`, and `cd website && npm run build`; exercised command-level flows against throwaway `FEYNMAN_HOME` directories: interactive `node bin/feynman.js model login google`, `node bin/feynman.js model set google:gemini-3-pro-preview`, `node bin/feynman.js model set gpt-5.4` with only OpenAI configured, and `node bin/feynman.js model login amazon-bedrock`; confirmed `model list` shows Bedrock models after the new setup path; ran a live one-shot prompt `node bin/feynman.js --prompt "Reply with exactly OK"` and got `OK`.
- Failed / learned: The website build still emits duplicate-id warnings for a handful of docs pages, but it completes successfully; those warnings predate this pass and were not introduced by the model-command edits.
- Blockers: The Bedrock path is verified with the current shell's AWS credential chain, not with a fresh machine lacking AWS config; broader upstream Pi behavior around IMDS/default-profile autodiscovery without the sentinel is still outside this repo.
- Next: Commit and push the combined Pi/model/docs maintenance branch, then decide whether to tackle the deeper search/deepresearch hang issues separately or leave them for focused repro work.
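A sketch of the accepted model-reference forms (the resolution order follows this entry; the helper name and return shape are hypothetical):

```js
// Sketch: accept both "provider/model" and "provider:model"; bare IDs fall
// through so a later step can prefer explicitly configured providers.
function parseModelRef(ref) {
  const match = ref.match(/^([a-z0-9-]+)[/:](.+)$/i);
  if (match) return { provider: match[1], model: match[2] };
  return { provider: null, model: ref }; // ambiguous bare ID
}

// parseModelRef("google:gemini-3-pro-preview") -> { provider: "google", model: "gemini-3-pro-preview" }
// parseModelRef("gpt-5.4") -> { provider: null, model: "gpt-5.4" }
```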
### 2026-04-12 13:35 PDT — workflow-unattended-and-search-curator-fix-pass
- Objective: Fix the remaining workflow deadlocks instead of leaving `deepresearch` and terminal web search half-functional after the maintenance push.
- Changed: Updated the built-in research workflow prompts (`deepresearch`, `lit`, `review`, `audit`, `compare`, `draft`, `watch`) so they present the plan and continue automatically rather than blocking for approval; extended the `pi-web-access` runtime patch so Feynman rewrites its default workflow from browser-based `summary-review` to `none`; added explicit `workflow: "none"` persistence in `src/search/commands.ts` and `src/pi/web-access.ts`, plus surfaced the workflow in doctor/status-style output.
- Verified: Reproduced the original `deepresearch` failure mode in print mode, where the run created `outputs/.plans/capital-france.md` and then stopped waiting for user confirmation; after the prompt changes, reran `deepresearch "What is the capital of France?"` and confirmed it progressed beyond planning and produced `outputs/.drafts/capital-france-draft.md`; inspected `pi-web-access@0.10.6` and confirmed the exact `waiting for summary approval...` string and `summary-review` default live in that package; added regression tests for the new `pi-web-access` patch and workflow-none status handling; reran `npm test`, `npm run typecheck`, and `npm run build`; smoke-tested `feynman search set exa exa_test_key` under a throwaway `FEYNMAN_HOME` and confirmed it writes `"workflow": "none"` to `web-search.json`.
- Failed / learned: The long-running deepresearch session still spends substantial time in later reasoning/writing steps even for a narrow query, but the plan-confirmation deadlock itself is resolved; the remaining slowness is model/workflow behavior, not the original stop-after-plan bug.
- Blockers: I did not install and execute the full optional `pi-session-search` package locally, so the terminal `summary approval` fix is validated by source inspection plus the Feynman patch path and config persistence rather than a local end-to-end package install.
- Next: Commit and push the workflow/search fix pass, then close or answer the remaining deepresearch/search issues with the specific root causes and shipped fixes.
### 2026-04-12 14:05 PDT — final-artifact-hardening-pass
- Objective: Reduce the chance of unattended research workflows stopping at intermediate artifacts like `<slug>-brief.md` without promoting the final deliverable and provenance sidecar.
- Changed: Tightened `prompts/deepresearch.md` so the agent must verify on disk that the plan, draft, cited brief, promoted final output, and provenance sidecar all exist before stopping; tightened `prompts/lit.md` so it explicitly checks for the final output plus provenance sidecar instead of stopping at an intermediate cited draft.
- Verified: Cross-read the current deepresearch/lit deliver steps after the earlier unattended-run reproductions and confirmed the missing enforcement point was the final on-disk artifact check, not the naming convention itself.
- Failed / learned: This is still prompt-level enforcement rather than a deterministic post-processing hook, so it improves completion reliability but does not provide the same guarantees as a dedicated artifact-finalization wrapper.
- Blockers: I did not rerun a full broad deepresearch workflow end-to-end after this prompt-only hardening because those runs are materially longer and more expensive than the narrow reproductions already used to isolate the earlier deadlocks.
- Next: Commit and push the prompt hardening, then, if needed, add a deterministic wrapper around final artifact promotion instead of relying only on prompt adherence.
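The enforcement is prompt-level, but the check it asks for is simple; a deterministic sketch of the same contract (artifact paths follow the run layout used elsewhere in this changelog):

```js
// Sketch: verify every required deliverable exists on disk before the
// workflow is allowed to stop; report the gaps instead of exiting silently.
import { existsSync } from "node:fs";

function missingArtifacts(slug) {
  const required = [
    `outputs/.plans/${slug}.md`,
    `outputs/.drafts/${slug}-draft.md`,
    `outputs/${slug}-brief.md`,
    `outputs/${slug}.md`,
    `outputs/${slug}.provenance.md`,
  ];
  return required.filter((path) => !existsSync(path));
}

// missingArtifacts("capital-france") returns [] once the run is actually complete
```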
### 2026-04-14 09:30 PDT — wsl-login-and-uninstall-docs-pass
- Objective: Fix the remaining WSL setup blocker and close the last actionable support issue instead of leaving the tracker open after the earlier workflow/model fixes.
- Changed: Added a dedicated alpha-hub auth patch helper and tests; extended the alphaXiv login patch so WSL uses `wslview` when available and falls back to `cmd.exe /c start`, while also printing the auth URL explicitly for manual copy/paste if browser launch still fails; documented standalone uninstall steps in `README.md` and `website/src/content/docs/getting-started/installation.md`.
- Verified: Added regression tests for the alpha-hub auth patch, reran `npm test`, `npm run typecheck`, and `npm run build`, and smoke-checked the patched alpha-hub source rewrite to confirm it injects both the WSL browser path and the explicit auth URL logging.
- Failed / learned: This repo can patch alpha-hub's login UX reliably, but it still does not ship a destructive `feynman uninstall` command; the practical fix for the support issue is documented uninstall steps rather than a rushed cross-platform remover.
- Blockers: I did not run a true WSL shell here, so the WSL fix is validated by the deterministic source patch plus tests rather than an actual Windows-hosted browser-launch repro.
- Next: Push the WSL/login pass and close the stale issues and PRs that are already superseded by `main`.
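A sketch of the WSL fallback chain described above (the command names come from the entry; the detection and ordering details are assumptions):

```js
// Sketch: open an auth URL from WSL by preferring wslview, falling back to
// cmd.exe, and always printing the URL so manual copy/paste still works.
import { spawnSync } from "node:child_process";

function openAuthUrl(url) {
  console.log(`Open this URL to finish login: ${url}`);
  const viaWslview = spawnSync("wslview", [url], { stdio: "ignore" });
  if (viaWslview.status === 0) return;
  // Empty "" title argument keeps `start` from treating the URL as a window title.
  spawnSync("cmd.exe", ["/c", "start", "", url], { stdio: "ignore" });
}
```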
### 2026-04-14 09:35 PDT — review-findings-and-audit-cleanup
- Objective: Fix the remaining concrete issues found in the deeper review pass instead of stopping at tracker cleanup.
- Changed: Updated the `pi-web-access` patch so Feynman defaults search workflow to `none` without disabling explicit `summary-review`; softened the research workflow prompts so only unattended/one-shot runs auto-continue while interactive users still get a chance to request plan changes; corrected uninstall docs to mention `~/.ahub` alongside `~/.feynman`; bumped the root `basic-ftp` override from `5.2.1` to `5.2.2`.
- Verified: Ran `npm test`, `npm run typecheck`, `npm run build`, `cd website && npm run build`, and `npm audit`; root audit is now clean.
- Failed / learned: Astro still emits a duplicate-content-id warning for `website/src/content/docs/getting-started/installation.md`, but the website build succeeds and I did not identify a low-risk repo-side fix for that warning in this pass.
- Blockers: The duplicate-id warning remains as a build warning only, not a failing correctness gate.
- Next: If desired, isolate the Astro duplicate-id warning separately with a minimal reproduction rather than mixing it into runtime/CLI maintenance.
### 2026-04-14 10:55 PDT — summarize-workflow-restore
- Objective: Restore the useful summarization workflow that had been closed in PR `#69` without being merged.
- Changed: Added `prompts/summarize.md` as a top-level CLI workflow so `feynman summarize <source>` is available again; kept the RLM-based tiering approach from the original proposal and aligned Tier 3 confirmation behavior with the repo's unattended-run conventions.
- Verified: Confirmed `feynman summarize <source>` appears in CLI help; ran `node bin/feynman.js summarize /tmp/feynman-summary-smoke.txt` against a local smoke file and verified it produced `outputs/feynman-summary-smoke-summary.md` plus the raw fetched note artifact under `outputs/.notes/`.
- Failed / learned: None in the restored Tier 1 path; broader Tier 2/Tier 3 behavior still depends on runtime/model/tool availability, just like the other prompt-driven workflows.
- Blockers: None for the prompt restoration itself.
- Next: If desired, add dedicated docs for `summarize` and decide whether to reopen PR `#69` for historical continuity or leave it closed as superseded by the landed equivalent on `main`.
### 2026-04-12 13:20 PDT — capital-france (citation verification brief)
- Objective: Verify citations in the capital-of-France draft and produce a cited verifier brief.
- Changed: Read `outputs/.drafts/capital-france-draft.md`, `notes/capital-france-research-web.md`, and `notes/capital-france-legal-context.md`; fetched the three draft URLs directly; wrote `notes/capital-france-brief.md` with inline numbered citations and a numbered direct-URL sources list.
- Verified: Confirmed the Insee, Sénat, and Élysée URLs were reachable on 2026-04-12; confirmed Insee and Sénat support the core claim that Paris is the capital of France; marked the Élysée homepage as contextual-only support.
- Failed / learned: The Élysée homepage does not explicitly state the core claim, so it should not be used as sole evidence for capital status.
- Blockers: None for the verifier brief; any stronger legal memo would still need a more direct constitutional/statutory basis if that specific question is asked.
- Next: Promote the brief into the final output or downgrade/remove any claim that leans on the Élysée URL alone.

View File

@@ -24,7 +24,7 @@ If you need to change how bundled subagents behave, edit `.feynman/agents/*.md`.
## Before You Open a PR
1. Start from the latest `main`.
2. Use Node.js `20.19.0` or newer. The repo expects `.nvmrc`, `package.json` engines, `website/package.json` engines, and the runtime version guard to stay aligned.
2. Use Node.js `22.x` for local development. The supported runtime range is Node.js `20.19.0` through `24.x`; `.nvmrc` pins the preferred local version while `package.json`, `website/package.json`, and the runtime version guard define the broader supported range.
3. Install dependencies from the repo root:
```bash
@@ -59,6 +59,7 @@ npm run build
- Avoid refactor-only PRs unless they are necessary to unblock a real fix or requested by a maintainer.
- Do not silently change release behavior, installer behavior, or runtime defaults without documenting the reason in the PR.
- Use American English in docs, comments, prompts, UI copy, and examples.
- Do not add bundled prompts, skills, or docs whose primary purpose is to market, endorse, or funnel users toward a third-party product or service. Product integrations must be justified by user-facing utility and written in neutral language.
## Repo-Specific Checks

View File

@@ -25,9 +25,15 @@ curl -fsSL https://feynman.is/install | bash
irm https://feynman.is/install.ps1 | iex
```
The one-line installer fetches the latest tagged release. To pin a version, pass it explicitly, for example `curl -fsSL https://feynman.is/install | bash -s -- 0.2.15`.
The one-line installer fetches the latest tagged release. To pin a version, pass it explicitly, for example `curl -fsSL https://feynman.is/install | bash -s -- 0.2.31`.
If you install via `pnpm` or `bun` instead of the standalone bundle, Feynman requires Node.js `20.19.0` or newer.
The installer downloads a standalone native bundle with its own Node.js runtime.
To upgrade the standalone app later, rerun the installer. `feynman update` only refreshes installed Pi packages inside Feynman's environment; it does not replace the standalone runtime bundle itself.
To uninstall the standalone app, remove the launcher and runtime bundle, then optionally remove `~/.feynman` if you also want to delete settings, sessions, and installed package state. If you also want to delete alphaXiv login state, remove `~/.ahub`. See the installation guide for platform-specific paths.
Local models are supported through the setup flow. For LM Studio, run `feynman setup`, choose `LM Studio`, and keep the default `http://localhost:1234/v1` unless you changed the server port. For LiteLLM, choose `LiteLLM Proxy` and keep the default `http://localhost:4000/v1`. For Ollama or vLLM, choose `Custom provider (baseUrl + API key)`, use `openai-completions`, and point it at the local `/v1` endpoint.
### Skills Only
@@ -63,6 +69,8 @@ curl -fsSL https://feynman.is/install-skills | bash -s -- --repo
That installs into `.agents/skills/feynman` under the current repository.
These installers download the bundled `skills/` and `prompts/` trees plus the repo guidance files referenced by those skills. They do not install the Feynman terminal, bundled Node runtime, auth storage, or Pi packages.
---
### What you type → what happens
@@ -82,9 +90,6 @@ $ feynman audit 2401.12345
$ feynman replicate "chain-of-thought improves math"
→ Replicates experiments on local or cloud GPUs
$ feynman valichord "study-id-or-topic"
→ Runs the ValiChord reproducibility workflow or checks existing Harmony Records
```
---
@@ -100,7 +105,6 @@ Ask naturally or use slash commands as shortcuts.
| `/review <artifact>` | Simulated peer review with severity and revision plan |
| `/audit <item>` | Paper vs. codebase mismatch audit |
| `/replicate <paper>` | Replicate experiments on local or cloud GPUs |
| `/compare <topic>` | Source comparison matrix |
| `/draft <topic>` | Paper-style draft from research findings |
| `/autoresearch <idea>` | Autonomous experiment loop |
@@ -138,6 +142,18 @@ Built on [Pi](https://github.com/badlogic/pi-mono) for the agent runtime, [alpha
---
### Star History
<a href="https://www.star-history.com/?repos=getcompanion-ai%2Ffeynman&type=date&legend=top-left">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/chart?repos=getcompanion-ai/feynman&type=date&theme=dark&legend=top-left" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/chart?repos=getcompanion-ai/feynman&type=date&legend=top-left" />
<img alt="Star History Chart" src="https://api.star-history.com/chart?repos=getcompanion-ai/feynman&type=date&legend=top-left" />
</picture>
</a>
---
### Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for the full contributor guide.

View File

@@ -1,5 +1,10 @@
#!/usr/bin/env node
import { resolve } from "node:path";
import { pathToFileURL } from "node:url";
const MIN_NODE_VERSION = "20.19.0";
const MAX_NODE_MAJOR = 24;
const PREFERRED_NODE_MAJOR = 22;
function parseNodeVersion(version) {
const [major = "0", minor = "0", patch = "0"] = version.replace(/^v/, "").split(".");
@@ -16,16 +21,21 @@ function compareNodeVersions(left, right) {
return left.patch - right.patch;
}
const parsedNodeVersion = parseNodeVersion(process.versions.node);
if (compareNodeVersions(parsedNodeVersion, parseNodeVersion(MIN_NODE_VERSION)) < 0 || parsedNodeVersion.major > MAX_NODE_MAJOR) {
const isWindows = process.platform === "win32";
console.error(`feynman supports Node.js ${MIN_NODE_VERSION} through ${MAX_NODE_MAJOR}.x (detected ${process.versions.node}).`);
console.error(parsedNodeVersion.major > MAX_NODE_MAJOR
? "This newer Node release is not supported yet because native Pi packages may fail to build."
: isWindows
? "Install a supported Node.js release from https://nodejs.org, or use the standalone installer:"
: `Switch to a supported Node release with \`nvm install ${PREFERRED_NODE_MAJOR} && nvm use ${PREFERRED_NODE_MAJOR}\`, or use the standalone installer:`);
console.error(isWindows
? "irm https://feynman.is/install.ps1 | iex"
: "curl -fsSL https://feynman.is/install | bash");
process.exit(1);
}
const here = import.meta.dirname;
await import(pathToFileURL(resolve(here, "..", "scripts", "patch-embedded-pi.mjs")).href);
await import(pathToFileURL(resolve(here, "..", "dist", "index.js")).href);

View File

@@ -1,23 +1,26 @@
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
import { registerAlphaTools } from "./research-tools/alpha.js";
import { registerDiscoveryCommands } from "./research-tools/discovery.js";
import { registerFeynmanModelCommand } from "./research-tools/feynman-model.js";
import { installFeynmanHeader } from "./research-tools/header.js";
import { registerHelpCommand } from "./research-tools/help.js";
import { registerInitCommand, registerOutputsCommand } from "./research-tools/project.js";
import { registerServiceTierControls } from "./research-tools/service-tier.js";
export default function researchTools(pi: ExtensionAPI): void {
const cache: { agentSummaryPromise?: Promise<{ agents: string[]; chains: string[] }> } = {};
// Pi 0.66.x folds post-switch/resume lifecycle into session_start.
pi.on("session_start", async (_event, ctx) => {
await installFeynmanHeader(pi, ctx, cache);
});
pi.on("session_switch", async (_event, ctx) => {
await installFeynmanHeader(pi, ctx, cache);
});
registerAlphaTools(pi);
registerDiscoveryCommands(pi);
registerFeynmanModelCommand(pi);
registerHelpCommand(pi);
registerInitCommand(pi);
registerOutputsCommand(pi);
registerServiceTierControls(pi);
}

View File

@@ -0,0 +1,130 @@
import { existsSync, readFileSync } from "node:fs";
import { homedir } from "node:os";
import { resolve } from "node:path";
import type { ExtensionAPI, SlashCommandInfo, ToolInfo } from "@mariozechner/pi-coding-agent";
function resolveFeynmanSettingsPath(): string {
const configured = process.env.PI_CODING_AGENT_DIR?.trim();
const agentDir = configured
? configured.startsWith("~/")
? resolve(homedir(), configured.slice(2))
: resolve(configured)
: resolve(homedir(), ".feynman", "agent");
return resolve(agentDir, "settings.json");
}
function readConfiguredPackages(): string[] {
const settingsPath = resolveFeynmanSettingsPath();
if (!existsSync(settingsPath)) return [];
try {
const parsed = JSON.parse(readFileSync(settingsPath, "utf8")) as { packages?: unknown[] };
return Array.isArray(parsed.packages)
? parsed.packages
.map((entry) => {
if (typeof entry === "string") return entry;
if (!entry || typeof entry !== "object") return undefined;
const record = entry as { source?: unknown };
return typeof record.source === "string" ? record.source : undefined;
})
.filter((entry): entry is string => Boolean(entry))
: [];
} catch {
return [];
}
}
function formatSourceLabel(sourceInfo: { source: string; path: string }): string {
if (sourceInfo.source === "local") {
if (sourceInfo.path.includes("/prompts/")) return "workflow";
if (sourceInfo.path.includes("/extensions/")) return "extension";
return "local";
}
return sourceInfo.source.replace(/^npm:/, "").replace(/^git:/, "");
}
function formatCommandLine(command: SlashCommandInfo): string {
  const source = formatSourceLabel(command.sourceInfo);
  const description = command.description ? ` — ${command.description}` : "";
  return `/${command.name}${description} [${source}]`;
}
function summarizeToolParameters(tool: ToolInfo): string {
const properties =
tool.parameters &&
typeof tool.parameters === "object" &&
"properties" in tool.parameters &&
tool.parameters.properties &&
typeof tool.parameters.properties === "object"
? Object.keys(tool.parameters.properties as Record<string, unknown>)
: [];
return properties.length > 0 ? properties.join(", ") : "no parameters";
}
function formatToolLine(tool: ToolInfo): string {
  const source = formatSourceLabel(tool.sourceInfo);
  const description = tool.description ? ` — ${tool.description}` : "";
  return `${tool.name}${description} [${source}]`;
}
export function registerDiscoveryCommands(pi: ExtensionAPI): void {
pi.registerCommand("commands", {
description: "Browse all available slash commands, including package and built-in commands.",
handler: async (_args, ctx) => {
const commands = pi
.getCommands()
.slice()
.sort((left, right) => left.name.localeCompare(right.name));
const items = commands.map((command) => formatCommandLine(command));
const selected = await ctx.ui.select("Slash Commands", items);
if (!selected) return;
ctx.ui.setEditorText(selected.split(" — ")[0] ?? "");
ctx.ui.notify(`Prefilled ${selected.split(" — ")[0]}`, "info");
},
});
pi.registerCommand("tools", {
description: "Browse all callable tools with their source and parameter summary.",
handler: async (_args, ctx) => {
const tools = pi
.getAllTools()
.slice()
.sort((left, right) => left.name.localeCompare(right.name));
const selected = await ctx.ui.select("Tools", tools.map((tool) => formatToolLine(tool)));
if (!selected) return;
const toolName = selected.split(" — ")[0] ?? selected;
const tool = tools.find((entry) => entry.name === toolName);
if (!tool) return;
ctx.ui.notify(`${tool.name}: ${summarizeToolParameters(tool)}`, "info");
},
});
pi.registerCommand("capabilities", {
description: "Show installed packages, discovery entrypoints, and high-level runtime capability counts.",
handler: async (_args, ctx) => {
const commands = pi.getCommands();
const tools = pi.getAllTools();
const workflows = commands.filter((command) => formatSourceLabel(command.sourceInfo) === "workflow");
const packages = readConfiguredPackages();
const items = [
`Commands: ${commands.length}`,
`Workflows: ${workflows.length}`,
`Tools: ${tools.length}`,
`Packages: ${packages.length}`,
"--- Discovery ---",
"/commands — browse slash commands",
"/tools — inspect callable tools",
"/hotkeys — view keyboard shortcuts",
"/service-tier — set request tier for supported providers",
"--- Installed Packages ---",
...packages,
];
const selected = await ctx.ui.select("Capabilities", items);
if (!selected || selected.startsWith("---")) return;
if (selected.startsWith("/")) {
ctx.ui.setEditorText(selected.split(" — ")[0] ?? selected);
ctx.ui.notify(`Prefilled ${selected.split(" — ")[0]}`, "info");
}
},
});
}

View File

@@ -0,0 +1,309 @@
import { type Dirent, existsSync, readdirSync, readFileSync, writeFileSync } from "node:fs";
import { homedir } from "node:os";
import { basename, join, resolve } from "node:path";
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
const FRONTMATTER_PATTERN = /^---\n([\s\S]*?)\n---\n?([\s\S]*)$/;
const INHERIT_MAIN = "__inherit_main__";
type FrontmatterDocument = {
lines: string[];
body: string;
eol: string;
trailingNewline: boolean;
};
type SubagentModelConfig = {
agent: string;
model?: string;
filePath: string;
};
type SelectOption<T> = {
label: string;
value: T;
};
type CommandContext = Parameters<Parameters<ExtensionAPI["registerCommand"]>[1]["handler"]>[1];
type TargetChoice =
| { type: "main" }
| { type: "subagent"; agent: string; model?: string };
function expandHomePath(value: string): string {
if (value === "~") return homedir();
if (value.startsWith("~/")) return resolve(homedir(), value.slice(2));
return value;
}
function resolveFeynmanAgentDir(): string {
const configured = process.env.PI_CODING_AGENT_DIR ?? process.env.FEYNMAN_CODING_AGENT_DIR;
if (configured?.trim()) {
return resolve(expandHomePath(configured.trim()));
}
return resolve(homedir(), ".feynman", "agent");
}
function formatModelSpec(model: { provider: string; id: string }): string {
return `${model.provider}/${model.id}`;
}
function detectEol(text: string): string {
return text.includes("\r\n") ? "\r\n" : "\n";
}
function normalizeLineEndings(text: string): string {
return text.replace(/\r\n/g, "\n");
}
function parseFrontmatterDocument(text: string): FrontmatterDocument | null {
const normalized = normalizeLineEndings(text);
const match = normalized.match(FRONTMATTER_PATTERN);
if (!match) return null;
return {
lines: match[1].split("\n"),
body: match[2] ?? "",
eol: detectEol(text),
trailingNewline: normalized.endsWith("\n"),
};
}
function serializeFrontmatterDocument(document: FrontmatterDocument): string {
const normalized = `---\n${document.lines.join("\n")}\n---\n${document.body}`;
const withTrailingNewline =
document.trailingNewline && !normalized.endsWith("\n") ? `${normalized}\n` : normalized;
return document.eol === "\n" ? withTrailingNewline : withTrailingNewline.replace(/\n/g, "\r\n");
}
function parseFrontmatterKey(line: string): string | undefined {
const match = line.match(/^\s*([A-Za-z0-9_-]+)\s*:/);
return match?.[1]?.toLowerCase();
}
function getFrontmatterValue(lines: string[], key: string): string | undefined {
const normalizedKey = key.toLowerCase();
for (const line of lines) {
const parsedKey = parseFrontmatterKey(line);
if (parsedKey !== normalizedKey) continue;
const separatorIndex = line.indexOf(":");
if (separatorIndex === -1) return undefined;
const value = line.slice(separatorIndex + 1).trim();
return value.length > 0 ? value : undefined;
}
return undefined;
}
function upsertFrontmatterValue(lines: string[], key: string, value: string): string[] {
const normalizedKey = key.toLowerCase();
const nextLines = [...lines];
const existingIndex = nextLines.findIndex((line) => parseFrontmatterKey(line) === normalizedKey);
const serialized = `${key}: ${value}`;
if (existingIndex !== -1) {
nextLines[existingIndex] = serialized;
return nextLines;
}
const descriptionIndex = nextLines.findIndex((line) => parseFrontmatterKey(line) === "description");
const nameIndex = nextLines.findIndex((line) => parseFrontmatterKey(line) === "name");
const insertIndex = descriptionIndex !== -1 ? descriptionIndex + 1 : nameIndex !== -1 ? nameIndex + 1 : nextLines.length;
nextLines.splice(insertIndex, 0, serialized);
return nextLines;
}
function removeFrontmatterKey(lines: string[], key: string): string[] {
const normalizedKey = key.toLowerCase();
return lines.filter((line) => parseFrontmatterKey(line) !== normalizedKey);
}
function normalizeAgentName(name: string): string {
return name.trim().toLowerCase();
}
function getAgentsDir(agentDir: string): string {
return join(agentDir, "agents");
}
function listAgentFiles(agentsDir: string): string[] {
if (!existsSync(agentsDir)) return [];
return readdirSync(agentsDir, { withFileTypes: true })
.filter((entry: Dirent) => (entry.isFile() || entry.isSymbolicLink()) && entry.name.endsWith(".md"))
.filter((entry) => !entry.name.endsWith(".chain.md"))
.map((entry) => join(agentsDir, entry.name));
}
function readAgentConfig(filePath: string): SubagentModelConfig {
const content = readFileSync(filePath, "utf8");
const parsed = parseFrontmatterDocument(content);
const fallbackName = basename(filePath, ".md");
if (!parsed) return { agent: fallbackName, filePath };
return {
agent: getFrontmatterValue(parsed.lines, "name") ?? fallbackName,
model: getFrontmatterValue(parsed.lines, "model"),
filePath,
};
}
function listSubagentModelConfigs(agentDir: string): SubagentModelConfig[] {
return listAgentFiles(getAgentsDir(agentDir))
.map((filePath) => readAgentConfig(filePath))
.sort((left, right) => left.agent.localeCompare(right.agent));
}
function findAgentConfig(configs: SubagentModelConfig[], agentName: string): SubagentModelConfig | undefined {
const normalized = normalizeAgentName(agentName);
return (
configs.find((config) => normalizeAgentName(config.agent) === normalized) ??
configs.find((config) => normalizeAgentName(basename(config.filePath, ".md")) === normalized)
);
}
function getAgentConfigOrThrow(agentDir: string, agentName: string): SubagentModelConfig {
const configs = listSubagentModelConfigs(agentDir);
const target = findAgentConfig(configs, agentName);
if (target) return target;
if (configs.length === 0) {
throw new Error(`No subagent definitions found in ${getAgentsDir(agentDir)}.`);
}
const availableAgents = configs.map((config) => config.agent).join(", ");
throw new Error(`Unknown subagent: ${agentName}. Available agents: ${availableAgents}`);
}
function setSubagentModel(agentDir: string, agentName: string, modelSpec: string): void {
const normalizedModelSpec = modelSpec.trim();
if (!normalizedModelSpec) throw new Error("Model spec cannot be empty.");
const target = getAgentConfigOrThrow(agentDir, agentName);
const content = readFileSync(target.filePath, "utf8");
const parsed = parseFrontmatterDocument(content);
if (!parsed) {
const eol = detectEol(content);
const injected = `---${eol}name: ${target.agent}${eol}model: ${normalizedModelSpec}${eol}---${eol}${content}`;
writeFileSync(target.filePath, injected, "utf8");
return;
}
const nextLines = upsertFrontmatterValue(parsed.lines, "model", normalizedModelSpec);
if (nextLines.join("\n") !== parsed.lines.join("\n")) {
writeFileSync(target.filePath, serializeFrontmatterDocument({ ...parsed, lines: nextLines }), "utf8");
}
}
function unsetSubagentModel(agentDir: string, agentName: string): void {
const target = getAgentConfigOrThrow(agentDir, agentName);
const content = readFileSync(target.filePath, "utf8");
const parsed = parseFrontmatterDocument(content);
if (!parsed) return;
const nextLines = removeFrontmatterKey(parsed.lines, "model");
if (nextLines.join("\n") !== parsed.lines.join("\n")) {
writeFileSync(target.filePath, serializeFrontmatterDocument({ ...parsed, lines: nextLines }), "utf8");
}
}
async function selectOption<T>(
ctx: CommandContext,
title: string,
options: SelectOption<T>[],
): Promise<T | undefined> {
const selected = await ctx.ui.select(
title,
options.map((option) => option.label),
);
if (!selected) return undefined;
return options.find((option) => option.label === selected)?.value;
}
export function registerFeynmanModelCommand(pi: ExtensionAPI): void {
pi.registerCommand("feynman-model", {
description: "Open Feynman model menu (main + per-subagent overrides).",
handler: async (_args, ctx) => {
if (!ctx.hasUI) {
ctx.ui.notify("feynman-model requires interactive mode.", "error");
return;
}
try {
ctx.modelRegistry.refresh();
const availableModels = [...ctx.modelRegistry.getAvailable()].sort((left, right) =>
formatModelSpec(left).localeCompare(formatModelSpec(right)),
);
if (availableModels.length === 0) {
ctx.ui.notify("No models available.", "error");
return;
}
const agentDir = resolveFeynmanAgentDir();
const subagentConfigs = listSubagentModelConfigs(agentDir);
const currentMain = ctx.model ? formatModelSpec(ctx.model) : "(none)";
const targetOptions: SelectOption<TargetChoice>[] = [
{ label: `main (default): ${currentMain}`, value: { type: "main" } },
...subagentConfigs.map((config) => ({
label: `${config.agent}: ${config.model ?? "default"}`,
value: { type: "subagent" as const, agent: config.agent, model: config.model },
})),
];
const target = await selectOption(ctx, "Choose target", targetOptions);
if (!target) return;
if (target.type === "main") {
const selectedModel = await selectOption(
ctx,
"Select main model",
availableModels.map((model) => {
const spec = formatModelSpec(model);
const suffix = spec === currentMain ? " (current)" : "";
return { label: `${spec}${suffix}`, value: model };
}),
);
if (!selectedModel) return;
const success = await pi.setModel(selectedModel);
if (!success) {
ctx.ui.notify(`No API key found for ${selectedModel.provider}.`, "error");
return;
}
ctx.ui.notify(`Main model set to ${formatModelSpec(selectedModel)}.`, "info");
return;
}
const selectedSubagentModel = await selectOption(
ctx,
`Select model for ${target.agent}`,
[
{
label: target.model ? "(inherit main default)" : "(inherit main default) (current)",
value: INHERIT_MAIN,
},
...availableModels.map((model) => {
const spec = formatModelSpec(model);
const suffix = spec === target.model ? " (current)" : "";
return { label: `${spec}${suffix}`, value: spec };
}),
],
);
if (!selectedSubagentModel) return;
if (selectedSubagentModel === INHERIT_MAIN) {
unsetSubagentModel(agentDir, target.agent);
ctx.ui.notify(`${target.agent} now inherits the main model.`, "info");
return;
}
setSubagentModel(agentDir, target.agent, selectedSubagentModel);
ctx.ui.notify(`${target.agent} model set to ${selectedSubagentModel}.`, "info");
} catch (error) {
ctx.ui.notify(error instanceof Error ? error.message : String(error), "error");
}
},
});
}

View File

@@ -4,6 +4,7 @@ import { execSync } from "node:child_process";
import { resolve as resolvePath } from "node:path";
import type { ExtensionAPI, ExtensionContext } from "@mariozechner/pi-coding-agent";
import { truncateToWidth, visibleWidth } from "@mariozechner/pi-tui";
import {
APP_ROOT,
@@ -11,10 +12,8 @@ import {
FEYNMAN_VERSION,
} from "./shared.js";
const ANSI_RE = /\x1b\[[0-9;]*m/g;
function visibleLength(text: string): number {
return visibleWidth(text);
}
function formatHeaderPath(path: string): string {
@@ -23,10 +22,8 @@ function formatHeaderPath(path: string): string {
}
function truncateVisible(text: string, maxVisible: number): string {
if (visibleWidth(text) <= maxVisible) return text;
return truncateToWidth(text, maxVisible, maxVisible <= 3 ? "" : "...");
}
function wrapWords(text: string, maxW: number): string[] {
@@ -34,12 +31,12 @@ function wrapWords(text: string, maxW: number): string[] {
const lines: string[] = [];
let cur = "";
for (let word of words) {
if (visibleWidth(word) > maxW) {
if (cur) { lines.push(cur); cur = ""; }
word = truncateToWidth(word, maxW, maxW > 3 ? "…" : "");
}
const test = cur ? `${cur} ${word}` : word;
if (cur && visibleWidth(test) > maxW) {
lines.push(cur);
cur = word;
} else {
@@ -56,9 +53,10 @@ function padRight(text: string, width: number): string {
}
function centerText(text: string, width: number): string {
const textWidth = visibleWidth(text);
if (textWidth >= width) return truncateToWidth(text, width, "");
const left = Math.floor((width - textWidth) / 2);
const right = width - textWidth - left;
return `${" ".repeat(left)}${text}${" ".repeat(right)}`;
}
@@ -287,8 +285,8 @@ export function installFeynmanHeader(
if (activity) {
const maxActivityLen = leftW * 2;
const trimmed = visibleWidth(activity) > maxActivityLen
? truncateToWidth(activity, maxActivityLen, "…")
: activity;
leftLines.push("");
leftLines.push(theme.fg("accent", theme.bold("Last Activity")));

View File

@@ -0,0 +1,174 @@
import { homedir } from "node:os";
import { readFileSync, writeFileSync } from "node:fs";
import { resolve } from "node:path";
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
const FEYNMAN_SERVICE_TIERS = [
"auto",
"default",
"flex",
"priority",
"standard_only",
] as const;
type FeynmanServiceTier = (typeof FEYNMAN_SERVICE_TIERS)[number];
const SERVICE_TIER_SET = new Set<string>(FEYNMAN_SERVICE_TIERS);
const OPENAI_SERVICE_TIERS = new Set<FeynmanServiceTier>(["auto", "default", "flex", "priority"]);
const ANTHROPIC_SERVICE_TIERS = new Set<FeynmanServiceTier>(["auto", "standard_only"]);
type CommandContext = Parameters<Parameters<ExtensionAPI["registerCommand"]>[1]["handler"]>[1];
type SelectOption<T> = {
label: string;
value: T;
};
function resolveFeynmanSettingsPath(): string {
const configured = process.env.PI_CODING_AGENT_DIR?.trim();
const agentDir = configured
? configured.startsWith("~/")
? resolve(homedir(), configured.slice(2))
: resolve(configured)
: resolve(homedir(), ".feynman", "agent");
return resolve(agentDir, "settings.json");
}
function normalizeServiceTier(value: string | undefined): FeynmanServiceTier | undefined {
if (!value) return undefined;
const normalized = value.trim().toLowerCase();
return SERVICE_TIER_SET.has(normalized) ? (normalized as FeynmanServiceTier) : undefined;
}
function getConfiguredServiceTier(settingsPath: string): FeynmanServiceTier | undefined {
try {
const parsed = JSON.parse(readFileSync(settingsPath, "utf8")) as { serviceTier?: string };
return normalizeServiceTier(parsed.serviceTier);
} catch {
return undefined;
}
}
function setConfiguredServiceTier(settingsPath: string, tier: FeynmanServiceTier | undefined): void {
let settings: Record<string, unknown> = {};
try {
settings = JSON.parse(readFileSync(settingsPath, "utf8")) as Record<string, unknown>;
} catch {}
if (tier) {
settings.serviceTier = tier;
} else {
delete settings.serviceTier;
}
writeFileSync(settingsPath, JSON.stringify(settings, null, 2) + "\n", "utf8");
}
function resolveActiveServiceTier(settingsPath: string): FeynmanServiceTier | undefined {
return normalizeServiceTier(process.env.FEYNMAN_SERVICE_TIER) ?? getConfiguredServiceTier(settingsPath);
}
function resolveProviderServiceTier(
provider: string | undefined,
tier: FeynmanServiceTier | undefined,
): FeynmanServiceTier | undefined {
if (!provider || !tier) return undefined;
if ((provider === "openai" || provider === "openai-codex") && OPENAI_SERVICE_TIERS.has(tier)) {
return tier;
}
if (provider === "anthropic" && ANTHROPIC_SERVICE_TIERS.has(tier)) {
return tier;
}
return undefined;
}
async function selectOption<T>(
ctx: CommandContext,
title: string,
options: SelectOption<T>[],
): Promise<T | undefined> {
const selected = await ctx.ui.select(
title,
options.map((option) => option.label),
);
if (!selected) return undefined;
return options.find((option) => option.label === selected)?.value;
}
function parseRequestedTier(rawArgs: string): FeynmanServiceTier | null | undefined {
const trimmed = rawArgs.trim();
if (!trimmed) return undefined;
if (trimmed === "unset" || trimmed === "clear" || trimmed === "off") return null;
return normalizeServiceTier(trimmed);
}
export function registerServiceTierControls(pi: ExtensionAPI): void {
pi.on("before_provider_request", (event, ctx) => {
if (!ctx.model || !event.payload || typeof event.payload !== "object") {
return;
}
const activeTier = resolveActiveServiceTier(resolveFeynmanSettingsPath());
const providerTier = resolveProviderServiceTier(ctx.model.provider, activeTier);
if (!providerTier) {
return;
}
return {
...(event.payload as Record<string, unknown>),
service_tier: providerTier,
};
});
pi.registerCommand("service-tier", {
description: "View or set the provider service tier override used for supported models.",
handler: async (args, ctx) => {
const settingsPath = resolveFeynmanSettingsPath();
const requested = parseRequestedTier(args);
if (requested === undefined && !args.trim()) {
if (!ctx.hasUI) {
ctx.ui.notify(getConfiguredServiceTier(settingsPath) ?? "not set", "info");
return;
}
const current = getConfiguredServiceTier(settingsPath);
const selected = await selectOption(
ctx,
"Select service tier",
[
{ label: current ? `unset (current: ${current})` : "unset (current)", value: null },
...FEYNMAN_SERVICE_TIERS.map((tier) => ({
label: tier === current ? `${tier} (current)` : tier,
value: tier,
})),
],
);
if (selected === undefined) return;
if (selected === null) {
setConfiguredServiceTier(settingsPath, undefined);
ctx.ui.notify("Cleared service tier override.", "info");
return;
}
setConfiguredServiceTier(settingsPath, selected);
ctx.ui.notify(`Service tier set to ${selected}.`, "info");
return;
}
if (requested === null) {
setConfiguredServiceTier(settingsPath, undefined);
ctx.ui.notify("Cleared service tier override.", "info");
return;
}
if (!requested) {
ctx.ui.notify("Use auto, default, flex, priority, standard_only, or unset.", "error");
return;
}
setConfiguredServiceTier(settingsPath, requested);
ctx.ui.notify(`Service tier set to ${requested}.`, "info");
},
});
}

View File

@@ -35,9 +35,14 @@ export function readPromptSpecs(appRoot) {
}
export const extensionCommandSpecs = [
{ name: "capabilities", args: "", section: "Project & Session", description: "Show installed packages, discovery entrypoints, and runtime capability counts.", publicDocs: true },
{ name: "commands", args: "", section: "Project & Session", description: "Browse all available slash commands, including built-in and package commands.", publicDocs: true },
{ name: "help", args: "", section: "Project & Session", description: "Show grouped Feynman commands and prefill the editor with a selected command.", publicDocs: true },
{ name: "feynman-model", args: "", section: "Project & Session", description: "Open Feynman model menu (main + per-subagent overrides).", publicDocs: true },
{ name: "init", args: "", section: "Project & Session", description: "Bootstrap AGENTS.md and session-log folders for a research project.", publicDocs: true },
{ name: "outputs", args: "", section: "Project & Session", description: "Browse all research artifacts (papers, outputs, experiments, notes).", publicDocs: true },
{ name: "service-tier", args: "", section: "Project & Session", description: "View or set the provider service tier override for supported models.", publicDocs: true },
{ name: "tools", args: "", section: "Project & Session", description: "Browse all callable tools with their source and parameter summary.", publicDocs: true },
];
export const livePackageCommandGroups = [
@@ -57,6 +62,7 @@ export const livePackageCommandGroups = [
{ name: "schedule-prompt", usage: "/schedule-prompt" },
{ name: "search", usage: "/search" },
{ name: "preview", usage: "/preview" },
{ name: "hotkeys", usage: "/hotkeys" },
{ name: "new", usage: "/new" },
{ name: "quit", usage: "/quit" },
{ name: "exit", usage: "/exit" },
@@ -80,9 +86,10 @@ export const cliCommandSections = [
title: "Model Management",
commands: [
{ usage: "feynman model list", description: "List available models in Pi auth storage." },
{ usage: "feynman model login [id]", description: "Login to a Pi OAuth model provider." },
{ usage: "feynman model logout [id]", description: "Logout from a Pi OAuth model provider." },
{ usage: "feynman model set <provider/model>", description: "Set the default model." },
{ usage: "feynman model login [id]", description: "Authenticate a model provider with OAuth or API-key setup." },
{ usage: "feynman model logout [id]", description: "Clear stored auth for a model provider." },
{ usage: "feynman model set <provider/model>", description: "Set the default model (also accepts provider:model)." },
{ usage: "feynman model tier [value]", description: "View or set the request service tier override." },
],
},
{
@@ -99,6 +106,8 @@ export const cliCommandSections = [
{ usage: "feynman packages list", description: "Show core and optional Pi package presets." },
{ usage: "feynman packages install <preset>", description: "Install optional package presets on demand." },
{ usage: "feynman search status", description: "Show Pi web-access status and config path." },
{ usage: "feynman search set <provider> [api-key]", description: "Set the web search provider and optionally save its API key." },
{ usage: "feynman search clear", description: "Reset web search provider to auto while preserving API keys." },
{ usage: "feynman update [package]", description: "Update installed packages, or a specific package." },
],
},
@@ -109,7 +118,8 @@ export const legacyFlags = [
{ usage: "--alpha-login", description: "Sign in to alphaXiv and exit." },
{ usage: "--alpha-logout", description: "Clear alphaXiv auth and exit." },
{ usage: "--alpha-status", description: "Show alphaXiv auth status and exit." },
{ usage: "--model <provider:model>", description: "Force a specific model." },
{ usage: "--model <provider/model|provider:model>", description: "Force a specific model." },
{ usage: "--service-tier <tier>", description: "Override request service tier for this run." },
{ usage: "--thinking <level>", description: "Set thinking level: off | minimal | low | medium | high | xhigh." },
{ usage: "--cwd <path>", description: "Set the working directory for tools." },
{ usage: "--session-dir <path>", description: "Set the session storage directory." },

package-lock.json (generated, 1180 changed lines)

File diff suppressed because it is too large.

View File

@@ -1,11 +1,11 @@
{
"name": "@companion-ai/feynman",
"version": "0.2.15",
"version": "0.2.34",
"description": "Research-first CLI agent built on Pi and alphaXiv",
"license": "MIT",
"type": "module",
"engines": {
"node": ">=20.19.0"
"node": ">=20.19.0 <25"
},
"bin": {
"feynman": "bin/feynman.js"
@@ -59,14 +59,38 @@
]
},
"dependencies": {
"@companion-ai/alpha-hub": "^0.1.2",
"@mariozechner/pi-ai": "^0.62.0",
"@mariozechner/pi-coding-agent": "^0.62.0",
"@sinclair/typebox": "^0.34.48",
"dotenv": "^17.3.1"
"@clack/prompts": "^1.2.0",
"@companion-ai/alpha-hub": "^0.1.3",
"@mariozechner/pi-ai": "^0.67.6",
"@mariozechner/pi-coding-agent": "^0.67.6",
"@sinclair/typebox": "^0.34.49",
"dotenv": "^17.4.2"
},
"overrides": {
"basic-ftp": "5.3.0",
"@modelcontextprotocol/sdk": {
"@hono/node-server": "1.19.14",
"hono": "4.12.14"
},
"express": {
"router": {
"path-to-regexp": "8.4.2"
}
},
"proxy-agent": {
"pac-proxy-agent": {
"get-uri": {
"basic-ftp": "5.3.0"
}
}
},
"protobufjs": "7.5.5",
"minimatch": {
"brace-expansion": "5.0.5"
}
},
"devDependencies": {
"@types/node": "^25.5.0",
"@types/node": "^25.6.0",
"tsx": "^4.21.0",
"typescript": "^5.9.3"
},

View File

@@ -9,7 +9,7 @@ Audit the paper and codebase for: $@
Derive a short slug from the audit target (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
Requirements:
- Before starting, outline the audit plan: which paper, which repo, which claims to check. Write the plan to `outputs/.plans/<slug>.md`. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
- Use the `researcher` subagent for evidence gathering and the `verifier` subagent to verify sources and add inline citations when the audit is non-trivial.
- Compare claimed methods, defaults, metrics, and data handling against the actual code.
- Call out missing code, mismatches, ambiguous defaults, and reproduction risks.

View File

@@ -9,7 +9,7 @@ Compare sources for: $@
Derive a short slug from the comparison topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
Requirements:
- Before starting, outline the comparison plan: which sources to compare, which dimensions to evaluate, expected output structure. Write the plan to `outputs/.plans/<slug>.md`. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
- Use the `researcher` subagent to gather source material when the comparison set is broad, and the `verifier` subagent to verify sources and add inline citations to the final matrix.
- Build a comparison matrix covering: source, key claim, evidence type, caveats, confidence.
- Generate charts with `pi-charts` when the comparison involves quantitative metrics. Use Mermaid for method or architecture comparisons.

View File

@@ -4,186 +4,177 @@ args: <topic>
section: Research Workflows
topLevelCli: true
---
Run deep research for: $@
You are the Lead Researcher. You plan, delegate, evaluate, verify, write, and cite. Internal orchestration is invisible to the user unless they ask.
This is an execution request, not a request to explain or implement the workflow instructions.
Execute the workflow. Do not answer by describing the protocol, do not explain these instructions, do not restate the protocol, and do not ask for confirmation. Do not stop after planning. Your first actions should be tool calls that create directories and write the plan artifact.
## Required Artifacts
Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 words — e.g. "cloud-sandbox-pricing" not "deepresearch-plan"). Write the plan to `outputs/.plans/<slug>.md` as a self-contained artifact. Use this same slug for all artifacts in this run.
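A minimal sketch of that slug rule, assuming an illustrative filler-word list (the prompt does not prescribe one):
```ts
// Sketch: lowercase, hyphens, filler words dropped, at most 5 words.
const FILLER = new Set(["a", "an", "the", "of", "for", "and", "to", "in", "on"]);

function deriveSlug(topic: string): string {
  return topic
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, "") // strip punctuation
    .split(/\s+/)
    .filter((word) => word && !FILLER.has(word))
    .slice(0, 5) // at most 5 words
    .join("-");
}

// deriveSlug("Pricing of the Cloud Sandbox Market") -> "pricing-cloud-sandbox-market"
```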
If `CHANGELOG.md` exists, read the most recent relevant entries before finalizing the plan. Once the workflow becomes multi-round or spans enough work to merit resume support, append concise entries to `CHANGELOG.md` after meaningful progress and before stopping.
Every run must leave these files on disk:
- `outputs/.plans/<slug>.md`
- `outputs/.drafts/<slug>-draft.md`
- `outputs/.drafts/<slug>-cited.md`
- `outputs/<slug>.md` or `papers/<slug>.md`
- `outputs/<slug>.provenance.md` or `papers/<slug>.provenance.md`
If any capability fails, continue in degraded mode and still write a blocked or partial final output and provenance sidecar. Never end with chat-only output. Never end with only an explanation in chat. Use `Verification: BLOCKED` when verification could not be completed.
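A minimal sketch of the on-disk check this implies, with paths mirroring the list above (the `outputs/`-or-`papers/` alternatives follow the rule stated there):
```ts
// Sketch: report which required artifacts are missing before responding.
import { existsSync } from "node:fs";

function missingArtifacts(slug: string): string[] {
  const required = [
    `outputs/.plans/${slug}.md`,
    `outputs/.drafts/${slug}-draft.md`,
    `outputs/.drafts/${slug}-cited.md`,
  ];
  const missing = required.filter((path) => !existsSync(path));
  // The final report and its provenance sidecar may live in outputs/ or papers/.
  const finals: [string, string][] = [
    [`outputs/${slug}.md`, `papers/${slug}.md`],
    [`outputs/${slug}.provenance.md`, `papers/${slug}.provenance.md`],
  ];
  for (const [a, b] of finals) {
    if (!existsSync(a) && !existsSync(b)) missing.push(`${a} or ${b}`);
  }
  return missing;
}
```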
## Step 1: Plan
Create `outputs/.plans/<slug>.md` immediately. The plan must include:
- Key questions
- Evidence needed
- Scale decision
- Task ledger
- Verification log
- Decision log
Make the scale decision before assigning owners in the plan. If the topic is a narrow "what is X" explainer, the plan must use lead-owned direct search tasks only; do not allocate researcher subagents in the task ledger.
Also save the plan with `memory_remember` using key `deepresearch.<slug>.plan` if that tool is available. If it is not available, continue without it.
After writing the plan, continue immediately. Do not pause for approval.
## Step 2: Scale
Use direct search for:
- Single fact or narrow question, including "what is X" explainers
- Work you can answer with 3-10 tool calls
For "what is X" explainer topics, you MUST NOT spawn researcher subagents unless the user explicitly asks for comprehensive coverage, current landscape, benchmarks, or production deployment.
Do not inflate a simple explainer into a multi-agent survey.
Use subagents only when decomposition clearly helps:
- Direct comparison of 2-3 items: 2 `researcher` subagents
- Broad survey or multi-faceted topic: 3-4 `researcher` subagents
- Complex multi-domain research: 4-6 `researcher` subagents
| Query type | Execution |
|---|---|
| Single fact or narrow question | Search directly yourself, no subagents, 3-10 tool calls |
| Direct comparison (2-3 items) | 2 parallel `researcher` subagents |
| Broad survey or multi-faceted topic | 3-4 parallel `researcher` subagents |
| Complex multi-domain research | 4-6 parallel `researcher` subagents |
## Step 3: Gather Evidence
Never spawn subagents for work you can do in 5 tool calls.
Avoid crash-prone PDF parsing in this workflow. Do not call `alpha_get_paper` and do not fetch `.pdf` URLs unless the user explicitly asks for PDF extraction. Prefer paper metadata, abstracts, HTML pages, official docs, and web snippets. If only a PDF exists, cite the PDF URL from search metadata and mark full-text PDF parsing as blocked instead of fetching it.
If direct search was chosen:
- Skip researcher spawning entirely.
- Search and fetch sources yourself.
- Use multiple search terms/angles before drafting. Minimum: 3 distinct queries for direct-mode research, covering definition/history, mechanism/formula, and current usage/comparison when relevant.
- Record the exact search terms used in `<slug>-research-direct.md`.
- Write notes to `<slug>-research-direct.md`.
- Continue to synthesis.
Launch parallel `researcher` subagents via `subagent`. Each gets a structured brief with:
- **Objective:** what to find
- **Output format:** numbered sources, evidence table, inline source references
- **Tool guidance:** which search tools to prioritize
- **Task boundaries:** what NOT to cover (another researcher handles that)
- **Task IDs:** the specific ledger rows they own and must report back on
If subagents were chosen:
- Write a per-researcher brief first, such as `outputs/.plans/<slug>-T1.md`.
- Keep `subagent` tool-call JSON small and valid.
- Do not place multi-paragraph instructions inside the `subagent` JSON.
- Use only supported `subagent` keys. Do not add extra keys such as `artifacts` unless the tool schema explicitly exposes them.
- Always set `failFast: false`.
- Do not name exact tool commands in subagent tasks unless those tool names are visible in the current tool set.
- Prefer broad guidance such as "use paper search and web search"; if a PDF parser or paper fetch fails, the researcher must continue from metadata, abstracts, and web sources and mark PDF parsing as blocked.
Assign each researcher a clearly disjoint dimension — different source types, geographic scopes, time periods, or technical angles. Never duplicate coverage.
Example shape:
```json
{
  "tasks": [
    { "agent": "researcher", "task": "Read outputs/.plans/<slug>-T1.md and write <slug>-research-web.md.", "output": "<slug>-research-web.md" },
    { "agent": "researcher", "task": "Read outputs/.plans/<slug>-T2.md and write <slug>-research-papers.md.", "output": "<slug>-research-papers.md" }
  ],
  "concurrency": 4,
  "failFast": false
}
```
Researchers write full outputs to files and pass references back — do not have them return full content into your context.
Researchers must not silently merge or skip assigned tasks. If something is impossible or redundant, mark the ledger row `blocked` or `superseded` with a note.
After evidence gathering, update the plan ledger and verification log. If research failed, record exactly what failed and proceed with a blocked or partial draft.
## Step 4: Draft
After researchers return, read their output files and critically assess:
- Which plan questions remain unanswered?
- Which answers rest on only one source?
- Are there contradictions needing resolution?
- Is any key angle missing entirely?
- Did every assigned ledger task actually get completed, blocked, or explicitly superseded?
Write the report yourself. Do not delegate synthesis.
If gaps are significant, spawn another targeted batch of researchers. No fixed cap on rounds — iterate until evidence is sufficient or sources are exhausted.
Save to `outputs/.drafts/<slug>-draft.md`.
Update the plan artifact (`outputs/.plans/<slug>.md`) task ledger, verification log, and decision log after each round.
When the work spans multiple rounds, also append a concise chronological entry to `CHANGELOG.md` covering what changed, what was verified, what remains blocked, and the next recommended step.
Include:
- Executive summary
- Findings organized by question/theme
- Evidence-backed caveats and disagreements
- Open questions
- No invented sources, results, figures, benchmarks, images, charts, or tables
Most topics need 1-2 rounds. Stop when additional rounds would not materially change conclusions.
Before citation, sweep the draft:
- Every critical claim, number, figure, table, or benchmark must map to a source URL, research note, raw artifact path, or command/script output.
- Remove or downgrade unsupported claims.
- Mark inferences as inferences.
## Step 5: Cite
If direct search/no researcher subagents was chosen:
- Do citation yourself.
- Verify reachable HTML/doc URLs with available fetch/search tools.
- Copy or rewrite `outputs/.drafts/<slug>-draft.md` to `outputs/.drafts/<slug>-cited.md` with inline citations and a Sources section.
- Do not spawn the `verifier` subagent for simple direct-search runs.
If researcher subagents were used, run the `verifier` agent after the draft exists. This step is mandatory and must complete before any reviewer runs. Do not run the `verifier` and `reviewer` in the same parallel `subagent` call.
Use this shape:
```json
{
"agent": "verifier",
"task": "Add inline citations to outputs/.drafts/<slug>-draft.md using the research files as source material. Verify every URL. Write the complete cited brief to outputs/.drafts/<slug>-cited.md.",
"output": "outputs/.drafts/<slug>-cited.md"
}
```
When the research includes quantitative data (benchmarks, performance comparisons, trends), generate charts using `pi-charts`. Use Mermaid diagrams for architectures and processes. Every visual must have a caption and reference the underlying data.
After the verifier returns, verify on disk that `outputs/.drafts/<slug>-cited.md` exists. If the verifier wrote elsewhere, find the cited file and move or copy it to `outputs/.drafts/<slug>-cited.md`.
Before finalizing the draft, do a claim sweep:
- map each critical claim, number, and figure to its supporting source or artifact in the verification log
- downgrade or remove anything that cannot be grounded
- label inferences as inferences
- if code or calculations were involved, record which checks were actually run and which remain unverified
## Step 6: Review
If direct search/no researcher subagents was chosen:
- Review the cited draft yourself.
- Write `<slug>-verification.md` with FATAL / MAJOR / MINOR findings and the checks performed.
- Fix FATAL issues before delivery.
- Do not spawn the `reviewer` subagent for simple direct-search runs.
If researcher subagents were used, only after `outputs/.drafts/<slug>-cited.md` exists, run the `reviewer` agent against it.
Use this shape:
```json
{
"agent": "reviewer",
"task": "Verify outputs/.drafts/<slug>-cited.md. Flag unsupported claims, logical gaps, single-source critical claims, and overstated confidence. This is a verification pass, not a peer review.",
"output": "<slug>-verification.md"
}
```
If the reviewer flags FATAL issues, fix them before delivery and run one more review pass. Note MAJOR issues in Open Questions. Accept MINOR issues.
When applying reviewer fixes, do not issue one giant `edit` tool call with many replacements. Use small localized edits only for 1-3 simple corrections. For section rewrites, table rewrites, or more than 3 substantive fixes, read the cited draft and write a corrected full file to `outputs/.drafts/<slug>-revised.md` instead.
The final candidate is `outputs/.drafts/<slug>-revised.md` if it exists; otherwise it is `outputs/.drafts/<slug>-cited.md`.
## Step 7: Deliver
Copy the final candidate to:
- `papers/<slug>.md` for paper-style drafts
- `outputs/<slug>.md` for everything else
Write provenance next to it as `<slug>.provenance.md`:
```markdown
# Provenance: [topic]
- **Date:** [date]
- **Rounds:** [number of research rounds]
- **Sources consulted:** [count and/or list]
- **Sources accepted:** [count and/or list]
- **Sources rejected:** [dead, unverifiable, or removed]
- **Verification:** [PASS / PASS WITH NOTES / BLOCKED]
- **Plan:** outputs/.plans/<slug>.md
- **Research files:** [files used]
```
## Background execution
Before responding, verify on disk that all required artifacts exist. If verification could not be completed, set `Verification: BLOCKED` or `PASS WITH NOTES` and list the missing checks.
If the user wants unattended execution or the sweep will clearly take a while:
- Launch the full workflow via `subagent` using `clarify: false, async: true`
- Report the async ID and how to check status with `subagent_status`
Final response should be brief: link the final file, provenance file, and any blocked checks.

View File

@@ -9,11 +9,12 @@ Write a paper-style draft for: $@
Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
Requirements:
- Before writing, outline the draft structure: proposed title, sections, key claims to make, source material to draw from, and a verification log for the critical claims, figures, and calculations. Write the outline to `outputs/.plans/<slug>.md`. Briefly summarize the outline to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested outline review.
- Use the `writer` subagent when the draft should be produced from already-collected notes, then use the `verifier` subagent to add inline citations and verify sources.
- Include at minimum: title, abstract, problem statement, related work, method or synthesis, evidence or experiments, limitations, conclusion.
- Use clean Markdown with LaTeX where equations materially help.
- Follow the system prompt's provenance rules for all results, figures, charts, images, tables, benchmarks, and quantitative comparisons. If evidence is missing, leave a placeholder or proposed experimental plan instead of claiming an outcome.
- Generate charts with `pi-charts` only for source-backed quantitative data, benchmarks, and comparisons. Use Mermaid for architectures and pipelines only when the structure is supported by sources. Every figure needs a provenance-bearing caption.
- Before delivery, sweep the draft for any claim that sounds stronger than its support. Mark tentative results as tentative and remove unsupported numerics instead of letting the verifier discover them later.
- Save exactly one draft to `papers/<slug>.md`.
- End with a `Sources` appendix with direct URLs for all primary references.

View File

@@ -10,9 +10,9 @@ Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 wo
## Workflow
1. **Plan** — Outline the scope: key questions, source types to search (papers, web, repos), time period, expected sections, and a small task ledger plus verification log. Write the plan to `outputs/.plans/<slug>.md`. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
2. **Gather** — Use the `researcher` subagent when the sweep is wide enough to benefit from delegated paper triage before synthesis. For narrow topics, search directly. Researcher outputs go to `<slug>-research-*.md`. Do not silently skip assigned questions; mark them `done`, `blocked`, or `superseded`.
3. **Synthesize** — Separate consensus, disagreements, and open questions. When useful, propose concrete next experiments or follow-up reading. Generate charts with `pi-charts` for quantitative comparisons across papers and Mermaid diagrams for taxonomies or method pipelines. Before finishing the draft, sweep every strong claim against the verification log and downgrade anything that is inferred or single-source critical.
4. **Cite** — Spawn the `verifier` agent to add inline citations and verify every source URL in the draft.
5. **Verify** — Spawn the `reviewer` agent to check the cited draft for unsupported claims, logical gaps, zombie sections, and single-source critical findings. Fix FATAL issues before delivering. Note MAJOR issues in Open Questions. If FATAL issues were found, run one more verification pass after the fixes.
6. **Deliver** — Save the final literature review to `outputs/<slug>.md`. Write a provenance record alongside it as `outputs/<slug>.provenance.md` listing: date, sources consulted vs. accepted vs. rejected, verification status, and intermediate research files used. Before you stop, verify on disk that both files exist; do not stop at an intermediate cited draft alone.

View File

@@ -9,7 +9,7 @@ Review this AI research artifact: $@
Derive a short slug from the artifact name (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
Requirements:
- Before starting, outline what will be reviewed, the review criteria (novelty, empirical rigor, baselines, reproducibility, etc.), and any verification-specific checks needed for claims, figures, and reported metrics. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
- Spawn a `researcher` subagent to gather evidence on the artifact — inspect the paper, code, cited work, and any linked experimental artifacts. Save to `<slug>-research.md`.
- Spawn a `reviewer` subagent with `<slug>-research.md` to produce the final peer review with inline annotations.
- For small or simple artifacts where evidence gathering is overkill, run the `reviewer` subagent directly instead.

prompts/summarize.md (new file, 165 lines)
View File

@@ -0,0 +1,165 @@
---
description: Summarize any URL, local file, or PDF using the RLM pattern — source stored on disk, never injected raw into context.
args: <source>
section: Research Workflows
topLevelCli: true
---
Summarize the following source: $@
Derive a short slug from the source filename or URL domain (lowercase, hyphens, no filler words, ≤5 words — e.g. `attention-is-all-you-need`). Use this slug for all files in this run.
## Why this uses the RLM pattern
Standard summarization injects the full document into context. Above ~15k tokens, early content degrades as the window fills (context rot). This workflow keeps the document on disk as an external variable and reads only bounded windows — so context pressure is proportional to the window size, not the document size.
Tier 1 (< 8k chars) is a deliberate exception: direct injection is safe at ~2k tokens and windowed reading would add unnecessary friction.
---
## Step 1 — Fetch, validate, measure
Run all guards before any tier logic. A failure here is cheap; a failure mid-Tier-3 is not.
- **GitHub repo URL** (`https://github.com/owner/repo`, exactly 4 slashes): fetch the raw README instead. Try `https://raw.githubusercontent.com/{owner}/{repo}/main/README.md`, then `/master/README.md`. A repo HTML page is not the document the user wants to summarize.
- **Remote URL**: fetch to disk with `curl -sL -o outputs/.notes/<slug>-raw.txt <url>`. Do NOT use `fetch_content`: its return value enters context directly, bypassing the RLM external-variable principle.
- **Local file or PDF**: copy or extract to `outputs/.notes/<slug>-raw.txt`. For PDFs, extract text via `pdftotext` or equivalent before measuring.
- **Empty or failed fetch**: if the file is < 50 bytes after fetching, stop and surface the error to the user; do not proceed to tier selection.
- **Binary content**: if the file is > 1 KB but contains < 100 readable text characters, stop and tell the user the content appears binary or unextracted.
- **Existing output**: if `outputs/<slug>-summary.md` already exists, ask the user whether to overwrite or use a different slug. Do not proceed until confirmed.
Measure decoded text characters (not bytes; UTF-8 multi-byte chars would overcount), as sketched below. Log: `[summarize] source=<source> slug=<slug> chars=<count>`
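A minimal sketch of these final guards, assuming the `<slug>` placeholder convention used throughout this file (the readable-character heuristic is an illustration, not a prescribed implementation):
```python
import re

raw = open("outputs/.notes/<slug>-raw.txt", "rb").read()
if len(raw) < 50:
    raise SystemExit("[summarize] empty or failed fetch")  # surface, do not continue
text = raw.decode("utf-8", errors="replace")
# Heuristic binary check: count printable-ASCII and whitespace characters
readable = len(re.findall(r"[\x20-\x7e\t\n\r]", text))
if len(raw) > 1024 and readable < 100:
    raise SystemExit("[summarize] content appears binary or unextracted")
print(f"[summarize] chars={len(text)}")
```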
---
## Step 2 — Choose tier
| Chars | Tier | Strategy |
|---|---|---|
| < 8 000 | 1 | Direct read: full content enters context (safe at ~2k tokens) |
| 8 000 – 60 000 | 2 | RLM-lite: windowed bash extraction, progressive notes to disk |
| > 60 000 | 3 | Full RLM — bash chunking + parallel researcher subagents |
Log: `[summarize] tier=<N> chars=<count>`
---
## Tier 1 — Direct read
Read `outputs/.notes/<slug>-raw.txt` in full. Summarize directly using the output format. Write to `outputs/<slug>-summary.md`.
---
## Tier 2 — RLM-lite windowed read
The document stays on disk. Extract 6 000-char windows with a short Python snippet run via bash:
```python
# WHY f.seek/f.read: the read tool uses line offsets, not char offsets.
# For exact char-boundary windowing across arbitrary text, run Python via bash.
with open("outputs/.notes/<slug>-raw.txt", encoding="utf-8") as f:
    f.seek(n * 6000)  # n is the zero-based window index
    window = f.read(6000)
```
For each window:
1. Extract key claims and evidence.
2. Append to `outputs/.notes/<slug>-notes.md` before reading the next window. This is the checkpoint: if the session is interrupted, processed windows survive.
3. Log: `[summarize] window <N>/<total> done`
Synthesize `outputs/.notes/<slug>-notes.md` into `outputs/<slug>-summary.md`.
---
## Tier 3 — Full RLM parallel chunks
Each chunk gets a fresh researcher subagent context window — context rot is impossible because no subagent sees more than 6 000 chars.
WHY 500-char overlap: academic papers contain multi-sentence arguments that span chunk boundaries. 500 chars (~80 words) ensures a cross-boundary claim appears fully in at least one adjacent chunk.
### 3a. Chunk the document
```python
import os

os.makedirs("outputs/.notes", exist_ok=True)
with open("outputs/.notes/<slug>-raw.txt", encoding="utf-8") as f:
    text = f.read()
chunk_size, overlap = 6000, 500
chunks, i = [], 0
while i < len(text):
    chunks.append(text[i : i + chunk_size])
    i += chunk_size - overlap
for n, chunk in enumerate(chunks):
    # Zero-pad index so files sort correctly (chunk-002 before chunk-010)
    with open(f"outputs/.notes/<slug>-chunk-{n:03d}.txt", "w", encoding="utf-8") as f:
        f.write(chunk)
print(f"[summarize] chunks={len(chunks)} chunk_size={chunk_size} overlap={overlap}")
```
### 3b. Confirm before spawning
Briefly summarize: "Source is ~<chars> chars -> <N> chunks -> <N> researcher subagents. This may take several minutes." Then continue automatically. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested review before launching.
### 3c. Dispatch researcher subagents
```json
{
  "tasks": [{
    "agent": "researcher",
    "task": "Read ONLY `outputs/.notes/<slug>-chunk-NNN.txt`. Extract: (1) key claims, (2) methodology or technical approach, (3) cited evidence. Do NOT use web_search or fetch external URLs — this is single-source summarization. If a claim appears to start or end mid-sentence at the file boundary, mark it BOUNDARY PARTIAL. Write to `outputs/.notes/<slug>-summary-chunk-NNN.md`.",
    "output": "outputs/.notes/<slug>-summary-chunk-NNN.md"
  }],
  "concurrency": 4,
  "failFast": false
}
```
### 3d. Aggregate
After all subagents return, verify every expected `outputs/.notes/<slug>-summary-chunk-NNN.md` exists. Note any missing chunk indices — they will appear in the Coverage gaps section of the output. Do not abort on partial coverage; a partial summary with gaps noted is more useful than no summary.
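A sketch of that coverage check, assuming the chunk and summary naming conventions above:
```python
import glob
import re

def indices(pattern: str, regex: str) -> set[int]:
    # Collect the zero-padded indices from matching filenames
    return {int(m.group(1)) for m in
            (re.search(regex, p) for p in glob.glob(pattern)) if m}

expected = indices("outputs/.notes/<slug>-chunk-*.txt", r"chunk-(\d{3})\.txt$")
present = indices("outputs/.notes/<slug>-summary-chunk-*.md", r"summary-chunk-(\d{3})\.md$")
missing = sorted(expected - present)
print(f"[summarize] coverage {len(present)}/{len(expected)} missing={missing}")
```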
When synthesizing:
- **Deduplicate**: a claim in multiple chunks is one claim — keep the most complete formulation.
- **Resolve boundary conflicts**: for adjacent-chunk contradictions, prefer the version with more supporting context.
- **Remove BOUNDARY PARTIAL markers** where a complete version exists in a neighbouring chunk.
Write to `outputs/<slug>-summary.md`.
---
## Output format
All tiers produce the same artifact at `outputs/<slug>-summary.md`:
```markdown
# Summary: [document title or source filename]
**Source:** [URL or file path]
**Date:** [YYYY-MM-DD]
**Tier:** [1 / 2 (N windows) / 3 (N chunks)]
## Key Claims
[3-7 most important assertions, each as a bullet]
## Methodology
[Approach, dataset, evaluation, baselines — omit for non-research documents]
## Limitations
[What the source explicitly flags as weak, incomplete, or out of scope]
## Verdict
[One paragraph: what this document establishes, its credibility, who should read it]
## Sources
1. [Title or filename] — [URL or file path]
## Coverage gaps *(Tier 3 only — omit if all chunks succeeded)*
[Missing chunk indices and their approximate byte ranges]
```
Before you stop, verify on disk that `outputs/<slug>-summary.md` exists.
The Sources section lists only the single source confirmed reachable in Step 1. No verifier subagent is needed — there are no URLs constructed from memory to verify.

View File

@@ -1,266 +0,0 @@
---
description: Submit a replication as a cryptographically verified ValiChord attestation, discover studies awaiting independent validation, query Harmony Records and reproducibility badges, or assist researchers in preparing a study for the validation pipeline.
section: Research Workflows
topLevelCli: true
---
# ValiChord Validation Workflow
ValiChord is a distributed peer-to-peer system for scientific reproducibility verification, built on Holochain. It implements a blind commit-reveal protocol in Rust across four DNAs, producing Harmony Records — immutable, cryptographically verifiable proofs that independent parties reproduced the same findings without coordinating. Verified studies receive automatic reproducibility badges (Gold/Silver/Bronze); validators accumulate a per-discipline reputation score across rounds.
This workflow integrates Feynman at three levels: as a **validator agent** running the full commit-reveal protocol; as a **researcher's assistant** helping prepare a study for submission; and as a **query tool** surfacing reproducibility status during research.
**Live demo of the commit-reveal protocol**: https://youtu.be/DQ5wZSD1YEw
---
## ValiChord's four-DNA architecture
| DNA | Name | Type | Role |
|-----|------|------|------|
| 1 | Researcher Repository | Private, single-agent | Researcher's local archive. Stores study, pre-registered protocol, data snapshots, deviation declarations. Only SHA-256 hashes ever leave this DNA. |
| 2 | Validator Workspace | Private, single-agent | Feynman's working space. Stores task privately. Seals the blind commitment here — content never propagates to the DHT. |
| 3 | Attestation | Shared DHT | Coordination layer. Manages validation requests, validator profiles, study claims, commitment anchors, phase markers, and public attestations. 36 zome functions. |
| 4 | Governance | Public DHT | Final record layer. Assembles HarmonyRecords, issues reproducibility badges, tracks validator reputation, records governance decisions. All read functions accessible via HTTP Gateway without running a node. |
The key guarantee: a validator's findings are cryptographically sealed (`SHA-256(msgpack(attestation) || nonce)`) before the reveal phase opens. Neither party can adjust findings after seeing the other's results. The researcher runs a parallel commit-reveal — locking their expected results before the validators reveal — so no party can adapt to seeing the other's outcome.
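For intuition, a minimal sketch of the sealing step in Python (the 32-byte nonce length and msgpack options are assumptions; the real zome code is Rust):
```python
import hashlib
import os

import msgpack  # assumes a canonical msgpack encoding matching the zome's

def seal_commitment(attestation: dict) -> tuple[bytes, bytes]:
    """Blind commitment: SHA-256(msgpack(attestation) || nonce)."""
    nonce = os.urandom(32)  # nonce size is an assumption, not from the spec
    packed = msgpack.packb(attestation, use_bin_type=True)
    return hashlib.sha256(packed + nonce).digest(), nonce  # keep the nonce private until reveal
```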
---
## Workflow A: Feynman as validator agent
### Step 0: Publish validator profile (one-time setup)
On first use, publish Feynman's public profile to DNA 3 so it appears in validator discovery indexes and conflict-of-interest checks:
```
publish_validator_profile(profile: ValidatorProfile)
```
Key fields:
- `agent_type``AutomatedTool` (AI agents are first-class validators; the protocol makes no distinction between human and machine validators)
- `disciplines` — list of disciplines Feynman can validate (e.g. ComputationalBiology, Statistics)
- `certification_tier` — starts as `Provisional`; advances to `Certified` after 5+ validations with ≥60% agreement rate, `Senior` after 20+ with ≥80%
If a profile already exists, use `update_validator_profile` to merge changes.
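A minimal profile sketch using only the fields named above; the real `ValidatorProfile` struct may carry more:
```python
profile = {
    "agent_type": "AutomatedTool",  # AI agents are first-class validators
    "disciplines": ["ComputationalBiology", "Statistics"],
    "certification_tier": "Provisional",  # advances with validation history
}
```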
### Step 1: Gather inputs or discover study
**If the user provides a `request_ref`**: use it directly.
**If Feynman is proactively discovering work**: query the pending queue in DNA 3:
```
get_pending_requests_for_discipline(discipline: Discipline)
```
Returns all unclaimed `ValidationRequest` entries for the discipline. Each contains:
- `data_hash` — the ExternalHash identifier (used as `request_ref` throughout)
- `num_validators_required` — quorum needed to close the round
- `validation_tier` — Basic / Enhanced / Comprehensive
- `access_urls` — where to fetch the data and code
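Put together, one pending entry might look like this sketch (field names from the list above; values illustrative):
```python
request = {
    "data_hash": "uhCEk...",       # ExternalHash; used as request_ref throughout
    "num_validators_required": 5,  # quorum needed to close the round
    "validation_tier": "Enhanced", # Basic / Enhanced / Comprehensive
    "access_urls": ["https://example.org/study-archive.tar.gz"],  # illustrative URL
}
```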
Optionally assess study complexity before committing:
```
assess_difficulty(input: AssessDifficultyInput)
```
Scores code volume, dependency count, documentation quality, data accessibility, and environment complexity. Returns predicted duration and confidence. Use this to decide whether to proceed before claiming.
If replication results are not yet available, suggest `/replicate` first.
### Step 2: Claim the study
Before receiving a formal task assignment, register intent to validate via DNA 3:
```
claim_study(request_ref: ExternalHash)
```
This:
- Reserves a validator slot (enforced capacity: no over-subscription)
- Triggers conflict-of-interest check — rejects claim if Feynman's institution matches the researcher's
- Records a `StudyClaim` entry on the shared DHT
If a claimed validator goes dark, any other validator can free the slot:
```
reclaim_abandoned_claim(input: ReclaimInput)
```
### Step 3: Receive task and seal private attestation — Commit phase
Connect to the ValiChord conductor via AppWebSocket. Using DNA 2 (Validator Workspace):
```
receive_task(request_ref, discipline, deadline_secs, validation_focus, time_cap_secs, compensation_tier)
```
`validation_focus` specifies which aspect Feynman is validating:
- `ComputationalReproducibility` — re-run code, check numerical outputs
- `PreCommitmentAdherence` — verify results match pre-registered analysis plan
- `MethodologicalReview` — assess statistical choices and protocol validity
Then seal the private attestation — this is the blind commitment:
```
seal_private_attestation(task_hash, attestation)
```
Where `attestation` includes:
- `outcome``Reproduced` / `PartiallyReproduced` / `FailedToReproduce` / `UnableToAssess`
- `outcome_summary` — key metrics, effect direction, confidence interval overlap, overall agreement
- `confidence` — High / Medium / Low
- `time_invested_secs` and `time_breakdown` — environment_setup, data_acquisition, code_execution, troubleshooting
- `computational_resources` — whether personal hardware, HPC, GPU, or cloud was required; estimated cost in pence
- `deviation_flags` — any undeclared departures from the original protocol (type, severity, evidence)
The coordinator computes `commitment_hash = SHA-256(msgpack(attestation) || nonce)` and writes a `CommitmentAnchor` to DNA 3's shared DHT. The attestation content remains private in DNA 2.
Save `task_hash` and `commitment_hash` to `outputs/<slug>-valichord-commit.json`.
### Step 4: Wait for RevealOpen phase
Poll DNA 3 (Attestation) until the phase transitions:
```
get_current_phase(request_ref: ExternalHash)
```
Returns `null` (still commit phase), `"RevealOpen"`, or `"Complete"`. Poll every 30 seconds. The phase opens automatically when the `CommitmentAnchor` count reaches `num_validators_required` — no manual trigger required.
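A sketch of that polling loop; `client.call` stands in for a hypothetical AppWebSocket wrapper, and the phase values are those listed above:
```python
import time

def wait_for_reveal(client, request_ref: str, interval_secs: int = 30) -> str:
    """Poll DNA 3 until the round leaves the commit phase."""
    while True:
        phase = client.call("attestation", "get_current_phase", request_ref)
        if phase in ("RevealOpen", "Complete"):
            return phase
        time.sleep(interval_secs)  # the phase opens automatically at quorum
```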
During this wait, the researcher also runs their parallel commit-reveal: they lock their expected results via `publish_researcher_commitment` before the reveal phase opens, then reveal via `reveal_researcher_result` after all validators have submitted. No party — researcher or validator — can adapt to seeing the other's outcome.
### Step 5: Submit attestation — Reveal phase
When phase is `RevealOpen`, publish the full attestation to the shared DHT via DNA 3:
```
submit_attestation(attestation, nonce)
```
The coordinator verifies `SHA-256(msgpack(attestation) || nonce) == CommitmentAnchor.commitment_hash` before writing. This prevents adaptive reveals — the attestation must match exactly what was committed.
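The check mirrors the sealing sketch from Step 3, under the same encoding assumptions:
```python
import hashlib

import msgpack  # same canonical-encoding assumption as the sealing sketch

def verify_reveal(attestation: dict, nonce: bytes, commitment_hash: bytes) -> bool:
    # Re-derive the sealed hash; reject the reveal if it differs
    packed = msgpack.packb(attestation, use_bin_type=True)
    return hashlib.sha256(packed + nonce).digest() == commitment_hash
```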
### Step 6: Retrieve Harmony Record and badges
Call DNA 4 (Governance) explicitly after `submit_attestation` returns — DHT propagation means the ValidatorToAttestation link may not be visible within the same transaction:
```
check_and_create_harmony_record(request_ref)
get_harmony_record(request_ref)
get_badges_for_study(request_ref)
```
The **Harmony Record** contains:
- `outcome` — the majority reproduced/not-reproduced finding
- `agreement_level` — ExactMatch / WithinTolerance / DirectionalMatch / Divergent / UnableToAssess
- `participating_validators` — array of validator agent keys
- `validation_duration_secs`
- `ActionHash` — the immutable on-chain identifier
**Reproducibility badges** are automatically issued when the Harmony Record is created:
| Badge | Threshold |
|-------|-----------|
| GoldReproducible | ≥7 validators, ≥90% agreement |
| SilverReproducible | ≥5 validators, ≥70% agreement |
| BronzeReproducible | ≥3 validators, ≥50% agreement |
| FailedReproduction | Divergent outcomes |
Save the full record and badges to `outputs/<slug>-harmony-record.json`.
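For intuition, a sketch of how the thresholds stack (agreement taken as a 0–1 fraction; divergent rounds are assumed to fall through to `FailedReproduction`):
```python
def badge_for(validators: int, agreement: float) -> str:
    # Thresholds from the table above
    if validators >= 7 and agreement >= 0.90:
        return "GoldReproducible"
    if validators >= 5 and agreement >= 0.70:
        return "SilverReproducible"
    if validators >= 3 and agreement >= 0.50:
        return "BronzeReproducible"
    return "FailedReproduction"
```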
### Step 7: Check updated reputation
After each validation round, Feynman's reputation record in DNA 4 is updated:
```
get_validator_reputation(validator: AgentPubKey)
```
Returns per-discipline scores: total validations, agreement rate, average time, and current `CertificationTier` (Provisional → Certified → Senior). Reputation is a long-term asset — AI validators accumulate a cryptographically verifiable track record across all ValiChord rounds they participate in.
### Step 8: Report to user
Present:
- Outcome and agreement level
- Reproducibility badge(s) issued to the study
- Feynman's updated reputation score for this discipline
- ActionHash — the permanent public identifier for this Harmony Record
- Confirmation that the record is written to the Governance DHT and accessible via HTTP Gateway without any special infrastructure
- Path to saved outputs
---
## Workflow B: Query existing Harmony Record
`get_harmony_record` and `get_badges_for_study` in DNA 4 are `Unrestricted` functions — accessible via Holochain's HTTP Gateway without connecting to a conductor or running a node.
```
GET <http_gateway_url>/get_harmony_record/<request_ref_b64>
GET <http_gateway_url>/get_badges_for_study/<request_ref_b64>
```
Use this to:
- Check reproducibility status of a cited study during `/deepresearch`
- Surface Harmony Records and badges in research summaries
- Verify whether a study has undergone independent validation before recommending it
The following read functions are also unrestricted on DNA 3:
`get_attestations_for_request`, `get_validators_for_discipline`, `get_pending_requests_for_discipline`, `get_validator_profile`, `get_current_phase`, `get_difficulty_assessment`, `get_researcher_reveal`
---
## Workflow C: Proactive discipline queue monitoring
Feynman can act as a standing validator for a discipline — periodically checking for new studies that need validation without waiting to be assigned:
```
get_pending_requests_for_discipline(discipline: Discipline)
```
Returns all unclaimed `ValidationRequest` entries. For each, optionally run `assess_difficulty` to estimate workload before claiming.
This enables Feynman to operate as an autonomous reproducibility agent: polling the queue, assessing difficulty, claiming appropriate studies, and running the full Workflow A cycle unsupervised.
---
## Workflow D: Researcher preparation assistant
Before a study enters the validation pipeline, Feynman can assist the researcher in preparing it via DNA 1 (Researcher Repository). This workflow runs on the researcher's side, not the validator's.
**Register the study:**
```
register_study(study: ResearchStudy)
```
**Pre-register the analysis protocol** (immutable once written — creates a tamper-evident commitment to the analysis plan before data collection or validation begins):
```
register_protocol(input: RegisterProtocolInput)
```
**Take a cryptographic data snapshot** (records a SHA-256 hash of the dataset at a point in time — proves data was not modified after validation began):
```
take_data_snapshot(input: TakeDataSnapshotInput)
```
**Declare any deviations** from the pre-registered plan before the commit phase opens (pre-commit transparency):
```
declare_deviation(input: DeclareDeviationInput)
```
Only hashes ever leave DNA 1 — the raw data and protocol text remain on the researcher's device.
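A minimal sketch of the snapshot digest a researcher might record; only this hash leaves DNA 1:
```python
import hashlib

def dataset_hash(path: str) -> str:
    """Stream the file so large datasets never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()
```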
**Repository Readiness Checker**: ValiChord also ships a standalone audit tool that scans a research repository for 30+ reproducibility failure modes before submission — missing dependency files, absolute paths, undeclared environment requirements, data documentation gaps, human-subjects data exposure risks, and more. Feynman is the natural interface for this tool: running the audit, interpreting findings in plain language, guiding the researcher through fixes, and confirming the repository meets the bar for independent validation. See: https://github.com/topeuph-ai/ValiChord
---
## Notes
- AI agents are first-class participants in ValiChord's protocol. Feynman can autonomously publish profiles, claim studies, seal attestations, wait for phase transitions, and submit reveals — the protocol makes no distinction between human and AI validators.
- ValiChord's privacy guarantee is structural, not policy-based. DNA 1 (researcher data) and DNA 2 (validator workspace) are single-agent private DHTs — propagation to the shared network is architecturally impossible, not merely restricted.
- All 72 zome functions across the four DNAs are callable via AppWebSocket. The 20+ `Unrestricted` read functions on DNA 3 and DNA 4 are additionally accessible via HTTP Gateway without any Holochain node.
- If a validation round stalls due to validator dropout, `force_finalize_round` in DNA 4 closes it after a 7-day timeout with a reduced quorum, preventing indefinite blocking.
- Live demo (full commit-reveal cycle, Harmony Record generated): https://youtu.be/DQ5wZSD1YEw
- Running the demo: `bash demo/start.sh` in a GitHub Codespace, then open port 8888 publicly
- ValiChord repo: https://github.com/topeuph-ai/ValiChord

View File

@@ -9,7 +9,7 @@ Create a research watch for: $@
Derive a short slug from the watch topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
Requirements:
- Before starting, outline the watch plan: what to monitor, what signals matter, what counts as a meaningful change, and the check frequency. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user and confirm before proceeding.
- Before starting, outline the watch plan: what to monitor, what signals matter, what counts as a meaningful change, and the check frequency. Write the plan to `outputs/.plans/<slug>.md`. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
- Start with a baseline sweep of the topic.
- Use `schedule_prompt` to create the recurring or delayed follow-up instead of merely promising to check later.
- Save exactly one baseline artifact to `outputs/<slug>-baseline.md`.

View File

@@ -275,7 +275,8 @@ function writeLauncher(bundleRoot, target) {
"@echo off",
"setlocal",
'set "ROOT=%~dp0"',
'"%ROOT%node\\node.exe" "%ROOT%app\\bin\\feynman.js" %*',
'if "%ROOT:~-1%"=="\\" set "ROOT=%ROOT:~0,-1%"',
'"%ROOT%\\node\\node.exe" "%ROOT%\\app\\bin\\feynman.js" %*',
"",
].join("\r\n"),
"utf8",

View File

@@ -1,4 +1,6 @@
const MIN_NODE_VERSION = "20.19.0";
const MAX_NODE_MAJOR = 24;
const PREFERRED_NODE_MAJOR = 22;
function parseNodeVersion(version) {
const [major = "0", minor = "0", patch = "0"] = version.replace(/^v/, "").split(".");
@@ -16,16 +18,20 @@ function compareNodeVersions(left, right) {
}
function isSupportedNodeVersion(version = process.versions.node) {
return compareNodeVersions(parseNodeVersion(version), parseNodeVersion(MIN_NODE_VERSION)) >= 0;
const parsed = parseNodeVersion(version);
return compareNodeVersions(parsed, parseNodeVersion(MIN_NODE_VERSION)) >= 0 && parsed.major <= MAX_NODE_MAJOR;
}
function getUnsupportedNodeVersionLines(version = process.versions.node) {
const isWindows = process.platform === "win32";
const parsed = parseNodeVersion(version);
return [
`feynman requires Node.js ${MIN_NODE_VERSION} or later (detected ${version}).`,
isWindows
? "Install a newer Node.js from https://nodejs.org, or use the standalone installer:"
: "Switch to Node 20 with `nvm install 20 && nvm use 20`, or use the standalone installer:",
`feynman supports Node.js ${MIN_NODE_VERSION} through ${MAX_NODE_MAJOR}.x (detected ${version}).`,
parsed.major > MAX_NODE_MAJOR
? "This newer Node release is not supported yet because native Pi packages may fail to build."
: isWindows
? "Install a supported Node.js release from https://nodejs.org, or use the standalone installer:"
: `Switch to a supported Node release with \`nvm install ${PREFERRED_NODE_MAJOR} && nvm use ${PREFERRED_NODE_MAJOR}\`, or use the standalone installer:`,
isWindows
? "irm https://feynman.is/install.ps1 | iex"
: "curl -fsSL https://feynman.is/install | bash",

View File

@@ -46,7 +46,7 @@ function Resolve-VersionMetadata {
return [PSCustomObject]@{
ResolvedVersion = $resolvedVersion
GitRef = "v$resolvedVersion"
DownloadUrl = "https://github.com/getcompanion-ai/feynman/archive/refs/tags/v$resolvedVersion.zip"
DownloadUrl = if ($env:FEYNMAN_INSTALL_SKILLS_ARCHIVE_URL) { $env:FEYNMAN_INSTALL_SKILLS_ARCHIVE_URL } else { "https://github.com/getcompanion-ai/feynman/archive/refs/tags/v$resolvedVersion.zip" }
}
}
@@ -92,8 +92,9 @@ try {
}
$skillsSource = Join-Path $sourceRoot.FullName "skills"
if (-not (Test-Path $skillsSource)) {
throw "Could not find skills/ in downloaded archive."
$promptsSource = Join-Path $sourceRoot.FullName "prompts"
if (-not (Test-Path $skillsSource) -or -not (Test-Path $promptsSource)) {
throw "Could not find the bundled skills resources in the downloaded archive."
}
$installParent = Split-Path $installDir -Parent
@@ -107,6 +108,10 @@ try {
New-Item -ItemType Directory -Path $installDir -Force | Out-Null
Copy-Item -Path (Join-Path $skillsSource "*") -Destination $installDir -Recurse -Force
New-Item -ItemType Directory -Path (Join-Path $installDir "prompts") -Force | Out-Null
Copy-Item -Path (Join-Path $promptsSource "*") -Destination (Join-Path $installDir "prompts") -Recurse -Force
Copy-Item -Path (Join-Path $sourceRoot.FullName "AGENTS.md") -Destination (Join-Path $installDir "AGENTS.md") -Force
Copy-Item -Path (Join-Path $sourceRoot.FullName "CONTRIBUTING.md") -Destination (Join-Path $installDir "CONTRIBUTING.md") -Force
Write-Host "==> Installed skills to $installDir"
if ($Scope -eq "Repo") {

View File

@@ -146,7 +146,8 @@ archive_metadata="$(resolve_version)"
resolved_version="$(printf '%s\n' "$archive_metadata" | sed -n '1p')"
git_ref="$(printf '%s\n' "$archive_metadata" | sed -n '2p')"
archive_url=""
archive_url="${FEYNMAN_INSTALL_SKILLS_ARCHIVE_URL:-}"
if [ -z "$archive_url" ]; then
case "$git_ref" in
main)
archive_url="https://github.com/getcompanion-ai/feynman/archive/refs/heads/main.tar.gz"
@@ -155,6 +156,7 @@ case "$git_ref" in
archive_url="https://github.com/getcompanion-ai/feynman/archive/refs/tags/${git_ref}.tar.gz"
;;
esac
fi
if [ -z "$archive_url" ]; then
echo "Could not resolve a download URL for ref: $git_ref" >&2
@@ -181,8 +183,8 @@ step "Extracting skills"
tar -xzf "$archive_path" -C "$extract_dir"
source_root="$(find "$extract_dir" -mindepth 1 -maxdepth 1 -type d | head -n 1)"
if [ -z "$source_root" ] || [ ! -d "$source_root/skills" ]; then
echo "Could not find skills/ in downloaded archive." >&2
if [ -z "$source_root" ] || [ ! -d "$source_root/skills" ] || [ ! -d "$source_root/prompts" ]; then
echo "Could not find the bundled skills resources in the downloaded archive." >&2
exit 1
fi
@@ -190,6 +192,10 @@ mkdir -p "$(dirname "$install_dir")"
rm -rf "$install_dir"
mkdir -p "$install_dir"
cp -R "$source_root/skills/." "$install_dir/"
mkdir -p "$install_dir/prompts"
cp -R "$source_root/prompts/." "$install_dir/prompts/"
cp "$source_root/AGENTS.md" "$install_dir/AGENTS.md"
cp "$source_root/CONTRIBUTING.md" "$install_dir/CONTRIBUTING.md"
step "Installed skills to $install_dir"
case "$SCOPE" in

View File

@@ -109,8 +109,8 @@ This usually means the release exists, but not all platform bundles were uploade
Workarounds:
- try again after the release finishes publishing
- install via pnpm instead: pnpm add -g @companion-ai/feynman
- install via bun instead: bun add -g @companion-ai/feynman
- pass the latest published version explicitly, e.g.:
& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.31
"@
}
@@ -125,12 +125,18 @@ Workarounds:
New-Item -ItemType Directory -Path $installBinDir -Force | Out-Null
$shimPath = Join-Path $installBinDir "feynman.cmd"
$shimPs1Path = Join-Path $installBinDir "feynman.ps1"
Write-Host "==> Linking feynman into $installBinDir"
@"
@echo off
"$bundleDir\feynman.cmd" %*
CALL "$bundleDir\feynman.cmd" %*
"@ | Set-Content -Path $shimPath -Encoding ASCII
@"
`$BundleDir = "$bundleDir"
& "`$BundleDir\node\node.exe" "`$BundleDir\app\bin\feynman.js" @args
"@ | Set-Content -Path $shimPs1Path -Encoding UTF8
$currentUserPath = [Environment]::GetEnvironmentVariable("Path", "User")
$alreadyOnPath = $false
if ($currentUserPath) {
@@ -153,9 +159,7 @@ Workarounds:
Write-Warning "Current shell resolves feynman to $($resolvedCommand.Source)"
Write-Host "Run in a new shell, or run: `$env:Path = '$installBinDir;' + `$env:Path"
Write-Host "Then run: feynman"
if ($resolvedCommand.Source -like "*node_modules*@companion-ai*feynman*") {
Write-Host "If that path is an old global npm install, remove it with: npm uninstall -g @companion-ai/feynman"
}
Write-Host "If that path is an old package-manager install, remove it or put $installBinDir first on PATH."
}
Write-Host "Feynman $resolvedVersion installed successfully."

View File

@@ -177,11 +177,7 @@ warn_command_conflict() {
step "Run now: export PATH=\"$INSTALL_BIN_DIR:\$PATH\" && hash -r && feynman"
step "Or launch directly: $expected_path"
case "$resolved_path" in
*"/node_modules/@companion-ai/feynman/"* | *"/node_modules/.bin/feynman")
step "If that path is an old global npm install, remove it with: npm uninstall -g @companion-ai/feynman"
;;
esac
step "If that path is an old package-manager install, remove it or put $INSTALL_BIN_DIR first on PATH."
fi
}
@@ -264,8 +260,8 @@ This usually means the release exists, but not all platform bundles were uploade
Workarounds:
- try again after the release finishes publishing
- install via pnpm instead: pnpm add -g @companion-ai/feynman
- install via bun instead: bun add -g @companion-ai/feynman
- pass the latest published version explicitly, e.g.:
curl -fsSL https://feynman.is/install | bash -s -- 0.2.31
EOF
exit 1
fi

View File

@@ -0,0 +1 @@
export declare function patchAlphaHubAuthSource(source: string): string;

View File

@@ -0,0 +1,66 @@
const LEGACY_SUCCESS_HTML = "'<html><body><h2>Logged in to Alpha Hub</h2><p>You can close this tab.</p></body></html>'";
const LEGACY_ERROR_HTML = "'<html><body><h2>Login failed</h2><p>You can close this tab.</p></body></html>'";
const bodyAttr = 'style="font-family:system-ui,sans-serif;text-align:center;padding-top:20vh;background:#050a08;color:#f0f5f2"';
const logo = '<h1 style="font-family:monospace;font-size:48px;color:#34d399;margin:0">feynman</h1>';
const FEYNMAN_SUCCESS_HTML = `'<html><body ${bodyAttr}>${logo}<h2 style="color:#34d399;margin-top:16px">Logged in</h2><p style="color:#8aaa9a">You can close this tab.</p></body></html>'`;
const FEYNMAN_ERROR_HTML = `'<html><body ${bodyAttr}>${logo}<h2 style="color:#ef4444;margin-top:16px">Login failed</h2><p style="color:#8aaa9a">You can close this tab.</p></body></html>'`;
const CURRENT_OPEN_BROWSER = [
"function openBrowser(url) {",
" try {",
" const plat = platform();",
" if (plat === 'darwin') execSync(`open \"${url}\"`);",
" else if (plat === 'linux') execSync(`xdg-open \"${url}\"`);",
" else if (plat === 'win32') execSync(`start \"\" \"${url}\"`);",
" } catch {}",
"}",
].join("\n");
const PATCHED_OPEN_BROWSER = [
"function openBrowser(url) {",
" try {",
" const plat = platform();",
" const isWsl = plat === 'linux' && (Boolean(process.env.WSL_DISTRO_NAME) || Boolean(process.env.WSL_INTEROP));",
" if (plat === 'darwin') execSync(`open \"${url}\"`);",
" else if (isWsl) {",
" try {",
" execSync(`wslview \"${url}\"`);",
" } catch {",
" execSync(`cmd.exe /c start \"\" \"${url}\"`);",
" }",
" }",
" else if (plat === 'linux') execSync(`xdg-open \"${url}\"`);",
" else if (plat === 'win32') execSync(`cmd /c start \"\" \"${url}\"`);",
" } catch {}",
"}",
].join("\n");
const LEGACY_WIN_OPEN = "else if (plat === 'win32') execSync(`start \"${url}\"`);";
const FIXED_WIN_OPEN = "else if (plat === 'win32') execSync(`cmd /c start \"\" \"${url}\"`);";
const OPEN_BROWSER_LOG = "process.stderr.write('Opening browser for alphaXiv login...\\n');";
const OPEN_BROWSER_LOG_WITH_URL = "process.stderr.write(`Opening browser for alphaXiv login...\\nAuth URL: ${authUrl.toString()}\\n`);";
export function patchAlphaHubAuthSource(source) {
let patched = source;
if (patched.includes(LEGACY_SUCCESS_HTML)) {
patched = patched.replace(LEGACY_SUCCESS_HTML, FEYNMAN_SUCCESS_HTML);
}
if (patched.includes(LEGACY_ERROR_HTML)) {
patched = patched.replace(LEGACY_ERROR_HTML, FEYNMAN_ERROR_HTML);
}
if (patched.includes(CURRENT_OPEN_BROWSER)) {
patched = patched.replace(CURRENT_OPEN_BROWSER, PATCHED_OPEN_BROWSER);
}
if (patched.includes(LEGACY_WIN_OPEN)) {
patched = patched.replace(LEGACY_WIN_OPEN, FIXED_WIN_OPEN);
}
if (patched.includes(OPEN_BROWSER_LOG)) {
patched = patched.replace(OPEN_BROWSER_LOG, OPEN_BROWSER_LOG_WITH_URL);
}
return patched;
}

View File

@@ -0,0 +1 @@
export function patchPiExtensionLoaderSource(source: string): string;

View File

@@ -0,0 +1,32 @@
const PATH_TO_FILE_URL_IMPORT = 'import { fileURLToPath, pathToFileURL } from "node:url";';
const FILE_URL_TO_PATH_IMPORT = 'import { fileURLToPath } from "node:url";';
const IMPORT_CALL = 'const module = await jiti.import(extensionPath, { default: true });';
const PATCHED_IMPORT_CALL = [
' const extensionSpecifier = process.platform === "win32" && path.isAbsolute(extensionPath)',
' ? pathToFileURL(extensionPath).href',
' : extensionPath;',
' const module = await jiti.import(extensionSpecifier, { default: true });',
].join("\n");
export function patchPiExtensionLoaderSource(source) {
let patched = source;
if (patched.includes(PATH_TO_FILE_URL_IMPORT) || patched.includes(PATCHED_IMPORT_CALL)) {
return patched;
}
if (patched.includes(FILE_URL_TO_PATH_IMPORT)) {
patched = patched.replace(FILE_URL_TO_PATH_IMPORT, PATH_TO_FILE_URL_IMPORT);
}
if (!patched.includes(PATH_TO_FILE_URL_IMPORT)) {
return source;
}
if (!patched.includes(IMPORT_CALL)) {
return source;
}
return patched.replace(IMPORT_CALL, PATCHED_IMPORT_CALL);
}

View File

@@ -0,0 +1 @@
export function patchPiGoogleLegacySchemaSource(source: string): string;

View File

@@ -0,0 +1,44 @@
const HELPER = [
"function normalizeLegacyToolSchema(schema) {",
" if (Array.isArray(schema)) return schema.map((item) => normalizeLegacyToolSchema(item));",
' if (!schema || typeof schema !== "object") return schema;',
" const normalized = {};",
" for (const [key, value] of Object.entries(schema)) {",
' if (key === "const") {',
" normalized.enum = [value];",
" continue;",
" }",
" normalized[key] = normalizeLegacyToolSchema(value);",
" }",
" return normalized;",
"}",
].join("\n");
const ORIGINAL =
' ...(useParameters ? { parameters: tool.parameters } : { parametersJsonSchema: tool.parameters }),';
const PATCHED = [
" ...(useParameters",
" ? { parameters: normalizeLegacyToolSchema(tool.parameters) }",
" : { parametersJsonSchema: tool.parameters }),",
].join("\n");
export function patchPiGoogleLegacySchemaSource(source) {
let patched = source;
if (patched.includes("function normalizeLegacyToolSchema(schema) {")) {
return patched;
}
if (!patched.includes(ORIGINAL)) {
return source;
}
patched = patched.replace(ORIGINAL, PATCHED);
const marker = "export function convertTools(tools, useParameters = false) {";
const markerIndex = patched.indexOf(marker);
if (markerIndex === -1) {
return source;
}
return `${patched.slice(0, markerIndex)}${HELPER}\n\n${patched.slice(markerIndex)}`;
}

View File

@@ -0,0 +1,3 @@
export const PI_SUBAGENTS_PATCH_TARGETS: string[];
export function patchPiSubagentsSource(relativePath: string, source: string): string;
export function stripPiSubagentBuiltinModelSource(source: string): string;

View File

@@ -0,0 +1,341 @@
export const PI_SUBAGENTS_PATCH_TARGETS = [
"index.ts",
"agents.ts",
"artifacts.ts",
"run-history.ts",
"skills.ts",
"chain-clarify.ts",
"subagent-executor.ts",
"schemas.ts",
];
const RESOLVE_PI_AGENT_DIR_HELPER = [
"function resolvePiAgentDir(): string {",
' const configured = process.env.FEYNMAN_CODING_AGENT_DIR?.trim() || process.env.PI_CODING_AGENT_DIR?.trim();',
' if (!configured) return path.join(os.homedir(), ".pi", "agent");',
' return configured.startsWith("~/") ? path.join(os.homedir(), configured.slice(2)) : configured;',
"}",
].join("\n");
function injectResolvePiAgentDirHelper(source) {
if (source.includes("function resolvePiAgentDir(): string {")) {
return source;
}
const lines = source.split("\n");
let insertAt = 0;
let importSeen = false;
let importOpen = false;
for (let index = 0; index < lines.length; index += 1) {
const trimmed = lines[index].trim();
if (!importSeen) {
if (trimmed === "" || trimmed.startsWith("/**") || trimmed.startsWith("*") || trimmed.startsWith("*/")) {
insertAt = index + 1;
continue;
}
if (trimmed.startsWith("import ")) {
importSeen = true;
importOpen = !trimmed.endsWith(";");
insertAt = index + 1;
continue;
}
break;
}
if (trimmed.startsWith("import ")) {
importOpen = !trimmed.endsWith(";");
insertAt = index + 1;
continue;
}
if (importOpen) {
if (trimmed.endsWith(";")) importOpen = false;
insertAt = index + 1;
continue;
}
if (trimmed === "") {
insertAt = index + 1;
continue;
}
insertAt = index;
break;
}
return [...lines.slice(0, insertAt), "", RESOLVE_PI_AGENT_DIR_HELPER, "", ...lines.slice(insertAt)].join("\n");
}
function replaceAll(source, from, to) {
return source.split(from).join(to);
}
export function stripPiSubagentBuiltinModelSource(source) {
if (!source.startsWith("---\n")) {
return source;
}
const endIndex = source.indexOf("\n---", 4);
if (endIndex === -1) {
return source;
}
const frontmatter = source.slice(4, endIndex);
const nextFrontmatter = frontmatter
.split("\n")
.filter((line) => !/^\s*model\s*:/.test(line))
.join("\n");
return `---\n${nextFrontmatter}${source.slice(endIndex)}`;
}
export function patchPiSubagentsSource(relativePath, source) {
let patched = source;
switch (relativePath) {
case "index.ts":
patched = replaceAll(
patched,
'const configPath = path.join(os.homedir(), ".pi", "agent", "extensions", "subagent", "config.json");',
'const configPath = path.join(resolvePiAgentDir(), "extensions", "subagent", "config.json");',
);
patched = replaceAll(
patched,
"• PARALLEL: { tasks: [{agent,task,count?}, ...], concurrency?: number, worktree?: true } - concurrent execution (worktree: isolate each task in a git worktree)",
"• PARALLEL: { tasks: [{agent,task,count?,output?}, ...], concurrency?: number, worktree?: true } - concurrent execution (output: per-task file target, worktree: isolate each task in a git worktree)",
);
break;
case "agents.ts":
patched = replaceAll(
patched,
'const userDir = path.join(os.homedir(), ".pi", "agent", "agents");',
'const userDir = path.join(resolvePiAgentDir(), "agents");',
);
patched = replaceAll(
patched,
[
'export function discoverAgents(cwd: string, scope: AgentScope): AgentDiscoveryResult {',
'\tconst userDirOld = path.join(os.homedir(), ".pi", "agent", "agents");',
'\tconst userDirNew = path.join(os.homedir(), ".agents");',
].join("\n"),
[
'export function discoverAgents(cwd: string, scope: AgentScope): AgentDiscoveryResult {',
'\tconst userDir = path.join(resolvePiAgentDir(), "agents");',
].join("\n"),
);
patched = replaceAll(
patched,
[
'\tconst userAgentsOld = scope === "project" ? [] : loadAgentsFromDir(userDirOld, "user");',
'\tconst userAgentsNew = scope === "project" ? [] : loadAgentsFromDir(userDirNew, "user");',
'\tconst userAgents = [...userAgentsOld, ...userAgentsNew];',
].join("\n"),
'\tconst userAgents = scope === "project" ? [] : loadAgentsFromDir(userDir, "user");',
);
patched = replaceAll(
patched,
[
'const userDirOld = path.join(os.homedir(), ".pi", "agent", "agents");',
'const userDirNew = path.join(os.homedir(), ".agents");',
].join("\n"),
'const userDir = path.join(resolvePiAgentDir(), "agents");',
);
patched = replaceAll(
patched,
[
'\tconst user = [',
'\t\t...loadAgentsFromDir(userDirOld, "user"),',
'\t\t...loadAgentsFromDir(userDirNew, "user"),',
'\t];',
].join("\n"),
'\tconst user = loadAgentsFromDir(userDir, "user");',
);
patched = replaceAll(
patched,
[
'\tconst chains = [',
'\t\t...loadChainsFromDir(userDirOld, "user"),',
'\t\t...loadChainsFromDir(userDirNew, "user"),',
'\t\t...(projectDir ? loadChainsFromDir(projectDir, "project") : []),',
'\t];',
].join("\n"),
[
'\tconst chains = [',
'\t\t...loadChainsFromDir(userDir, "user"),',
'\t\t...(projectDir ? loadChainsFromDir(projectDir, "project") : []),',
'\t];',
].join("\n"),
);
patched = replaceAll(
patched,
'\tconst userDir = fs.existsSync(userDirNew) ? userDirNew : userDirOld;',
'\tconst userDir = path.join(resolvePiAgentDir(), "agents");',
);
break;
case "artifacts.ts":
patched = replaceAll(
patched,
'const sessionsBase = path.join(os.homedir(), ".pi", "agent", "sessions");',
'const sessionsBase = path.join(resolvePiAgentDir(), "sessions");',
);
break;
case "run-history.ts":
patched = replaceAll(
patched,
'const HISTORY_PATH = path.join(os.homedir(), ".pi", "agent", "run-history.jsonl");',
'const HISTORY_PATH = path.join(resolvePiAgentDir(), "run-history.jsonl");',
);
break;
case "skills.ts":
patched = replaceAll(
patched,
'const AGENT_DIR = path.join(os.homedir(), ".pi", "agent");',
"const AGENT_DIR = resolvePiAgentDir();",
);
break;
case "chain-clarify.ts":
patched = replaceAll(
patched,
'const dir = path.join(os.homedir(), ".pi", "agent", "agents");',
'const dir = path.join(resolvePiAgentDir(), "agents");',
);
break;
case "subagent-executor.ts":
patched = replaceAll(
patched,
[
"\tcwd?: string;",
"\tcount?: number;",
"\tmodel?: string;",
"\tskill?: string | string[] | boolean;",
].join("\n"),
[
"\tcwd?: string;",
"\tcount?: number;",
"\tmodel?: string;",
"\tskill?: string | string[] | boolean;",
"\toutput?: string | false;",
].join("\n"),
);
patched = replaceAll(
patched,
[
"\t\t\tcwd: task.cwd,",
"\t\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
].join("\n"),
[
"\t\t\tcwd: task.cwd,",
"\t\t\toutput: task.output,",
"\t\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
].join("\n"),
);
patched = replaceAll(
patched,
[
"\t\tcwd: task.cwd,",
"\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
].join("\n"),
[
"\t\tcwd: task.cwd,",
"\t\toutput: task.output,",
"\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
].join("\n"),
);
patched = replaceAll(
patched,
[
"\t\t\t\tcwd: t.cwd,",
"\t\t\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
].join("\n"),
[
"\t\t\t\tcwd: t.cwd,",
"\t\t\t\toutput: t.output,",
"\t\t\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
].join("\n"),
);
patched = replaceAll(
patched,
[
"\t\tcwd: t.cwd,",
"\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
].join("\n"),
[
"\t\tcwd: t.cwd,",
"\t\toutput: t.output,",
"\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
].join("\n"),
);
patched = replaceAll(
patched,
[
"\t\tconst behaviors = agentConfigs.map((c, i) =>",
"\t\t\tresolveStepBehavior(c, { skills: skillOverrides[i] }),",
"\t\t);",
].join("\n"),
[
"\t\tconst behaviors = agentConfigs.map((c, i) =>",
"\t\t\tresolveStepBehavior(c, { output: tasks[i]?.output, skills: skillOverrides[i] }),",
"\t\t);",
].join("\n"),
);
patched = replaceAll(
patched,
"\tconst behaviors = agentConfigs.map((config) => resolveStepBehavior(config, {}));",
"\tconst behaviors = agentConfigs.map((config, i) => resolveStepBehavior(config, { output: tasks[i]?.output, skills: skillOverrides[i] }));",
);
patched = replaceAll(
patched,
[
"\t\tconst taskCwd = resolveParallelTaskCwd(task, input.paramsCwd, input.worktreeSetup, index);",
"\t\treturn runSync(input.ctx.cwd, input.agents, task.agent, input.taskTexts[index]!, {",
].join("\n"),
[
"\t\tconst taskCwd = resolveParallelTaskCwd(task, input.paramsCwd, input.worktreeSetup, index);",
"\t\tconst outputPath = typeof input.behaviors[index]?.output === \"string\"",
"\t\t\t? resolveSingleOutputPath(input.behaviors[index]?.output, input.ctx.cwd, taskCwd)",
"\t\t\t: undefined;",
"\t\tconst taskText = injectSingleOutputInstruction(input.taskTexts[index]!, outputPath);",
"\t\treturn runSync(input.ctx.cwd, input.agents, task.agent, taskText, {",
].join("\n"),
);
patched = replaceAll(
patched,
[
"\t\t\tmaxOutput: input.maxOutput,",
"\t\t\tmaxSubagentDepth: input.maxSubagentDepths[index],",
].join("\n"),
[
"\t\t\tmaxOutput: input.maxOutput,",
"\t\t\toutputPath,",
"\t\t\tmaxSubagentDepth: input.maxSubagentDepths[index],",
].join("\n"),
);
break;
case "schemas.ts":
patched = replaceAll(
patched,
[
"\tcwd: Type.Optional(Type.String()),",
'\tcount: Type.Optional(Type.Integer({ minimum: 1, description: "Repeat this parallel task N times with the same settings." })),',
'\tmodel: Type.Optional(Type.String({ description: "Override model for this task (e.g. \'google/gemini-3-pro\')" })),',
].join("\n"),
[
"\tcwd: Type.Optional(Type.String()),",
'\tcount: Type.Optional(Type.Integer({ minimum: 1, description: "Repeat this parallel task N times with the same settings." })),',
'\toutput: Type.Optional(Type.Any({ description: "Output file for this parallel task (string), or false to disable. Relative paths resolve against cwd." })),',
'\tmodel: Type.Optional(Type.String({ description: "Override model for this task (e.g. \'google/gemini-3-pro\')" })),',
].join("\n"),
);
patched = replaceAll(
patched,
'tasks: Type.Optional(Type.Array(TaskItem, { description: "PARALLEL mode: [{agent, task, count?}, ...]" })),',
'tasks: Type.Optional(Type.Array(TaskItem, { description: "PARALLEL mode: [{agent, task, count?, output?}, ...]" })),',
);
break;
default:
return source;
}
if (patched === source) {
return source;
}
return patched.includes("resolvePiAgentDir()") ? injectResolvePiAgentDirHelper(patched) : patched;
}

View File

@@ -0,0 +1,2 @@
export const PI_WEB_ACCESS_PATCH_TARGETS: string[];
export function patchPiWebAccessSource(relativePath: string, source: string): string;

View File

@@ -0,0 +1,48 @@
export const PI_WEB_ACCESS_PATCH_TARGETS = [
"index.ts",
"exa.ts",
"gemini-api.ts",
"gemini-search.ts",
"gemini-web.ts",
"github-extract.ts",
"perplexity.ts",
"video-extract.ts",
"youtube-extract.ts",
];
const LEGACY_CONFIG_EXPR = 'join(homedir(), ".pi", "web-search.json")';
const PATCHED_CONFIG_EXPR =
'process.env.FEYNMAN_WEB_SEARCH_CONFIG ?? process.env.PI_WEB_SEARCH_CONFIG ?? join(homedir(), ".pi", "web-search.json")';
export function patchPiWebAccessSource(relativePath, source) {
let patched = source;
let changed = false;
if (!patched.includes(PATCHED_CONFIG_EXPR)) {
patched = patched.split(LEGACY_CONFIG_EXPR).join(PATCHED_CONFIG_EXPR);
changed = patched !== source;
}
if (relativePath === "index.ts") {
const workflowDefaultOriginal = 'const workflow = resolveWorkflow(params.workflow ?? configWorkflow, ctx?.hasUI !== false);';
const workflowDefaultPatched = 'const workflow = resolveWorkflow(params.workflow ?? configWorkflow ?? "none", ctx?.hasUI !== false);';
if (patched.includes(workflowDefaultOriginal)) {
patched = patched.replace(workflowDefaultOriginal, workflowDefaultPatched);
changed = true;
}
if (patched.includes('summary-review = open curator with auto summary draft (default)')) {
patched = patched.replace(
'summary-review = open curator with auto summary draft (default)',
'summary-review = open curator with auto summary draft (opt-in)',
);
changed = true;
}
}
if (relativePath === "index.ts" && changed) {
patched = patched.replace('import { join } from "node:path";', 'import { dirname, join } from "node:path";');
patched = patched.replace('const dir = join(homedir(), ".pi");', "const dir = dirname(WEB_SEARCH_CONFIG_PATH);");
}
return patched;
}

View File

@@ -1,12 +1,23 @@
import { spawnSync } from "node:child_process";
import { existsSync, mkdirSync, readFileSync, rmSync, writeFileSync } from "node:fs";
import { existsSync, lstatSync, mkdirSync, readdirSync, readFileSync, readlinkSync, rmSync, symlinkSync, writeFileSync } from "node:fs";
import { createRequire } from "node:module";
import { dirname, resolve } from "node:path";
import { homedir } from "node:os";
import { delimiter, dirname, resolve } from "node:path";
import { fileURLToPath } from "node:url";
import { FEYNMAN_LOGO_HTML } from "../logo.mjs";
import { patchAlphaHubAuthSource } from "./lib/alpha-hub-auth-patch.mjs";
import { patchPiExtensionLoaderSource } from "./lib/pi-extension-loader-patch.mjs";
import { patchPiGoogleLegacySchemaSource } from "./lib/pi-google-legacy-schema-patch.mjs";
import { PI_WEB_ACCESS_PATCH_TARGETS, patchPiWebAccessSource } from "./lib/pi-web-access-patch.mjs";
import { PI_SUBAGENTS_PATCH_TARGETS, patchPiSubagentsSource, stripPiSubagentBuiltinModelSource } from "./lib/pi-subagents-patch.mjs";
const here = dirname(fileURLToPath(import.meta.url));
const appRoot = resolve(here, "..");
const feynmanHome = resolve(process.env.FEYNMAN_HOME ?? homedir(), ".feynman");
const feynmanNpmPrefix = resolve(feynmanHome, "npm-global");
process.env.FEYNMAN_NPM_PREFIX = feynmanNpmPrefix;
process.env.NPM_CONFIG_PREFIX = feynmanNpmPrefix;
process.env.npm_config_prefix = feynmanNpmPrefix;
const appRequire = createRequire(resolve(appRoot, "package.json"));
const isGlobalInstall = process.env.npm_config_global === "true" || process.env.npm_config_location === "global";
@@ -51,9 +62,20 @@ const cliPath = piPackageRoot ? resolve(piPackageRoot, "dist", "cli.js") : null;
const bunCliPath = piPackageRoot ? resolve(piPackageRoot, "dist", "bun", "cli.js") : null;
const interactiveModePath = piPackageRoot ? resolve(piPackageRoot, "dist", "modes", "interactive", "interactive-mode.js") : null;
const interactiveThemePath = piPackageRoot ? resolve(piPackageRoot, "dist", "modes", "interactive", "theme", "theme.js") : null;
const extensionLoaderPath = piPackageRoot ? resolve(piPackageRoot, "dist", "core", "extensions", "loader.js") : null;
const terminalPath = piTuiRoot ? resolve(piTuiRoot, "dist", "terminal.js") : null;
const editorPath = piTuiRoot ? resolve(piTuiRoot, "dist", "components", "editor.js") : null;
const workspaceRoot = resolve(appRoot, ".feynman", "npm", "node_modules");
const workspaceExtensionLoaderPath = resolve(
workspaceRoot,
"@mariozechner",
"pi-coding-agent",
"dist",
"core",
"extensions",
"loader.js",
);
const piSubagentsRoot = resolve(workspaceRoot, "pi-subagents");
const webAccessPath = resolve(workspaceRoot, "pi-web-access", "index.ts");
const sessionSearchIndexerPath = resolve(
workspaceRoot,
@@ -66,12 +88,46 @@ const piMemoryPath = resolve(workspaceRoot, "@samfp", "pi-memory", "src", "index
const settingsPath = resolve(appRoot, ".feynman", "settings.json");
const workspaceDir = resolve(appRoot, ".feynman", "npm");
const workspacePackageJsonPath = resolve(workspaceDir, "package.json");
const workspaceManifestPath = resolve(workspaceDir, ".runtime-manifest.json");
const workspaceArchivePath = resolve(appRoot, ".feynman", "runtime-workspace.tgz");
const globalNodeModulesRoot = resolve(feynmanNpmPrefix, "lib", "node_modules");
const PRUNE_VERSION = 3;
const NATIVE_PACKAGE_SPECS = new Set([
"@kaiserlich-dev/pi-session-search",
"@samfp/pi-memory",
]);
const FILTERED_INSTALL_OUTPUT_PATTERNS = [
/npm warn deprecated node-domexception@1\.0\.0/i,
/npm notice/i,
/^(added|removed|changed) \d+ packages?( in .+)?$/i,
/^\d+ packages are looking for funding$/i,
/^run `npm fund` for details$/i,
];
function arraysMatch(left, right) {
return left.length === right.length && left.every((value, index) => value === right[index]);
}
function supportsNativePackageSources(version = process.versions.node) {
const [major = "0"] = version.replace(/^v/, "").split(".");
return (Number.parseInt(major, 10) || 0) <= 24;
}
function createInstallCommand(packageManager, packageSpecs) {
switch (packageManager) {
case "npm":
return ["install", "--prefer-offline", "--no-audit", "--no-fund", "--loglevel", "error", ...packageSpecs];
return [
"install",
"--global=false",
"--location=project",
"--prefer-offline",
"--no-audit",
"--no-fund",
"--legacy-peer-deps",
"--loglevel",
"error",
...packageSpecs,
];
case "pnpm":
return ["add", "--prefer-offline", "--reporter", "silent", ...packageSpecs];
case "bun":
@@ -110,12 +166,24 @@ function installWorkspacePackages(packageSpecs) {
const result = spawnSync(packageManager, createInstallCommand(packageManager, packageSpecs), {
cwd: workspaceDir,
stdio: ["ignore", "ignore", "pipe"],
stdio: ["ignore", "pipe", "pipe"],
timeout: 300000,
env: {
...process.env,
PATH: getPathWithCurrentNode(process.env.PATH),
},
});
for (const stream of [result.stdout, result.stderr]) {
if (!stream?.length) continue;
for (const line of stream.toString().split(/\r?\n/)) {
if (!line.trim()) continue;
if (FILTERED_INSTALL_OUTPUT_PATTERNS.some((pattern) => pattern.test(line.trim()))) continue;
process.stderr.write(`${line}\n`);
}
}
if (result.status !== 0) {
if (result.stderr?.length) process.stderr.write(result.stderr);
process.stderr.write(`[feynman] ${packageManager} failed while setting up bundled packages.\n`);
return false;
}
@@ -128,6 +196,146 @@ function parsePackageName(spec) {
return match?.[1] ?? spec;
}
function filterUnsupportedPackageSpecs(packageSpecs) {
if (supportsNativePackageSources()) return packageSpecs;
return packageSpecs.filter((spec) => !NATIVE_PACKAGE_SPECS.has(parsePackageName(spec)));
}
function workspaceContainsPackages(packageSpecs) {
return packageSpecs.every((spec) => existsSync(resolve(workspaceRoot, parsePackageName(spec))));
}
function workspaceMatchesRuntime(packageSpecs) {
if (!existsSync(workspaceManifestPath)) return false;
try {
const manifest = JSON.parse(readFileSync(workspaceManifestPath, "utf8"));
if (!Array.isArray(manifest.packageSpecs)) {
return false;
}
if (!arraysMatch(manifest.packageSpecs, packageSpecs)) {
if (!(workspaceContainsPackages(packageSpecs) && packageSpecs.every((spec) => manifest.packageSpecs.includes(spec)))) {
return false;
}
}
if (!supportsNativePackageSources() && workspaceContainsPackages(packageSpecs)) {
return true;
}
if (
manifest.nodeAbi !== process.versions.modules ||
manifest.platform !== process.platform ||
manifest.arch !== process.arch ||
manifest.pruneVersion !== PRUNE_VERSION
) {
return false;
}
return packageSpecs.every((spec) => existsSync(resolve(workspaceRoot, parsePackageName(spec))));
} catch {
return false;
}
}
function writeWorkspaceManifest(packageSpecs) {
writeFileSync(
workspaceManifestPath,
JSON.stringify(
{
packageSpecs,
generatedAt: new Date().toISOString(),
nodeAbi: process.versions.modules,
nodeVersion: process.version,
platform: process.platform,
arch: process.arch,
pruneVersion: PRUNE_VERSION,
},
null,
2,
) + "\n",
"utf8",
);
}
function ensureParentDir(path) {
mkdirSync(dirname(path), { recursive: true });
}
function packageDependencyExists(packagePath, globalNodeModulesRoot, dependency) {
return existsSync(resolve(packagePath, "node_modules", dependency)) ||
existsSync(resolve(globalNodeModulesRoot, dependency));
}
function installedPackageLooksUsable(packagePath, globalNodeModulesRoot) {
if (!existsSync(resolve(packagePath, "package.json"))) return false;
try {
const pkg = JSON.parse(readFileSync(resolve(packagePath, "package.json"), "utf8"));
return Object.keys(pkg.dependencies ?? {}).every((dependency) =>
packageDependencyExists(packagePath, globalNodeModulesRoot, dependency)
);
} catch {
return false;
}
}
function linkPointsTo(linkPath, targetPath) {
try {
if (!lstatSync(linkPath).isSymbolicLink()) return false;
return resolve(dirname(linkPath), readlinkSync(linkPath)) === targetPath;
} catch {
return false;
}
}
function listWorkspacePackageNames(root) {
if (!existsSync(root)) return [];
const names = [];
for (const entry of readdirSync(root, { withFileTypes: true })) {
if (!entry.isDirectory() && !entry.isSymbolicLink()) continue;
if (entry.name.startsWith(".")) continue;
if (entry.name.startsWith("@")) {
const scopeRoot = resolve(root, entry.name);
for (const scopedEntry of readdirSync(scopeRoot, { withFileTypes: true })) {
if (!scopedEntry.isDirectory() && !scopedEntry.isSymbolicLink()) continue;
names.push(`${entry.name}/${scopedEntry.name}`);
}
continue;
}
names.push(entry.name);
}
return names;
}
function linkBundledPackage(packageName) {
const sourcePath = resolve(workspaceRoot, packageName);
const targetPath = resolve(globalNodeModulesRoot, packageName);
if (!existsSync(sourcePath)) return false;
if (linkPointsTo(targetPath, sourcePath)) return false;
try {
if (lstatSync(targetPath).isSymbolicLink()) {
rmSync(targetPath, { force: true });
} else if (!installedPackageLooksUsable(targetPath, globalNodeModulesRoot)) {
rmSync(targetPath, { recursive: true, force: true });
}
} catch {}
if (existsSync(targetPath)) return false;
ensureParentDir(targetPath);
try {
symlinkSync(sourcePath, targetPath, process.platform === "win32" ? "junction" : "dir");
return true;
} catch {
return false;
}
}
function ensureBundledPackageLinks(packageSpecs) {
if (!workspaceMatchesRuntime(packageSpecs)) return;
for (const packageName of listWorkspacePackageNames(workspaceRoot)) {
linkBundledPackage(packageName);
}
}
function restorePackagedWorkspace(packageSpecs) {
if (!existsSync(workspaceArchivePath)) return false;
@@ -153,24 +361,26 @@ function restorePackagedWorkspace(packageSpecs) {
return false;
}
function refreshPackagedWorkspace(packageSpecs) {
return installWorkspacePackages(packageSpecs);
}
function resolveExecutable(name, fallbackPaths = []) {
for (const candidate of fallbackPaths) {
if (existsSync(candidate)) return candidate;
}
const isWindows = process.platform === "win32";
const env = {
...process.env,
PATH: process.env.PATH ?? "",
};
const result = isWindows
? spawnSync("cmd", ["/c", `where ${name}`], {
encoding: "utf8",
stdio: ["ignore", "pipe", "ignore"],
env,
})
: spawnSync("sh", ["-lc", `command -v ${name}`], {
: spawnSync("sh", ["-c", `command -v ${name}`], {
encoding: "utf8",
stdio: ["ignore", "pipe", "ignore"],
env,
});
if (result.status === 0) {
const resolved = result.stdout.trim().split(/\r?\n/)[0];
@@ -179,6 +389,12 @@ function resolveExecutable(name, fallbackPaths = []) {
return null;
}
function getPathWithCurrentNode(pathValue = process.env.PATH ?? "") {
const nodeDir = dirname(process.execPath);
const parts = pathValue.split(delimiter).filter(Boolean);
return parts.includes(nodeDir) ? pathValue : `${nodeDir}${delimiter}${pathValue}`;
}
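A quick sketch of the PATH normalization, assuming a POSIX ":" delimiter and node installed at /opt/node/bin/node (hypothetical paths):
getPathWithCurrentNode("/usr/bin:/bin");
// => "/opt/node/bin:/usr/bin:/bin" — the node dir is prepended only when absent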
function ensurePackageWorkspace() {
if (!existsSync(settingsPath)) return;
@@ -188,10 +404,17 @@ function ensurePackageWorkspace() {
.filter((v) => typeof v === "string" && v.startsWith("npm:"))
.map((v) => v.slice(4))
: [];
const supportedPackageSpecs = filterUnsupportedPackageSpecs(packageSpecs);
if (supportedPackageSpecs.length === 0) return;
if (workspaceMatchesRuntime(supportedPackageSpecs)) {
ensureBundledPackageLinks(supportedPackageSpecs);
return;
}
if (restorePackagedWorkspace(packageSpecs) && workspaceMatchesRuntime(supportedPackageSpecs)) {
ensureBundledPackageLinks(supportedPackageSpecs);
return;
}
mkdirSync(workspaceDir, { recursive: true });
writeFileSync(
@@ -208,7 +431,7 @@ function ensurePackageWorkspace() {
process.stderr.write(`\r${frames[frame++ % frames.length]} setting up feynman... ${elapsed}s`);
}, 80);
const result = installWorkspacePackages(supportedPackageSpecs);
clearInterval(spinner);
const elapsed = Math.round((Date.now() - start) / 1000);
@@ -216,7 +439,9 @@ function ensurePackageWorkspace() {
if (!result) {
process.stderr.write(`\r✗ setup failed (${elapsed}s)\n`);
} else {
process.stderr.write("\r\x1b[2K");
writeWorkspaceManifest(supportedPackageSpecs);
ensureBundledPackageLinks(supportedPackageSpecs);
}
}
@@ -243,6 +468,32 @@ function ensurePandoc() {
ensurePandoc();
if (existsSync(piSubagentsRoot)) {
for (const relativePath of PI_SUBAGENTS_PATCH_TARGETS) {
const entryPath = resolve(piSubagentsRoot, relativePath);
if (!existsSync(entryPath)) continue;
const source = readFileSync(entryPath, "utf8");
const patched = patchPiSubagentsSource(relativePath, source);
if (patched !== source) {
writeFileSync(entryPath, patched, "utf8");
}
}
const builtinAgentsRoot = resolve(piSubagentsRoot, "agents");
if (existsSync(builtinAgentsRoot)) {
for (const entry of readdirSync(builtinAgentsRoot, { withFileTypes: true })) {
if (!entry.isFile() || !entry.name.endsWith(".md")) continue;
const entryPath = resolve(builtinAgentsRoot, entry.name);
const source = readFileSync(entryPath, "utf8");
const patched = stripPiSubagentBuiltinModelSource(source);
if (patched !== source) {
writeFileSync(entryPath, patched, "utf8");
}
}
}
}
if (packageJsonPath && existsSync(packageJsonPath)) {
const pkg = JSON.parse(readFileSync(packageJsonPath, "utf8"));
if (pkg.piConfig?.name !== "feynman" || pkg.piConfig?.configDir !== ".feynman") {
@@ -337,6 +588,18 @@ if (interactiveModePath && existsSync(interactiveModePath)) {
}
}
for (const loaderPath of [extensionLoaderPath, workspaceExtensionLoaderPath].filter(Boolean)) {
if (!existsSync(loaderPath)) {
continue;
}
const source = readFileSync(loaderPath, "utf8");
const patched = patchPiExtensionLoaderSource(source);
if (patched !== source) {
writeFileSync(loaderPath, patched, "utf8");
}
}
if (interactiveThemePath && existsSync(interactiveThemePath)) {
let themeSource = readFileSync(interactiveThemePath, "utf8");
const desiredGetEditorTheme = [
@@ -512,6 +775,21 @@ if (existsSync(webAccessPath)) {
}
}
const piWebAccessRoot = resolve(workspaceRoot, "pi-web-access");
if (existsSync(piWebAccessRoot)) {
for (const relativePath of PI_WEB_ACCESS_PATCH_TARGETS) {
const entryPath = resolve(piWebAccessRoot, relativePath);
if (!existsSync(entryPath)) continue;
const source = readFileSync(entryPath, "utf8");
const patched = patchPiWebAccessSource(relativePath, source);
if (patched !== source) {
writeFileSync(entryPath, patched, "utf8");
}
}
}
if (existsSync(sessionSearchIndexerPath)) {
const source = readFileSync(sessionSearchIndexerPath, "utf8");
const original = 'const sessionsDir = path.join(os.homedir(), ".pi", "agent", "sessions");';
@@ -523,6 +801,7 @@ if (existsSync(sessionSearchIndexerPath)) {
}
const oauthPagePath = piAiRoot ? resolve(piAiRoot, "dist", "utils", "oauth", "oauth-page.js") : null;
const googleSharedPath = piAiRoot ? resolve(piAiRoot, "dist", "providers", "google-shared.js") : null;
if (oauthPagePath && existsSync(oauthPagePath)) {
let source = readFileSync(oauthPagePath, "utf8");
@@ -535,30 +814,24 @@ if (oauthPagePath && existsSync(oauthPagePath)) {
if (changed) writeFileSync(oauthPagePath, source, "utf8");
}
if (googleSharedPath && existsSync(googleSharedPath)) {
const source = readFileSync(googleSharedPath, "utf8");
const patched = patchPiGoogleLegacySchemaSource(source);
if (patched !== source) {
writeFileSync(googleSharedPath, patched, "utf8");
}
}
const alphaHubAuthPath = findPackageRoot("@companion-ai/alpha-hub")
? resolve(findPackageRoot("@companion-ai/alpha-hub"), "src", "lib", "auth.js")
: null;
if (alphaHubAuthPath && existsSync(alphaHubAuthPath)) {
const source = readFileSync(alphaHubAuthPath, "utf8");
const patched = patchAlphaHubAuthSource(source);
if (patched !== source) {
writeFileSync(alphaHubAuthPath, patched, "utf8");
}
}
if (existsSync(piMemoryPath)) {

View File

@@ -1,26 +1,44 @@
import { existsSync, mkdirSync, readdirSync, readFileSync, rmSync, statSync, writeFileSync } from "node:fs";
import { createHash } from "node:crypto";
import { resolve } from "node:path";
import { spawnSync } from "node:child_process";
import { stripPiSubagentBuiltinModelSource } from "./lib/pi-subagents-patch.mjs";
const appRoot = resolve(import.meta.dirname, "..");
const settingsPath = resolve(appRoot, ".feynman", "settings.json");
const packageJsonPath = resolve(appRoot, "package.json");
const packageLockPath = resolve(appRoot, "package-lock.json");
const feynmanDir = resolve(appRoot, ".feynman");
const workspaceDir = resolve(appRoot, ".feynman", "npm");
const workspaceNodeModulesDir = resolve(workspaceDir, "node_modules");
const manifestPath = resolve(workspaceDir, ".runtime-manifest.json");
const workspacePackageJsonPath = resolve(workspaceDir, "package.json");
const workspaceArchivePath = resolve(feynmanDir, "runtime-workspace.tgz");
const PRUNE_VERSION = 4;
const PINNED_RUNTIME_PACKAGES = [
"@mariozechner/pi-agent-core",
"@mariozechner/pi-ai",
"@mariozechner/pi-coding-agent",
"@mariozechner/pi-tui",
];
function readPackageSpecs() {
const settings = JSON.parse(readFileSync(settingsPath, "utf8"));
const packageSpecs = Array.isArray(settings.packages)
? settings.packages
.filter((value) => typeof value === "string" && value.startsWith("npm:"))
.map((value) => value.slice(4))
: [];
for (const packageName of PINNED_RUNTIME_PACKAGES) {
const version = readLockedPackageVersion(packageName);
if (version) {
packageSpecs.push(`${packageName}@${version}`);
}
}
return Array.from(new Set(packageSpecs));
}
function parsePackageName(spec) {
@@ -28,10 +46,41 @@ function parsePackageName(spec) {
return match?.[1] ?? spec;
}
function readLockedPackageVersion(packageName) {
if (!existsSync(packageLockPath)) {
return undefined;
}
try {
const lockfile = JSON.parse(readFileSync(packageLockPath, "utf8"));
const entry = lockfile.packages?.[`node_modules/${packageName}`];
return typeof entry?.version === "string" ? entry.version : undefined;
} catch {
return undefined;
}
}
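This relies on the npm lockfile v2/v3 layout, where every installed package is keyed by its node_modules path; a minimal sketch with a hypothetical version:
// package-lock.json (excerpt)
// { "packages": { "node_modules/@mariozechner/pi-ai": { "version": "1.2.3" } } }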
function arraysMatch(left, right) {
return left.length === right.length && left.every((value, index) => value === right[index]);
}
function hashFile(path) {
if (!existsSync(path)) {
return null;
}
return createHash("sha256").update(readFileSync(path)).digest("hex");
}
function getRuntimeInputHash() {
const hash = createHash("sha256");
for (const path of [packageJsonPath, packageLockPath, settingsPath]) {
hash.update(path);
hash.update("\0");
hash.update(hashFile(path) ?? "missing");
hash.update("\0");
}
return hash.digest("hex");
}
function workspaceIsCurrent(packageSpecs) {
if (!existsSync(manifestPath) || !existsSync(workspaceNodeModulesDir)) {
return false;
@@ -42,6 +91,9 @@ function workspaceIsCurrent(packageSpecs) {
if (!Array.isArray(manifest.packageSpecs) || !arraysMatch(manifest.packageSpecs, packageSpecs)) {
return false;
}
if (manifest.runtimeInputHash !== getRuntimeInputHash()) {
return false;
}
if (
manifest.nodeAbi !== process.versions.modules ||
manifest.platform !== process.platform ||
@@ -72,6 +124,17 @@ function writeWorkspacePackageJson() {
);
}
function childNpmInstallEnv() {
return {
...process.env,
// `npm pack --dry-run` exports dry-run config to lifecycle scripts. The
// vendored runtime workspace must still install real node_modules so the
// publish artifact can be validated without poisoning the archive.
npm_config_dry_run: "false",
NPM_CONFIG_DRY_RUN: "false",
};
}
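A minimal sketch of the failure mode being avoided, assuming npm's documented behavior of exporting CLI config to lifecycle scripts as npm_config_* environment variables:
// Inside a lifecycle script spawned from `npm pack --dry-run`, a nested
// `npm install` would otherwise see npm_config_dry_run === "true" and
// skip writing node_modules; the override forces a real install.
const inheritedDryRun = process.env.npm_config_dry_run;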
function prepareWorkspace(packageSpecs) {
rmSync(workspaceDir, { recursive: true, force: true });
mkdirSync(workspaceDir, { recursive: true });
@@ -84,9 +147,9 @@ function prepareWorkspace(packageSpecs) {
const result = spawnSync(
process.env.npm_execpath ? process.execPath : "npm",
process.env.npm_execpath
? [process.env.npm_execpath, "install", "--prefer-offline", "--no-audit", "--no-fund", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs]
: ["install", "--prefer-offline", "--no-audit", "--no-fund", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs],
{ stdio: "inherit" },
? [process.env.npm_execpath, "install", "--prefer-online", "--no-audit", "--no-fund", "--no-dry-run", "--legacy-peer-deps", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs]
: ["install", "--prefer-online", "--no-audit", "--no-fund", "--no-dry-run", "--legacy-peer-deps", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs],
{ stdio: "inherit", env: childNpmInstallEnv() },
);
if (result.status !== 0) {
process.exit(result.status ?? 1);
@@ -99,6 +162,7 @@ function writeManifest(packageSpecs) {
JSON.stringify(
{
packageSpecs,
runtimeInputHash: getRuntimeInputHash(),
generatedAt: new Date().toISOString(),
nodeAbi: process.versions.modules,
nodeVersion: process.version,
@@ -122,6 +186,25 @@ function pruneWorkspace() {
}
}
function stripBundledPiSubagentModelPins() {
const agentsRoot = resolve(workspaceNodeModulesDir, "pi-subagents", "agents");
if (!existsSync(agentsRoot)) {
return false;
}
let changed = false;
for (const entry of readdirSync(agentsRoot, { withFileTypes: true })) {
if (!entry.isFile() || !entry.name.endsWith(".md")) continue;
const entryPath = resolve(agentsRoot, entry.name);
const source = readFileSync(entryPath, "utf8");
const patched = stripPiSubagentBuiltinModelSource(source);
if (patched === source) continue;
writeFileSync(entryPath, patched, "utf8");
changed = true;
}
return changed;
}
function archiveIsCurrent() {
if (!existsSync(workspaceArchivePath) || !existsSync(manifestPath)) {
return false;
@@ -145,6 +228,10 @@ const packageSpecs = readPackageSpecs();
if (workspaceIsCurrent(packageSpecs)) {
console.log("[feynman] vendored runtime workspace already up to date");
if (stripBundledPiSubagentModelPins()) {
writeManifest(packageSpecs);
console.log("[feynman] stripped bundled pi-subagents model pins");
}
if (archiveIsCurrent()) {
process.exit(0);
}
@@ -157,6 +244,7 @@ if (workspaceIsCurrent(packageSpecs)) {
console.log("[feynman] preparing vendored runtime workspace...");
prepareWorkspace(packageSpecs);
pruneWorkspace();
stripBundledPiSubagentModelPins();
writeManifest(packageSpecs);
createWorkspaceArchive();
console.log("[feynman] vendored runtime workspace ready");

View File

@@ -5,7 +5,7 @@ description: Autonomous experiment loop that tries ideas, measures results, keep
# Autoresearch
Run the `/autoresearch` workflow. Read the prompt template at `../prompts/autoresearch.md` for the full procedure.
Tools used: `init_experiment`, `run_experiment`, `log_experiment` (from pi-autoresearch)

View File

@@ -5,7 +5,7 @@ description: Contribute changes to the Feynman repository itself. Use when the t
# Contributing
Read `../CONTRIBUTING.md` first, then `../AGENTS.md` for repo-level agent conventions.
Use this skill when working on Feynman itself, especially for:

View File

@@ -5,7 +5,7 @@ description: Run a thorough, source-heavy investigation on any topic. Use when t
# Deep Research
Run the `/deepresearch` workflow. Read the prompt template at `../prompts/deepresearch.md` for the full procedure.
Agents used: `researcher`, `verifier`, `reviewer`

View File

@@ -5,6 +5,6 @@ description: Inspect active background research work including running processes
# Jobs
Run the `/jobs` workflow. Read the prompt template at `../prompts/jobs.md` for the full procedure.
Shows active `pi-processes`, scheduled `pi-schedule-prompt` entries, and running subagent tasks.

View File

@@ -5,7 +5,7 @@ description: Run a literature review using paper search and primary-source synth
# Literature Review
Run the `/lit` workflow. Read the prompt template at `../prompts/lit.md` for the full procedure.
Agents used: `researcher`, `verifier`, `reviewer`

View File

@@ -5,7 +5,7 @@ description: Compare a paper's claims against its public codebase. Use when the
# Paper-Code Audit
Run the `/audit` workflow. Read the prompt template at `../prompts/audit.md` for the full procedure.
Agents used: `researcher`, `verifier`

View File

@@ -5,7 +5,7 @@ description: Turn research findings into a polished paper-style draft with secti
# Paper Writing
Run the `/draft` workflow. Read the prompt template at `../prompts/draft.md` for the full procedure.
Agents used: `writer`, `verifier`

View File

@@ -5,7 +5,7 @@ description: Simulate a tough but constructive peer review of an AI research art
# Peer Review
Run the `/review` workflow. Read the prompt template at `../prompts/review.md` for the full procedure.
Agents used: `researcher`, `reviewer`

View File

@@ -5,7 +5,7 @@ description: Plan or execute a replication of a paper, claim, or benchmark. Use
# Replication
Run the `/replicate` workflow. Read the prompt template at `../prompts/replicate.md` for the full procedure.
Agents used: `researcher`

View File

@@ -5,6 +5,6 @@ description: Write a durable session log capturing completed work, findings, ope
# Session Log
Run the `/log` workflow. Read the prompt template at `../prompts/log.md` for the full procedure.
Output: session log in `notes/session-logs/`.

View File

@@ -5,7 +5,7 @@ description: Compare multiple sources on a topic and produce a grounded comparis
# Source Comparison
Run the `/compare` workflow. Read the prompt template at `../prompts/compare.md` for the full procedure.
Agents used: `researcher`, `verifier`

View File

@@ -1,19 +0,0 @@
---
name: valichord-validation
description: Integrate with ValiChord to submit a replication as a cryptographically verified validator attestation, discover studies awaiting independent validation, query Harmony Records and reproducibility badges, or assist researchers in preparing a study for the validation pipeline. Feynman operates as a first-class AI validator — publishing a validator profile, claiming studies, running the blind commit-reveal protocol, and accumulating a verifiable per-discipline reputation. Also surfaces reproducibility status during /deepresearch and literature reviews via ValiChord's HTTP Gateway.
---
# ValiChord Validation
Run the `/valichord` workflow. Read the prompt template at `prompts/valichord.md` for the full procedure.
ValiChord is a four-DNA Holochain system for scientific reproducibility verification. Feynman integrates at four points:
- As a **validator agent** — running `/replicate` then submitting findings as a sealed attestation into the blind commit-reveal protocol, earning reproducibility badges for researchers and building Feynman's own verifiable per-discipline reputation (Provisional → Certified → Senior)
- As a **proactive discovery agent** — querying the pending study queue by discipline, assessing difficulty, and autonomously claiming appropriate validation work without waiting to be assigned
- As a **researcher's assistant** — helping prepare studies for submission: registering protocols, taking cryptographic data snapshots, and running the Repository Readiness Checker to identify and fix reproducibility failure modes before validation begins
- As a **research query tool** — checking whether a study carries a Harmony Record or reproducibility badge (Gold/Silver/Bronze) via ValiChord's HTTP Gateway, for use during `/deepresearch` or literature reviews
Output: a Harmony Record — an immutable, publicly accessible cryptographic proof of independent reproducibility written to the ValiChord Governance DHT — plus automatic badge issuance and an updated validator reputation score.
Live demo (commit-reveal cycle end-to-end): https://youtu.be/DQ5wZSD1YEw
ValiChord repo: https://github.com/topeuph-ai/ValiChord

View File

@@ -5,7 +5,7 @@ description: Set up a recurring research watch on a topic, company, paper area,
# Watch
Run the `/watch` workflow. Read the prompt template at `../prompts/watch.md` for the full procedure.
Agents used: `researcher`

View File

@@ -1,6 +1,6 @@
import "dotenv/config";
import { readFileSync } from "node:fs";
import { existsSync, readFileSync } from "node:fs";
import { dirname, resolve } from "node:path";
import { parseArgs } from "node:util";
import { fileURLToPath } from "node:url";
@@ -11,13 +11,17 @@ import {
login as loginAlpha,
logout as logoutAlpha,
} from "@companion-ai/alpha-hub/lib";
import { SettingsManager } from "@mariozechner/pi-coding-agent";
import { syncBundledAssets } from "./bootstrap/sync.js";
import { ensureFeynmanHome, getDefaultSessionDir, getFeynmanAgentDir, getFeynmanHome } from "./config/paths.js";
import { launchPiChat } from "./pi/launch.js";
import { installPackageSources, updateConfiguredPackages } from "./pi/package-ops.js";
import { MAX_NATIVE_PACKAGE_NODE_MAJOR } from "./pi/package-presets.js";
import { CORE_PACKAGE_SOURCES, getOptionalPackagePresetSources, listOptionalPackagePresets } from "./pi/package-presets.js";
import { normalizeFeynmanSettings, normalizeThinkingLevel, parseModelSpec } from "./pi/settings.js";
import { applyFeynmanPackageManagerEnv } from "./pi/runtime.js";
import { getConfiguredServiceTier, normalizeServiceTier, setConfiguredServiceTier } from "./model/service-tier.js";
import {
authenticateModelProvider,
getCurrentModelSpec,
@@ -26,7 +30,9 @@ import {
printModelList,
setDefaultModelSpec,
} from "./model/commands.js";
import { printSearchStatus } from "./search/commands.js";
import { buildModelStatusSnapshotFromRecords, getAvailableModelRecords, getSupportedModelRecords } from "./model/catalog.js";
import { clearSearchConfig, printSearchStatus, setSearchProvider } from "./search/commands.js";
import type { PiWebSearchProvider } from "./pi/web-access.js";
import { runDoctor, runStatus } from "./setup/doctor.js";
import { setupPreviewDependencies } from "./setup/preview.js";
import { runSetup } from "./setup/setup.js";
@@ -127,7 +133,7 @@ async function handleModelCommand(subcommand: string | undefined, args: string[]
if (subcommand === "login") {
if (args[0]) {
// Specific provider given - resolve OAuth vs API-key setup automatically
await loginModelProvider(feynmanAuthPath, args[0], feynmanSettingsPath);
} else {
// No provider specified - show auth method choice
@@ -144,39 +150,67 @@ async function handleModelCommand(subcommand: string | undefined, args: string[]
if (subcommand === "set") {
const spec = args[0];
if (!spec) {
throw new Error("Usage: feynman model set <provider/model>");
throw new Error("Usage: feynman model set <provider/model|provider:model>");
}
setDefaultModelSpec(feynmanSettingsPath, feynmanAuthPath, spec);
return;
}
if (subcommand === "tier") {
const requested = args[0];
if (!requested) {
console.log(getConfiguredServiceTier(feynmanSettingsPath) ?? "not set");
return;
}
if (requested === "unset" || requested === "clear" || requested === "off") {
setConfiguredServiceTier(feynmanSettingsPath, undefined);
console.log("Cleared service tier override");
return;
}
const tier = normalizeServiceTier(requested);
if (!tier) {
throw new Error("Usage: feynman model tier <auto|default|flex|priority|standard_only|unset>");
}
setConfiguredServiceTier(feynmanSettingsPath, tier);
console.log(`Service tier set to ${tier}`);
return;
}
throw new Error(`Unknown model command: ${subcommand}`);
}
async function handleUpdateCommand(workingDir: string, feynmanAgentDir: string, source?: string): Promise<void> {
try {
const result = await updateConfiguredPackages(workingDir, feynmanAgentDir, source);
if (result.updated.length === 0) {
console.log("All packages up to date.");
return;
}
for (const updatedSource of result.updated) {
console.log(`Updated ${updatedSource}`);
}
for (const skippedSource of result.skipped) {
console.log(`Skipped ${skippedSource} on Node ${process.versions.node} (native packages are only supported through Node ${MAX_NATIVE_PACKAGE_NODE_MAJOR}.x).`);
}
console.log("All packages up to date.");
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
if (message.includes("No supported package manager found")) {
console.log("No package manager is available for live package updates.");
console.log("If you installed the standalone app, rerun the installer to get newer bundled packages.");
return;
}
throw error;
}
}
async function handlePackagesCommand(subcommand: string | undefined, args: string[], workingDir: string, feynmanAgentDir: string): Promise<void> {
applyFeynmanPackageManagerEnv(feynmanAgentDir);
const settingsManager = SettingsManager.create(workingDir, feynmanAgentDir);
const configuredSources = new Set(
settingsManager
@@ -216,38 +250,67 @@ async function handlePackagesCommand(subcommand: string | undefined, args: strin
throw new Error(`Unknown package preset: ${target}`);
}
const appRoot = resolve(dirname(fileURLToPath(import.meta.url)), "..");
const isStandaloneBundle = !existsSync(resolve(appRoot, ".feynman", "runtime-workspace.tgz")) && existsSync(resolve(appRoot, ".feynman", "npm"));
if (target === "generative-ui" && process.platform === "darwin" && isStandaloneBundle) {
console.log("The generative-ui preset is currently unavailable in the standalone macOS bundle.");
console.log("Its native glimpseui dependency fails to compile reliably in that environment.");
console.log("If you need generative-ui, install Feynman through npm instead of the standalone bundle.");
return;
}
const pendingSources = sources.filter((source) => !configuredSources.has(source));
for (const source of sources) {
if (configuredSources.has(source)) {
console.log(`${source} already installed`);
}
}
if (pendingSources.length === 0) {
console.log("Optional packages installed.");
return;
}
try {
const result = await installPackageSources(workingDir, feynmanAgentDir, pendingSources, { persist: true });
for (const skippedSource of result.skipped) {
console.log(`Skipped ${skippedSource} on Node ${process.versions.node} (native packages are only supported through Node ${MAX_NATIVE_PACKAGE_NODE_MAJOR}.x).`);
}
await settingsManager.flush();
console.log("Optional packages installed.");
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
if (message.includes("No supported package manager found")) {
console.log("No package manager is available for optional package installs.");
console.log("Install npm, pnpm, or bun, or rerun the standalone installer for bundled package updates.");
return;
}
throw error;
}
}
function handleSearchCommand(subcommand: string | undefined, args: string[]): void {
if (!subcommand || subcommand === "status") {
printSearchStatus();
return;
}
if (subcommand === "set") {
const provider = args[0] as PiWebSearchProvider | undefined;
const validProviders: PiWebSearchProvider[] = ["auto", "perplexity", "exa", "gemini"];
if (!provider || !validProviders.includes(provider)) {
throw new Error("Usage: feynman search set <auto|perplexity|exa|gemini> [api-key]");
}
setSearchProvider(provider, args[1]);
return;
}
if (subcommand === "clear") {
clearSearchConfig();
return;
}
throw new Error(`Unknown search command: ${subcommand}`);
}
@@ -283,6 +346,24 @@ export function resolveInitialPrompt(
return undefined;
}
export function shouldRunInteractiveSetup(
explicitModelSpec: string | undefined,
currentModelSpec: string | undefined,
isInteractiveTerminal: boolean,
authPath: string,
): boolean {
if (explicitModelSpec || !isInteractiveTerminal) {
return false;
}
const status = buildModelStatusSnapshotFromRecords(
getSupportedModelRecords(authPath),
getAvailableModelRecords(authPath),
currentModelSpec,
);
return !status.currentValid;
}
export async function main(): Promise<void> {
const here = dirname(fileURLToPath(import.meta.url));
const appRoot = resolve(here, "..");
@@ -305,9 +386,11 @@ export async function main(): Promise<void> {
"alpha-login": { type: "boolean" },
"alpha-logout": { type: "boolean" },
"alpha-status": { type: "boolean" },
mode: { type: "string" },
model: { type: "string" },
"new-session": { type: "boolean" },
prompt: { type: "string" },
"service-tier": { type: "string" },
"session-dir": { type: "string" },
"setup-preview": { type: "boolean" },
thinking: { type: "string" },
@@ -414,7 +497,7 @@ export async function main(): Promise<void> {
}
if (command === "search") {
handleSearchCommand(rest[0], rest.slice(1));
return;
}
@@ -434,6 +517,17 @@ export async function main(): Promise<void> {
}
const explicitModelSpec = values.model ?? process.env.FEYNMAN_MODEL;
const explicitServiceTier = normalizeServiceTier(values["service-tier"] ?? process.env.FEYNMAN_SERVICE_TIER);
const mode = values.mode;
if (mode !== undefined && mode !== "text" && mode !== "json" && mode !== "rpc") {
throw new Error("Unknown mode. Use text, json, or rpc.");
}
if ((values["service-tier"] ?? process.env.FEYNMAN_SERVICE_TIER) && !explicitServiceTier) {
throw new Error("Unknown service tier. Use auto, default, flex, priority, or standard_only.");
}
if (explicitServiceTier) {
process.env.FEYNMAN_SERVICE_TIER = explicitServiceTier;
}
if (explicitModelSpec) {
const modelRegistry = createModelRegistry(feynmanAuthPath);
const explicitModel = parseModelSpec(explicitModelSpec, modelRegistry);
@@ -442,7 +536,13 @@ export async function main(): Promise<void> {
}
}
const currentModelSpec = getCurrentModelSpec(feynmanSettingsPath);
if (shouldRunInteractiveSetup(
explicitModelSpec,
currentModelSpec,
Boolean(process.stdin.isTTY && process.stdout.isTTY),
feynmanAuthPath,
)) {
await runSetup({
settingsPath: feynmanSettingsPath,
bundledSettingsPath,
@@ -458,15 +558,17 @@ export async function main(): Promise<void> {
normalizeFeynmanSettings(feynmanSettingsPath, bundledSettingsPath, thinkingLevel, feynmanAuthPath);
}
const workflowCommandNames = new Set(readPromptSpecs(appRoot).filter((s) => s.topLevelCli).map((s) => s.name));
await launchPiChat({
appRoot,
workingDir,
sessionDir,
feynmanAgentDir,
feynmanVersion,
mode,
thinkingLevel,
explicitModelSpec,
oneShotPrompt: values.prompt,
initialPrompt: resolveInitialPrompt(command, rest, values.prompt, workflowCommandNames),
});
}

View File

@@ -48,6 +48,7 @@ const PROVIDER_LABELS: Record<string, string> = {
huggingface: "Hugging Face",
"amazon-bedrock": "Amazon Bedrock",
"azure-openai-responses": "Azure OpenAI Responses",
litellm: "LiteLLM Proxy",
};
const RESEARCH_MODEL_PREFERENCES = [
@@ -95,6 +96,14 @@ const RESEARCH_MODEL_PREFERENCES = [
spec: "zai/glm-5",
reason: "good fallback when GLM is the available research model",
},
{
spec: "minimax/minimax-m2.7",
reason: "good fallback when MiniMax is the available research model",
},
{
spec: "minimax/minimax-m2.7-highspeed",
reason: "good fallback when MiniMax is the available research model",
},
{
spec: "kimi-coding/kimi-k2-thinking",
reason: "good fallback when Kimi is the available research model",

View File

@@ -4,7 +4,7 @@ import { exec as execCallback } from "node:child_process";
import { promisify } from "node:util";
import { readJson } from "../pi/settings.js";
import { promptChoice, promptText } from "../setup/prompts.js";
import { promptChoice, promptSelect, promptText, type PromptSelectOption } from "../setup/prompts.js";
import { openUrl } from "../system/open-url.js";
import { printInfo, printSection, printSuccess, printWarning } from "../ui/terminal.js";
import {
@@ -55,13 +55,22 @@ async function selectOAuthProvider(authPath: string, action: "login" | "logout")
return providers[0];
}
const selection = await promptSelect<OAuthProviderInfo | "cancel">(
`Choose an OAuth provider to ${action}:`,
[
...providers.map((provider) => ({
value: provider,
label: provider.name ?? provider.id,
hint: provider.id,
})),
{ value: "cancel", label: "Cancel" },
],
providers[0],
);
if (selection === "cancel") {
return undefined;
}
return selection;
}
type ApiKeyProviderInfo = {
@@ -71,10 +80,13 @@ type ApiKeyProviderInfo = {
};
const API_KEY_PROVIDERS: ApiKeyProviderInfo[] = [
{ id: "__custom__", label: "Custom provider (baseUrl + API key)" },
{ id: "openai", label: "OpenAI Platform API", envVar: "OPENAI_API_KEY" },
{ id: "anthropic", label: "Anthropic API", envVar: "ANTHROPIC_API_KEY" },
{ id: "google", label: "Google Gemini API", envVar: "GEMINI_API_KEY" },
{ id: "lm-studio", label: "LM Studio (local OpenAI-compatible server)" },
{ id: "litellm", label: "LiteLLM Proxy (OpenAI-compatible gateway)" },
{ id: "__custom__", label: "Custom provider (local/self-hosted/proxy)" },
{ id: "amazon-bedrock", label: "Amazon Bedrock (AWS credential chain)" },
{ id: "openrouter", label: "OpenRouter", envVar: "OPENROUTER_API_KEY" },
{ id: "zai", label: "Z.AI / GLM", envVar: "ZAI_API_KEY" },
{ id: "kimi-coding", label: "Kimi / Moonshot", envVar: "KIMI_API_KEY" },
@@ -91,16 +103,58 @@ const API_KEY_PROVIDERS: ApiKeyProviderInfo[] = [
{ id: "azure-openai-responses", label: "Azure OpenAI (Responses)", envVar: "AZURE_OPENAI_API_KEY" },
];
function resolveApiKeyProvider(input: string): ApiKeyProviderInfo | undefined {
const normalizedInput = normalizeProviderId(input);
if (!normalizedInput) {
return undefined;
}
return API_KEY_PROVIDERS.find((provider) => provider.id === normalizedInput);
}
export function resolveModelProviderForCommand(
authPath: string,
input: string,
): { kind: "oauth" | "api-key"; id: string } | undefined {
const oauthProvider = resolveOAuthProvider(authPath, input);
if (oauthProvider) {
return { kind: "oauth", id: oauthProvider.id };
}
const apiKeyProvider = resolveApiKeyProvider(input);
if (apiKeyProvider) {
return { kind: "api-key", id: apiKeyProvider.id };
}
return undefined;
}
function apiKeyProviderHint(provider: ApiKeyProviderInfo): string {
if (provider.id === "__custom__") {
return "Ollama, vLLM, LM Studio, proxies";
}
if (provider.id === "lm-studio") {
return "http://localhost:1234/v1";
}
if (provider.id === "litellm") {
return "http://localhost:4000/v1";
}
return provider.envVar ?? provider.id;
}
async function selectApiKeyProvider(): Promise<ApiKeyProviderInfo | undefined> {
const options: PromptSelectOption<ApiKeyProviderInfo | "cancel">[] = API_KEY_PROVIDERS.map((provider) => ({
value: provider,
label: provider.label,
hint: apiKeyProviderHint(provider),
}));
options.push({ value: "cancel", label: "Cancel" });
const defaultProvider = API_KEY_PROVIDERS.find((provider) => provider.id === "openai") ?? API_KEY_PROVIDERS[0];
const selection = await promptSelect("Choose an API-key provider:", options, defaultProvider);
if (selection === "cancel") {
return undefined;
}
return selection;
}
type CustomProviderSetup = {
@@ -321,6 +375,103 @@ async function promptCustomProviderSetup(): Promise<CustomProviderSetup | undefi
return { providerId, modelIds, baseUrl, api, apiKeyConfig, authHeader };
}
async function promptLmStudioProviderSetup(): Promise<CustomProviderSetup | undefined> {
printSection("LM Studio");
printInfo("Start the LM Studio local server first, then load a model.");
const baseUrlRaw = await promptText("Base URL", "http://localhost:1234/v1");
const { baseUrl } = normalizeCustomProviderBaseUrl("openai-completions", baseUrlRaw);
if (!baseUrl) {
printWarning("Base URL is required.");
return undefined;
}
const detectedModelIds = await bestEffortFetchOpenAiModelIds(baseUrl, "lm-studio", false);
let modelIdsDefault = "local-model";
if (detectedModelIds && detectedModelIds.length > 0) {
const sample = detectedModelIds.slice(0, 10).join(", ");
printInfo(`Detected LM Studio models: ${sample}${detectedModelIds.length > 10 ? ", ..." : ""}`);
modelIdsDefault = detectedModelIds[0]!;
} else {
printInfo("No models detected from /models. Enter the exact model id shown in LM Studio.");
}
const modelIdsRaw = await promptText("Model id(s) (comma-separated)", modelIdsDefault);
const modelIds = normalizeModelIds(modelIdsRaw);
if (modelIds.length === 0) {
printWarning("At least one model id is required.");
return undefined;
}
return {
providerId: "lm-studio",
modelIds,
baseUrl,
api: "openai-completions",
apiKeyConfig: "lm-studio",
authHeader: false,
};
}
async function promptLiteLlmProviderSetup(): Promise<CustomProviderSetup | undefined> {
printSection("LiteLLM Proxy");
printInfo("Start the LiteLLM proxy first. Feynman uses the OpenAI-compatible chat-completions API.");
const baseUrlRaw = await promptText("Base URL", "http://localhost:4000/v1");
const { baseUrl } = normalizeCustomProviderBaseUrl("openai-completions", baseUrlRaw);
if (!baseUrl) {
printWarning("Base URL is required.");
return undefined;
}
const keyChoices = [
"Yes (use LITELLM_MASTER_KEY and send Authorization: Bearer <key>)",
"No (proxy runs without authentication)",
"Cancel",
];
const keySelection = await promptChoice("Is the proxy protected by a master key?", keyChoices, 0);
if (keySelection >= 2) {
return undefined;
}
const hasKey = keySelection === 0;
const apiKeyConfig = hasKey ? "LITELLM_MASTER_KEY" : "local";
const authHeader = hasKey;
if (hasKey) {
printInfo("Set LITELLM_MASTER_KEY in your shell or .env before using Feynman.");
}
const resolvedKey = hasKey ? await resolveApiKeyConfig(apiKeyConfig) : apiKeyConfig;
const detectedModelIds = resolvedKey
? await bestEffortFetchOpenAiModelIds(baseUrl, resolvedKey, authHeader)
: undefined;
let modelIdsDefault = "gpt-4";
if (detectedModelIds && detectedModelIds.length > 0) {
const sample = detectedModelIds.slice(0, 10).join(", ");
printInfo(`Detected LiteLLM models: ${sample}${detectedModelIds.length > 10 ? ", ..." : ""}`);
modelIdsDefault = detectedModelIds[0]!;
} else {
printInfo("No models detected from /models. Enter the model id(s) from your LiteLLM config.");
}
const modelIdsRaw = await promptText("Model id(s) (comma-separated)", modelIdsDefault);
const modelIds = normalizeModelIds(modelIdsRaw);
if (modelIds.length === 0) {
printWarning("At least one model id is required.");
return undefined;
}
return {
providerId: "litellm",
modelIds,
baseUrl,
api: "openai-completions",
apiKeyConfig,
authHeader,
};
}
async function verifyCustomProvider(setup: CustomProviderSetup, authPath: string): Promise<void> {
const registry = createModelRegistry(authPath);
const modelsError = registry.getError();
@@ -447,13 +598,116 @@ async function verifyCustomProvider(setup: CustomProviderSetup, authPath: string
printInfo("Verification: skipped network probe for this API mode.");
}
async function verifyBedrockCredentialChain(): Promise<void> {
const { defaultProvider } = await import("@aws-sdk/credential-provider-node");
const credentials = await defaultProvider({})();
if (!credentials?.accessKeyId || !credentials?.secretAccessKey) {
throw new Error("AWS credential chain resolved without usable Bedrock credentials.");
}
}
async function configureBedrockProvider(authPath: string): Promise<boolean> {
printSection("AWS Credentials: Amazon Bedrock");
printInfo("Feynman will verify the AWS SDK credential chain used by Pi's Bedrock provider.");
printInfo("Supported sources include AWS_PROFILE, ~/.aws credentials/config, SSO, ECS/IRSA, and EC2 instance roles.");
try {
await verifyBedrockCredentialChain();
AuthStorage.create(authPath).set("amazon-bedrock", { type: "api_key", key: "<authenticated>" });
printSuccess("Verified AWS credential chain and marked Amazon Bedrock as configured.");
printInfo("Use `feynman model list` to see available Bedrock models.");
return true;
} catch (error) {
printWarning(`AWS credential verification failed: ${error instanceof Error ? error.message : String(error)}`);
printInfo("Configure AWS credentials first, for example:");
printInfo(" export AWS_PROFILE=default");
printInfo(" # or set AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY");
printInfo(" # or use an EC2/ECS/IRSA role with valid Bedrock access");
return false;
}
}
function maybeSetRecommendedDefaultModel(settingsPath: string | undefined, authPath: string): void {
if (!settingsPath) {
return;
}
const currentSpec = getCurrentModelSpec(settingsPath);
const available = getAvailableModelRecords(authPath);
const currentValid = currentSpec ? available.some((m) => `${m.provider}/${m.id}` === currentSpec) : false;
if ((!currentSpec || !currentValid) && available.length > 0) {
const recommended = chooseRecommendedModel(authPath);
if (recommended) {
setDefaultModelSpec(settingsPath, authPath, recommended.spec);
}
}
}
async function configureApiKeyProvider(authPath: string, providerId?: string): Promise<boolean> {
const provider = providerId ? resolveApiKeyProvider(providerId) : await selectApiKeyProvider();
if (!provider) {
if (providerId) {
throw new Error(`Unknown API-key model provider: ${providerId}`);
}
printInfo("API key setup cancelled.");
return false;
}
if (provider.id === "amazon-bedrock") {
return configureBedrockProvider(authPath);
}
if (provider.id === "lm-studio") {
const setup = await promptLmStudioProviderSetup();
if (!setup) {
printInfo("LM Studio setup cancelled.");
return false;
}
const modelsJsonPath = getModelsJsonPath(authPath);
const result = upsertProviderConfig(modelsJsonPath, setup.providerId, {
baseUrl: setup.baseUrl,
apiKey: setup.apiKeyConfig,
api: setup.api,
authHeader: setup.authHeader,
models: setup.modelIds.map((id) => ({ id })),
});
if (!result.ok) {
printWarning(result.error);
return false;
}
printSuccess("Saved LM Studio provider.");
await verifyCustomProvider(setup, authPath);
return true;
}
if (provider.id === "litellm") {
const setup = await promptLiteLlmProviderSetup();
if (!setup) {
printInfo("LiteLLM setup cancelled.");
return false;
}
const modelsJsonPath = getModelsJsonPath(authPath);
const result = upsertProviderConfig(modelsJsonPath, setup.providerId, {
baseUrl: setup.baseUrl,
apiKey: setup.apiKeyConfig,
api: setup.api,
authHeader: setup.authHeader,
models: setup.modelIds.map((id) => ({ id })),
});
if (!result.ok) {
printWarning(result.error);
return false;
}
printSuccess("Saved LiteLLM provider.");
await verifyCustomProvider(setup, authPath);
return true;
}
if (provider.id === "__custom__") {
const setup = await promptCustomProviderSetup();
if (!setup) {
@@ -512,7 +766,7 @@ async function configureApiKeyProvider(authPath: string): Promise<boolean> {
}
function resolveAvailableModelSpec(authPath: string, input: string): string | undefined {
const normalizedInput = input.trim().replace(/^([^/:]+):(.+)$/, "$1/$2").toLowerCase();
if (!normalizedInput) {
return undefined;
}
@@ -528,6 +782,17 @@ function resolveAvailableModelSpec(authPath: string, input: string): string | un
return `${exactIdMatches[0]!.provider}/${exactIdMatches[0]!.id}`;
}
// When multiple providers expose the same bare model ID, prefer providers the
// user explicitly configured in auth storage.
if (exactIdMatches.length > 1) {
const authData = readJson(authPath) as Record<string, unknown>;
const configuredProviders = new Set(Object.keys(authData));
const configuredMatches = exactIdMatches.filter((model) => configuredProviders.has(model.provider));
if (configuredMatches.length === 1) {
return `${configuredMatches[0]!.provider}/${configuredMatches[0]!.id}`;
}
}
return undefined;
}
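With the normalization above, both CLI spellings of a model spec resolve identically (a sketch; assumes the anthropic provider is configured and uses a model id that appears elsewhere in this changeset):
resolveAvailableModelSpec(authPath, "anthropic:claude-opus-4-7");
resolveAvailableModelSpec(authPath, "anthropic/claude-opus-4-7");
// both are normalized to "anthropic/claude-opus-4-7" before matching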
@@ -566,30 +831,22 @@ export function printModelList(settingsPath: string, authPath: string): void {
export async function authenticateModelProvider(authPath: string, settingsPath?: string): Promise<boolean> {
const choices = [
"API key (OpenAI, Anthropic, Google, custom provider, ...)",
"OAuth login (ChatGPT Plus/Pro, Claude Pro/Max, Copilot, ...)",
"OAuth login (recommended: ChatGPT Plus/Pro, Claude Pro/Max, Copilot, ...)",
"API key or custom provider (OpenAI, Anthropic, Google, local/self-hosted, ...)",
"Cancel",
];
const selection = await promptChoice("How do you want to authenticate?", choices, 0);
if (selection === 0) {
return loginModelProvider(authPath, undefined, settingsPath);
}
if (selection === 1) {
const configured = await configureApiKeyProvider(authPath);
if (configured) {
maybeSetRecommendedDefaultModel(settingsPath, authPath);
}
return configured;
}
printInfo("Authentication cancelled.");
@@ -597,10 +854,24 @@ export async function authenticateModelProvider(authPath: string, settingsPath?:
}
export async function loginModelProvider(authPath: string, providerId?: string, settingsPath?: string): Promise<boolean> {
if (providerId) {
const resolvedProvider = resolveModelProviderForCommand(authPath, providerId);
if (!resolvedProvider) {
throw new Error(`Unknown model provider: ${providerId}`);
}
if (resolvedProvider.kind === "api-key") {
const configured = await configureApiKeyProvider(authPath, resolvedProvider.id);
if (configured) {
maybeSetRecommendedDefaultModel(settingsPath, authPath);
}
return configured;
}
}
const provider = providerId ? resolveOAuthProvider(authPath, providerId) : await selectOAuthProvider(authPath, "login");
if (!provider) {
if (providerId) {
throw new Error(`Unknown model provider: ${providerId}`);
}
printInfo("Login cancelled.");
return false;
@@ -637,35 +908,38 @@ export async function loginModelProvider(authPath: string, providerId?: string,
printSuccess(`Model provider login complete: ${provider.id}`);
maybeSetRecommendedDefaultModel(settingsPath, authPath);
return true;
}
export async function logoutModelProvider(authPath: string, providerId?: string): Promise<void> {
const provider = providerId ? resolveOAuthProvider(authPath, providerId) : await selectOAuthProvider(authPath, "logout");
if (!provider) {
const authStorage = AuthStorage.create(authPath);
if (providerId) {
const resolvedProvider = resolveModelProviderForCommand(authPath, providerId);
if (resolvedProvider) {
authStorage.logout(resolvedProvider.id);
printSuccess(`Model provider logout complete: ${resolvedProvider.id}`);
return;
}
const normalizedProviderId = normalizeProviderId(providerId);
if (authStorage.has(normalizedProviderId)) {
authStorage.logout(normalizedProviderId);
printSuccess(`Model provider logout complete: ${normalizedProviderId}`);
return;
}
throw new Error(`Unknown model provider: ${providerId}`);
}
const provider = await selectOAuthProvider(authPath, "logout");
if (!provider) {
printInfo("Logout cancelled.");
return;
}
authStorage.logout(provider.id);
printSuccess(`Model provider logout complete: ${provider.id}`);
}
@@ -689,20 +963,20 @@ export async function runModelSetup(settingsPath: string, authPath: string): Pro
while (status.availableModels.length === 0) {
const choices = [
"API key (OpenAI, Anthropic, ZAI, Kimi, MiniMax, ...)",
"OAuth login (ChatGPT Plus/Pro, Claude Pro/Max, Copilot, ...)",
"OAuth login (recommended: ChatGPT Plus/Pro, Claude Pro/Max, Copilot, ...)",
"API key or custom provider (OpenAI, Anthropic, ZAI, Kimi, MiniMax, ...)",
"Cancel",
];
const selection = await promptChoice("Choose how to configure model access:", choices, 0);
if (selection === 0) {
const loggedIn = await loginModelProvider(authPath, undefined, settingsPath);
if (!loggedIn) {
status = collectModelStatus(settingsPath, authPath);
continue;
}
} else if (selection === 1) {
const configured = await configureApiKeyProvider(authPath);
if (!configured) {
status = collectModelStatus(settingsPath, authPath);
continue;
}

View File

@@ -1,12 +1,41 @@
import { dirname, resolve } from "node:path";
import { AuthStorage, ModelRegistry } from "@mariozechner/pi-coding-agent";
import { getModels } from "@mariozechner/pi-ai";
import { anthropicOAuthProvider } from "@mariozechner/pi-ai/oauth";
export function getModelsJsonPath(authPath: string): string {
return resolve(dirname(authPath), "models.json");
}
function registerFeynmanModelOverlays(modelRegistry: ModelRegistry): void {
const anthropicModels = getModels("anthropic");
if (anthropicModels.some((model) => model.id === "claude-opus-4-7")) {
return;
}
const opus46 = anthropicModels.find((model) => model.id === "claude-opus-4-6");
if (!opus46) {
return;
}
modelRegistry.registerProvider("anthropic", {
baseUrl: "https://api.anthropic.com",
api: "anthropic-messages",
oauth: anthropicOAuthProvider,
models: [
...anthropicModels,
{
...opus46,
id: "claude-opus-4-7",
name: "Claude Opus 4.7",
},
],
});
}
export function createModelRegistry(authPath: string): ModelRegistry {
const registry = ModelRegistry.create(AuthStorage.create(authPath), getModelsJsonPath(authPath));
registerFeynmanModelOverlays(registry);
return registry;
}

src/model/service-tier.ts (new file, 65 lines)
View File

@@ -0,0 +1,65 @@
import { mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { dirname } from "node:path";
export const FEYNMAN_SERVICE_TIERS = [
"auto",
"default",
"flex",
"priority",
"standard_only",
] as const;
export type FeynmanServiceTier = (typeof FEYNMAN_SERVICE_TIERS)[number];
const SERVICE_TIER_SET = new Set<string>(FEYNMAN_SERVICE_TIERS);
const OPENAI_SERVICE_TIERS = new Set<FeynmanServiceTier>(["auto", "default", "flex", "priority"]);
const ANTHROPIC_SERVICE_TIERS = new Set<FeynmanServiceTier>(["auto", "standard_only"]);
function readSettings(settingsPath: string): Record<string, unknown> {
try {
return JSON.parse(readFileSync(settingsPath, "utf8")) as Record<string, unknown>;
} catch {
return {};
}
}
export function normalizeServiceTier(value: string | undefined): FeynmanServiceTier | undefined {
if (!value) return undefined;
const normalized = value.trim().toLowerCase();
return SERVICE_TIER_SET.has(normalized) ? (normalized as FeynmanServiceTier) : undefined;
}
export function getConfiguredServiceTier(settingsPath: string): FeynmanServiceTier | undefined {
const settings = readSettings(settingsPath);
return normalizeServiceTier(typeof settings.serviceTier === "string" ? settings.serviceTier : undefined);
}
export function setConfiguredServiceTier(settingsPath: string, tier: FeynmanServiceTier | undefined): void {
const settings = readSettings(settingsPath);
if (tier) {
settings.serviceTier = tier;
} else {
delete settings.serviceTier;
}
mkdirSync(dirname(settingsPath), { recursive: true });
writeFileSync(settingsPath, JSON.stringify(settings, null, 2) + "\n", "utf8");
}
export function resolveActiveServiceTier(settingsPath: string): FeynmanServiceTier | undefined {
return normalizeServiceTier(process.env.FEYNMAN_SERVICE_TIER) ?? getConfiguredServiceTier(settingsPath);
}
export function resolveProviderServiceTier(
provider: string | undefined,
tier: FeynmanServiceTier | undefined,
): FeynmanServiceTier | undefined {
if (!provider || !tier) return undefined;
if ((provider === "openai" || provider === "openai-codex") && OPENAI_SERVICE_TIERS.has(tier)) {
return tier;
}
if (provider === "anthropic" && ANTHROPIC_SERVICE_TIERS.has(tier)) {
return tier;
}
return undefined;
}
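A minimal usage sketch: the tier is forwarded only to providers that accept it.
resolveProviderServiceTier("openai", "flex"); // => "flex"
resolveProviderServiceTier("anthropic", "flex"); // => undefined (Anthropic accepts only auto/standard_only)
resolveProviderServiceTier("anthropic", "standard_only"); // => "standard_only"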

View File

@@ -1,9 +1,15 @@
import { spawn } from "node:child_process";
import { existsSync } from "node:fs";
import { constants } from "node:os";
import { buildPiArgs, buildPiEnv, type PiRuntimeOptions, resolvePiPaths } from "./runtime.js";
import { buildPiArgs, buildPiEnv, type PiRuntimeOptions, resolvePiPaths, toNodeImportSpecifier } from "./runtime.js";
import { ensureSupportedNodeVersion } from "../system/node-version.js";
export function exitCodeFromSignal(signal: NodeJS.Signals): number {
const signalNumber = constants.signals[signal];
return typeof signalNumber === "number" ? 128 + signalNumber : 1;
}
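This follows the common shell convention of exiting with 128 plus the signal number; on Linux, for example:
exitCodeFromSignal("SIGINT"); // => 130 (128 + 2)
exitCodeFromSignal("SIGTERM"); // => 143 (128 + 15)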
export async function launchPiChat(options: PiRuntimeOptions): Promise<void> {
ensureSupportedNodeVersion();
@@ -18,13 +24,13 @@ export async function launchPiChat(options: PiRuntimeOptions): Promise<void> {
throw new Error(`Promise polyfill not found: ${promisePolyfillPath}`);
}
if (process.stdout.isTTY && options.mode !== "rpc") {
process.stdout.write("\x1b[2J\x1b[3J\x1b[H");
}
const importArgs = useDevPolyfill
? ["--import", tsxLoaderPath, "--import", promisePolyfillSourcePath]
: ["--import", promisePolyfillPath];
? ["--import", toNodeImportSpecifier(tsxLoaderPath), "--import", toNodeImportSpecifier(promisePolyfillSourcePath)]
: ["--import", toNodeImportSpecifier(promisePolyfillPath)];
const child = spawn(process.execPath, [...importArgs, piCliPath, ...buildPiArgs(options)], {
cwd: options.workingDir,
@@ -36,11 +42,9 @@ export async function launchPiChat(options: PiRuntimeOptions): Promise<void> {
child.on("error", reject);
child.on("exit", (code, signal) => {
if (signal) {
console.error(`feynman terminated because the Pi child exited with ${signal}.`);
process.exitCode = exitCodeFromSignal(signal);
resolvePromise();
return;
}
process.exitCode = code ?? 0;

src/pi/package-ops.ts (new file, 542 lines)
View File

@@ -0,0 +1,542 @@
import { spawn } from "node:child_process";
import { cpSync, existsSync, lstatSync, mkdirSync, readdirSync, readFileSync, readlinkSync, rmSync, symlinkSync, writeFileSync } from "node:fs";
import { fileURLToPath } from "node:url";
import { dirname, join, resolve } from "node:path";
import { DefaultPackageManager, SettingsManager } from "@mariozechner/pi-coding-agent";
import { NATIVE_PACKAGE_SOURCES, supportsNativePackageSources } from "./package-presets.js";
import { applyFeynmanPackageManagerEnv, getFeynmanNpmPrefixPath } from "./runtime.js";
import { getPathWithCurrentNode, resolveExecutable } from "../system/executables.js";
type PackageScope = "user" | "project";
type ConfiguredPackage = {
source: string;
scope: PackageScope;
filtered: boolean;
installedPath?: string;
};
type NpmSource = {
name: string;
source: string;
spec: string;
pinned: boolean;
};
export type MissingConfiguredPackageSummary = {
missing: ConfiguredPackage[];
bundled: ConfiguredPackage[];
};
export type InstallPackageSourcesResult = {
installed: string[];
skipped: string[];
};
export type UpdateConfiguredPackagesResult = {
updated: string[];
skipped: string[];
};
const FILTERED_INSTALL_OUTPUT_PATTERNS = [
/npm warn deprecated node-domexception@1\.0\.0/i,
/npm notice/i,
/^(added|removed|changed) \d+ packages?( in .+)?$/i,
/^(\d+ )?packages are looking for funding$/i,
/^run `npm fund` for details$/i,
];
const APP_ROOT = resolve(dirname(fileURLToPath(import.meta.url)), "..", "..");
function createPackageContext(workingDir: string, agentDir: string) {
applyFeynmanPackageManagerEnv(agentDir);
process.env.PATH = getPathWithCurrentNode(process.env.PATH);
const settingsManager = SettingsManager.create(workingDir, agentDir);
const packageManager = new DefaultPackageManager({
cwd: workingDir,
agentDir,
settingsManager,
});
return {
settingsManager,
packageManager,
};
}
function shouldSkipNativeSource(source: string, version = process.versions.node): boolean {
return !supportsNativePackageSources(version) && NATIVE_PACKAGE_SOURCES.includes(source as (typeof NATIVE_PACKAGE_SOURCES)[number]);
}
function filterUnsupportedSources(sources: string[], version = process.versions.node): { supported: string[]; skipped: string[] } {
const supported: string[] = [];
const skipped: string[] = [];
for (const source of sources) {
if (shouldSkipNativeSource(source, version)) {
skipped.push(source);
continue;
}
supported.push(source);
}
return { supported, skipped };
}
function relayFilteredOutput(chunk: Buffer | string, writer: NodeJS.WriteStream): void {
const text = chunk.toString();
for (const line of text.split(/\r?\n/)) {
if (!line.trim()) continue;
if (FILTERED_INSTALL_OUTPUT_PATTERNS.some((pattern) => pattern.test(line.trim()))) {
continue;
}
writer.write(`${line}\n`);
}
}
function parseNpmSource(source: string): NpmSource | undefined {
if (!source.startsWith("npm:")) {
return undefined;
}
const spec = source.slice("npm:".length).trim();
const match = spec.match(/^(@?[^@]+(?:\/[^@]+)?)(?:@(.+))?$/);
const name = match?.[1] ?? spec;
const version = match?.[2];
return {
name,
source,
spec,
pinned: Boolean(version),
};
}
function dedupeNpmSources(sources: string[], updateToLatest: boolean): string[] {
const specs = new Map<string, string>();
for (const source of sources) {
const parsed = parseNpmSource(source);
if (!parsed) continue;
specs.set(parsed.name, updateToLatest && !parsed.pinned ? `${parsed.name}@latest` : parsed.spec);
}
return [...specs.values()];
}
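// Illustrative behavior (hypothetical package names): one spec survives per
// package name, and only unpinned specs are rewritten when updating.
//   dedupeNpmSources(["npm:pi-foo", "npm:pi-foo@1.2.0"], false) -> ["pi-foo@1.2.0"]
//   dedupeNpmSources(["npm:pi-foo"], true) -> ["pi-foo@latest"]
//   dedupeNpmSources(["npm:pi-foo@1.2.0"], true) -> ["pi-foo@1.2.0"] (pinned, untouched)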
function ensureProjectInstallRoot(workingDir: string): string {
const installRoot = resolve(workingDir, ".feynman", "npm");
mkdirSync(installRoot, { recursive: true });
const ignorePath = join(installRoot, ".gitignore");
if (!existsSync(ignorePath)) {
writeFileSync(ignorePath, "*\n!.gitignore\n", "utf8");
}
const packageJsonPath = join(installRoot, "package.json");
if (!existsSync(packageJsonPath)) {
writeFileSync(packageJsonPath, JSON.stringify({ name: "feynman-packages", private: true }, null, 2) + "\n", "utf8");
}
return installRoot;
}
function resolveAdjacentNpmExecutable(): string | undefined {
const executableName = process.platform === "win32" ? "npm.cmd" : "npm";
const candidate = resolve(dirname(process.execPath), executableName);
return existsSync(candidate) ? candidate : undefined;
}
function resolvePackageManagerCommand(settingsManager: SettingsManager): { command: string; args: string[] } | undefined {
const configured = settingsManager.getNpmCommand();
if (!configured || configured.length === 0) {
const adjacentNpm = resolveAdjacentNpmExecutable() ?? resolveExecutable("npm");
return adjacentNpm ? { command: adjacentNpm, args: [] } : undefined;
}
const [command = "npm", ...args] = configured;
if (!command) {
return undefined;
}
const executable = resolveExecutable(command);
if (!executable) {
return undefined;
}
return { command: executable, args };
}
function childPackageManagerEnv(): NodeJS.ProcessEnv {
return {
...process.env,
PATH: getPathWithCurrentNode(process.env.PATH),
npm_config_dry_run: "false",
NPM_CONFIG_DRY_RUN: "false",
};
}
async function runPackageManagerInstall(
settingsManager: SettingsManager,
workingDir: string,
agentDir: string,
scope: PackageScope,
specs: string[],
): Promise<void> {
if (specs.length === 0) {
return;
}
const packageManagerCommand = resolvePackageManagerCommand(settingsManager);
if (!packageManagerCommand) {
throw new Error("No supported package manager found. Install npm, pnpm, or bun, or configure `npmCommand`.");
}
const args = [
...packageManagerCommand.args,
"install",
"--no-audit",
"--no-fund",
"--legacy-peer-deps",
"--loglevel",
"error",
];
if (scope === "user") {
args.push("-g", "--prefix", getFeynmanNpmPrefixPath(agentDir));
} else {
args.push("--prefix", ensureProjectInstallRoot(workingDir));
}
args.push(...specs);
await new Promise<void>((resolvePromise, reject) => {
const child = spawn(packageManagerCommand.command, args, {
cwd: scope === "user" ? agentDir : workingDir,
stdio: ["ignore", "pipe", "pipe"],
env: childPackageManagerEnv(),
});
child.stdout?.on("data", (chunk) => relayFilteredOutput(chunk, process.stdout));
child.stderr?.on("data", (chunk) => relayFilteredOutput(chunk, process.stderr));
child.on("error", reject);
child.on("exit", (code) => {
if ((code ?? 1) !== 0) {
const installingGenerativeUi = specs.some((spec) => spec.startsWith("pi-generative-ui"));
if (installingGenerativeUi && process.platform === "darwin") {
reject(
new Error(
"Installing pi-generative-ui failed. Its native glimpseui dependency did not compile against the current macOS/Xcode toolchain. Try the npm-installed Feynman path with your local Node toolchain or skip this optional preset for now.",
),
);
return;
}
reject(new Error(`${packageManagerCommand.command} install failed with code ${code ?? 1}`));
return;
}
resolvePromise();
});
});
}
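// Illustrative resulting invocation (paths abbreviated): user scope effectively runs
//   npm install --no-audit --no-fund --legacy-peer-deps --loglevel error \
//     -g --prefix <feynmanHome>/npm-global <specs...>
// while project scope swaps the prefix for <workingDir>/.feynman/npm.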
function groupConfiguredNpmSources(packages: ConfiguredPackage[]): Record<PackageScope, string[]> {
return {
user: packages.filter((entry) => entry.scope === "user").map((entry) => entry.source),
project: packages.filter((entry) => entry.scope === "project").map((entry) => entry.source),
};
}
function isBundledWorkspacePackagePath(installedPath: string | undefined, appRoot: string): boolean {
if (!installedPath) {
return false;
}
const bundledRoot = resolve(appRoot, ".feynman", "npm", "node_modules");
return installedPath.startsWith(bundledRoot);
}
export function getMissingConfiguredPackages(
workingDir: string,
agentDir: string,
appRoot: string,
): MissingConfiguredPackageSummary {
const { packageManager } = createPackageContext(workingDir, agentDir);
const configured = packageManager.listConfiguredPackages();
return configured.reduce<MissingConfiguredPackageSummary>(
(summary, entry) => {
if (entry.installedPath) {
if (isBundledWorkspacePackagePath(entry.installedPath, appRoot)) {
summary.bundled.push(entry);
}
return summary;
}
summary.missing.push(entry);
return summary;
},
{ missing: [], bundled: [] },
);
}
export async function installPackageSources(
workingDir: string,
agentDir: string,
sources: string[],
options?: { local?: boolean; persist?: boolean },
): Promise<InstallPackageSourcesResult> {
const { settingsManager, packageManager } = createPackageContext(workingDir, agentDir);
const scope: PackageScope = options?.local ? "project" : "user";
const installed: string[] = [];
const bundledSeeded = scope === "user" ? seedBundledWorkspacePackages(agentDir, APP_ROOT, sources) : [];
installed.push(...bundledSeeded);
const remainingSources = sources.filter((source) => !bundledSeeded.includes(source));
const grouped = groupConfiguredNpmSources(
remainingSources.map((source) => ({
source,
scope,
filtered: false,
})),
);
const { supported: supportedUserSources, skipped } = filterUnsupportedSources(grouped.user);
const { supported: supportedProjectSources, skipped: skippedProject } = filterUnsupportedSources(grouped.project);
skipped.push(...skippedProject);
const supportedNpmSources = scope === "user" ? supportedUserSources : supportedProjectSources;
if (supportedNpmSources.length > 0) {
await runPackageManagerInstall(settingsManager, workingDir, agentDir, scope, dedupeNpmSources(supportedNpmSources, false));
installed.push(...supportedNpmSources);
}
for (const source of sources) {
if (parseNpmSource(source)) {
continue;
}
await packageManager.install(source, { local: options?.local });
installed.push(source);
}
if (options?.persist) {
for (const source of installed) {
if (packageManager.addSourceToSettings(source, { local: options?.local })) {
continue;
}
skipped.push(source);
}
await settingsManager.flush();
}
return { installed, skipped };
}
export async function updateConfiguredPackages(
workingDir: string,
agentDir: string,
source?: string,
): Promise<UpdateConfiguredPackagesResult> {
const { settingsManager, packageManager } = createPackageContext(workingDir, agentDir);
if (source) {
await packageManager.update(source);
return { updated: [source], skipped: [] };
}
const availableUpdates = await packageManager.checkForAvailableUpdates();
if (availableUpdates.length === 0) {
return { updated: [], skipped: [] };
}
const npmUpdatesByScope: Record<PackageScope, string[]> = { user: [], project: [] };
const gitUpdates: string[] = [];
const skipped: string[] = [];
for (const entry of availableUpdates) {
if (entry.type === "npm") {
if (shouldSkipNativeSource(entry.source)) {
skipped.push(entry.source);
continue;
}
npmUpdatesByScope[entry.scope].push(entry.source);
continue;
}
gitUpdates.push(entry.source);
}
for (const scope of ["user", "project"] as const) {
const sources = npmUpdatesByScope[scope];
if (sources.length === 0) continue;
await runPackageManagerInstall(settingsManager, workingDir, agentDir, scope, dedupeNpmSources(sources, true));
}
for (const gitSource of gitUpdates) {
await packageManager.update(gitSource);
}
return {
updated: availableUpdates
.map((entry) => entry.source)
.filter((source) => !skipped.includes(source)),
skipped,
};
}
function ensureParentDir(path: string): void {
mkdirSync(dirname(path), { recursive: true });
}
function pathsMatchSymlinkTarget(linkPath: string, targetPath: string): boolean {
try {
if (!lstatSync(linkPath).isSymbolicLink()) {
return false;
}
return resolve(dirname(linkPath), readlinkSync(linkPath)) === targetPath;
} catch {
return false;
}
}
function linkDirectory(linkPath: string, targetPath: string): void {
if (pathsMatchSymlinkTarget(linkPath, targetPath)) {
return;
}
try {
if (existsSync(linkPath) && lstatSync(linkPath).isSymbolicLink()) {
rmSync(linkPath, { force: true });
}
} catch {}
if (existsSync(linkPath)) {
return;
}
ensureParentDir(linkPath);
try {
symlinkSync(targetPath, linkPath, process.platform === "win32" ? "junction" : "dir");
} catch {
// Fallback for filesystems that do not allow symlinks.
if (!existsSync(linkPath)) {
cpSync(targetPath, linkPath, { recursive: true });
}
}
}
function packageNameToPath(root: string, packageName: string): string {
return resolve(root, packageName);
}
function listBundledWorkspacePackageNames(root: string): string[] {
if (!existsSync(root)) {
return [];
}
const names: string[] = [];
for (const entry of readdirSync(root, { withFileTypes: true })) {
if (!entry.isDirectory() && !entry.isSymbolicLink()) continue;
if (entry.name.startsWith(".")) continue;
if (entry.name.startsWith("@")) {
const scopeRoot = resolve(root, entry.name);
for (const scopedEntry of readdirSync(scopeRoot, { withFileTypes: true })) {
if (!scopedEntry.isDirectory() && !scopedEntry.isSymbolicLink()) continue;
names.push(`${entry.name}/${scopedEntry.name}`);
}
continue;
}
names.push(entry.name);
}
return names;
}
function packageDependencyExists(packagePath: string, globalNodeModulesRoot: string, dependency: string): boolean {
return existsSync(packageNameToPath(resolve(packagePath, "node_modules"), dependency)) ||
existsSync(packageNameToPath(globalNodeModulesRoot, dependency));
}
function installedPackageLooksUsable(packagePath: string, globalNodeModulesRoot: string): boolean {
if (!existsSync(resolve(packagePath, "package.json"))) {
return false;
}
try {
const pkg = JSON.parse(readFileSync(resolve(packagePath, "package.json"), "utf8")) as {
dependencies?: Record<string, string>;
};
const dependencies = Object.keys(pkg.dependencies ?? {});
return dependencies.every((dependency) => packageDependencyExists(packagePath, globalNodeModulesRoot, dependency));
} catch {
return false;
}
}
function replaceBrokenPackageWithBundledCopy(targetPath: string, bundledPackagePath: string, globalNodeModulesRoot: string): boolean {
if (!existsSync(targetPath)) {
return false;
}
if (pathsMatchSymlinkTarget(targetPath, bundledPackagePath)) {
return false;
}
if (installedPackageLooksUsable(targetPath, globalNodeModulesRoot)) {
return false;
}
rmSync(targetPath, { recursive: true, force: true });
linkDirectory(targetPath, bundledPackagePath);
return true;
}
function seedBundledPackage(globalNodeModulesRoot: string, bundledNodeModulesRoot: string, packageName: string): boolean {
const bundledPackagePath = resolve(bundledNodeModulesRoot, packageName);
if (!existsSync(bundledPackagePath)) {
return false;
}
const targetPath = resolve(globalNodeModulesRoot, packageName);
if (replaceBrokenPackageWithBundledCopy(targetPath, bundledPackagePath, globalNodeModulesRoot)) {
return true;
}
if (!existsSync(targetPath)) {
linkDirectory(targetPath, bundledPackagePath);
return true;
}
return false;
}
export function seedBundledWorkspacePackages(
agentDir: string,
appRoot: string,
sources: string[],
): string[] {
const bundledNodeModulesRoot = resolve(appRoot, ".feynman", "npm", "node_modules");
if (!existsSync(bundledNodeModulesRoot)) {
return [];
}
const globalNodeModulesRoot = resolve(getFeynmanNpmPrefixPath(agentDir), "lib", "node_modules");
const seeded: string[] = [];
const bundledPackageNames = listBundledWorkspacePackageNames(bundledNodeModulesRoot);
for (const packageName of bundledPackageNames) {
seedBundledPackage(globalNodeModulesRoot, bundledNodeModulesRoot, packageName);
}
for (const source of sources) {
if (shouldSkipNativeSource(source)) continue;
const parsed = parseNpmSource(source);
if (!parsed) continue;
const targetPath = resolve(globalNodeModulesRoot, parsed.name);
if (pathsMatchSymlinkTarget(targetPath, resolve(bundledNodeModulesRoot, parsed.name))) {
seeded.push(source);
}
}
return seeded;
}
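
A sketch of how the two entry points compose, as the guided setup uses them (hypothetical directories and package name; the import path matches setup.ts):

import { installPackageSources, updateConfiguredPackages } from "../pi/package-ops.js";
const agentDir = "/home/user/.feynman/agent";
// Seed bundled copies where possible, install the rest, and persist the sources.
const { installed, skipped } = await installPackageSources(process.cwd(), agentDir, ["npm:pi-foo"], { persist: true });
// Later: apply every available update, skipping native packages on unsupported Node releases.
const { updated } = await updateConfiguredPackages(process.cwd(), agentDir);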

View File

@@ -17,6 +17,13 @@ export const CORE_PACKAGE_SOURCES = [
"npm:@tmustier/pi-ralph-wiggum",
] as const;
export const NATIVE_PACKAGE_SOURCES = [
"npm:@kaiserlich-dev/pi-session-search",
"npm:@samfp/pi-memory",
] as const;
export const MAX_NATIVE_PACKAGE_NODE_MAJOR = 24;
export const OPTIONAL_PACKAGE_PRESETS = {
"generative-ui": {
description: "Interactive Glimpse UI widgets.",
@@ -50,6 +57,24 @@ export function shouldPruneLegacyDefaultPackages(packages: PackageSource[] | und
return arraysMatchAsSets(packages as string[], LEGACY_DEFAULT_PACKAGE_SOURCES);
}
function parseNodeMajor(version: string): number {
const [major = "0"] = version.replace(/^v/, "").split(".");
return Number.parseInt(major, 10) || 0;
}
export function supportsNativePackageSources(version = process.versions.node): boolean {
return parseNodeMajor(version) <= MAX_NATIVE_PACKAGE_NODE_MAJOR;
}
export function filterPackageSourcesForCurrentNode<T extends string>(sources: readonly T[], version = process.versions.node): T[] {
if (supportsNativePackageSources(version)) {
return [...sources];
}
const blocked = new Set<string>(NATIVE_PACKAGE_SOURCES);
return sources.filter((source) => !blocked.has(source));
}
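// Illustrative: parseNodeMajor("v25.1.0") is 25, which exceeds
// MAX_NATIVE_PACKAGE_NODE_MAJOR, so supportsNativePackageSources("25.1.0") is
// false and both NATIVE_PACKAGE_SOURCES entries get filtered out; on 24.x and
// below the source list passes through unchanged.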
export function getOptionalPackagePresetSources(name: string): string[] | undefined {
const normalized = name.trim().toLowerCase();
if (normalized === "ui") {

View File

@@ -1,5 +1,6 @@
import { existsSync, readFileSync } from "node:fs";
import { delimiter, dirname, resolve } from "node:path";
import { delimiter, dirname, isAbsolute, resolve } from "node:path";
import { pathToFileURL } from "node:url";
import {
BROWSER_FALLBACK_PATHS,
@@ -14,12 +15,25 @@ export type PiRuntimeOptions = {
sessionDir: string;
feynmanAgentDir: string;
feynmanVersion?: string;
mode?: "text" | "json" | "rpc";
thinkingLevel?: string;
explicitModelSpec?: string;
oneShotPrompt?: string;
initialPrompt?: string;
};
export function getFeynmanNpmPrefixPath(feynmanAgentDir: string): string {
return resolve(dirname(feynmanAgentDir), "npm-global");
}
export function applyFeynmanPackageManagerEnv(feynmanAgentDir: string): string {
const feynmanNpmPrefixPath = getFeynmanNpmPrefixPath(feynmanAgentDir);
process.env.FEYNMAN_NPM_PREFIX = feynmanNpmPrefixPath;
process.env.NPM_CONFIG_PREFIX = feynmanNpmPrefixPath;
process.env.npm_config_prefix = feynmanNpmPrefixPath;
return feynmanNpmPrefixPath;
}
export function resolvePiPaths(appRoot: string) {
return {
piPackageRoot: resolve(appRoot, "node_modules", "@mariozechner", "pi-coding-agent"),
@@ -35,6 +49,10 @@ export function resolvePiPaths(appRoot: string) {
};
}
export function toNodeImportSpecifier(modulePath: string): string {
return isAbsolute(modulePath) ? pathToFileURL(modulePath).href : modulePath;
}
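// Illustrative: absolute paths become file:// URLs, which `node --import`
// accepts on every platform (a plain Windows path would be misread as a URL), e.g.
//   toNodeImportSpecifier("C:\\pi\\polyfill.js") -> "file:///C:/pi/polyfill.js"
// Bare specifiers such as "tsx" are returned unchanged.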
export function validatePiInstallation(appRoot: string): string[] {
const paths = resolvePiPaths(appRoot);
const missing: string[] = [];
@@ -66,6 +84,9 @@ export function buildPiArgs(options: PiRuntimeOptions): string[] {
args.push("--system-prompt", readFileSync(paths.systemPromptPath, "utf8"));
}
if (options.mode) {
args.push("--mode", options.mode);
}
if (options.explicitModelSpec) {
args.push("--model", options.explicitModelSpec);
}
@@ -83,9 +104,9 @@ export function buildPiArgs(options: PiRuntimeOptions): string[] {
export function buildPiEnv(options: PiRuntimeOptions): NodeJS.ProcessEnv {
const paths = resolvePiPaths(options.appRoot);
const feynmanHome = dirname(options.feynmanAgentDir);
const feynmanNpmPrefixPath = resolve(feynmanHome, "npm-global");
const feynmanNpmPrefixPath = getFeynmanNpmPrefixPath(options.feynmanAgentDir);
const feynmanNpmBinPath = resolve(feynmanNpmPrefixPath, "bin");
const feynmanWebSearchConfigPath = resolve(dirname(options.feynmanAgentDir), "web-search.json");
const currentPath = process.env.PATH ?? "";
const binEntries = [paths.nodeModulesBinPath, resolve(paths.piWorkspaceNodeModulesPath, ".bin"), feynmanNpmBinPath];
@@ -97,10 +118,13 @@ export function buildPiEnv(options: PiRuntimeOptions): NodeJS.ProcessEnv {
FEYNMAN_VERSION: options.feynmanVersion,
FEYNMAN_SESSION_DIR: options.sessionDir,
FEYNMAN_MEMORY_DIR: resolve(dirname(options.feynmanAgentDir), "memory"),
FEYNMAN_WEB_SEARCH_CONFIG: feynmanWebSearchConfigPath,
FEYNMAN_NODE_EXECUTABLE: process.execPath,
FEYNMAN_BIN_PATH: resolve(options.appRoot, "bin", "feynman.js"),
FEYNMAN_NPM_PREFIX: feynmanNpmPrefixPath,
// Ensure the Pi child process uses Feynman's agent dir for auth/models/settings.
// Patched Pi uses FEYNMAN_CODING_AGENT_DIR; upstream Pi uses PI_CODING_AGENT_DIR.
FEYNMAN_CODING_AGENT_DIR: options.feynmanAgentDir,
PI_CODING_AGENT_DIR: options.feynmanAgentDir,
PANDOC_PATH: process.env.PANDOC_PATH ?? resolveExecutable("pandoc", PANDOC_FALLBACK_PATHS),
PI_HARDWARE_CURSOR: process.env.PI_HARDWARE_CURSOR ?? "1",

View File

@@ -3,7 +3,7 @@ import { dirname } from "node:path";
import { ModelRegistry, type PackageSource } from "@mariozechner/pi-coding-agent";
import { CORE_PACKAGE_SOURCES, shouldPruneLegacyDefaultPackages } from "./package-presets.js";
import { CORE_PACKAGE_SOURCES, filterPackageSourcesForCurrentNode, shouldPruneLegacyDefaultPackages } from "./package-presets.js";
import { createModelRegistry } from "../model/registry.js";
export type ThinkingLevel = "off" | "minimal" | "low" | "medium" | "high" | "xhigh";
@@ -67,6 +67,23 @@ function choosePreferredModel(
return availableModels[0];
}
function filterConfiguredPackagesForCurrentNode(packages: PackageSource[] | undefined): PackageSource[] {
if (!Array.isArray(packages)) {
return [];
}
const filteredStringSources = new Set(filterPackageSourcesForCurrentNode(
packages
.map((entry) => (typeof entry === "string" ? entry : entry.source))
.filter((entry): entry is string => typeof entry === "string"),
));
return packages.filter((entry) => {
const source = typeof entry === "string" ? entry : entry.source;
return filteredStringSources.has(source);
});
}
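// Illustrative: configured entries may be plain strings or { source } objects,
// and filtering preserves whichever shape was stored. On a Node release above
// the native cap,
//   filterConfiguredPackagesForCurrentNode(["npm:pi-foo", { source: "npm:@samfp/pi-memory" }])
// keeps only "npm:pi-foo".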
export function readJson(path: string): Record<string, unknown> {
if (!existsSync(path)) {
return {};
@@ -110,10 +127,13 @@ export function normalizeFeynmanSettings(
settings.theme = "feynman";
settings.quietStartup = true;
settings.collapseChangelog = true;
const supportedCorePackages = filterPackageSourcesForCurrentNode(CORE_PACKAGE_SOURCES);
if (!Array.isArray(settings.packages) || settings.packages.length === 0) {
settings.packages = [...CORE_PACKAGE_SOURCES];
settings.packages = supportedCorePackages;
} else if (shouldPruneLegacyDefaultPackages(settings.packages as PackageSource[])) {
settings.packages = [...CORE_PACKAGE_SOURCES];
settings.packages = supportedCorePackages;
} else {
settings.packages = filterConfiguredPackagesForCurrentNode(settings.packages as PackageSource[]);
}
const modelRegistry = createModelRegistry(authPath);

View File

@@ -1,13 +1,17 @@
import { existsSync, readFileSync } from "node:fs";
import { homedir } from "node:os";
import { resolve } from "node:path";
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { dirname, resolve } from "node:path";
import { getFeynmanHome } from "../config/paths.js";
export type PiWebSearchProvider = "auto" | "perplexity" | "gemini";
export type PiWebSearchProvider = "auto" | "perplexity" | "exa" | "gemini";
export type PiWebSearchWorkflow = "none" | "summary-review";
export type PiWebAccessConfig = Record<string, unknown> & {
route?: PiWebSearchProvider;
provider?: PiWebSearchProvider;
searchProvider?: PiWebSearchProvider;
workflow?: PiWebSearchWorkflow;
perplexityApiKey?: string;
exaApiKey?: string;
geminiApiKey?: string;
chromeProfile?: string;
};
@@ -16,19 +20,26 @@ export type PiWebAccessStatus = {
configPath: string;
searchProvider: PiWebSearchProvider;
requestProvider: PiWebSearchProvider;
workflow: PiWebSearchWorkflow;
perplexityConfigured: boolean;
exaConfigured: boolean;
geminiApiConfigured: boolean;
chromeProfile?: string;
routeLabel: string;
note: string;
};
export function getPiWebSearchConfigPath(home = process.env.HOME ?? homedir()): string {
return resolve(home, ".feynman", "web-search.json");
export function getPiWebSearchConfigPath(home?: string): string {
const feynmanHome = home ? resolve(home, ".feynman") : getFeynmanHome();
return resolve(feynmanHome, "web-search.json");
}
function normalizeProvider(value: unknown): PiWebSearchProvider | undefined {
return value === "auto" || value === "perplexity" || value === "gemini" ? value : undefined;
return value === "auto" || value === "perplexity" || value === "exa" || value === "gemini" ? value : undefined;
}
function normalizeWorkflow(value: unknown): PiWebSearchWorkflow | undefined {
return value === "none" || value === "summary-review" ? value : undefined;
}
function normalizeNonEmptyString(value: unknown): string | undefined {
@@ -48,10 +59,29 @@ export function loadPiWebAccessConfig(configPath = getPiWebSearchConfigPath()):
}
}
export function savePiWebAccessConfig(
updates: Partial<Record<keyof PiWebAccessConfig, unknown>>,
configPath = getPiWebSearchConfigPath(),
): void {
const merged: Record<string, unknown> = { ...loadPiWebAccessConfig(configPath) };
for (const [key, value] of Object.entries(updates)) {
if (value === undefined) {
delete merged[key];
} else {
merged[key] = value;
}
}
mkdirSync(dirname(configPath), { recursive: true });
writeFileSync(configPath, JSON.stringify(merged, null, 2) + "\n", "utf8");
}
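// Illustrative merge semantics: existing keys survive, explicit values
// overwrite, and undefined deletes; a provider switch can therefore set the
// new fields and clear the legacy `route` key in one call:
//   savePiWebAccessConfig({ provider: "exa", searchProvider: "exa", route: undefined });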
function formatRouteLabel(provider: PiWebSearchProvider): string {
switch (provider) {
case "perplexity":
return "Perplexity";
case "exa":
return "Exa";
case "gemini":
return "Gemini";
default:
@@ -63,10 +93,12 @@ function formatRouteNote(provider: PiWebSearchProvider): string {
switch (provider) {
case "perplexity":
return "Pi web-access will use Perplexity for search.";
case "exa":
return "Pi web-access will use Exa for search.";
case "gemini":
return "Pi web-access will use Gemini API or Gemini Browser.";
default:
return "Pi web-access will try Perplexity, then Gemini API, then Gemini Browser.";
return "Pi web-access will try Perplexity, then Exa, then Gemini API, then Gemini Browser.";
}
}
@@ -74,9 +106,12 @@ export function getPiWebAccessStatus(
config: PiWebAccessConfig = loadPiWebAccessConfig(),
configPath = getPiWebSearchConfigPath(),
): PiWebAccessStatus {
const searchProvider = normalizeProvider(config.searchProvider) ?? "auto";
const requestProvider = normalizeProvider(config.provider) ?? searchProvider;
const searchProvider =
normalizeProvider(config.searchProvider) ?? normalizeProvider(config.route) ?? normalizeProvider(config.provider) ?? "auto";
const requestProvider = normalizeProvider(config.provider) ?? normalizeProvider(config.route) ?? searchProvider;
const workflow = normalizeWorkflow(config.workflow) ?? "none";
const perplexityConfigured = Boolean(normalizeNonEmptyString(config.perplexityApiKey));
const exaConfigured = Boolean(normalizeNonEmptyString(config.exaApiKey));
const geminiApiConfigured = Boolean(normalizeNonEmptyString(config.geminiApiKey));
const chromeProfile = normalizeNonEmptyString(config.chromeProfile);
const effectiveProvider = searchProvider;
@@ -85,7 +120,9 @@ export function getPiWebAccessStatus(
configPath,
searchProvider,
requestProvider,
workflow,
perplexityConfigured,
exaConfigured,
geminiApiConfigured,
chromeProfile,
routeLabel: formatRouteLabel(effectiveProvider),
@@ -100,7 +137,9 @@ export function formatPiWebAccessDoctorLines(
"web access: pi-web-access",
` search route: ${status.routeLabel}`,
` request route: ${status.requestProvider}`,
` search workflow: ${status.workflow}`,
` perplexity api: ${status.perplexityConfigured ? "configured" : "not configured"}`,
` exa api: ${status.exaConfigured ? "configured" : "not configured"}`,
` gemini api: ${status.geminiApiConfigured ? "configured" : "not configured"}`,
` browser profile: ${status.chromeProfile ?? "default Chromium profile"}`,
` config path: ${status.configPath}`,

View File

@@ -1,13 +1,60 @@
import { getPiWebAccessStatus } from "../pi/web-access.js";
import {
getPiWebAccessStatus,
savePiWebAccessConfig,
type PiWebAccessConfig,
type PiWebSearchProvider,
} from "../pi/web-access.js";
import { printInfo } from "../ui/terminal.js";
const SEARCH_PROVIDERS: PiWebSearchProvider[] = ["auto", "perplexity", "exa", "gemini"];
const PROVIDER_API_KEY_FIELDS: Partial<Record<PiWebSearchProvider, keyof PiWebAccessConfig>> = {
perplexity: "perplexityApiKey",
exa: "exaApiKey",
gemini: "geminiApiKey",
};
export function printSearchStatus(): void {
const status = getPiWebAccessStatus();
printInfo("Managed by: pi-web-access");
printInfo(`Search route: ${status.routeLabel}`);
printInfo(`Request route: ${status.requestProvider}`);
printInfo(`Search workflow: ${status.workflow}`);
printInfo(`Perplexity API configured: ${status.perplexityConfigured ? "yes" : "no"}`);
printInfo(`Exa API configured: ${status.exaConfigured ? "yes" : "no"}`);
printInfo(`Gemini API configured: ${status.geminiApiConfigured ? "yes" : "no"}`);
printInfo(`Browser profile: ${status.chromeProfile ?? "default Chromium profile"}`);
printInfo(`Config path: ${status.configPath}`);
}
export function setSearchProvider(provider: PiWebSearchProvider, apiKey?: string): void {
if (!SEARCH_PROVIDERS.includes(provider)) {
throw new Error(`Usage: feynman search set <${SEARCH_PROVIDERS.join("|")}> [api-key]`);
}
if (apiKey !== undefined && provider === "auto") {
throw new Error("The auto provider does not use an API key. Usage: feynman search set auto");
}
const updates: Partial<Record<keyof PiWebAccessConfig, unknown>> = {
provider,
searchProvider: provider,
workflow: "none",
route: undefined,
};
const apiKeyField = PROVIDER_API_KEY_FIELDS[provider];
if (apiKeyField && apiKey !== undefined) {
updates[apiKeyField] = apiKey;
}
savePiWebAccessConfig(updates);
const status = getPiWebAccessStatus();
console.log(`Web search provider set to ${status.routeLabel}.`);
console.log(`Config path: ${status.configPath}`);
}
export function clearSearchConfig(): void {
savePiWebAccessConfig({ provider: undefined, searchProvider: undefined, route: undefined, workflow: "none" });
const status = getPiWebAccessStatus();
console.log(`Web search provider reset to ${status.routeLabel}.`);
console.log(`Config path: ${status.configPath}`);
}
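
A usage sketch for the commands above (the CLI argument wiring that dispatches to them is outside this diff):

setSearchProvider("exa", process.env.EXA_API_KEY); // persists searchProvider plus exaApiKey, resets workflow
setSearchProvider("auto"); // restore the fallback chain; passing a key here throws
clearSearchConfig(); // drop all provider overrides
printSearchStatus(); // report the effective route and which API keys are configured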

View File

@@ -10,6 +10,7 @@ import { printInfo, printPanel, printSection } from "../ui/terminal.js";
import { getCurrentModelSpec } from "../model/commands.js";
import { buildModelStatusSnapshotFromRecords, getAvailableModelRecords, getSupportedModelRecords } from "../model/catalog.js";
import { createModelRegistry, getModelsJsonPath } from "../model/registry.js";
import { getConfiguredServiceTier } from "../model/service-tier.js";
function findProvidersMissingApiKey(modelsJsonPath: string): string[] {
try {
@@ -105,6 +106,7 @@ export function runStatus(options: DoctorOptions): void {
printInfo(`Recommended model: ${snapshot.recommendedModel ?? "not available"}`);
printInfo(`alphaXiv: ${snapshot.alphaLoggedIn ? snapshot.alphaUser ?? "configured" : "not configured"}`);
printInfo(`Web access: pi-web-access (${snapshot.webRouteLabel})`);
printInfo(`Service tier: ${getConfiguredServiceTier(options.settingsPath) ?? "not set"}`);
printInfo(`Preview: ${snapshot.previewConfigured ? "configured" : "not configured"}`);
printSection("Paths");
@@ -165,6 +167,7 @@ export function runDoctor(options: DoctorOptions): void {
console.log(`default model valid: ${modelStatus.modelValid ? "yes" : "no"}`);
console.log(`authenticated providers: ${modelStatus.authenticatedProviderCount}`);
console.log(`authenticated models: ${modelStatus.authenticatedModelCount}`);
console.log(`service tier: ${getConfiguredServiceTier(options.settingsPath) ?? "not set"}`);
console.log(`recommended model: ${modelStatus.recommendedModel ?? "not available"}`);
if (modelStatus.recommendedModelReason) {
console.log(` why: ${modelStatus.recommendedModelReason}`);

View File

@@ -1,30 +1,130 @@
import { stdin as input, stdout as output } from "node:process";
import { createInterface } from "node:readline/promises";
import {
confirm as clackConfirm,
intro as clackIntro,
isCancel,
multiselect as clackMultiselect,
outro as clackOutro,
select as clackSelect,
text as clackText,
type Option,
} from "@clack/prompts";
export async function promptText(question: string, defaultValue = ""): Promise<string> {
if (!input.isTTY || !output.isTTY) {
export class SetupCancelledError extends Error {
constructor(message = "setup cancelled") {
super(message);
this.name = "SetupCancelledError";
}
}
export type PromptSelectOption<T = string> = {
value: T;
label: string;
hint?: string;
};
function ensureInteractiveTerminal(): void {
if (!process.stdin.isTTY || !process.stdout.isTTY) {
throw new Error("feynman setup requires an interactive terminal.");
}
const rl = createInterface({ input, output });
try {
const suffix = defaultValue ? ` [${defaultValue}]` : "";
const value = (await rl.question(`${question}${suffix}: `)).trim();
return value || defaultValue;
} finally {
rl.close();
}
function guardCancelled<T>(value: T | symbol): T {
if (isCancel(value)) {
throw new SetupCancelledError();
}
return value;
}
export function isInteractiveTerminal(): boolean {
return Boolean(process.stdin.isTTY && process.stdout.isTTY);
}
export async function promptIntro(title: string): Promise<void> {
ensureInteractiveTerminal();
clackIntro(title);
}
export async function promptOutro(message: string): Promise<void> {
ensureInteractiveTerminal();
clackOutro(message);
}
export async function promptText(question: string, defaultValue = "", placeholder?: string): Promise<string> {
ensureInteractiveTerminal();
const value = guardCancelled(
await clackText({
message: question,
initialValue: defaultValue || undefined,
placeholder: placeholder ?? (defaultValue || undefined),
}),
);
const normalized = String(value ?? "").trim();
return normalized || defaultValue;
}
export async function promptSelect<T>(
question: string,
options: PromptSelectOption<T>[],
initialValue?: T,
): Promise<T> {
ensureInteractiveTerminal();
const selection = guardCancelled(
await clackSelect({
message: question,
options: options.map((option) => ({
value: option.value,
label: option.label,
hint: option.hint,
})) as Option<T>[],
initialValue,
}),
);
return selection;
}
export async function promptChoice(question: string, choices: string[], defaultIndex = 0): Promise<number> {
console.log(question);
for (const [index, choice] of choices.entries()) {
const marker = index === defaultIndex ? "*" : " ";
console.log(` ${marker} ${index + 1}. ${choice}`);
const options = choices.map((choice, index) => ({
value: index,
label: choice,
}));
return promptSelect(question, options, Math.max(0, Math.min(defaultIndex, choices.length - 1)));
}
const answer = await promptText("Select", String(defaultIndex + 1));
const parsed = Number(answer);
if (!Number.isFinite(parsed) || parsed < 1 || parsed > choices.length) {
return defaultIndex;
export async function promptConfirm(question: string, initialValue = true): Promise<boolean> {
ensureInteractiveTerminal();
return guardCancelled(
await clackConfirm({
message: question,
initialValue,
}),
);
}
return parsed - 1;
export async function promptMultiSelect<T>(
question: string,
options: PromptSelectOption<T>[],
initialValues: T[] = [],
): Promise<T[]> {
ensureInteractiveTerminal();
const selection = guardCancelled(
await clackMultiselect({
message: question,
options: options.map((option) => ({
value: option.value,
label: option.label,
hint: option.hint,
})) as Option<T>[],
initialValues,
required: false,
}),
);
return selection;
}
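
The hunk above interleaves the removed readline implementation with its clack replacement; untangled, the new promptChoice is just a thin adapter over promptSelect (restated from the added lines for readability):

export async function promptChoice(question: string, choices: string[], defaultIndex = 0): Promise<number> {
  const options = choices.map((choice, index) => ({ value: index, label: choice }));
  return promptSelect(question, options, Math.max(0, Math.min(defaultIndex, choices.length - 1)));
}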

View File

@@ -1,15 +1,24 @@
import { isLoggedIn as isAlphaLoggedIn, login as loginAlpha } from "@companion-ai/alpha-hub/lib";
import { dirname } from "node:path";
import { getDefaultSessionDir, getFeynmanHome } from "../config/paths.js";
import { getPiWebAccessStatus, getPiWebSearchConfigPath } from "../pi/web-access.js";
import { getPiWebAccessStatus } from "../pi/web-access.js";
import { normalizeFeynmanSettings } from "../pi/settings.js";
import type { ThinkingLevel } from "../pi/settings.js";
import { getMissingConfiguredPackages, installPackageSources } from "../pi/package-ops.js";
import { listOptionalPackagePresets } from "../pi/package-presets.js";
import { getCurrentModelSpec, runModelSetup } from "../model/commands.js";
import { buildModelStatusSnapshotFromRecords, getAvailableModelRecords, getSupportedModelRecords } from "../model/catalog.js";
import { PANDOC_FALLBACK_PATHS, resolveExecutable } from "../system/executables.js";
import { setupPreviewDependencies } from "./preview.js";
import { runDoctor } from "./doctor.js";
import { printInfo, printSection, printSuccess } from "../ui/terminal.js";
import {
isInteractiveTerminal,
promptConfirm,
promptIntro,
promptMultiSelect,
promptOutro,
SetupCancelledError,
} from "./prompts.js";
type SetupOptions = {
settingsPath: string;
@@ -21,10 +30,6 @@ type SetupOptions = {
defaultThinkingLevel?: ThinkingLevel;
};
function isInteractiveTerminal(): boolean {
return Boolean(process.stdin.isTTY && process.stdout.isTTY);
}
function printNonInteractiveSetupGuidance(): void {
printInfo("Non-interactive terminal. Use explicit commands:");
printInfo(" feynman model login <provider>");
@@ -34,21 +39,152 @@ function printNonInteractiveSetupGuidance(): void {
printInfo(" feynman doctor");
}
function summarizePackageSources(sources: string[]): string {
if (sources.length <= 3) {
return sources.join(", ");
}
return `${sources.slice(0, 3).join(", ")} +${sources.length - 3} more`;
}
async function maybeInstallBundledPackages(options: SetupOptions): Promise<void> {
const agentDir = dirname(options.authPath);
const { missing, bundled } = getMissingConfiguredPackages(options.workingDir, agentDir, options.appRoot);
const userMissing = missing.filter((entry) => entry.scope === "user").map((entry) => entry.source);
const projectMissing = missing.filter((entry) => entry.scope === "project").map((entry) => entry.source);
printSection("Packages");
if (bundled.length > 0) {
printInfo(`Bundled research packages ready: ${summarizePackageSources(bundled.map((entry) => entry.source))}`);
}
if (missing.length === 0) {
printInfo("No additional package install required.");
return;
}
printInfo(`Missing packages: ${summarizePackageSources(missing.map((entry) => entry.source))}`);
const shouldInstall = await promptConfirm("Install missing Feynman packages now?", true);
if (!shouldInstall) {
printInfo("Skipping package install. Feynman may install missing packages later if needed.");
return;
}
if (userMissing.length > 0) {
try {
await installPackageSources(options.workingDir, agentDir, userMissing);
printSuccess(`Installed bundled packages: ${summarizePackageSources(userMissing)}`);
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
printInfo(message.includes("No supported package manager found")
? "No package manager available for additional installs. The standalone bundle can still run with its shipped packages."
: `Package install skipped: ${message}`);
}
}
if (projectMissing.length > 0) {
try {
await installPackageSources(options.workingDir, agentDir, projectMissing, { local: true });
printSuccess(`Installed project packages: ${summarizePackageSources(projectMissing)}`);
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
printInfo(`Project package install skipped: ${message}`);
}
}
}
async function maybeInstallOptionalPackages(options: SetupOptions): Promise<void> {
const agentDir = dirname(options.authPath);
const presets = listOptionalPackagePresets();
if (presets.length === 0) {
return;
}
const selectedPresets = await promptMultiSelect(
"Optional packages",
presets.map((preset) => ({
value: preset.name,
label: preset.name,
hint: preset.description,
})),
[],
);
if (selectedPresets.length === 0) {
printInfo("No optional packages selected.");
return;
}
for (const presetName of selectedPresets) {
const preset = presets.find((entry) => entry.name === presetName);
if (!preset) continue;
try {
await installPackageSources(options.workingDir, agentDir, preset.sources, {
persist: true,
});
printSuccess(`Installed optional preset: ${preset.name}`);
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
printInfo(message.includes("No supported package manager found")
? `Skipped optional preset ${preset.name}: no package manager available.`
: `Skipped optional preset ${preset.name}: ${message}`);
}
}
}
async function maybeLoginAlpha(): Promise<void> {
if (isAlphaLoggedIn()) {
printInfo("alphaXiv already configured.");
return;
}
const shouldLogin = await promptConfirm("Connect alphaXiv now?", true);
if (!shouldLogin) {
printInfo("Skipping alphaXiv login for now.");
return;
}
try {
await loginAlpha();
printSuccess("alphaXiv login complete");
} catch (error) {
printInfo(`alphaXiv login skipped: ${error instanceof Error ? error.message : String(error)}`);
}
}
async function maybeInstallPreviewDependencies(): Promise<void> {
if (resolveExecutable("pandoc", PANDOC_FALLBACK_PATHS)) {
printInfo("Preview support already configured.");
return;
}
const shouldInstall = await promptConfirm("Install pandoc for preview/export support?", false);
if (!shouldInstall) {
printInfo("Skipping preview dependency install.");
return;
}
try {
const result = setupPreviewDependencies();
printSuccess(result.message);
} catch (error) {
printInfo(`Preview setup skipped: ${error instanceof Error ? error.message : String(error)}`);
}
}
export async function runSetup(options: SetupOptions): Promise<void> {
if (!isInteractiveTerminal()) {
printNonInteractiveSetupGuidance();
return;
}
try {
await promptIntro("Feynman setup");
await runModelSetup(options.settingsPath, options.authPath);
if (!isAlphaLoggedIn()) {
await loginAlpha();
printSuccess("alphaXiv login complete");
}
const result = setupPreviewDependencies();
printSuccess(result.message);
await maybeInstallBundledPackages(options);
await maybeInstallOptionalPackages(options);
await maybeLoginAlpha();
await maybeInstallPreviewDependencies();
normalizeFeynmanSettings(
options.settingsPath,
@@ -67,4 +203,17 @@ export async function runSetup(options: SetupOptions): Promise<void> {
printInfo(`alphaXiv: ${isAlphaLoggedIn() ? "configured" : "not configured"}`);
printInfo(`Preview: ${resolveExecutable("pandoc", PANDOC_FALLBACK_PATHS) ? "configured" : "not configured"}`);
printInfo(`Web: ${getPiWebAccessStatus().routeLabel}`);
if (modelStatus.recommended && !modelStatus.currentValid) {
printInfo(`Recommended model: ${modelStatus.recommended}`);
}
await promptOutro("Feynman is ready");
} catch (error) {
if (error instanceof SetupCancelledError) {
printInfo("Setup cancelled.");
return;
}
throw error;
}
}

View File

@@ -1,5 +1,6 @@
import { spawnSync } from "node:child_process";
import { existsSync } from "node:fs";
import { dirname, delimiter } from "node:path";
const isWindows = process.platform === "win32";
const programFiles = process.env.PROGRAMFILES ?? "C:\\Program Files";
@@ -40,14 +41,20 @@ export function resolveExecutable(name: string, fallbackPaths: string[] = []): s
}
const isWindows = process.platform === "win32";
const env = {
...process.env,
PATH: process.env.PATH ?? "",
};
const result = isWindows
? spawnSync("cmd", ["/c", `where ${name}`], {
encoding: "utf8",
stdio: ["ignore", "pipe", "ignore"],
env,
})
: spawnSync("sh", ["-lc", `command -v ${name}`], {
: spawnSync("sh", ["-c", `command -v ${name}`], {
encoding: "utf8",
stdio: ["ignore", "pipe", "ignore"],
env,
});
if (result.status === 0) {
@@ -59,3 +66,9 @@ export function resolveExecutable(name: string, fallbackPaths: string[] = []): s
return undefined;
}
export function getPathWithCurrentNode(pathValue = process.env.PATH ?? ""): string {
const nodeDir = dirname(process.execPath);
const parts = pathValue.split(delimiter).filter(Boolean);
return parts.includes(nodeDir) ? pathValue : `${nodeDir}${delimiter}${pathValue}`;
}
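
A quick sketch of the PATH helper (hypothetical paths, POSIX delimiter):

// With process.execPath === "/opt/node/bin/node":
//   getPathWithCurrentNode("/usr/bin:/bin") -> "/opt/node/bin:/usr/bin:/bin"
//   getPathWithCurrentNode("/opt/node/bin:/usr/bin") -> unchanged, node dir already present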

View File

@@ -1,4 +1,6 @@
export const MIN_NODE_VERSION = "20.19.0";
export const MAX_NODE_MAJOR = 24;
export const PREFERRED_NODE_MAJOR = 22;
type ParsedNodeVersion = {
major: number;
@@ -22,16 +24,21 @@ function compareNodeVersions(left: ParsedNodeVersion, right: ParsedNodeVersion):
}
export function isSupportedNodeVersion(version = process.versions.node): boolean {
return compareNodeVersions(parseNodeVersion(version), parseNodeVersion(MIN_NODE_VERSION)) >= 0;
const parsed = parseNodeVersion(version);
return compareNodeVersions(parsed, parseNodeVersion(MIN_NODE_VERSION)) >= 0 && parsed.major <= MAX_NODE_MAJOR;
}
export function getUnsupportedNodeVersionLines(version = process.versions.node): string[] {
const isWindows = process.platform === "win32";
const parsed = parseNodeVersion(version);
const rangeText = `Node.js ${MIN_NODE_VERSION} through ${MAX_NODE_MAJOR}.x`;
return [
`feynman requires Node.js ${MIN_NODE_VERSION} or later (detected ${version}).`,
isWindows
? "Install a newer Node.js from https://nodejs.org, or use the standalone installer:"
: "Switch to Node 20 with `nvm install 20 && nvm use 20`, or use the standalone installer:",
`feynman supports ${rangeText} (detected ${version}).`,
parsed.major > MAX_NODE_MAJOR
? "This newer Node release is not supported yet because native Pi packages may fail to build."
: isWindows
? "Install a supported Node.js release from https://nodejs.org, or use the standalone installer:"
: `Switch to a supported Node release with \`nvm install ${PREFERRED_NODE_MAJOR} && nvm use ${PREFERRED_NODE_MAJOR}\`, or use the standalone installer:`,
isWindows
? "irm https://feynman.is/install.ps1 | iex"
: "curl -fsSL https://feynman.is/install | bash",

View File

@@ -0,0 +1,51 @@
import test from "node:test";
import assert from "node:assert/strict";
import { patchAlphaHubAuthSource } from "../scripts/lib/alpha-hub-auth-patch.mjs";
test("patchAlphaHubAuthSource fixes browser open logic for WSL and Windows", () => {
const input = [
"function openBrowser(url) {",
" try {",
" const plat = platform();",
" if (plat === 'darwin') execSync(`open \"${url}\"`);",
" else if (plat === 'linux') execSync(`xdg-open \"${url}\"`);",
" else if (plat === 'win32') execSync(`start \"\" \"${url}\"`);",
" } catch {}",
"}",
].join("\n");
const patched = patchAlphaHubAuthSource(input);
assert.match(patched, /const isWsl = plat === 'linux'/);
assert.match(patched, /wslview/);
assert.match(patched, /cmd\.exe \/c start/);
assert.match(patched, /cmd \/c start/);
});
test("patchAlphaHubAuthSource includes the auth URL in login output", () => {
const input = "process.stderr.write('Opening browser for alphaXiv login...\\n');";
const patched = patchAlphaHubAuthSource(input);
assert.match(patched, /Auth URL: \$\{authUrl\.toString\(\)\}/);
});
test("patchAlphaHubAuthSource is idempotent", () => {
const input = [
"function openBrowser(url) {",
" try {",
" const plat = platform();",
" if (plat === 'darwin') execSync(`open \"${url}\"`);",
" else if (plat === 'linux') execSync(`xdg-open \"${url}\"`);",
" else if (plat === 'win32') execSync(`start \"\" \"${url}\"`);",
" } catch {}",
"}",
"process.stderr.write('Opening browser for alphaXiv login...\\n');",
].join("\n");
const once = patchAlphaHubAuthSource(input);
const twice = patchAlphaHubAuthSource(once);
assert.equal(twice, once);
});

View File

@@ -0,0 +1,110 @@
import test from "node:test";
import assert from "node:assert/strict";
import { buildModelStatusSnapshotFromRecords } from "../src/model/catalog.js";
test("buildModelStatusSnapshotFromRecords returns empty guidance when model is set and valid", () => {
const snapshot = buildModelStatusSnapshotFromRecords(
[{ provider: "anthropic", id: "claude-opus-4-6" }],
[{ provider: "anthropic", id: "claude-opus-4-6" }],
"anthropic/claude-opus-4-6",
);
assert.equal(snapshot.currentValid, true);
assert.equal(snapshot.current, "anthropic/claude-opus-4-6");
assert.equal(snapshot.guidance.length, 0);
});
test("buildModelStatusSnapshotFromRecords emits guidance when no models are available", () => {
const snapshot = buildModelStatusSnapshotFromRecords([], [], undefined);
assert.equal(snapshot.currentValid, false);
assert.equal(snapshot.current, undefined);
assert.equal(snapshot.recommended, undefined);
assert.ok(snapshot.guidance.some((line) => line.includes("No authenticated Pi models")));
});
test("buildModelStatusSnapshotFromRecords emits guidance when no default model is set", () => {
const snapshot = buildModelStatusSnapshotFromRecords(
[{ provider: "openai", id: "gpt-5.4" }],
[{ provider: "openai", id: "gpt-5.4" }],
undefined,
);
assert.equal(snapshot.currentValid, false);
assert.equal(snapshot.current, undefined);
assert.ok(snapshot.guidance.some((line) => line.includes("No default research model")));
});
test("buildModelStatusSnapshotFromRecords marks provider as configured only when it has available models", () => {
const snapshot = buildModelStatusSnapshotFromRecords(
[
{ provider: "anthropic", id: "claude-opus-4-6" },
{ provider: "openai", id: "gpt-5.4" },
],
[{ provider: "openai", id: "gpt-5.4" }],
"openai/gpt-5.4",
);
const anthropicProvider = snapshot.providers.find((provider) => provider.id === "anthropic");
const openaiProvider = snapshot.providers.find((provider) => provider.id === "openai");
assert.ok(anthropicProvider);
assert.equal(anthropicProvider!.configured, false);
assert.equal(anthropicProvider!.supportedModels, 1);
assert.equal(anthropicProvider!.availableModels, 0);
assert.ok(openaiProvider);
assert.equal(openaiProvider!.configured, true);
assert.equal(openaiProvider!.supportedModels, 1);
assert.equal(openaiProvider!.availableModels, 1);
});
test("buildModelStatusSnapshotFromRecords marks provider as current when selected model belongs to it", () => {
const snapshot = buildModelStatusSnapshotFromRecords(
[
{ provider: "anthropic", id: "claude-opus-4-6" },
{ provider: "openai", id: "gpt-5.4" },
],
[
{ provider: "anthropic", id: "claude-opus-4-6" },
{ provider: "openai", id: "gpt-5.4" },
],
"anthropic/claude-opus-4-6",
);
const anthropicProvider = snapshot.providers.find((provider) => provider.id === "anthropic");
const openaiProvider = snapshot.providers.find((provider) => provider.id === "openai");
assert.equal(anthropicProvider!.current, true);
assert.equal(openaiProvider!.current, false);
});
test("buildModelStatusSnapshotFromRecords returns available models sorted by research preference", () => {
const snapshot = buildModelStatusSnapshotFromRecords(
[
{ provider: "openai", id: "gpt-5.4" },
{ provider: "anthropic", id: "claude-opus-4-6" },
],
[
{ provider: "openai", id: "gpt-5.4" },
{ provider: "anthropic", id: "claude-opus-4-6" },
],
undefined,
);
assert.equal(snapshot.availableModels[0], "anthropic/claude-opus-4-6");
assert.equal(snapshot.availableModels[1], "openai/gpt-5.4");
assert.equal(snapshot.recommended, "anthropic/claude-opus-4-6");
});
test("buildModelStatusSnapshotFromRecords sets currentValid false when current model is not in available list", () => {
const snapshot = buildModelStatusSnapshotFromRecords(
[{ provider: "anthropic", id: "claude-opus-4-6" }],
[],
"anthropic/claude-opus-4-6",
);
assert.equal(snapshot.currentValid, false);
assert.equal(snapshot.current, "anthropic/claude-opus-4-6");
});

View File

@@ -0,0 +1,92 @@
import test from "node:test";
import assert from "node:assert/strict";
import { existsSync, mkdtempSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join, resolve } from "node:path";
import {
ensureFeynmanHome,
getBootstrapStatePath,
getDefaultSessionDir,
getFeynmanAgentDir,
getFeynmanHome,
getFeynmanMemoryDir,
getFeynmanStateDir,
} from "../src/config/paths.js";
test("getFeynmanHome uses FEYNMAN_HOME env var when set", () => {
const previous = process.env.FEYNMAN_HOME;
try {
process.env.FEYNMAN_HOME = "/custom/home";
assert.equal(getFeynmanHome(), resolve("/custom/home", ".feynman"));
} finally {
if (previous === undefined) {
delete process.env.FEYNMAN_HOME;
} else {
process.env.FEYNMAN_HOME = previous;
}
}
});
test("getFeynmanHome falls back to homedir when FEYNMAN_HOME is unset", () => {
const previous = process.env.FEYNMAN_HOME;
try {
delete process.env.FEYNMAN_HOME;
const home = getFeynmanHome();
assert.ok(home.endsWith(".feynman"), `expected path ending in .feynman, got: ${home}`);
assert.ok(!home.includes("undefined"), `expected no 'undefined' in path, got: ${home}`);
} finally {
if (previous === undefined) {
delete process.env.FEYNMAN_HOME;
} else {
process.env.FEYNMAN_HOME = previous;
}
}
});
test("getFeynmanAgentDir resolves to <home>/agent", () => {
assert.equal(getFeynmanAgentDir("/some/home"), resolve("/some/home", "agent"));
});
test("getFeynmanMemoryDir resolves to <home>/memory", () => {
assert.equal(getFeynmanMemoryDir("/some/home"), resolve("/some/home", "memory"));
});
test("getFeynmanStateDir resolves to <home>/.state", () => {
assert.equal(getFeynmanStateDir("/some/home"), resolve("/some/home", ".state"));
});
test("getDefaultSessionDir resolves to <home>/sessions", () => {
assert.equal(getDefaultSessionDir("/some/home"), resolve("/some/home", "sessions"));
});
test("getBootstrapStatePath resolves to <home>/.state/bootstrap.json", () => {
assert.equal(getBootstrapStatePath("/some/home"), resolve("/some/home", ".state", "bootstrap.json"));
});
test("ensureFeynmanHome creates all required subdirectories", () => {
const root = mkdtempSync(join(tmpdir(), "feynman-paths-"));
try {
const home = join(root, "home");
ensureFeynmanHome(home);
assert.ok(existsSync(home), "home dir should exist");
assert.ok(existsSync(join(home, "agent")), "agent dir should exist");
assert.ok(existsSync(join(home, "memory")), "memory dir should exist");
assert.ok(existsSync(join(home, ".state")), ".state dir should exist");
assert.ok(existsSync(join(home, "sessions")), "sessions dir should exist");
} finally {
rmSync(root, { recursive: true, force: true });
}
});
test("ensureFeynmanHome is idempotent when dirs already exist", () => {
const root = mkdtempSync(join(tmpdir(), "feynman-paths-"));
try {
const home = join(root, "home");
ensureFeynmanHome(home);
assert.doesNotThrow(() => ensureFeynmanHome(home));
} finally {
rmSync(root, { recursive: true, force: true });
}
});

View File

@@ -0,0 +1,135 @@
import test from "node:test";
import assert from "node:assert/strict";
import { readdirSync, readFileSync } from "node:fs";
import { dirname, join, resolve } from "node:path";
import { fileURLToPath } from "node:url";
const repoRoot = resolve(dirname(fileURLToPath(import.meta.url)), "..");
const bannedPatterns = [/ValiChord/i, /Harmony Record/i, /harmony_record_/i];
function collectMarkdownFiles(root: string): string[] {
const files: string[] = [];
for (const entry of readdirSync(root, { withFileTypes: true })) {
const fullPath = join(root, entry.name);
if (entry.isDirectory()) {
files.push(...collectMarkdownFiles(fullPath));
continue;
}
if (entry.isFile() && fullPath.endsWith(".md")) {
files.push(fullPath);
}
}
return files;
}
test("bundled prompts and skills do not contain blocked promotional product content", () => {
for (const filePath of [...collectMarkdownFiles(join(repoRoot, "prompts")), ...collectMarkdownFiles(join(repoRoot, "skills"))]) {
const content = readFileSync(filePath, "utf8");
for (const pattern of bannedPatterns) {
assert.doesNotMatch(content, pattern, `${filePath} contains blocked promotional pattern ${pattern}`);
}
}
});
test("research writing prompts forbid fabricated results and unproven figures", () => {
const draftPrompt = readFileSync(join(repoRoot, "prompts", "draft.md"), "utf8");
const systemPrompt = readFileSync(join(repoRoot, ".feynman", "SYSTEM.md"), "utf8");
const writerPrompt = readFileSync(join(repoRoot, ".feynman", "agents", "writer.md"), "utf8");
const verifierPrompt = readFileSync(join(repoRoot, ".feynman", "agents", "verifier.md"), "utf8");
for (const [label, content] of [
["system prompt", systemPrompt],
] as const) {
assert.match(content, /Never (invent|fabricate)/i, `${label} must explicitly forbid invented or fabricated results`);
assert.match(content, /(figure|chart|image|table)/i, `${label} must cover visual/table provenance`);
assert.match(content, /(provenance|source|artifact|script|raw)/i, `${label} must require traceable support`);
}
for (const [label, content] of [
["writer prompt", writerPrompt],
["verifier prompt", verifierPrompt],
["draft prompt", draftPrompt],
] as const) {
assert.match(content, /system prompt.*provenance rule/i, `${label} must point back to the system provenance rule`);
}
assert.match(draftPrompt, /system prompt's provenance rules/i);
assert.match(draftPrompt, /placeholder or proposed experimental plan/i);
assert.match(draftPrompt, /source-backed quantitative data/i);
});
test("deepresearch workflow requires durable artifacts even when blocked", () => {
const systemPrompt = readFileSync(join(repoRoot, ".feynman", "SYSTEM.md"), "utf8");
const deepResearchPrompt = readFileSync(join(repoRoot, "prompts", "deepresearch.md"), "utf8");
assert.match(systemPrompt, /Do not claim you are only a static model/i);
assert.match(systemPrompt, /write the requested durable artifact/i);
assert.match(deepResearchPrompt, /Do not stop after planning/i);
assert.match(deepResearchPrompt, /not a request to explain or implement/i);
assert.match(deepResearchPrompt, /Do not answer by describing the protocol/i);
assert.match(deepResearchPrompt, /degraded mode/i);
assert.match(deepResearchPrompt, /Verification: BLOCKED/i);
assert.match(deepResearchPrompt, /Never end with only an explanation in chat/i);
});
test("deepresearch citation and review stages are sequential and avoid giant edits", () => {
const deepResearchPrompt = readFileSync(join(repoRoot, "prompts", "deepresearch.md"), "utf8");
assert.match(deepResearchPrompt, /must complete before any reviewer runs/i);
assert.match(deepResearchPrompt, /Do not run the `verifier` and `reviewer` in the same parallel `subagent` call/i);
assert.match(deepResearchPrompt, /outputs\/\.drafts\/<slug>-cited\.md/i);
assert.match(deepResearchPrompt, /do not issue one giant `edit` tool call/i);
assert.match(deepResearchPrompt, /outputs\/\.drafts\/<slug>-revised\.md/i);
assert.match(deepResearchPrompt, /The final candidate is `outputs\/\.drafts\/<slug>-revised\.md` if it exists/i);
});
test("deepresearch keeps subagent tool calls small and skips subagents for narrow explainers", () => {
const deepResearchPrompt = readFileSync(join(repoRoot, "prompts", "deepresearch.md"), "utf8");
assert.match(deepResearchPrompt, /including "what is X" explainers/i);
assert.match(deepResearchPrompt, /Make the scale decision before assigning owners/i);
assert.match(deepResearchPrompt, /lead-owned direct search tasks only/i);
assert.match(deepResearchPrompt, /MUST NOT spawn researcher subagents/i);
assert.match(deepResearchPrompt, /Do not inflate a simple explainer into a multi-agent survey/i);
assert.match(deepResearchPrompt, /Skip researcher spawning entirely/i);
assert.match(deepResearchPrompt, /Use multiple search terms\/angles before drafting/i);
assert.match(deepResearchPrompt, /Minimum: 3 distinct queries/i);
assert.match(deepResearchPrompt, /Record the exact search terms used/i);
assert.match(deepResearchPrompt, /<slug>-research-direct\.md/i);
assert.match(deepResearchPrompt, /Do not call `alpha_get_paper`/i);
assert.match(deepResearchPrompt, /do not fetch `\.pdf` URLs/i);
assert.match(deepResearchPrompt, /Keep `subagent` tool-call JSON small and valid/i);
assert.match(deepResearchPrompt, /write a per-researcher brief first/i);
assert.match(deepResearchPrompt, /Do not place multi-paragraph instructions inside the `subagent` JSON/i);
assert.match(deepResearchPrompt, /Do not add extra keys such as `artifacts`/i);
assert.match(deepResearchPrompt, /always set `failFast: false`/i);
assert.match(deepResearchPrompt, /if a PDF parser or paper fetch fails/i);
});
test("workflow prompts do not introduce implicit confirmation gates", () => {
const workflowPrompts = [
"audit.md",
"compare.md",
"deepresearch.md",
"draft.md",
"lit.md",
"review.md",
"summarize.md",
"watch.md",
];
const bannedConfirmationGates = [
/Do you want to proceed/i,
/Wait for confirmation/i,
/wait for user confirmation/i,
/give them a brief chance/i,
/request changes before proceeding/i,
];
for (const fileName of workflowPrompts) {
const content = readFileSync(join(repoRoot, "prompts", fileName), "utf8");
assert.match(content, /continue (immediately|automatically)/i, `${fileName} should keep running after planning`);
for (const pattern of bannedConfirmationGates) {
assert.doesNotMatch(content, pattern, `${fileName} contains confirmation gate ${pattern}`);
}
}
});


@@ -4,9 +4,10 @@ import { mkdtempSync, readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { resolveInitialPrompt } from "../src/cli.js";
import { resolveInitialPrompt, shouldRunInteractiveSetup } from "../src/cli.js";
import { buildModelStatusSnapshotFromRecords, chooseRecommendedModel } from "../src/model/catalog.js";
import { setDefaultModelSpec } from "../src/model/commands.js";
import { resolveModelProviderForCommand, setDefaultModelSpec } from "../src/model/commands.js";
import { createModelRegistry } from "../src/model/registry.js";
function createAuthPath(contents: Record<string, unknown>): string {
const root = mkdtempSync(join(tmpdir(), "feynman-auth-"));
@@ -26,6 +27,17 @@ test("chooseRecommendedModel prefers the strongest authenticated research model"
assert.equal(recommendation?.spec, "anthropic/claude-opus-4-6");
});
test("createModelRegistry overlays new Anthropic Opus model before upstream Pi updates", () => {
const authPath = createAuthPath({
anthropic: { type: "api_key", key: "anthropic-test-key" },
});
const registry = createModelRegistry(authPath);
assert.ok(registry.find("anthropic", "claude-opus-4-7"));
assert.equal(registry.getAvailable().some((model) => model.provider === "anthropic" && model.id === "claude-opus-4-7"), true);
});
test("setDefaultModelSpec accepts a unique bare model id from authenticated models", () => {
const authPath = createAuthPath({
openai: { type: "api_key", key: "openai-test-key" },
@@ -42,6 +54,74 @@ test("setDefaultModelSpec accepts a unique bare model id from authenticated mode
assert.equal(settings.defaultModel, "gpt-5.4");
});
test("setDefaultModelSpec accepts provider:model syntax for authenticated models", () => {
const authPath = createAuthPath({
google: { type: "api_key", key: "google-test-key" },
});
const settingsPath = join(mkdtempSync(join(tmpdir(), "feynman-settings-")), "settings.json");
setDefaultModelSpec(settingsPath, authPath, "google:gemini-3-pro-preview");
const settings = JSON.parse(readFileSync(settingsPath, "utf8")) as {
defaultProvider?: string;
defaultModel?: string;
};
assert.equal(settings.defaultProvider, "google");
assert.equal(settings.defaultModel, "gemini-3-pro-preview");
});
test("resolveModelProviderForCommand falls back to API-key providers when OAuth is unavailable", () => {
const authPath = createAuthPath({});
const resolved = resolveModelProviderForCommand(authPath, "google");
assert.equal(resolved?.kind, "api-key");
assert.equal(resolved?.id, "google");
});
test("resolveModelProviderForCommand supports LM Studio as a first-class local provider", () => {
const authPath = createAuthPath({});
const resolved = resolveModelProviderForCommand(authPath, "lm-studio");
assert.equal(resolved?.kind, "api-key");
assert.equal(resolved?.id, "lm-studio");
});
test("resolveModelProviderForCommand supports LiteLLM as a first-class proxy provider", () => {
const authPath = createAuthPath({});
const resolved = resolveModelProviderForCommand(authPath, "litellm");
assert.equal(resolved?.kind, "api-key");
assert.equal(resolved?.id, "litellm");
});
test("resolveModelProviderForCommand prefers OAuth when a provider supports both auth modes", () => {
const authPath = createAuthPath({});
const resolved = resolveModelProviderForCommand(authPath, "anthropic");
assert.equal(resolved?.kind, "oauth");
assert.equal(resolved?.id, "anthropic");
});
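// A minimal sketch of the auth-mode preference these three tests encode:
// OAuth-capable providers resolve as OAuth even without stored credentials,
// and everything else (including local providers like lm-studio and proxies
// like litellm) falls back to API-key mode. The provider list is an assumption,
// not the actual table in src/model/commands.ts.
const OAUTH_PROVIDERS_SKETCH = new Set(["anthropic"]);
function resolveModelProviderForCommandSketch(provider: string): { kind: "oauth" | "api-key"; id: string } {
	return OAUTH_PROVIDERS_SKETCH.has(provider)
		? { kind: "oauth", id: provider }
		: { kind: "api-key", id: provider };
}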
test("setDefaultModelSpec prefers the explicitly configured provider when a bare model id is ambiguous", () => {
const authPath = createAuthPath({
openai: { type: "api_key", key: "openai-test-key" },
});
const settingsPath = join(mkdtempSync(join(tmpdir(), "feynman-settings-")), "settings.json");
setDefaultModelSpec(settingsPath, authPath, "gpt-5.4");
const settings = JSON.parse(readFileSync(settingsPath, "utf8")) as {
defaultProvider?: string;
defaultModel?: string;
};
assert.equal(settings.defaultProvider, "openai");
assert.equal(settings.defaultModel, "gpt-5.4");
});
test("buildModelStatusSnapshotFromRecords flags an invalid current model and suggests a replacement", () => {
const snapshot = buildModelStatusSnapshotFromRecords(
[
@@ -57,12 +137,74 @@ test("buildModelStatusSnapshotFromRecords flags an invalid current model and sug
assert.ok(snapshot.guidance.some((line) => line.includes("Configured default model is unavailable")));
});
test("chooseRecommendedModel prefers MiniMax M2.7 over highspeed when that is the authenticated provider", () => {
const authPath = createAuthPath({
minimax: { type: "api_key", key: "minimax-test-key" },
});
const recommendation = chooseRecommendedModel(authPath);
assert.equal(recommendation?.spec, "minimax/MiniMax-M2.7");
});
test("resolveInitialPrompt maps top-level research commands to Pi slash workflows", () => {
const workflows = new Set(["lit", "watch", "jobs", "deepresearch"]);
const workflows = new Set([
"lit",
"watch",
"jobs",
"deepresearch",
"review",
"audit",
"replicate",
"compare",
"draft",
"autoresearch",
"summarize",
"log",
]);
assert.equal(resolveInitialPrompt("lit", ["tool-using", "agents"], undefined, workflows), "/lit tool-using agents");
assert.equal(resolveInitialPrompt("watch", ["openai"], undefined, workflows), "/watch openai");
assert.equal(resolveInitialPrompt("jobs", [], undefined, workflows), "/jobs");
assert.equal(resolveInitialPrompt("deepresearch", ["scaling", "laws"], undefined, workflows), "/deepresearch scaling laws");
assert.equal(resolveInitialPrompt("review", ["paper.md"], undefined, workflows), "/review paper.md");
assert.equal(resolveInitialPrompt("audit", ["2401.12345"], undefined, workflows), "/audit 2401.12345");
assert.equal(resolveInitialPrompt("replicate", ["chain-of-thought"], undefined, workflows), "/replicate chain-of-thought");
assert.equal(resolveInitialPrompt("compare", ["tool", "use"], undefined, workflows), "/compare tool use");
assert.equal(resolveInitialPrompt("draft", ["mechanistic", "interp"], undefined, workflows), "/draft mechanistic interp");
assert.equal(resolveInitialPrompt("autoresearch", ["gsm8k"], undefined, workflows), "/autoresearch gsm8k");
assert.equal(resolveInitialPrompt("summarize", ["README.md"], undefined, workflows), "/summarize README.md");
assert.equal(resolveInitialPrompt("log", [], undefined, workflows), "/log");
assert.equal(resolveInitialPrompt("chat", ["hello"], undefined, workflows), "hello");
assert.equal(resolveInitialPrompt("unknown", ["topic"], undefined, workflows), "unknown topic");
});
test("shouldRunInteractiveSetup triggers on first run when no default model is configured", () => {
const authPath = createAuthPath({});
assert.equal(shouldRunInteractiveSetup(undefined, undefined, true, authPath), true);
});
test("shouldRunInteractiveSetup triggers when the configured default model is unavailable", () => {
const authPath = createAuthPath({
openai: { type: "api_key", key: "openai-test-key" },
});
assert.equal(shouldRunInteractiveSetup(undefined, "anthropic/claude-opus-4-6", true, authPath), true);
});
test("shouldRunInteractiveSetup skips onboarding when the configured default model is available", () => {
const authPath = createAuthPath({
openai: { type: "api_key", key: "openai-test-key" },
});
assert.equal(shouldRunInteractiveSetup(undefined, "openai/gpt-5.4", true, authPath), false);
});
test("shouldRunInteractiveSetup skips onboarding for explicit model overrides or non-interactive terminals", () => {
const authPath = createAuthPath({
openai: { type: "api_key", key: "openai-test-key" },
});
assert.equal(shouldRunInteractiveSetup("openai/gpt-5.4", undefined, true, authPath), false);
assert.equal(shouldRunInteractiveSetup(undefined, undefined, false, authPath), false);
});
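// A sketch of the decision order the four shouldRunInteractiveSetup tests pin
// down: explicit model overrides and non-interactive terminals always skip;
// otherwise onboarding runs when no default model is configured or the
// configured default has no authenticated provider. `isModelAvailable` is a
// hypothetical helper standing in for the registry lookup.
function shouldRunInteractiveSetupSketch(
	explicitModel: string | undefined,
	defaultModel: string | undefined,
	isInteractive: boolean,
	isModelAvailable: (spec: string) => boolean,
): boolean {
	if (explicitModel !== undefined || !isInteractive) return false;
	if (defaultModel === undefined) return true;
	return !isModelAvailable(defaultModel);
}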


@@ -30,3 +30,45 @@ test("upsertProviderConfig creates models.json and merges provider config", () =
assert.equal(parsed.providers.custom.authHeader, true);
assert.deepEqual(parsed.providers.custom.models, [{ id: "llama3.1:8b" }]);
});
test("upsertProviderConfig writes LiteLLM proxy config with master key", () => {
const dir = mkdtempSync(join(tmpdir(), "feynman-litellm-"));
const modelsPath = join(dir, "models.json");
const result = upsertProviderConfig(modelsPath, "litellm", {
baseUrl: "http://localhost:4000/v1",
apiKey: "LITELLM_MASTER_KEY",
api: "openai-completions",
authHeader: true,
models: [{ id: "gpt-4o" }],
});
assert.deepEqual(result, { ok: true });
const parsed = JSON.parse(readFileSync(modelsPath, "utf8")) as any;
assert.equal(parsed.providers.litellm.baseUrl, "http://localhost:4000/v1");
assert.equal(parsed.providers.litellm.apiKey, "LITELLM_MASTER_KEY");
assert.equal(parsed.providers.litellm.api, "openai-completions");
assert.equal(parsed.providers.litellm.authHeader, true);
assert.deepEqual(parsed.providers.litellm.models, [{ id: "gpt-4o" }]);
});
test("upsertProviderConfig writes LiteLLM proxy config without master key", () => {
const dir = mkdtempSync(join(tmpdir(), "feynman-litellm-"));
const modelsPath = join(dir, "models.json");
const result = upsertProviderConfig(modelsPath, "litellm", {
baseUrl: "http://localhost:4000/v1",
apiKey: "local",
api: "openai-completions",
authHeader: false,
models: [{ id: "llama3" }],
});
assert.deepEqual(result, { ok: true });
const parsed = JSON.parse(readFileSync(modelsPath, "utf8")) as any;
assert.equal(parsed.providers.litellm.baseUrl, "http://localhost:4000/v1");
assert.equal(parsed.providers.litellm.apiKey, "local");
assert.equal(parsed.providers.litellm.api, "openai-completions");
assert.equal(parsed.providers.litellm.authHeader, false);
assert.deepEqual(parsed.providers.litellm.models, [{ id: "llama3" }]);
});
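// A minimal sketch of upsertProviderConfig's apparent contract, inferred from
// these tests: merge one provider entry into models.json, creating the file
// when missing, and report { ok: true }. The real implementation likely
// validates input and handles IO errors.
import { existsSync, readFileSync, writeFileSync } from "node:fs";
function upsertProviderConfigSketch(
	modelsPath: string,
	providerId: string,
	config: Record<string, unknown>,
): { ok: true } {
	const parsed: { providers?: Record<string, Record<string, unknown>> } = existsSync(modelsPath)
		? JSON.parse(readFileSync(modelsPath, "utf8"))
		: {};
	const providers = { ...(parsed.providers ?? {}) };
	providers[providerId] = { ...(providers[providerId] ?? {}), ...config };
	writeFileSync(modelsPath, JSON.stringify({ ...parsed, providers }, null, 2) + "\n", "utf8");
	return { ok: true };
}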


@@ -2,6 +2,7 @@ import test from "node:test";
import assert from "node:assert/strict";
import {
MAX_NODE_MAJOR,
MIN_NODE_VERSION,
ensureSupportedNodeVersion,
getUnsupportedNodeVersionLines,
@@ -12,6 +13,8 @@ test("isSupportedNodeVersion enforces the exact minimum floor", () => {
assert.equal(isSupportedNodeVersion("20.19.0"), true);
assert.equal(isSupportedNodeVersion("20.19.0"), true);
assert.equal(isSupportedNodeVersion("21.0.0"), true);
assert.equal(isSupportedNodeVersion(`${MAX_NODE_MAJOR}.9.9`), true);
assert.equal(isSupportedNodeVersion(`${MAX_NODE_MAJOR + 1}.0.0`), false);
assert.equal(isSupportedNodeVersion("20.18.1"), false);
assert.equal(isSupportedNodeVersion("18.17.0"), false);
});
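// A sketch of the windowed version check these assertions describe: an exact
// MIN_NODE_VERSION floor plus a MAX_NODE_MAJOR ceiling. Both constants are
// exported by the real module; the values below are assumptions.
function isSupportedNodeVersionSketch(version: string): boolean {
	const MIN_MAJOR = 20, MIN_MINOR = 19, MIN_PATCH = 0; // assumed MIN_NODE_VERSION = "20.19.0"
	const MAX_MAJOR = 24; // assumed MAX_NODE_MAJOR
	const [major = 0, minor = 0, patch = 0] = version.split(".").map(Number);
	if (major > MAX_MAJOR) return false;
	if (major !== MIN_MAJOR) return major > MIN_MAJOR;
	if (minor !== MIN_MINOR) return minor > MIN_MINOR;
	return patch >= MIN_PATCH;
}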
@@ -22,7 +25,7 @@ test("ensureSupportedNodeVersion throws a guided upgrade message", () => {
(error: unknown) =>
error instanceof Error &&
error.message.includes(`Node.js ${MIN_NODE_VERSION}`) &&
error.message.includes("nvm install 20 && nvm use 20") &&
error.message.includes("nvm install 22 && nvm use 22") &&
error.message.includes("https://feynman.is/install"),
);
});
@@ -30,6 +33,13 @@ test("ensureSupportedNodeVersion throws a guided upgrade message", () => {
test("unsupported version guidance reports the detected version", () => {
const lines = getUnsupportedNodeVersionLines("18.17.0");
assert.equal(lines[0], "feynman requires Node.js 20.19.0 or later (detected 18.17.0).");
assert.equal(lines[0], `feynman supports Node.js ${MIN_NODE_VERSION} through ${MAX_NODE_MAJOR}.x (detected 18.17.0).`);
assert.ok(lines.some((line) => line.includes("curl -fsSL https://feynman.is/install | bash")));
});
test("unsupported version guidance explains upper-bound failures", () => {
const lines = getUnsupportedNodeVersionLines("25.1.0");
assert.equal(lines[0], `feynman supports Node.js ${MIN_NODE_VERSION} through ${MAX_NODE_MAJOR}.x (detected 25.1.0).`);
assert.ok(lines.some((line) => line.includes("native Pi packages may fail to build")));
});

tests/package-ops.test.ts

@@ -0,0 +1,335 @@
import test from "node:test";
import assert from "node:assert/strict";
import { appendFileSync, existsSync, lstatSync, mkdtempSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join, resolve } from "node:path";
import { installPackageSources, seedBundledWorkspacePackages, updateConfiguredPackages } from "../src/pi/package-ops.js";
function createBundledWorkspace(
appRoot: string,
packageNames: string[],
dependenciesByPackage: Record<string, Record<string, string>> = {},
): void {
for (const packageName of packageNames) {
const packageDir = resolve(appRoot, ".feynman", "npm", "node_modules", packageName);
mkdirSync(packageDir, { recursive: true });
writeFileSync(
join(packageDir, "package.json"),
JSON.stringify({ name: packageName, version: "1.0.0", dependencies: dependenciesByPackage[packageName] }, null, 2) + "\n",
"utf8",
);
}
}
function createInstalledGlobalPackage(homeRoot: string, packageName: string, version = "1.0.0"): void {
const packageDir = resolve(homeRoot, "npm-global", "lib", "node_modules", packageName);
mkdirSync(packageDir, { recursive: true });
writeFileSync(
join(packageDir, "package.json"),
JSON.stringify({ name: packageName, version }, null, 2) + "\n",
"utf8",
);
}
function writeSettings(agentDir: string, settings: Record<string, unknown>): void {
mkdirSync(agentDir, { recursive: true });
writeFileSync(resolve(agentDir, "settings.json"), JSON.stringify(settings, null, 2) + "\n", "utf8");
}
function writeFakeNpmScript(root: string, body: string): string {
const scriptPath = resolve(root, "fake-npm.mjs");
writeFileSync(scriptPath, body, "utf8");
return scriptPath;
}
test("seedBundledWorkspacePackages links bundled packages into the Feynman npm prefix", () => {
const appRoot = mkdtempSync(join(tmpdir(), "feynman-bundle-"));
const homeRoot = mkdtempSync(join(tmpdir(), "feynman-home-"));
const agentDir = resolve(homeRoot, "agent");
mkdirSync(agentDir, { recursive: true });
createBundledWorkspace(appRoot, ["pi-subagents", "@samfp/pi-memory"]);
const seeded = seedBundledWorkspacePackages(agentDir, appRoot, [
"npm:pi-subagents",
"npm:@samfp/pi-memory",
]);
assert.deepEqual(seeded.sort(), ["npm:@samfp/pi-memory", "npm:pi-subagents"]);
const globalRoot = resolve(homeRoot, "npm-global", "lib", "node_modules");
assert.equal(existsSync(resolve(globalRoot, "pi-subagents", "package.json")), true);
assert.equal(existsSync(resolve(globalRoot, "@samfp", "pi-memory", "package.json")), true);
});
test("seedBundledWorkspacePackages preserves existing installed packages", () => {
const appRoot = mkdtempSync(join(tmpdir(), "feynman-bundle-"));
const homeRoot = mkdtempSync(join(tmpdir(), "feynman-home-"));
const agentDir = resolve(homeRoot, "agent");
const existingPackageDir = resolve(homeRoot, "npm-global", "lib", "node_modules", "pi-subagents");
mkdirSync(agentDir, { recursive: true });
createBundledWorkspace(appRoot, ["pi-subagents"]);
mkdirSync(existingPackageDir, { recursive: true });
writeFileSync(resolve(existingPackageDir, "package.json"), '{"name":"pi-subagents","version":"user"}\n', "utf8");
const seeded = seedBundledWorkspacePackages(agentDir, appRoot, ["npm:pi-subagents"]);
assert.deepEqual(seeded, []);
assert.equal(readFileSync(resolve(existingPackageDir, "package.json"), "utf8"), '{"name":"pi-subagents","version":"user"}\n');
assert.equal(lstatSync(existingPackageDir).isSymbolicLink(), false);
});
test("seedBundledWorkspacePackages repairs broken existing bundled packages", () => {
const appRoot = mkdtempSync(join(tmpdir(), "feynman-bundle-"));
const homeRoot = mkdtempSync(join(tmpdir(), "feynman-home-"));
const agentDir = resolve(homeRoot, "agent");
const existingPackageDir = resolve(homeRoot, "npm-global", "lib", "node_modules", "pi-markdown-preview");
mkdirSync(agentDir, { recursive: true });
createBundledWorkspace(appRoot, ["pi-markdown-preview", "puppeteer-core"], {
"pi-markdown-preview": { "puppeteer-core": "^24.0.0" },
});
mkdirSync(existingPackageDir, { recursive: true });
writeFileSync(
resolve(existingPackageDir, "package.json"),
JSON.stringify({ name: "pi-markdown-preview", version: "broken", dependencies: { "puppeteer-core": "^24.0.0" } }) + "\n",
"utf8",
);
const seeded = seedBundledWorkspacePackages(agentDir, appRoot, ["npm:pi-markdown-preview"]);
assert.deepEqual(seeded, ["npm:pi-markdown-preview"]);
assert.equal(lstatSync(existingPackageDir).isSymbolicLink(), true);
assert.equal(lstatSync(resolve(homeRoot, "npm-global", "lib", "node_modules", "puppeteer-core")).isSymbolicLink(), true);
assert.equal(
readFileSync(resolve(existingPackageDir, "package.json"), "utf8").includes('"version": "1.0.0"'),
true,
);
});
test("installPackageSources filters noisy npm chatter but preserves meaningful output", async () => {
const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
const workingDir = resolve(root, "project");
const agentDir = resolve(root, "agent");
mkdirSync(workingDir, { recursive: true });
const scriptPath = writeFakeNpmScript(root, [
`console.log("npm warn deprecated node-domexception@1.0.0: Use your platform's native DOMException instead");`,
'console.log("changed 343 packages in 9s");',
'console.log("59 packages are looking for funding");',
'console.log("run `npm fund` for details");',
'console.error("visible stderr line");',
'console.log("visible stdout line");',
"process.exit(0);",
].join("\n"));
writeSettings(agentDir, {
npmCommand: [process.execPath, scriptPath],
});
let stdout = "";
let stderr = "";
const originalStdoutWrite = process.stdout.write.bind(process.stdout);
const originalStderrWrite = process.stderr.write.bind(process.stderr);
(process.stdout.write as unknown as (chunk: string | Uint8Array) => boolean) = ((chunk: string | Uint8Array) => {
stdout += chunk.toString();
return true;
}) as typeof process.stdout.write;
(process.stderr.write as unknown as (chunk: string | Uint8Array) => boolean) = ((chunk: string | Uint8Array) => {
stderr += chunk.toString();
return true;
}) as typeof process.stderr.write;
try {
const result = await installPackageSources(workingDir, agentDir, ["npm:test-visible-package"]);
assert.deepEqual(result.installed, ["npm:test-visible-package"]);
assert.deepEqual(result.skipped, []);
} finally {
process.stdout.write = originalStdoutWrite;
process.stderr.write = originalStderrWrite;
}
const combined = `${stdout}\n${stderr}`;
assert.match(combined, /visible stdout line/);
assert.match(combined, /visible stderr line/);
assert.doesNotMatch(combined, /node-domexception/);
assert.doesNotMatch(combined, /changed 343 packages/);
assert.doesNotMatch(combined, /packages are looking for funding/);
assert.doesNotMatch(combined, /npm fund/);
});
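// A sketch of the line filter this test exercises, matching what must be
// hidden versus what must survive; the exact patterns in src/pi/package-ops.ts
// may differ.
const NPM_NOISE_PATTERNS_SKETCH = [
	/^npm warn deprecated /i,
	/^(added|removed|changed) \d+ packages? in /,
	/\d+ packages? are looking for funding/,
	/run `npm fund` for details/,
];
function isNoisyNpmLineSketch(line: string): boolean {
	return NPM_NOISE_PATTERNS_SKETCH.some((pattern) => pattern.test(line));
}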
test("installPackageSources skips native packages on unsupported Node majors before invoking npm", async () => {
const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
const workingDir = resolve(root, "project");
const agentDir = resolve(root, "agent");
const markerPath = resolve(root, "npm-invoked.txt");
mkdirSync(workingDir, { recursive: true });
const scriptPath = writeFakeNpmScript(root, [
`import { writeFileSync } from "node:fs";`,
`writeFileSync(${JSON.stringify(markerPath)}, "invoked\\n", "utf8");`,
"process.exit(0);",
].join("\n"));
writeSettings(agentDir, {
npmCommand: [process.execPath, scriptPath],
});
const originalVersion = process.versions.node;
Object.defineProperty(process.versions, "node", { value: "25.0.0", configurable: true });
try {
const result = await installPackageSources(workingDir, agentDir, ["npm:@kaiserlich-dev/pi-session-search"]);
assert.deepEqual(result.installed, []);
assert.deepEqual(result.skipped, ["npm:@kaiserlich-dev/pi-session-search"]);
assert.equal(existsSync(markerPath), false);
} finally {
Object.defineProperty(process.versions, "node", { value: originalVersion, configurable: true });
}
});
test("installPackageSources disables inherited npm dry-run config for child installs", async () => {
const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
const workingDir = resolve(root, "project");
const agentDir = resolve(root, "agent");
const markerPath = resolve(root, "install-env-ok.txt");
mkdirSync(workingDir, { recursive: true });
const scriptPath = writeFakeNpmScript(root, [
`import { writeFileSync } from "node:fs";`,
`if (process.env.npm_config_dry_run !== "false" || process.env.NPM_CONFIG_DRY_RUN !== "false") process.exit(42);`,
`writeFileSync(${JSON.stringify(markerPath)}, "ok\\n", "utf8");`,
"process.exit(0);",
].join("\n"));
writeSettings(agentDir, {
npmCommand: [process.execPath, scriptPath],
});
const originalLower = process.env.npm_config_dry_run;
const originalUpper = process.env.NPM_CONFIG_DRY_RUN;
process.env.npm_config_dry_run = "true";
process.env.NPM_CONFIG_DRY_RUN = "true";
try {
const result = await installPackageSources(workingDir, agentDir, ["npm:test-package"]);
assert.deepEqual(result.installed, ["npm:test-package"]);
assert.equal(existsSync(markerPath), true);
} finally {
if (originalLower === undefined) {
delete process.env.npm_config_dry_run;
} else {
process.env.npm_config_dry_run = originalLower;
}
if (originalUpper === undefined) {
delete process.env.NPM_CONFIG_DRY_RUN;
} else {
process.env.NPM_CONFIG_DRY_RUN = originalUpper;
}
}
});
test("updateConfiguredPackages batches multiple npm updates into a single install per scope", async () => {
const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
const workingDir = resolve(root, "project");
const agentDir = resolve(root, "agent");
const logPath = resolve(root, "npm-invocations.jsonl");
mkdirSync(workingDir, { recursive: true });
const scriptPath = writeFakeNpmScript(root, [
`import { appendFileSync } from "node:fs";`,
`import { resolve } from "node:path";`,
`const args = process.argv.slice(2);`,
`if (args.length === 2 && args[0] === "root" && args[1] === "-g") {`,
` console.log(resolve(${JSON.stringify(root)}, "npm-global", "lib", "node_modules"));`,
` process.exit(0);`,
`}`,
`if (args.length >= 4 && args[0] === "view" && args[2] === "version" && args[3] === "--json") {`,
` console.log(JSON.stringify("2.0.0"));`,
` process.exit(0);`,
`}`,
`appendFileSync(${JSON.stringify(logPath)}, JSON.stringify(args) + "\\n", "utf8");`,
"process.exit(0);",
].join("\n"));
writeSettings(agentDir, {
npmCommand: [process.execPath, scriptPath],
packages: ["npm:test-one", "npm:test-two"],
});
createInstalledGlobalPackage(root, "test-one", "1.0.0");
createInstalledGlobalPackage(root, "test-two", "1.0.0");
const originalFetch = globalThis.fetch;
globalThis.fetch = (async () => ({
ok: true,
json: async () => ({ version: "2.0.0" }),
})) as unknown as typeof fetch;
try {
const result = await updateConfiguredPackages(workingDir, agentDir);
assert.deepEqual(result.skipped, []);
assert.deepEqual(result.updated.sort(), ["npm:test-one", "npm:test-two"]);
} finally {
globalThis.fetch = originalFetch;
}
const invocations = readFileSync(logPath, "utf8").trim().split("\n").map((line) => JSON.parse(line) as string[]);
assert.equal(invocations.length, 1);
assert.ok(invocations[0]?.includes("install"));
assert.ok(invocations[0]?.includes("test-one@latest"));
assert.ok(invocations[0]?.includes("test-two@latest"));
});
test("updateConfiguredPackages skips native package updates on unsupported Node majors", async () => {
const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
const workingDir = resolve(root, "project");
const agentDir = resolve(root, "agent");
const logPath = resolve(root, "npm-invocations.jsonl");
mkdirSync(workingDir, { recursive: true });
const scriptPath = writeFakeNpmScript(root, [
`import { appendFileSync } from "node:fs";`,
`import { resolve } from "node:path";`,
`const args = process.argv.slice(2);`,
`if (args.length === 2 && args[0] === "root" && args[1] === "-g") {`,
` console.log(resolve(${JSON.stringify(root)}, "npm-global", "lib", "node_modules"));`,
` process.exit(0);`,
`}`,
`if (args.length >= 4 && args[0] === "view" && args[2] === "version" && args[3] === "--json") {`,
` console.log(JSON.stringify("2.0.0"));`,
` process.exit(0);`,
`}`,
`appendFileSync(${JSON.stringify(logPath)}, JSON.stringify(args) + "\\n", "utf8");`,
"process.exit(0);",
].join("\n"));
writeSettings(agentDir, {
npmCommand: [process.execPath, scriptPath],
packages: ["npm:@kaiserlich-dev/pi-session-search", "npm:test-regular"],
});
createInstalledGlobalPackage(root, "@kaiserlich-dev/pi-session-search", "1.0.0");
createInstalledGlobalPackage(root, "test-regular", "1.0.0");
const originalFetch = globalThis.fetch;
const originalVersion = process.versions.node;
globalThis.fetch = (async () => ({
ok: true,
json: async () => ({ version: "2.0.0" }),
})) as unknown as typeof fetch;
Object.defineProperty(process.versions, "node", { value: "25.0.0", configurable: true });
try {
const result = await updateConfiguredPackages(workingDir, agentDir);
assert.deepEqual(result.updated, ["npm:test-regular"]);
assert.deepEqual(result.skipped, ["npm:@kaiserlich-dev/pi-session-search"]);
} finally {
globalThis.fetch = originalFetch;
Object.defineProperty(process.versions, "node", { value: originalVersion, configurable: true });
}
const invocations = existsSync(logPath)
? readFileSync(logPath, "utf8").trim().split("\n").filter(Boolean).map((line) => JSON.parse(line) as string[])
: [];
assert.equal(invocations.length, 1);
assert.ok(invocations[0]?.includes("test-regular@latest"));
assert.ok(!invocations[0]?.some((entry) => entry.includes("pi-session-search")));
});


@@ -0,0 +1,42 @@
import test from "node:test";
import assert from "node:assert/strict";
import { patchPiExtensionLoaderSource } from "../scripts/lib/pi-extension-loader-patch.mjs";
test("patchPiExtensionLoaderSource rewrites Windows extension imports to file URLs", () => {
const input = [
'import * as path from "node:path";',
'import { fileURLToPath } from "node:url";',
"async function loadExtensionModule(extensionPath) {",
" const jiti = createJiti(import.meta.url);",
' const module = await jiti.import(extensionPath, { default: true });',
" return module;",
"}",
"",
].join("\n");
const patched = patchPiExtensionLoaderSource(input);
assert.match(patched, /pathToFileURL/);
assert.match(patched, /process\.platform === "win32"/);
assert.match(patched, /path\.isAbsolute\(extensionPath\)/);
assert.match(patched, /jiti\.import\(extensionSpecifier, \{ default: true \}\)/);
});
test("patchPiExtensionLoaderSource is idempotent", () => {
const input = [
'import * as path from "node:path";',
'import { fileURLToPath } from "node:url";',
"async function loadExtensionModule(extensionPath) {",
" const jiti = createJiti(import.meta.url);",
' const module = await jiti.import(extensionPath, { default: true });',
" return module;",
"}",
"",
].join("\n");
const once = patchPiExtensionLoaderSource(input);
const twice = patchPiExtensionLoaderSource(once);
assert.equal(twice, once);
});
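// A sketch of the runtime guard the patch appears to inject: on Windows,
// absolute extension paths become file:// URLs before being handed to
// jiti.import, since drive-letter paths are not valid import specifiers.
// A simplified stand-in, not the patched loader itself.
import { isAbsolute } from "node:path";
import { pathToFileURL } from "node:url";
function toExtensionSpecifierSketch(extensionPath: string): string {
	return process.platform === "win32" && isAbsolute(extensionPath)
		? pathToFileURL(extensionPath).href
		: extensionPath;
}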


@@ -0,0 +1,42 @@
import test from "node:test";
import assert from "node:assert/strict";
import { patchPiGoogleLegacySchemaSource } from "../scripts/lib/pi-google-legacy-schema-patch.mjs";
test("patchPiGoogleLegacySchemaSource rewrites legacy parameters conversion to normalize const", () => {
const input = [
"export function convertTools(tools, useParameters = false) {",
" if (tools.length === 0) return undefined;",
" return [",
" {",
" functionDeclarations: tools.map((tool) => ({",
" name: tool.name,",
" description: tool.description,",
' ...(useParameters ? { parameters: tool.parameters } : { parametersJsonSchema: tool.parameters }),',
" })),",
" },",
" ];",
"}",
"",
].join("\n");
const patched = patchPiGoogleLegacySchemaSource(input);
assert.match(patched, /function normalizeLegacyToolSchema\(schema\)/);
assert.match(patched, /normalized\.enum = \[value\]/);
assert.match(patched, /parameters: normalizeLegacyToolSchema\(tool\.parameters\)/);
});
test("patchPiGoogleLegacySchemaSource is idempotent", () => {
const input = [
"export function convertTools(tools, useParameters = false) {",
' ...(useParameters ? { parameters: tool.parameters } : { parametersJsonSchema: tool.parameters }),',
"}",
"",
].join("\n");
const once = patchPiGoogleLegacySchemaSource(input);
const twice = patchPiGoogleLegacySchemaSource(once);
assert.equal(twice, once);
});
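// A sketch of the normalization the patch introduces: the legacy Gemini
// `parameters` format rejects JSON Schema `const`, so it is rewritten as a
// single-value `enum`, recursing through nested schemas. Simplified relative
// to the real patch.
function normalizeLegacyToolSchemaSketch(schema: unknown): unknown {
	if (Array.isArray(schema)) return schema.map(normalizeLegacyToolSchemaSketch);
	if (schema === null || typeof schema !== "object") return schema;
	const normalized: Record<string, unknown> = {};
	for (const [key, value] of Object.entries(schema)) {
		if (key === "const") {
			normalized.enum = [value];
		} else {
			normalized[key] = normalizeLegacyToolSchemaSketch(value);
		}
	}
	return normalized;
}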

tests/pi-launch.test.ts

@@ -0,0 +1,9 @@
import test from "node:test";
import assert from "node:assert/strict";
import { exitCodeFromSignal } from "../src/pi/launch.js";
test("exitCodeFromSignal maps POSIX signals to conventional shell exit codes", () => {
assert.equal(exitCodeFromSignal("SIGTERM"), 143);
assert.equal(exitCodeFromSignal("SIGSEGV"), 139);
});
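// A minimal sketch of what exitCodeFromSignal likely does, following the
// POSIX shell convention exit code = 128 + signal number; the signal table
// here is a partial assumption, and the real src/pi/launch.ts may differ.
const SIGNAL_NUMBERS_SKETCH: Record<string, number> = {
	SIGHUP: 1, SIGINT: 2, SIGKILL: 9, SIGSEGV: 11, SIGTERM: 15,
};
function exitCodeFromSignalSketch(signal: string): number {
	const num = SIGNAL_NUMBERS_SKETCH[signal];
	return num === undefined ? 1 : 128 + num;
}
// exitCodeFromSignalSketch("SIGTERM") === 143; exitCodeFromSignalSketch("SIGSEGV") === 139.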


@@ -1,7 +1,8 @@
import test from "node:test";
import assert from "node:assert/strict";
import { pathToFileURL } from "node:url";
import { buildPiArgs, buildPiEnv, resolvePiPaths } from "../src/pi/runtime.js";
import { applyFeynmanPackageManagerEnv, buildPiArgs, buildPiEnv, resolvePiPaths, toNodeImportSpecifier } from "../src/pi/runtime.js";
test("buildPiArgs includes configured runtime paths and prompt", () => {
const args = buildPiArgs({
@@ -9,6 +10,7 @@ test("buildPiArgs includes configured runtime paths and prompt", () => {
workingDir: "/workspace",
sessionDir: "/sessions",
feynmanAgentDir: "/home/.feynman/agent",
mode: "rpc",
initialPrompt: "hello",
explicitModelSpec: "openai:gpt-5.4",
thinkingLevel: "medium",
@@ -21,6 +23,8 @@ test("buildPiArgs includes configured runtime paths and prompt", () => {
"/repo/feynman/extensions/research-tools.ts",
"--prompt-template",
"/repo/feynman/prompts",
"--mode",
"rpc",
"--model",
"openai:gpt-5.4",
"--thinking",
@@ -50,6 +54,7 @@ test("buildPiEnv wires Feynman paths into the Pi environment", () => {
assert.equal(env.FEYNMAN_NPM_PREFIX, "/home/.feynman/npm-global");
assert.equal(env.NPM_CONFIG_PREFIX, "/home/.feynman/npm-global");
assert.equal(env.npm_config_prefix, "/home/.feynman/npm-global");
assert.equal(env.FEYNMAN_CODING_AGENT_DIR, "/home/.feynman/agent");
assert.equal(env.PI_CODING_AGENT_DIR, "/home/.feynman/agent");
assert.ok(
env.PATH?.startsWith(
@@ -70,8 +75,47 @@ test("buildPiEnv wires Feynman paths into the Pi environment", () => {
}
});
test("applyFeynmanPackageManagerEnv pins npm globals to the Feynman prefix", () => {
const previousFeynmanPrefix = process.env.FEYNMAN_NPM_PREFIX;
const previousUppercasePrefix = process.env.NPM_CONFIG_PREFIX;
const previousLowercasePrefix = process.env.npm_config_prefix;
try {
const prefix = applyFeynmanPackageManagerEnv("/home/.feynman/agent");
assert.equal(prefix, "/home/.feynman/npm-global");
assert.equal(process.env.FEYNMAN_NPM_PREFIX, "/home/.feynman/npm-global");
assert.equal(process.env.NPM_CONFIG_PREFIX, "/home/.feynman/npm-global");
assert.equal(process.env.npm_config_prefix, "/home/.feynman/npm-global");
} finally {
if (previousFeynmanPrefix === undefined) {
delete process.env.FEYNMAN_NPM_PREFIX;
} else {
process.env.FEYNMAN_NPM_PREFIX = previousFeynmanPrefix;
}
if (previousUppercasePrefix === undefined) {
delete process.env.NPM_CONFIG_PREFIX;
} else {
process.env.NPM_CONFIG_PREFIX = previousUppercasePrefix;
}
if (previousLowercasePrefix === undefined) {
delete process.env.npm_config_prefix;
} else {
process.env.npm_config_prefix = previousLowercasePrefix;
}
}
});
test("resolvePiPaths includes the Promise.withResolvers polyfill path", () => {
const paths = resolvePiPaths("/repo/feynman");
assert.equal(paths.promisePolyfillPath, "/repo/feynman/dist/system/promise-polyfill.js");
});
test("toNodeImportSpecifier converts absolute preload paths to file URLs", () => {
assert.equal(
toNodeImportSpecifier("/repo/feynman/dist/system/promise-polyfill.js"),
pathToFileURL("/repo/feynman/dist/system/promise-polyfill.js").href,
);
assert.equal(toNodeImportSpecifier("tsx"), "tsx");
});
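// A plausible sketch of toNodeImportSpecifier, pinned by the assertions
// above: absolute preload paths must become file:// URLs for Node's --import
// flag (notably on Windows), while bare specifiers like "tsx" pass through.
// A hypothetical re-implementation, not the exported function.
import { isAbsolute } from "node:path";
import { pathToFileURL } from "node:url";
function toNodeImportSpecifierSketch(specifier: string): string {
	return isAbsolute(specifier) ? pathToFileURL(specifier).href : specifier;
}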


@@ -4,7 +4,13 @@ import { tmpdir } from "node:os";
import { join } from "node:path";
import test from "node:test";
import { CORE_PACKAGE_SOURCES, getOptionalPackagePresetSources, shouldPruneLegacyDefaultPackages } from "../src/pi/package-presets.js";
import {
CORE_PACKAGE_SOURCES,
getOptionalPackagePresetSources,
NATIVE_PACKAGE_SOURCES,
shouldPruneLegacyDefaultPackages,
supportsNativePackageSources,
} from "../src/pi/package-presets.js";
import { normalizeFeynmanSettings, normalizeThinkingLevel } from "../src/pi/settings.js";
test("normalizeThinkingLevel accepts the latest Pi thinking levels", () => {
@@ -71,3 +77,42 @@ test("optional package presets map friendly aliases", () => {
assert.deepEqual(getOptionalPackagePresetSources("search"), undefined);
assert.equal(shouldPruneLegacyDefaultPackages(["npm:custom"]), false);
});
test("supportsNativePackageSources disables sqlite-backed packages on Node 25+", () => {
assert.equal(supportsNativePackageSources("24.8.0"), true);
assert.equal(supportsNativePackageSources("25.0.0"), false);
});
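// A sketch of the version gate this test exercises: native (sqlite-backed)
// package sources are allowed only up to an assumed maximum Node major of 24.
function supportsNativePackageSourcesSketch(nodeVersion: string): boolean {
	const major = Number.parseInt(nodeVersion.split(".")[0] ?? "", 10);
	return Number.isFinite(major) && major <= 24;
}
// supportsNativePackageSourcesSketch("24.8.0") === true; "25.0.0" yields false.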
test("normalizeFeynmanSettings prunes native core packages on unsupported Node majors", () => {
const root = mkdtempSync(join(tmpdir(), "feynman-settings-"));
const settingsPath = join(root, "settings.json");
const bundledSettingsPath = join(root, "bundled-settings.json");
const authPath = join(root, "auth.json");
writeFileSync(
settingsPath,
JSON.stringify(
{
packages: [...CORE_PACKAGE_SOURCES],
},
null,
2,
) + "\n",
"utf8",
);
writeFileSync(bundledSettingsPath, "{}\n", "utf8");
writeFileSync(authPath, "{}\n", "utf8");
const originalVersion = process.versions.node;
Object.defineProperty(process.versions, "node", { value: "25.0.0", configurable: true });
try {
normalizeFeynmanSettings(settingsPath, bundledSettingsPath, "medium", authPath);
} finally {
Object.defineProperty(process.versions, "node", { value: originalVersion, configurable: true });
}
const settings = JSON.parse(readFileSync(settingsPath, "utf8")) as { packages?: string[] };
for (const source of NATIVE_PACKAGE_SOURCES) {
assert.equal(settings.packages?.includes(source), false);
}
});


@@ -0,0 +1,294 @@
import test from "node:test";
import assert from "node:assert/strict";
import { patchPiSubagentsSource, stripPiSubagentBuiltinModelSource } from "../scripts/lib/pi-subagents-patch.mjs";
const CASES = [
{
name: "index.ts config path",
file: "index.ts",
input: [
'import * as os from "node:os";',
'import * as path from "node:path";',
'const configPath = path.join(os.homedir(), ".pi", "agent", "extensions", "subagent", "config.json");',
"",
].join("\n"),
original: 'const configPath = path.join(os.homedir(), ".pi", "agent", "extensions", "subagent", "config.json");',
expected: 'const configPath = path.join(resolvePiAgentDir(), "extensions", "subagent", "config.json");',
},
{
name: "agents.ts user agents dir",
file: "agents.ts",
input: [
'import * as os from "node:os";',
'import * as path from "node:path";',
'const userDir = path.join(os.homedir(), ".pi", "agent", "agents");',
"",
].join("\n"),
original: 'const userDir = path.join(os.homedir(), ".pi", "agent", "agents");',
expected: 'const userDir = path.join(resolvePiAgentDir(), "agents");',
},
{
name: "artifacts.ts sessions dir",
file: "artifacts.ts",
input: [
'import * as os from "node:os";',
'import * as path from "node:path";',
'const sessionsBase = path.join(os.homedir(), ".pi", "agent", "sessions");',
"",
].join("\n"),
original: 'const sessionsBase = path.join(os.homedir(), ".pi", "agent", "sessions");',
expected: 'const sessionsBase = path.join(resolvePiAgentDir(), "sessions");',
},
{
name: "run-history.ts history file",
file: "run-history.ts",
input: [
'import * as os from "node:os";',
'import * as path from "node:path";',
'const HISTORY_PATH = path.join(os.homedir(), ".pi", "agent", "run-history.jsonl");',
"",
].join("\n"),
original: 'const HISTORY_PATH = path.join(os.homedir(), ".pi", "agent", "run-history.jsonl");',
expected: 'const HISTORY_PATH = path.join(resolvePiAgentDir(), "run-history.jsonl");',
},
{
name: "skills.ts agent dir",
file: "skills.ts",
input: [
'import * as os from "node:os";',
'import * as path from "node:path";',
'const AGENT_DIR = path.join(os.homedir(), ".pi", "agent");',
"",
].join("\n"),
original: 'const AGENT_DIR = path.join(os.homedir(), ".pi", "agent");',
expected: "const AGENT_DIR = resolvePiAgentDir();",
},
{
name: "chain-clarify.ts chain save dir",
file: "chain-clarify.ts",
input: [
'import * as os from "node:os";',
'import * as path from "node:path";',
'const dir = path.join(os.homedir(), ".pi", "agent", "agents");',
"",
].join("\n"),
original: 'const dir = path.join(os.homedir(), ".pi", "agent", "agents");',
expected: 'const dir = path.join(resolvePiAgentDir(), "agents");',
},
];
for (const scenario of CASES) {
test(`patchPiSubagentsSource rewrites ${scenario.name}`, () => {
const patched = patchPiSubagentsSource(scenario.file, scenario.input);
assert.match(patched, /function resolvePiAgentDir\(\): string \{/);
assert.match(patched, /process\.env\.FEYNMAN_CODING_AGENT_DIR\?\.trim\(\) \|\| process\.env\.PI_CODING_AGENT_DIR\?\.trim\(\)/);
assert.ok(patched.includes(scenario.expected));
assert.ok(!patched.includes(scenario.original));
});
}
test("patchPiSubagentsSource is idempotent", () => {
const input = [
'import * as os from "node:os";',
'import * as path from "node:path";',
'const configPath = path.join(os.homedir(), ".pi", "agent", "extensions", "subagent", "config.json");',
"",
].join("\n");
const once = patchPiSubagentsSource("index.ts", input);
const twice = patchPiSubagentsSource("index.ts", once);
assert.equal(twice, once);
});
test("patchPiSubagentsSource rewrites modern agents.ts discovery paths", () => {
const input = [
'import * as fs from "node:fs";',
'import * as os from "node:os";',
'import * as path from "node:path";',
'export function discoverAgents(cwd: string, scope: AgentScope): AgentDiscoveryResult {',
'\tconst userDirOld = path.join(os.homedir(), ".pi", "agent", "agents");',
'\tconst userDirNew = path.join(os.homedir(), ".agents");',
'\tconst userAgentsOld = scope === "project" ? [] : loadAgentsFromDir(userDirOld, "user");',
'\tconst userAgentsNew = scope === "project" ? [] : loadAgentsFromDir(userDirNew, "user");',
'\tconst userAgents = [...userAgentsOld, ...userAgentsNew];',
'}',
'export function discoverAgentsAll(cwd: string) {',
'\tconst userDirOld = path.join(os.homedir(), ".pi", "agent", "agents");',
'\tconst userDirNew = path.join(os.homedir(), ".agents");',
'\tconst user = [',
'\t\t...loadAgentsFromDir(userDirOld, "user"),',
'\t\t...loadAgentsFromDir(userDirNew, "user"),',
'\t];',
'\tconst chains = [',
'\t\t...loadChainsFromDir(userDirOld, "user"),',
'\t\t...loadChainsFromDir(userDirNew, "user"),',
'\t\t...(projectDir ? loadChainsFromDir(projectDir, "project") : []),',
'\t];',
'\tconst userDir = fs.existsSync(userDirNew) ? userDirNew : userDirOld;',
'}',
].join("\n");
const patched = patchPiSubagentsSource("agents.ts", input);
assert.match(patched, /function resolvePiAgentDir\(\): string \{/);
assert.match(patched, /const userDir = path\.join\(resolvePiAgentDir\(\), "agents"\);/);
assert.match(patched, /const userAgents = scope === "project" \? \[\] : loadAgentsFromDir\(userDir, "user"\);/);
assert.ok(!patched.includes('loadAgentsFromDir(userDirOld, "user")'));
assert.ok(!patched.includes('loadChainsFromDir(userDirNew, "user")'));
assert.ok(!patched.includes('fs.existsSync(userDirNew) ? userDirNew : userDirOld'));
});
test("patchPiSubagentsSource preserves output on top-level parallel tasks", () => {
const input = [
"interface TaskParam {",
"\tagent: string;",
"\ttask: string;",
"\tcwd?: string;",
"\tcount?: number;",
"\tmodel?: string;",
"\tskill?: string | string[] | boolean;",
"}",
"function run(params: { tasks: TaskParam[] }) {",
"\tconst modelOverrides = params.tasks.map(() => undefined);",
"\tconst skillOverrides = params.tasks.map(() => undefined);",
"\tconst parallelTasks = params.tasks.map((task, index) => ({",
"\t\tagent: task.agent,",
"\t\ttask: params.context === \"fork\" ? wrapForkTask(task.task) : task.task,",
"\t\tcwd: task.cwd,",
"\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
"\t\t...(skillOverrides[index] !== undefined ? { skill: skillOverrides[index] } : {}),",
"\t}));",
"}",
].join("\n");
const patched = patchPiSubagentsSource("subagent-executor.ts", input);
assert.match(patched, /output\?: string \| false;/);
assert.match(patched, /\n\t\toutput: task\.output,/);
assert.doesNotMatch(patched, /resolvePiAgentDir/);
});
test("patchPiSubagentsSource preserves output in async parallel task handoff", () => {
const input = [
"function run(tasks: TaskParam[]) {",
"\tconst modelOverrides = tasks.map(() => undefined);",
"\tconst skillOverrides = tasks.map(() => undefined);",
"\tconst parallelTasks = tasks.map((t, i) => ({",
"\t\tagent: t.agent,",
"\t\ttask: params.context === \"fork\" ? wrapForkTask(taskTexts[i]!) : taskTexts[i]!,",
"\t\tcwd: t.cwd,",
"\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
"\t\t...(skillOverrides[i] !== undefined ? { skill: skillOverrides[i] } : {}),",
"\t}));",
"}",
].join("\n");
const patched = patchPiSubagentsSource("subagent-executor.ts", input);
assert.match(patched, /\n\t\toutput: t\.output,/);
});
test("patchPiSubagentsSource uses task output when resolving foreground parallel behavior", () => {
const input = [
"async function run(tasks: TaskParam[]) {",
"\tconst skillOverrides = tasks.map((t) => normalizeSkillInput(t.skill));",
"\tif (params.clarify === true && ctx.hasUI) {",
"\t\tconst behaviors = agentConfigs.map((c, i) =>",
"\t\t\tresolveStepBehavior(c, { skills: skillOverrides[i] }),",
"\t\t);",
"\t}",
"\tconst behaviors = agentConfigs.map((config) => resolveStepBehavior(config, {}));",
"}",
].join("\n");
const patched = patchPiSubagentsSource("subagent-executor.ts", input);
assert.match(patched, /resolveStepBehavior\(c, \{ output: tasks\[i\]\?\.output, skills: skillOverrides\[i\] \}\)/);
assert.match(patched, /resolveStepBehavior\(config, \{ output: tasks\[i\]\?\.output, skills: skillOverrides\[i\] \}\)/);
assert.doesNotMatch(patched, /resolveStepBehavior\(config, \{\}\)/);
});
test("patchPiSubagentsSource passes foreground parallel output paths into runSync", () => {
const input = [
"async function runForegroundParallelTasks(input: ForegroundParallelRunInput): Promise<SingleResult[]> {",
"\treturn mapConcurrent(input.tasks, input.concurrencyLimit, async (task, index) => {",
"\t\tconst overrideSkills = input.skillOverrides[index];",
"\t\tconst effectiveSkills = overrideSkills === undefined ? input.behaviors[index]?.skills : overrideSkills;",
"\t\tconst taskCwd = resolveParallelTaskCwd(task, input.paramsCwd, input.worktreeSetup, index);",
"\t\treturn runSync(input.ctx.cwd, input.agents, task.agent, input.taskTexts[index]!, {",
"\t\t\tcwd: taskCwd,",
"\t\t\tsignal: input.signal,",
"\t\t\tmaxOutput: input.maxOutput,",
"\t\t\tmaxSubagentDepth: input.maxSubagentDepths[index],",
"\t\t});",
"\t});",
"}",
].join("\n");
const patched = patchPiSubagentsSource("subagent-executor.ts", input);
assert.match(patched, /const outputPath = typeof input\.behaviors\[index\]\?\.output === "string"/);
assert.match(patched, /const taskText = injectSingleOutputInstruction\(input\.taskTexts\[index\]!, outputPath\)/);
assert.match(patched, /runSync\(input\.ctx\.cwd, input\.agents, task\.agent, taskText, \{/);
assert.match(patched, /\n\t\t\toutputPath,/);
});
test("patchPiSubagentsSource documents output in top-level task schema", () => {
const input = [
"export const TaskItem = Type.Object({ ",
"\tagent: Type.String(), ",
"\ttask: Type.String(), ",
"\tcwd: Type.Optional(Type.String()),",
"\tcount: Type.Optional(Type.Integer({ minimum: 1, description: \"Repeat this parallel task N times with the same settings.\" })),",
"\tmodel: Type.Optional(Type.String({ description: \"Override model for this task (e.g. 'google/gemini-3-pro')\" })),",
"\tskill: Type.Optional(SkillOverride),",
"});",
"export const SubagentParams = Type.Object({",
"\ttasks: Type.Optional(Type.Array(TaskItem, { description: \"PARALLEL mode: [{agent, task, count?}, ...]\" })),",
"});",
].join("\n");
const patched = patchPiSubagentsSource("schemas.ts", input);
assert.match(patched, /output: Type\.Optional\(Type\.Any/);
assert.match(patched, /count\?, output\?/);
assert.doesNotMatch(patched, /resolvePiAgentDir/);
});
test("patchPiSubagentsSource documents output in top-level parallel help", () => {
const input = [
'import * as os from "node:os";',
'import * as path from "node:path";',
"const help = `",
"• PARALLEL: { tasks: [{agent,task,count?}, ...], concurrency?: number, worktree?: true } - concurrent execution (worktree: isolate each task in a git worktree)",
"`;",
].join("\n");
const patched = patchPiSubagentsSource("index.ts", input);
assert.match(patched, /output\?/);
assert.match(patched, /per-task file target/);
assert.doesNotMatch(patched, /function resolvePiAgentDir/);
});
test("stripPiSubagentBuiltinModelSource removes built-in model pins", () => {
const input = [
"---",
"name: researcher",
"description: Web researcher",
"model: anthropic/claude-sonnet-4-6",
"tools: read, web_search",
"---",
"",
"Body",
].join("\n");
const patched = stripPiSubagentBuiltinModelSource(input);
assert.ok(!patched.includes("model: anthropic/claude-sonnet-4-6"));
assert.match(patched, /name: researcher/);
assert.match(patched, /tools: read, web_search/);
});
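// A sketch of the strip: remove any `model:` line from the YAML frontmatter
// block while leaving the other keys and the body untouched. Assumes
// frontmatter is delimited by the first pair of `---` lines.
function stripBuiltinModelSketch(source: string): string {
	let fences = 0;
	return source
		.split("\n")
		.filter((line) => {
			if (line.trim() === "---") fences += 1;
			const inFrontmatter = fences === 1;
			return !(inFrontmatter && /^model:\s/.test(line));
		})
		.join("\n");
}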


@@ -0,0 +1,72 @@
import test from "node:test";
import assert from "node:assert/strict";
import { patchPiWebAccessSource } from "../scripts/lib/pi-web-access-patch.mjs";
test("patchPiWebAccessSource rewrites legacy Pi web-search config paths", () => {
const input = [
'import { join } from "node:path";',
'import { homedir } from "node:os";',
'const CONFIG_PATH = join(homedir(), ".pi", "web-search.json");',
"",
].join("\n");
const patched = patchPiWebAccessSource("perplexity.ts", input);
assert.match(patched, /FEYNMAN_WEB_SEARCH_CONFIG/);
assert.match(patched, /PI_WEB_SEARCH_CONFIG/);
});
test("patchPiWebAccessSource updates index.ts directory handling", () => {
const input = [
'import { existsSync, mkdirSync } from "node:fs";',
'import { join } from "node:path";',
'import { homedir } from "node:os";',
'const WEB_SEARCH_CONFIG_PATH = join(homedir(), ".pi", "web-search.json");',
'const dir = join(homedir(), ".pi");',
"",
].join("\n");
const patched = patchPiWebAccessSource("index.ts", input);
assert.match(patched, /import \{ dirname, join \} from "node:path";/);
assert.match(patched, /const dir = dirname\(WEB_SEARCH_CONFIG_PATH\);/);
});
test("patchPiWebAccessSource defaults workflow to none for index.ts without disabling explicit summary-review", () => {
const input = [
'function resolveWorkflow(input: unknown, hasUI: boolean): WebSearchWorkflow {',
'\tif (!hasUI) return "none";',
'\tif (typeof input === "string" && input.trim().toLowerCase() === "none") return "none";',
'\treturn "summary-review";',
'}',
'const configWorkflow = loadConfigForExtensionInit().workflow;',
'const workflow = resolveWorkflow(params.workflow ?? configWorkflow, ctx?.hasUI !== false);',
'workflow: Type.Optional(',
'\tStringEnum(["none", "summary-review"], {',
'\t\tdescription: "Search workflow mode: none = no curator, summary-review = open curator with auto summary draft (default)",',
'\t}),',
'),',
"",
].join("\n");
const patched = patchPiWebAccessSource("index.ts", input);
assert.match(patched, /params\.workflow \?\? configWorkflow \?\? "none"/);
assert.match(patched, /return "summary-review";/);
assert.match(patched, /summary-review = open curator with auto summary draft \(opt-in\)/);
});
test("patchPiWebAccessSource is idempotent", () => {
const input = [
'import { join } from "node:path";',
'import { homedir } from "node:os";',
'const CONFIG_PATH = join(homedir(), ".pi", "web-search.json");',
"",
].join("\n");
const once = patchPiWebAccessSource("perplexity.ts", input);
const twice = patchPiWebAccessSource("perplexity.ts", once);
assert.equal(twice, once);
});


@@ -9,6 +9,7 @@ import {
getPiWebAccessStatus,
getPiWebSearchConfigPath,
loadPiWebAccessConfig,
savePiWebAccessConfig,
} from "../src/pi/web-access.js";
test("loadPiWebAccessConfig returns empty config when Pi web config is missing", () => {
@@ -18,7 +19,57 @@ test("loadPiWebAccessConfig returns empty config when Pi web config is missing",
assert.deepEqual(loadPiWebAccessConfig(configPath), {});
});
test("getPiWebSearchConfigPath respects FEYNMAN_HOME semantics", () => {
assert.equal(getPiWebSearchConfigPath("/tmp/custom-home"), "/tmp/custom-home/.feynman/web-search.json");
});
test("savePiWebAccessConfig merges updates and deletes undefined values", () => {
const root = mkdtempSync(join(tmpdir(), "feynman-pi-web-"));
const configPath = getPiWebSearchConfigPath(root);
savePiWebAccessConfig({
provider: "perplexity",
searchProvider: "perplexity",
perplexityApiKey: "pplx_...",
}, configPath);
savePiWebAccessConfig({
provider: undefined,
searchProvider: undefined,
route: undefined,
}, configPath);
assert.deepEqual(loadPiWebAccessConfig(configPath), {
perplexityApiKey: "pplx_...",
});
});
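// A sketch of the merge-and-prune behavior asserted above: read the existing
// JSON config, overlay the update, drop keys explicitly set to undefined,
// then write the result back. Directory creation and error handling are
// simplified relative to the real savePiWebAccessConfig.
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { dirname } from "node:path";
function savePiWebAccessConfigSketch(update: Record<string, unknown>, configPath: string): void {
	const current: Record<string, unknown> = existsSync(configPath)
		? JSON.parse(readFileSync(configPath, "utf8"))
		: {};
	const merged: Record<string, unknown> = { ...current, ...update };
	for (const [key, value] of Object.entries(update)) {
		if (value === undefined) delete merged[key];
	}
	mkdirSync(dirname(configPath), { recursive: true });
	writeFileSync(configPath, JSON.stringify(merged, null, 2) + "\n", "utf8");
}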
test("getPiWebAccessStatus reads Pi web-access config directly", () => {
const root = mkdtempSync(join(tmpdir(), "feynman-pi-web-"));
const configPath = getPiWebSearchConfigPath(root);
mkdirSync(join(root, ".feynman"), { recursive: true });
writeFileSync(
configPath,
JSON.stringify({
provider: "exa",
searchProvider: "exa",
exaApiKey: "exa_...",
chromeProfile: "Profile 2",
geminiApiKey: "AIza...",
}),
"utf8",
);
const status = getPiWebAccessStatus(loadPiWebAccessConfig(configPath), configPath);
assert.equal(status.routeLabel, "Exa");
assert.equal(status.requestProvider, "exa");
assert.equal(status.workflow, "none");
assert.equal(status.exaConfigured, true);
assert.equal(status.geminiApiConfigured, true);
assert.equal(status.perplexityConfigured, false);
assert.equal(status.chromeProfile, "Profile 2");
});
test("getPiWebAccessStatus reads Gemini routes directly", () => {
const root = mkdtempSync(join(tmpdir(), "feynman-pi-web-"));
const configPath = getPiWebSearchConfigPath(root);
mkdirSync(join(root, ".feynman"), { recursive: true });
@@ -36,11 +87,25 @@ test("getPiWebAccessStatus reads Pi web-access config directly", () => {
const status = getPiWebAccessStatus(loadPiWebAccessConfig(configPath), configPath);
assert.equal(status.routeLabel, "Gemini");
assert.equal(status.requestProvider, "gemini");
assert.equal(status.workflow, "none");
assert.equal(status.exaConfigured, false);
assert.equal(status.geminiApiConfigured, true);
assert.equal(status.perplexityConfigured, false);
assert.equal(status.chromeProfile, "Profile 2");
});
test("getPiWebAccessStatus supports the legacy route key", () => {
const status = getPiWebAccessStatus({
route: "perplexity",
perplexityApiKey: "pplx_...",
});
assert.equal(status.routeLabel, "Perplexity");
assert.equal(status.requestProvider, "perplexity");
assert.equal(status.workflow, "none");
assert.equal(status.perplexityConfigured, true);
});
test("formatPiWebAccessDoctorLines reports Pi-managed web access", () => {
const lines = formatPiWebAccessDoctorLines(
getPiWebAccessStatus({
@@ -50,5 +115,6 @@ test("formatPiWebAccessDoctorLines reports Pi-managed web access", () => {
);
assert.equal(lines[0], "web access: pi-web-access");
assert.ok(lines.some((line) => line.includes("search workflow: none")));
assert.ok(lines.some((line) => line.includes("/tmp/pi-web-search.json")));
});


@@ -0,0 +1,41 @@
import test from "node:test";
import assert from "node:assert/strict";
import { mkdtempSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import {
getConfiguredServiceTier,
normalizeServiceTier,
resolveProviderServiceTier,
setConfiguredServiceTier,
} from "../src/model/service-tier.js";
test("normalizeServiceTier accepts supported values only", () => {
assert.equal(normalizeServiceTier("priority"), "priority");
assert.equal(normalizeServiceTier("standard_only"), "standard_only");
assert.equal(normalizeServiceTier("FAST"), undefined);
assert.equal(normalizeServiceTier(undefined), undefined);
});
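// A sketch of the normalization: a case-sensitive whitelist lookup where
// anything outside the supported set collapses to undefined. The tier set is
// inferred from these assertions and may be incomplete.
type ServiceTierSketch = "priority" | "flex" | "standard_only";
function normalizeServiceTierSketch(value: string | undefined): ServiceTierSketch | undefined {
	const tiers: ServiceTierSketch[] = ["priority", "flex", "standard_only"];
	return tiers.find((tier) => tier === value);
}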
test("setConfiguredServiceTier persists and clears settings.json values", () => {
const dir = mkdtempSync(join(tmpdir(), "feynman-service-tier-"));
const settingsPath = join(dir, "settings.json");
setConfiguredServiceTier(settingsPath, "priority");
assert.equal(getConfiguredServiceTier(settingsPath), "priority");
const persisted = JSON.parse(readFileSync(settingsPath, "utf8")) as { serviceTier?: string };
assert.equal(persisted.serviceTier, "priority");
setConfiguredServiceTier(settingsPath, undefined);
assert.equal(getConfiguredServiceTier(settingsPath), undefined);
});
test("resolveProviderServiceTier filters unsupported provider+tier pairs", () => {
assert.equal(resolveProviderServiceTier("openai", "priority"), "priority");
assert.equal(resolveProviderServiceTier("openai-codex", "flex"), "flex");
assert.equal(resolveProviderServiceTier("anthropic", "standard_only"), "standard_only");
assert.equal(resolveProviderServiceTier("anthropic", "priority"), undefined);
assert.equal(resolveProviderServiceTier("google", "priority"), undefined);
});
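// A sketch of the provider/tier compatibility filter these assertions imply.
// Only the pairs tested above are certain; openai supporting "flex" is an
// assumption, and the real support matrix lives in src/model/service-tier.ts.
const SUPPORTED_TIERS_SKETCH: Record<string, ReadonlySet<string>> = {
	openai: new Set(["priority", "flex"]),
	"openai-codex": new Set(["priority", "flex"]),
	anthropic: new Set(["standard_only"]),
};
function resolveProviderServiceTierSketch(provider: string, tier: string): string | undefined {
	return SUPPORTED_TIERS_SKETCH[provider]?.has(tier) ? tier : undefined;
}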

tests/skill-paths.test.ts

@@ -0,0 +1,28 @@
import test from "node:test";
import assert from "node:assert/strict";
import { existsSync, readdirSync, readFileSync } from "node:fs";
import { dirname, join, resolve } from "node:path";
import { fileURLToPath } from "node:url";
const repoRoot = resolve(dirname(fileURLToPath(import.meta.url)), "..");
const skillsRoot = join(repoRoot, "skills");
const markdownPathPattern = /`((?:\.\.?\/)(?:[A-Za-z0-9._-]+\/)*[A-Za-z0-9._-]+\.md)`/g;
const simulatedInstallRoot = join(repoRoot, "__skill-install-root__");
test("all local markdown references in bundled skills resolve in the installed skill layout", () => {
for (const entry of readdirSync(skillsRoot, { withFileTypes: true })) {
if (!entry.isDirectory()) continue;
const skillPath = join(skillsRoot, entry.name, "SKILL.md");
if (!existsSync(skillPath)) continue;
const content = readFileSync(skillPath, "utf8");
for (const match of content.matchAll(markdownPathPattern)) {
const reference = match[1];
const installedSkillDir = join(simulatedInstallRoot, entry.name);
const installedTarget = resolve(installedSkillDir, reference);
const repoTarget = installedTarget.replace(simulatedInstallRoot, repoRoot);
assert.ok(existsSync(repoTarget), `${skillPath} references missing installed markdown file ${reference}`);
}
}
});

(File diff suppressed because it is too large.)


@@ -21,6 +21,7 @@
    "@tailwindcss/vite": "^4.2.1",
    "@types/react": "^19.2.14",
    "@types/react-dom": "^19.2.3",
    "@vercel/analytics": "^2.0.1",
    "astro": "^5.18.1",
    "class-variance-authority": "^0.7.1",
    "clsx": "^2.1.1",
@@ -33,7 +34,21 @@
    "tailwindcss": "^4.2.1",
    "tw-animate-css": "^1.4.0"
  },
  "overrides": {
    "@modelcontextprotocol/sdk": {
      "@hono/node-server": "1.19.14",
      "hono": "4.12.14"
    },
    "router": {
      "path-to-regexp": "8.4.2"
    },
    "defu": "6.1.7",
    "vite": "6.4.2",
    "brace-expansion": "1.1.13",
    "yaml": "2.8.3"
  },
  "devDependencies": {
    "@astrojs/check": "^0.9.8",
    "@eslint/js": "^9.39.4",
    "eslint": "^9.39.4",
    "eslint-plugin-react-hooks": "^7.0.1",

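The nested entries above scope a pin to one dependency's subtree (the hono and @hono/node-server pins apply only beneath @modelcontextprotocol/sdk, and path-to-regexp only beneath router), while top-level entries such as vite and yaml pin the version everywhere. A quick sanity check after reinstalling, assuming npm 8.3+ where overrides are honored:

npm install
npm ls hono path-to-regexp vite yaml
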

@@ -177,11 +177,7 @@ warn_command_conflict() {
    step "Run now: export PATH=\"$INSTALL_BIN_DIR:\$PATH\" && hash -r && feynman"
    step "Or launch directly: $expected_path"
    case "$resolved_path" in
      *"/node_modules/@companion-ai/feynman/"* | *"/node_modules/.bin/feynman")
        step "If that path is an old global npm install, remove it with: npm uninstall -g @companion-ai/feynman"
        ;;
    esac
    step "If that path is an old package-manager install, remove it or put $INSTALL_BIN_DIR first on PATH."
  fi
}
@@ -264,8 +260,8 @@ This usually means the release exists, but not all platform bundles were uploaded
Workarounds:
- try again after the release finishes publishing
- install via pnpm instead: pnpm add -g @companion-ai/feynman
- install via bun instead: bun add -g @companion-ai/feynman
- pass the latest published version explicitly, e.g.:
    curl -fsSL https://feynman.is/install | bash -s -- 0.2.31
EOF
  exit 1
fi


@@ -146,7 +146,8 @@ archive_metadata="$(resolve_version)"
resolved_version="$(printf '%s\n' "$archive_metadata" | sed -n '1p')"
git_ref="$(printf '%s\n' "$archive_metadata" | sed -n '2p')"
archive_url=""
archive_url="${FEYNMAN_INSTALL_SKILLS_ARCHIVE_URL:-}"
if [ -z "$archive_url" ]; then
  case "$git_ref" in
    main)
      archive_url="https://github.com/getcompanion-ai/feynman/archive/refs/heads/main.tar.gz"
@@ -155,6 +156,7 @@ case "$git_ref" in
      archive_url="https://github.com/getcompanion-ai/feynman/archive/refs/tags/${git_ref}.tar.gz"
      ;;
  esac
fi

if [ -z "$archive_url" ]; then
  echo "Could not resolve a download URL for ref: $git_ref" >&2
@@ -181,8 +183,8 @@ step "Extracting skills"
tar -xzf "$archive_path" -C "$extract_dir"
source_root="$(find "$extract_dir" -mindepth 1 -maxdepth 1 -type d | head -n 1)"
if [ -z "$source_root" ] || [ ! -d "$source_root/skills" ]; then
  echo "Could not find skills/ in downloaded archive." >&2
if [ -z "$source_root" ] || [ ! -d "$source_root/skills" ] || [ ! -d "$source_root/prompts" ]; then
  echo "Could not find the bundled skills resources in the downloaded archive." >&2
  exit 1
fi
@@ -190,6 +192,10 @@ mkdir -p "$(dirname "$install_dir")"
rm -rf "$install_dir"
mkdir -p "$install_dir"
cp -R "$source_root/skills/." "$install_dir/"
mkdir -p "$install_dir/prompts"
cp -R "$source_root/prompts/." "$install_dir/prompts/"
cp "$source_root/AGENTS.md" "$install_dir/AGENTS.md"
cp "$source_root/CONTRIBUTING.md" "$install_dir/CONTRIBUTING.md"
step "Installed skills to $install_dir"
case "$SCOPE" in

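With this change the skills archive source can be overridden, which helps with mirrors, pre-release testing, or air-gapped installs. A hypothetical invocation (the script filename and mirror URL are illustrative):

FEYNMAN_INSTALL_SKILLS_ARCHIVE_URL="https://mirror.example.com/feynman-main.tar.gz" \
  bash install-skills.sh
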

@@ -46,7 +46,7 @@ function Resolve-VersionMetadata {
  return [PSCustomObject]@{
    ResolvedVersion = $resolvedVersion
    GitRef = "v$resolvedVersion"
    DownloadUrl = "https://github.com/getcompanion-ai/feynman/archive/refs/tags/v$resolvedVersion.zip"
    DownloadUrl = $(if ($env:FEYNMAN_INSTALL_SKILLS_ARCHIVE_URL) { $env:FEYNMAN_INSTALL_SKILLS_ARCHIVE_URL } else { "https://github.com/getcompanion-ai/feynman/archive/refs/tags/v$resolvedVersion.zip" })
  }
}
@@ -92,8 +92,9 @@ try {
  }

  $skillsSource = Join-Path $sourceRoot.FullName "skills"
  if (-not (Test-Path $skillsSource)) {
    throw "Could not find skills/ in downloaded archive."
  $promptsSource = Join-Path $sourceRoot.FullName "prompts"
  if (-not (Test-Path $skillsSource) -or -not (Test-Path $promptsSource)) {
    throw "Could not find the bundled skills resources in the downloaded archive."
  }

  $installParent = Split-Path $installDir -Parent
@@ -107,6 +108,10 @@ try {
  New-Item -ItemType Directory -Path $installDir -Force | Out-Null
  Copy-Item -Path (Join-Path $skillsSource "*") -Destination $installDir -Recurse -Force
  New-Item -ItemType Directory -Path (Join-Path $installDir "prompts") -Force | Out-Null
  Copy-Item -Path (Join-Path $promptsSource "*") -Destination (Join-Path $installDir "prompts") -Recurse -Force
  Copy-Item -Path (Join-Path $sourceRoot.FullName "AGENTS.md") -Destination (Join-Path $installDir "AGENTS.md") -Force
  Copy-Item -Path (Join-Path $sourceRoot.FullName "CONTRIBUTING.md") -Destination (Join-Path $installDir "CONTRIBUTING.md") -Force
  Write-Host "==> Installed skills to $installDir"
  if ($Scope -eq "Repo") {

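The PowerShell variant honors the same FEYNMAN_INSTALL_SKILLS_ARCHIVE_URL override. A hypothetical local run (the script filename is illustrative):

$env:FEYNMAN_INSTALL_SKILLS_ARCHIVE_URL = "https://mirror.example.com/feynman-v0.2.31.zip"
.\install-skills.ps1
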

@@ -109,8 +109,8 @@ This usually means the release exists, but not all platform bundles were uploaded
Workarounds:
- try again after the release finishes publishing
- install via pnpm instead: pnpm add -g @companion-ai/feynman
- install via bun instead: bun add -g @companion-ai/feynman
- pass the latest published version explicitly, e.g.:
  & ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.31
"@
}
@@ -125,12 +125,18 @@ Workarounds:
New-Item -ItemType Directory -Path $installBinDir -Force | Out-Null
$shimPath = Join-Path $installBinDir "feynman.cmd"
$shimPs1Path = Join-Path $installBinDir "feynman.ps1"

Write-Host "==> Linking feynman into $installBinDir"
@"
@echo off
"$bundleDir\feynman.cmd" %*
CALL "$bundleDir\feynman.cmd" %*
"@ | Set-Content -Path $shimPath -Encoding ASCII

@"
`$BundleDir = "$bundleDir"
& "`$BundleDir\node\node.exe" "`$BundleDir\app\bin\feynman.js" @args
"@ | Set-Content -Path $shimPs1Path -Encoding UTF8

$currentUserPath = [Environment]::GetEnvironmentVariable("Path", "User")
$alreadyOnPath = $false
if ($currentUserPath) {
@@ -153,9 +159,7 @@ Workarounds:
  Write-Warning "Current shell resolves feynman to $($resolvedCommand.Source)"
  Write-Host "Run in a new shell, or run: `$env:Path = '$installBinDir;' + `$env:Path"
  Write-Host "Then run: feynman"
  if ($resolvedCommand.Source -like "*node_modules*@companion-ai*feynman*") {
    Write-Host "If that path is an old global npm install, remove it with: npm uninstall -g @companion-ai/feynman"
  }
  Write-Host "If that path is an old package-manager install, remove it or put $installBinDir first on PATH."
}
Write-Host "Feynman $resolvedVersion installed successfully."

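The CALL fix matters because cmd.exe chains rather than calls when one batch file invokes another: control transfers to the target and never returns to the shim. With CALL, execution comes back to the shim after feynman.cmd exits, so the wrapper behaves like a normal command. Annotated shape of the generated shim (the bundle path is illustrative):

@echo off
REM CALL runs the target batch file and returns here when it finishes;
REM without CALL, execution transfers to feynman.cmd and never comes back.
CALL "C:\Users\me\.feynman\bundle\feynman.cmd" %*
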

@@ -46,4 +46,4 @@ function Badge({
  )
}
export { Badge, badgeVariants }
export { Badge }


@@ -64,4 +64,4 @@ function Button({
  )
}
export { Button, buttonVariants }
export { Button }

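Dropping the badgeVariants and buttonVariants exports narrows the components' public surface: code that imported the variant helpers for class composition must now render the components instead. A hypothetical consumer update, assuming the usual shadcn/ui-style component API (import path illustrative):

// Before (no longer compiles): compose classes via the helper.
// import { buttonVariants } from "./components/ui/button";
// <a className={buttonVariants({ variant: "outline" })} href="/docs">Docs</a>

// After: render the component itself.
import { Button } from "./components/ui/button";
const link = <Button variant="outline" asChild><a href="/docs">Docs</a></Button>;
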
Some files were not shown because too many files have changed in this diff