Compare commits

23 Commits

| Author | SHA1 | Date |
|---|---|---|
|  | 501364da45 |  |
|  | fe24224965 |  |
|  | 9bc59dad53 |  |
|  | 7fd94c028e |  |
|  | 080bf8ad2c |  |
|  | 82cafd10cc |  |
|  | 419bcea3d1 |  |
|  | d5b6f9cd00 |  |
|  | 8fade18b98 |  |
|  | 66f1fe5ffc |  |
|  | 01c2808606 |  |
|  | dd3c07633b |  |
|  | fa259f5cea |  |
|  | 8fc7c0488c |  |
|  | 455de783dc |  |
|  | 01155cadbe |  |
|  | 59af81c613 |  |
|  | 0995f5cc22 |  |
|  | af6486312d |  |
|  | 8de8054e4f |  |
|  | 5d10285372 |  |
|  | 4f6574f233 |  |
|  | aa96b5ee14 |  |
```diff
@@ -24,6 +24,8 @@ Operating rules:
 - Do not force chain-shaped orchestration onto the user. Multi-agent decomposition is an internal tactic, not the primary UX.
 - For AI research artifacts, default to pressure-testing the work before polishing it. Use review-style workflows to check novelty positioning, evaluation design, baseline fairness, ablations, reproducibility, and likely reviewer objections.
 - Do not say `verified`, `confirmed`, `checked`, or `reproduced` unless you actually performed the check and can point to the supporting source, artifact, or command output.
+- Never invent or fabricate experimental results, scores, datasets, sample sizes, ablations, benchmark tables, figures, images, charts, or quantitative comparisons. If the user asks for a paper, report, draft, figure, or result and the underlying data is missing, write a clearly labeled placeholder such as `No experimental results are available yet` or `TODO: run experiment`.
+- Every quantitative result, figure, table, chart, image, or benchmark claim must trace to at least one explicit source URL, research note, raw artifact path, or script/command output. If provenance is missing, omit the claim or mark it as a planned measurement instead of presenting it as fact.
 - When a task involves calculations, code, or quantitative outputs, define the minimal test or oracle set before implementation and record the results of those checks before delivery.
 - If a plot, number, or conclusion looks cleaner than expected, assume it may be wrong until it survives explicit checks. Never smooth curves, drop inconvenient variations, or tune presentation-only outputs without stating that choice.
 - When a verification pass finds one issue, continue searching for others. Do not stop after the first error unless the whole branch is blocked.
```
```diff
@@ -17,6 +17,7 @@ You receive a draft document and the research files it was built from. Your job
 4. **Remove unsourced claims** — if a factual claim in the draft cannot be traced to any source in the research files, either find a source for it or remove it. Do not leave unsourced factual claims.
 5. **Verify meaning, not just topic overlap.** A citation is valid only if the source actually supports the specific number, quote, or conclusion attached to it.
 6. **Refuse fake certainty.** Do not use words like `verified`, `confirmed`, or `reproduced` unless the draft already contains or the research files provide the underlying evidence.
+7. **Never invent or keep fabricated results.** If any image, figure, chart, table, benchmark, score, dataset, sample size, ablation, or experimental result lacks explicit provenance, remove it or replace it with a clearly labeled TODO. Never keep a made-up result because it “looks plausible.”
 
 ## Citation rules
 
```
```diff
@@ -37,8 +38,21 @@ For each source URL:
 For code-backed or quantitative claims:
 - Keep the claim only if the supporting artifact is present in the research files or clearly documented in the draft.
 - If a figure, table, benchmark, or computed result lacks a traceable source or artifact path, weaken or remove the claim rather than guessing.
+- Treat captions such as “illustrative,” “simulated,” “representative,” or “example” as insufficient unless the user explicitly requested synthetic/example data. Otherwise remove the visual and mark the missing experiment.
 - Do not preserve polished summaries that outrun the raw evidence.
 
+## Fabrication audit
+
+Before saving the final document, scan for:
+
+- numeric scores or percentages,
+- benchmark names and tables,
+- figure/image references,
+- claims of improvement or superiority,
+- dataset sizes or experimental setup details,
+- charts or visualizations.
+
+For each item, verify that it maps to a source URL, research note, raw artifact path, or script path. If not, remove it or replace it with a TODO. Add a short `Removed Unsupported Claims` section only when you remove material.
+
 ## Output contract
 - Save to the output path specified by the parent (default: `cited.md`).
 - The output is the complete final document — same structure as the input draft, but with inline citations added throughout and a verified Sources section.
```
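The fabrication-audit checklist added in the hunk above lends itself to a mechanical first pass before a human traces each flagged item to provenance. The following TypeScript sketch is illustrative only: it is not part of this repo, and the patterns are assumptions about what "numeric scores," "benchmark names," and similar items might look like in a markdown draft.

```typescript
// Illustrative fabrication-audit pre-filter (hypothetical helper, not repo code).
// It flags lines that *might* contain unsourced quantitative claims; a human
// still has to map each finding to a source URL, note, artifact, or script.
const suspectPatterns: Array<[string, RegExp]> = [
  ["numeric score/percentage", /\b\d+(\.\d+)?\s*%|\bscore\s*[:=]?\s*\d/i],
  ["benchmark mention", /\bbenchmark|leaderboard\b/i],
  ["figure/image reference", /\b(figure|fig\.|image|chart)\b/i],
  ["improvement claim", /\b(outperforms?|improves?|beats|superior)\b/i],
  ["dataset/sample size", /\bn\s*=\s*\d+|\b\d+\s+samples\b/i],
];

function auditDraft(markdown: string): Array<{ line: number; reason: string; text: string }> {
  const findings: Array<{ line: number; reason: string; text: string }> = [];
  markdown.split("\n").forEach((text, i) => {
    for (const [reason, re] of suspectPatterns) {
      if (re.test(text)) {
        // One finding per line is enough to queue it for manual review.
        findings.push({ line: i + 1, reason, text: text.trim() });
        break;
      }
    }
  });
  return findings;
}
```

A scanner like this only narrows the search; it cannot decide whether a claim actually has provenance, which is why the prompt keeps the verification step itself manual.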
```diff
@@ -15,6 +15,7 @@ You are Feynman's writing subagent.
 3. **Be explicit about gaps.** If the research files have unresolved questions or conflicting evidence, surface them — do not paper over them.
 4. **Do not promote draft text into fact.** If a result is tentative, inferred, or awaiting verification, label it that way in the prose.
 5. **No aesthetic laundering.** Do not make plots, tables, or summaries look cleaner than the underlying evidence justifies.
+6. **Never fabricate results.** Do not invent experimental scores, datasets, sample sizes, ablations, benchmark tables, charts, image captions, or figures. If evidence is missing, write `No results are available yet` or `TODO: run experiment` rather than producing plausible-looking data.
 
 ## Output structure
 
```
```diff
@@ -36,9 +37,10 @@ Unresolved issues, disagreements between sources, gaps in evidence.
 
 ## Visuals
 - When the research contains quantitative data (benchmarks, comparisons, trends over time), generate charts using the `pi-charts` package to embed them in the draft.
-- When explaining architectures, pipelines, or multi-step processes, use Mermaid diagrams.
-- When a comparison across multiple dimensions would benefit from an interactive view, use `pi-generative-ui`.
-- Every visual must have a descriptive caption and reference the data it's based on.
+- Do not create charts from invented or example data. If values are missing, describe the planned measurement instead.
+- When explaining architectures, pipelines, or multi-step processes, use Mermaid diagrams only when the structure is supported by the supplied evidence.
+- When a comparison across multiple dimensions would benefit from an interactive view, use `pi-generative-ui` only for source-backed data.
+- Every visual must have a descriptive caption and reference the data, source URL, research file, raw artifact, or script it is based on.
 - Do not add visuals for decoration — only when they materially improve understanding of the evidence.
 
 ## Operating rules
```
```diff
@@ -48,6 +50,7 @@ Unresolved issues, disagreements between sources, gaps in evidence.
 - Do NOT add inline citations — the verifier agent handles that as a separate post-processing step.
 - Do NOT add a Sources section — the verifier agent builds that.
 - Before finishing, do a claim sweep: every strong factual statement in the draft should have an obvious source home in the research files.
+- Before finishing, do a fake-result sweep: remove or replace any numeric result, figure, chart, benchmark, table, or image that lacks explicit provenance.
 
 ## Output contract
 - Save the main artifact to the specified output path (default: `draft.md`).
```
|||||||
63
.github/workflows/publish.yml
vendored
63
.github/workflows/publish.yml
vendored
```diff
@@ -10,15 +10,18 @@ on:
 
 jobs:
   version-check:
-    runs-on: blacksmith-4vcpu-ubuntu-2404
+    runs-on: ubuntu-latest
+    permissions:
+      contents: read
     outputs:
       version: ${{ steps.version.outputs.version }}
       should_release: ${{ steps.version.outputs.should_release }}
     steps:
       - uses: actions/checkout@v6
-      - uses: actions/setup-node@v5
+      - uses: actions/setup-node@v6
         with:
-          node-version: 24.14.0
+          node-version: 24
+          registry-url: "https://registry.npmjs.org"
       - id: version
         shell: bash
         env:
```
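The `version-check` job above exposes a `should_release` output that every downstream job gates on via `needs.version-check.outputs.should_release`. The actual body of the `- id: version` step is elided from this diff, so the following TypeScript sketch is an assumption about the kind of decision such a step typically makes: release only when the package version differs from what was last published.

```typescript
// Hypothetical sketch of a "should we release?" decision. The real step body
// is not shown in the diff above; the function name and logic are assumptions.
function shouldRelease(packageVersion: string, publishedVersion: string | null): boolean {
  // Release on first publish, or whenever the version in package.json changed.
  return publishedVersion === null || packageVersion !== publishedVersion;
}
```

In the workflow, the result of a check like this is written as `should_release=true` into `$GITHUB_OUTPUT`, which is how the step's value becomes visible to dependent jobs.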
```diff
@@ -32,6 +35,40 @@ jobs:
           echo "should_release=true" >> "$GITHUB_OUTPUT"
         fi
 
+  verify:
+    needs: version-check
+    if: needs.version-check.outputs.should_release == 'true'
+    runs-on: ubuntu-latest
+    permissions:
+      contents: read
+    steps:
+      - uses: actions/checkout@v6
+      - uses: actions/setup-node@v6
+        with:
+          node-version: 24
+          registry-url: "https://registry.npmjs.org"
+      - run: npm ci
+      - run: npm test
+      - run: npm pack
+
+  publish-npm:
+    needs:
+      - version-check
+      - verify
+    if: needs.version-check.outputs.should_release == 'true' && needs.verify.result == 'success'
+    runs-on: ubuntu-latest
+    permissions:
+      contents: read
+      id-token: write
+    steps:
+      - uses: actions/checkout@v6
+      - uses: actions/setup-node@v6
+        with:
+          node-version: 24
+          registry-url: "https://registry.npmjs.org"
+      - run: npm ci
+      - run: npm publish --provenance --access public
+
   build-native-bundles:
     needs: version-check
     if: needs.version-check.outputs.should_release == 'true'
```
```diff
@@ -40,19 +77,21 @@ jobs:
     matrix:
       include:
         - id: linux-x64
-          os: blacksmith-4vcpu-ubuntu-2404
+          os: ubuntu-latest
         - id: darwin-x64
           os: macos-15-intel
         - id: darwin-arm64
           os: macos-14
         - id: win32-x64
-          os: blacksmith-4vcpu-windows-2025
+          os: windows-latest
     runs-on: ${{ matrix.os }}
+    permissions:
+      contents: read
     steps:
       - uses: actions/checkout@v6
-      - uses: actions/setup-node@v5
+      - uses: actions/setup-node@v6
         with:
-          node-version: 24.14.0
+          node-version: 24
       - run: npm ci --ignore-scripts
       - run: npm run build
       - run: npm run build:native-bundle
```
```diff
@@ -83,9 +122,10 @@ jobs:
   release-github:
     needs:
       - version-check
+      - publish-npm
       - build-native-bundles
-    if: needs.version-check.outputs.should_release == 'true' && needs.build-native-bundles.result == 'success'
-    runs-on: blacksmith-4vcpu-ubuntu-2404
+    if: needs.version-check.outputs.should_release == 'true' && needs.publish-npm.result == 'success' && needs.build-native-bundles.result == 'success'
+    runs-on: ubuntu-latest
     permissions:
       contents: write
     steps:
```
```diff
@@ -93,7 +133,8 @@ jobs:
         with:
           path: release-assets
           merge-multiple: true
-      - shell: bash
+      - name: Create GitHub release
+        shell: bash
         env:
           GH_REPO: ${{ github.repository }}
           GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
```diff
@@ -105,7 +146,7 @@ jobs:
             --title "v$VERSION" \
             --notes "Standalone Feynman bundles for native installation." \
             --draft=false \
-            --target "$GITHUB_SHA"
+            --latest
           else
             gh release create "v$VERSION" release-assets/* \
               --title "v$VERSION" \
```
`CHANGELOG.md` (145 changes)
```diff
@@ -15,6 +15,78 @@ Use this file to track chronology, not release notes. Keep entries short, factua
 - Blockers: ...
 - Next: ...
 
+### 2026-04-12 00:00 local — capital-france
+
+- Objective: Run an unattended deep-research workflow for the question "What is the capital of France?"
+- Changed: Created plan artifact at `outputs/.plans/capital-france.md`; scoped the workflow as a narrow fact-verification run with direct lead-agent evidence gathering instead of researcher subagents.
+- Verified: Read existing `CHANGELOG.md` and recalled prior saved plan memory for `capital-france` before finalizing the new run plan.
+- Failed / learned: None yet.
+- Blockers: Need at least two current independent authoritative sources and a quick ambiguity check before drafting.
+- Next: Collect current official/public sources, resolve any legal nuance, then draft and verify the brief.
+
+### 2026-04-12 00:20 local — capital-france
+
+- Objective: Complete evidence gathering and ambiguity check for the capital-of-France workflow.
+- Changed: Wrote `notes/capital-france-research-web.md` and `notes/capital-france-legal-context.md`; identified Insee (2024) and a Sénat report as the two main corroborating sources.
+- Verified: Cross-read current public French sources that explicitly describe Paris as the capital/capital city of France; found no current contradiction.
+- Failed / learned: The Presidency homepage was useful contextual support but not explicit enough to carry the core claim alone.
+- Blockers: Need citation pass and final review pass before promotion.
+- Next: Draft the brief, then run verifier and reviewer passes.
+
+### 2026-04-12 00:35 local — capital-france
+
+- Objective: Move from gathered evidence to a citable draft.
+- Changed: Wrote `outputs/.drafts/capital-france-draft.md` and updated the plan ledger to mark drafting complete.
+- Verified: Kept the core claim narrowly scoped to what the Insee and Sénat sources explicitly support; treated the Élysée page as contextual only.
+- Failed / learned: None.
+- Blockers: Need verifier URL/citation pass and reviewer verification pass before final promotion.
+- Next: Run verifier on the draft, then review and promote the final brief.
+
+### 2026-04-12 00:50 local — capital-france
+
+- Objective: Complete citation, verification, and final promotion for the capital-of-France workflow.
+- Changed: Produced `outputs/capital-france-brief.md`, ran verification into `notes/capital-france-verification.md`, promoted the final brief to `outputs/capital-france.md`, and wrote `outputs/capital-france.provenance.md`.
+- Verified: Reviewer found no FATAL or MAJOR issues. Core claim remains backed by two independent French public-institution sources, with Insee as the primary explicit source and the Sénat report as corroboration.
+- Failed / learned: The runtime did not expose a named `verifier` subagent, so I used an available worker in a verifier-equivalent role and recorded that deviation in the plan.
+- Blockers: None.
+- Next: If needed, extend the brief with deeper legal-historical sourcing, but the narrow factual question is sufficiently answered.
+
+### 2026-04-12 10:05 local — capital-france
+
+- Objective: Run the citation-verification pass on the capital-of-France draft and promote a final cited brief.
+- Changed: Verified the three draft source URLs were live (HTTP 200 at check time), added numbered inline citations, downgraded unsupported phrasing around the Élysée/context and broad ambiguity claims, and wrote `outputs/capital-france-brief.md`.
+- Verified: Confirmed Insee explicitly says Paris is the capital of France; confirmed the Sénat report describes Paris’s capital status and the presence of national institutions; confirmed the Élysée homepage is contextual only and not explicit enough to carry the core claim.
+- Failed / learned: The draft wording about the Presidency being seated in Paris was not directly supported by the cited homepage, so it was removed rather than carried forward.
+- Blockers: Reviewer pass still pending if the workflow requires an adversarial final check.
+- Next: If needed, run a final reviewer pass; otherwise use `outputs/capital-france-brief.md` as the canonical brief.
+
+### 2026-04-12 10:20 local — capital-france
+
+- Objective: Close the workflow with final review, final artifact promotion, and provenance.
+- Changed: Ran a reviewer pass recorded in `notes/capital-france-verification.md`; promoted the cited brief into `outputs/capital-france.md`; wrote `outputs/capital-france.provenance.md`; updated the run plan to mark all tasks complete.
+- Verified: Reviewer verdict was PASS WITH MINOR REVISIONS only; those minor wording fixes were applied before delivery.
+- Failed / learned: The runtime did not expose a project-named `verifier` agent, so the citation pass used an available worker agent as a verifier-equivalent step.
+- Blockers: None.
+- Next: Optional only — produce a legal memorandum on the basis of Paris's capital status if requested.
+
+### 2026-04-14 12:00 local — capital-belgium
+
+- Objective: Run a deep-research workflow for the question "What is the capital of Belgium?"
+- Changed: Created plan artifact at `outputs/.plans/capital-belgium.md`; gathered evidence into `notes/capital-belgium-research-web.md` from Belgium.be, FPS Foreign Affairs, Britannica, and a Belgian Senate constitution check.
+- Verified: Found two explicit current Belgian government statements that Brussels is the federal capital of Belgium, plus independent Britannica corroboration; no conflicting nuance surfaced in the consulted legal text.
+- Failed / learned: This is narrow enough that researcher subagents would add overhead without increasing evidence quality.
+- Blockers: Need draft, citation/URL verification pass, final review pass, and promotion.
+- Next: Draft the brief, run verifier-equivalent and reviewer passes, then promote final output with provenance.
+
+### 2026-04-14 12:25 local — capital-belgium
+
+- Objective: Complete citation, verification, and final promotion for the capital-of-Belgium workflow.
+- Changed: Wrote `outputs/.drafts/capital-belgium-draft.md`; produced cited brief `outputs/capital-belgium-brief.md`; ran verification into `notes/capital-belgium-verification.md`; promoted final output to `outputs/capital-belgium.md`; wrote `outputs/capital-belgium.provenance.md`; updated the plan ledger and verification log.
+- Verified: Core claim is now backed by Belgium.be, Belgian Foreign Affairs, Britannica, and direct constitutional text from Senate-hosted Article 194 stating that Brussels is the capital of Belgium and the seat of the federal government.
+- Failed / learned: The runtime did not expose a named `verifier` subagent, so a worker performed a verifier-equivalent citation/URL check; reviewer surfaced a stronger constitutional source than the first draft had emphasized.
+- Blockers: None.
+- Next: Optional only — if requested, expand this into a legal-historical note on Brussels’s capital status and the distinction between city, region, and federal institutions.
+
 ### 2026-03-25 00:00 local — scaling-laws
 
 - Objective: Set up a deep research workflow for scaling laws.
```
```diff
@@ -167,3 +239,76 @@ Use this file to track chronology, not release notes. Keep entries short, factua
 - Failed / learned: Website typecheck was previously a no-op prompt because `@astrojs/check` was missing; installing it exposed dev-audit findings that needed explicit overrides before the full website audit was clean.
 - Blockers: Docker Desktop remained unreliable after restart attempts, so this pass still does not include a second successful public-installer Linux Docker run.
 - Next: Push the RPC/website verification commit and keep future Docker/public-installer validation separate from repo correctness unless Docker is stable.
+
+### 2026-04-12 09:32 PDT — pi-0.66.1-upgrade-pass
+
+- Objective: Update Feynman from Pi `0.64.0` to the current `0.66.1` packages and absorb any downstream SDK/runtime compatibility changes instead of leaving the repo pinned behind upstream.
+- Changed: Bumped `@mariozechner/pi-ai` and `@mariozechner/pi-coding-agent` to `0.66.1` plus `@companion-ai/alpha-hub` to `0.1.3` in `package.json` and `package-lock.json`; updated `extensions/research-tools.ts` to stop listening for the removed `session_switch` extension event and rely on `session_start`, which now carries startup/reload/new/resume/fork reasons in Pi `0.66.x`.
+- Verified: Ran `npm test`, `npm run typecheck`, and `npm run build` successfully after the upgrade; smoke-ran `node bin/feynman.js --version`, `node bin/feynman.js doctor`, and `node bin/feynman.js status` successfully; checked upstream package diffs and confirmed the breaking change that affected this repo was the typed extension lifecycle change in `pi-coding-agent`, while `pi-ai` mainly brought refreshed provider/model catalog code including Bedrock/OpenAI provider updates and new generated model entries.
+- Failed / learned: `ctx7` resolved Pi correctly to `/badlogic/pi-mono`, but its docs snapshot was not release-note oriented; the concrete downstream-impact analysis came from the actual `0.64.0` → `0.66.1` package diffs and local validation, not from prose docs alone.
+- Failed / learned: The first post-upgrade CLI smoke test failed before Feynman startup because `@companion-ai/alpha-hub@0.1.2` shipped a zero-byte `src/lib/auth.js`; bumping to `0.1.3` fixed that adjacent runtime blocker.
+- Blockers: `npm install` reports two high-severity vulnerabilities remain in the dependency tree; this pass focused on the Pi upgrade and did not remediate unrelated audit findings.
+- Next: Push the Pi upgrade, then decide whether to layer the pending model-command fixes on top of this branch or land them separately to keep the dependency bump easy to review.
```
```diff
+### 2026-04-12 13:00 PDT — model-command-and-bedrock-fix-pass
+
+- Objective: Finish the remaining user-facing model-management regressions instead of stopping at the Pi dependency bump.
+- Changed: Updated `src/model/commands.ts` so `feynman model login <provider>` resolves both OAuth and API-key providers; `feynman model logout <provider>` clears either auth mode; `feynman model set` accepts both `provider/model` and `provider:model`; ambiguous bare model IDs now prefer explicitly configured providers from auth storage; added an `amazon-bedrock` setup path that validates the AWS credential chain with the AWS SDK and stores Pi's `<authenticated>` sentinel so Bedrock models appear in `model list`; synced `src/cli.ts`, `metadata/commands.mjs`, `README.md`, and the website docs to the new behavior.
+- Verified: Added regression tests in `tests/model-harness.test.ts` for `provider:model`, API-key provider resolution, and ambiguous bare-ID handling; ran `npm test`, `npm run typecheck`, `npm run build`, and `cd website && npm run build`; exercised command-level flows against throwaway `FEYNMAN_HOME` directories: interactive `node bin/feynman.js model login google`, `node bin/feynman.js model set google:gemini-3-pro-preview`, `node bin/feynman.js model set gpt-5.4` with only OpenAI configured, and `node bin/feynman.js model login amazon-bedrock`; confirmed `model list` shows Bedrock models after the new setup path; ran a live one-shot prompt `node bin/feynman.js --prompt "Reply with exactly OK"` and got `OK`.
+- Failed / learned: The website build still emits duplicate-id warnings for a handful of docs pages, but it completes successfully; those warnings predate this pass and were not introduced by the model-command edits.
+- Blockers: The Bedrock path is verified with the current shell's AWS credential chain, not with a fresh machine lacking AWS config; broader upstream Pi behavior around IMDS/default-profile autodiscovery without the sentinel is still outside this repo.
+- Next: Commit and push the combined Pi/model/docs maintenance branch, then decide whether to tackle the deeper search/deepresearch hang issues separately or leave them for focused repro work.
```
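The model-command entry above describes `feynman model set` accepting both `provider/model` and `provider:model`, with bare model IDs resolved against configured providers. A minimal TypeScript sketch of that parsing follows; it is a hypothetical helper for illustration, not the repo's actual implementation in `src/model/commands.ts`, which may differ.

```typescript
// Hypothetical sketch of model-reference parsing; names and shape are assumptions.
interface ModelRef {
  provider?: string; // undefined for bare model IDs, which need disambiguation
  model: string;
}

function parseModelRef(input: string): ModelRef {
  // Accept both "provider/model" and "provider:model". Split on the first
  // separator so model IDs containing further ':' or '/' stay intact.
  const match = input.match(/^([^/:]+)[/:](.+)$/);
  if (match) return { provider: match[1], model: match[2] };
  // Bare ID: the caller resolves it against explicitly configured providers.
  return { model: input };
}
```

Accepting both separators keeps older `provider:model` invocations working while matching the `provider/model` convention used elsewhere; the bare-ID branch is where the entry's "prefer explicitly configured providers" rule would apply.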
### 2026-04-12 13:35 PDT — workflow-unattended-and-search-curator-fix-pass

- Objective: Fix the remaining workflow deadlocks instead of leaving `deepresearch` and terminal web search half-functional after the maintenance push.
- Changed: Updated the built-in research workflow prompts (`deepresearch`, `lit`, `review`, `audit`, `compare`, `draft`, `watch`) so they present the plan and continue automatically rather than blocking for approval; extended the `pi-web-access` runtime patch so Feynman rewrites its default workflow from browser-based `summary-review` to `none`; added explicit `workflow: "none"` persistence in `src/search/commands.ts` and `src/pi/web-access.ts`, plus surfaced the workflow in doctor/status-style output.
- Verified: Reproduced the original `deepresearch` failure mode in print mode, where the run created `outputs/.plans/capital-france.md` and then stopped waiting for user confirmation; after the prompt changes, reran `deepresearch "What is the capital of France?"` and confirmed it progressed beyond planning and produced `outputs/.drafts/capital-france-draft.md`; inspected `pi-web-access@0.10.6` and confirmed the exact `waiting for summary approval...` string and `summary-review` default live in that package; added regression tests for the new `pi-web-access` patch and workflow-none status handling; reran `npm test`, `npm run typecheck`, and `npm run build`; smoke-tested `feynman search set exa exa_test_key` under a throwaway `FEYNMAN_HOME` and confirmed it writes `"workflow": "none"` to `web-search.json`.
- Failed / learned: The long-running deepresearch session still spends substantial time in later reasoning/writing steps even for a narrow query, but the plan-confirmation deadlock itself is resolved; the remaining slowness is model/workflow behavior, not the original stop-after-plan bug.
- Blockers: I did not install and execute the full optional `pi-session-search` package locally, so the terminal `summary approval` fix is validated by source inspection plus the Feynman patch path and config persistence rather than a local end-to-end package install.
- Next: Commit and push the workflow/search fix pass, then close or answer the remaining deepresearch/search issues with the specific root causes and shipped fixes.
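The `workflow: "none"` persistence this entry describes can be sketched as a small helper. This is a minimal illustration, not the shipped `src/search/commands.ts` code: the `web-search.json` filename and the `workflow` key match the entry, but the function name and the exact config shape are assumptions.

```javascript
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Persist an explicit search workflow so downstream packages never fall back
// to the browser-based "summary-review" default (the original deadlock).
function setSearchWorkflow(feynmanHome, provider, apiKey, workflow = "none") {
  mkdirSync(feynmanHome, { recursive: true });
  const file = join(feynmanHome, "web-search.json");
  const config = existsSync(file) ? JSON.parse(readFileSync(file, "utf8")) : {};
  config.provider = provider;
  config.apiKey = apiKey;
  config.workflow = workflow; // written explicitly, never left implicit
  writeFileSync(file, JSON.stringify(config, null, 2) + "\n");
  return config;
}
```

Under this sketch, a command like `feynman search set exa <key>` would call the helper once and the smoke test reduces to reading the file back.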
### 2026-04-12 14:05 PDT — final-artifact-hardening-pass

- Objective: Reduce the chance of unattended research workflows stopping at intermediate artifacts like `<slug>-brief.md` without promoting the final deliverable and provenance sidecar.
- Changed: Tightened `prompts/deepresearch.md` so the agent must verify on disk that the plan, draft, cited brief, promoted final output, and provenance sidecar all exist before stopping; tightened `prompts/lit.md` so it explicitly checks for the final output plus provenance sidecar instead of stopping at an intermediate cited draft.
- Verified: Cross-read the current deepresearch/lit deliver steps after the earlier unattended-run reproductions and confirmed the missing enforcement point was the final on-disk artifact check, not the naming convention itself.
- Failed / learned: This is still prompt-level enforcement rather than a deterministic post-processing hook, so it improves completion reliability but does not provide the same guarantees as a dedicated artifact-finalization wrapper.
- Blockers: I did not rerun a full broad deepresearch workflow end-to-end after this prompt-only hardening because those runs are materially longer and more expensive than the narrow reproductions already used to isolate the earlier deadlocks.
- Next: Commit and push the prompt hardening, then, if needed, add a deterministic wrapper around final artifact promotion instead of relying only on prompt adherence.
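The deterministic wrapper mentioned in this entry could start from a check like the sketch below: enumerate the expected artifacts and report what is missing before a run is allowed to declare itself done. The directory layout follows the paths named in this log (`outputs/.plans/`, `outputs/.drafts/`); the promoted-output and sidecar filenames are assumptions, and the function itself is hypothetical.

```javascript
import { existsSync } from "node:fs";
import { join } from "node:path";

// Return every expected final artifact that is not yet on disk.
// An empty array means the run may report completion.
function missingFinalArtifacts(outputsDir, slug) {
  const expected = [
    join(outputsDir, ".plans", `${slug}.md`),
    join(outputsDir, ".drafts", `${slug}-draft.md`),
    join(outputsDir, `${slug}.md`), // promoted final output (assumed name)
    join(outputsDir, `${slug}.md.provenance.json`), // provenance sidecar (assumed name)
  ];
  return expected.filter((path) => !existsSync(path));
}
```

A post-processing hook would call this after the workflow finishes and refuse to stop (or warn loudly) while the list is non-empty.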
### 2026-04-14 09:30 PDT — wsl-login-and-uninstall-docs-pass

- Objective: Fix the remaining WSL setup blocker and close the last actionable support issue instead of leaving the tracker open after the earlier workflow/model fixes.
- Changed: Added a dedicated alpha-hub auth patch helper and tests; extended the alphaXiv login patch so WSL uses `wslview` when available and falls back to `cmd.exe /c start`, while also printing the auth URL explicitly for manual copy/paste if browser launch still fails; documented standalone uninstall steps in `README.md` and `website/src/content/docs/getting-started/installation.md`.
- Verified: Added regression tests for the alpha-hub auth patch, reran `npm test`, `npm run typecheck`, and `npm run build`, and smoke-checked the patched alpha-hub source rewrite to confirm it injects both the WSL browser path and the explicit auth URL logging.
- Failed / learned: This repo can patch alpha-hub's login UX reliably, but it still does not ship a destructive `feynman uninstall` command; the practical fix for the support issue is documented uninstall steps rather than a rushed cross-platform remover.
- Blockers: I did not run a true WSL shell here, so the WSL fix is validated by the deterministic source patch plus tests rather than an actual Windows-hosted browser-launch repro.
- Next: Push the WSL/login pass and close the stale issues and PRs that are already superseded by `main`.
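The WSL launch behavior this entry describes can be sketched as follows. `wslview` and `cmd.exe /c start` are the real commands named above; the helper function, its logging hook, and the ampersand escaping are illustrative assumptions rather than the shipped patch.

```javascript
import { spawnSync } from "node:child_process";

// Print the auth URL first (manual fallback), then try WSL-friendly launchers
// in order: wslview if installed, else cmd.exe reaching the Windows host.
function openAuthUrl(url, log = console.error) {
  log(`Open this URL to sign in: ${url}`);
  const attempts = [
    ["wslview", [url]],
    // `start ""` keeps the first quoted arg from being treated as a window
    // title; `&` must be escaped as `^&` inside cmd.exe.
    ["cmd.exe", ["/c", "start", "", url.replace(/&/g, "^&")]],
  ];
  for (const [cmd, args] of attempts) {
    const result = spawnSync(cmd, args, { stdio: "ignore" });
    if (!result.error && result.status === 0) return cmd;
  }
  return null; // both launchers failed; the printed URL is the fallback
}
```

Because the URL is always printed before any launch attempt, the user can still complete login by copy/paste even when neither launcher exists.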
### 2026-04-14 09:35 PDT — review-findings-and-audit-cleanup

- Objective: Fix the remaining concrete issues found in the deeper review pass instead of stopping at tracker cleanup.
- Changed: Updated the `pi-web-access` patch so Feynman defaults search workflow to `none` without disabling explicit `summary-review`; softened the research workflow prompts so only unattended/one-shot runs auto-continue while interactive users still get a chance to request plan changes; corrected uninstall docs to mention `~/.ahub` alongside `~/.feynman`; bumped the root `basic-ftp` override from `5.2.1` to `5.2.2`.
- Verified: Ran `npm test`, `npm run typecheck`, `npm run build`, `cd website && npm run build`, and `npm audit`; root audit is now clean.
- Failed / learned: Astro still emits a duplicate-content-id warning for `website/src/content/docs/getting-started/installation.md`, but the website build succeeds and I did not identify a low-risk repo-side fix for that warning in this pass.
- Blockers: The duplicate-id warning remains as a build warning only, not a failing correctness gate.
- Next: If desired, isolate the Astro duplicate-id warning separately with a minimal reproduction rather than mixing it into runtime/CLI maintenance.
### 2026-04-14 10:55 PDT — summarize-workflow-restore

- Objective: Restore the useful summarization workflow that had been closed in PR `#69` without being merged.
- Changed: Added `prompts/summarize.md` as a top-level CLI workflow so `feynman summarize <source>` is available again; kept the RLM-based tiering approach from the original proposal and aligned Tier 3 confirmation behavior with the repo's unattended-run conventions.
- Verified: Confirmed `feynman summarize <source>` appears in CLI help; ran `node bin/feynman.js summarize /tmp/feynman-summary-smoke.txt` against a local smoke file and verified it produced `outputs/feynman-summary-smoke-summary.md` plus the raw fetched note artifact under `outputs/.notes/`.
- Failed / learned: None in the restored Tier 1 path; broader Tier 2/Tier 3 behavior still depends on runtime/model/tool availability, just like the other prompt-driven workflows.
- Blockers: None for the prompt restoration itself.
- Next: If desired, add dedicated docs for `summarize` and decide whether to reopen PR `#69` for historical continuity or leave it closed as superseded by the landed equivalent on `main`.
### 2026-04-12 13:20 PDT — capital-france (citation verification brief)

- Objective: Verify citations in the capital-of-France draft and produce a cited verifier brief.
- Changed: Read `outputs/.drafts/capital-france-draft.md`, `notes/capital-france-research-web.md`, and `notes/capital-france-legal-context.md`; fetched the three draft URLs directly; wrote `notes/capital-france-brief.md` with inline numbered citations and a numbered direct-URL sources list.
- Verified: Confirmed the Insee, Sénat, and Élysée URLs were reachable on 2026-04-12; confirmed Insee and Sénat support the core claim that Paris is the capital of France; marked the Élysée homepage as contextual-only support.
- Failed / learned: The Élysée homepage does not explicitly state the core claim, so it should not be used as sole evidence for capital status.
- Blockers: None for the verifier brief; any stronger legal memo would still need a more direct constitutional/statutory basis if that specific question is asked.
- Next: Promote the brief into the final output or downgrade/remove any claim that leans on the Élysée URL alone.
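The reachability pass in this entry can be sketched as a small helper. The actual verifier is a prompt-driven subagent, so this is only an illustration; `fetchImpl` is injectable purely so the sketch can be exercised offline, and the result labels are assumptions.

```javascript
// HEAD each cited URL and record a coarse reachability status per URL.
async function checkCitations(urls, fetchImpl = fetch) {
  const results = {};
  for (const url of urls) {
    try {
      const res = await fetchImpl(url, { method: "HEAD", redirect: "follow" });
      results[url] = res.ok ? "reachable" : `http ${res.status}`;
    } catch {
      results[url] = "unreachable"; // DNS failure, timeout, refused connection
    }
  }
  return results;
}
```

Note that reachability alone is the weaker half of the check: a URL can be reachable yet, like the Élysée homepage above, still fail to state the claim it is cited for.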
@@ -24,7 +24,7 @@ If you need to change how bundled subagents behave, edit `.feynman/agents/*.md`.
 ## Before You Open a PR

 1. Start from the latest `main`.
-2. Use Node.js `20.19.0` or newer. The repo expects `.nvmrc`, `package.json` engines, `website/package.json` engines, and the runtime version guard to stay aligned.
+2. Use Node.js `22.x` for local development. The supported runtime range is Node.js `20.19.0` through `24.x`; `.nvmrc` pins the preferred local version while `package.json`, `website/package.json`, and the runtime version guard define the broader supported range.
 3. Install dependencies from the repo root:

 ```bash
README.md

@@ -25,10 +25,14 @@ curl -fsSL https://feynman.is/install | bash
 irm https://feynman.is/install.ps1 | iex
 ```

-The one-line installer fetches the latest tagged release. To pin a version, pass it explicitly, for example `curl -fsSL https://feynman.is/install | bash -s -- 0.2.17`.
+The one-line installer fetches the latest tagged release. To pin a version, pass it explicitly, for example `curl -fsSL https://feynman.is/install | bash -s -- 0.2.20`.

 The installer downloads a standalone native bundle with its own Node.js runtime.

+To upgrade the standalone app later, rerun the installer. `feynman update` only refreshes installed Pi packages inside Feynman's environment; it does not replace the standalone runtime bundle itself.
+
+To uninstall the standalone app, remove the launcher and runtime bundle, then optionally remove `~/.feynman` if you also want to delete settings, sessions, and installed package state. If you also want to delete alphaXiv login state, remove `~/.ahub`. See the installation guide for platform-specific paths.
+
 Local models are supported through the custom-provider flow. For Ollama, run `feynman setup`, choose `Custom provider (baseUrl + API key)`, use `openai-completions`, and point it at `http://localhost:11434/v1`.

 ### Skills Only

@@ -138,6 +142,18 @@ Built on [Pi](https://github.com/badlogic/pi-mono) for the agent runtime, [alpha

 ---

+### Star History
+
+<a href="https://www.star-history.com/?repos=getcompanion-ai%2Ffeynman&type=date&legend=top-left">
+  <picture>
+    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/chart?repos=getcompanion-ai/feynman&type=date&theme=dark&legend=top-left" />
+    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/chart?repos=getcompanion-ai/feynman&type=date&legend=top-left" />
+    <img alt="Star History Chart" src="https://api.star-history.com/chart?repos=getcompanion-ai/feynman&type=date&legend=top-left" />
+  </picture>
+</a>
+
+---
+
 ### Contributing

 See [CONTRIBUTING.md](CONTRIBUTING.md) for the full contributor guide.
@@ -3,6 +3,8 @@ import { resolve } from "node:path";
 import { pathToFileURL } from "node:url";

 const MIN_NODE_VERSION = "20.19.0";
+const MAX_NODE_MAJOR = 24;
+const PREFERRED_NODE_MAJOR = 22;

 function parseNodeVersion(version) {
   const [major = "0", minor = "0", patch = "0"] = version.replace(/^v/, "").split(".");
@@ -19,12 +21,15 @@ function compareNodeVersions(left, right) {
   return left.patch - right.patch;
 }

-if (compareNodeVersions(parseNodeVersion(process.versions.node), parseNodeVersion(MIN_NODE_VERSION)) < 0) {
+const parsedNodeVersion = parseNodeVersion(process.versions.node);
+if (compareNodeVersions(parsedNodeVersion, parseNodeVersion(MIN_NODE_VERSION)) < 0 || parsedNodeVersion.major > MAX_NODE_MAJOR) {
   const isWindows = process.platform === "win32";
-  console.error(`feynman requires Node.js ${MIN_NODE_VERSION} or later (detected ${process.versions.node}).`);
-  console.error(isWindows
-    ? "Install a newer Node.js from https://nodejs.org, or use the standalone installer:"
-    : "Switch to Node 20 with `nvm install 20 && nvm use 20`, or use the standalone installer:");
+  console.error(`feynman supports Node.js ${MIN_NODE_VERSION} through ${MAX_NODE_MAJOR}.x (detected ${process.versions.node}).`);
+  console.error(parsedNodeVersion.major > MAX_NODE_MAJOR
+    ? "This newer Node release is not supported yet because native Pi packages may fail to build."
+    : isWindows
+      ? "Install a supported Node.js release from https://nodejs.org, or use the standalone installer:"
+      : `Switch to a supported Node release with \`nvm install ${PREFERRED_NODE_MAJOR} && nvm use ${PREFERRED_NODE_MAJOR}\`, or use the standalone installer:`);
   console.error(isWindows
     ? "irm https://feynman.is/install.ps1 | iex"
     : "curl -fsSL https://feynman.is/install | bash");
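The range check in the diff above can be isolated into a pure predicate. The sketch below mirrors the guard's parsing and comparison logic (names match the diff), but it is a standalone illustration rather than the shipped `bin/feynman.js` file.

```javascript
const MIN_NODE_VERSION = "20.19.0";
const MAX_NODE_MAJOR = 24;

// Parse "v22.1.0" or "22.1.0" into numeric components.
function parseNodeVersion(version) {
  const [major = "0", minor = "0", patch = "0"] = version.replace(/^v/, "").split(".");
  return { major: Number(major), minor: Number(minor), patch: Number(patch) };
}

// Standard lexicographic compare: negative if left < right.
function compareNodeVersions(left, right) {
  if (left.major !== right.major) return left.major - right.major;
  if (left.minor !== right.minor) return left.minor - right.minor;
  return left.patch - right.patch;
}

// True only inside the supported window: >= 20.19.0 and major <= 24.
function isSupportedNode(version) {
  const parsed = parseNodeVersion(version);
  return (
    compareNodeVersions(parsed, parseNodeVersion(MIN_NODE_VERSION)) >= 0 &&
    parsed.major <= MAX_NODE_MAJOR
  );
}
```

Expressing the guard as a predicate keeps both rejection branches (too old and too new) behind one boundary, which matches the `>=20.19.0 <25` engines range in `package.json`.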
@@ -11,14 +11,11 @@ import { registerServiceTierControls } from "./research-tools/service-tier.js";
 export default function researchTools(pi: ExtensionAPI): void {
   const cache: { agentSummaryPromise?: Promise<{ agents: string[]; chains: string[] }> } = {};

+  // Pi 0.66.x folds post-switch/resume lifecycle into session_start.
   pi.on("session_start", async (_event, ctx) => {
     await installFeynmanHeader(pi, ctx, cache);
   });

-  pi.on("session_switch", async (_event, ctx) => {
-    await installFeynmanHeader(pi, ctx, cache);
-  });
-
   registerAlphaTools(pi);
   registerDiscoveryCommands(pi);
   registerFeynmanModelCommand(pi);
@@ -86,9 +86,9 @@ export const cliCommandSections = [
     title: "Model Management",
     commands: [
       { usage: "feynman model list", description: "List available models in Pi auth storage." },
-      { usage: "feynman model login [id]", description: "Login to a Pi OAuth model provider." },
-      { usage: "feynman model logout [id]", description: "Logout from a Pi OAuth model provider." },
-      { usage: "feynman model set <provider/model>", description: "Set the default model." },
+      { usage: "feynman model login [id]", description: "Authenticate a model provider with OAuth or API-key setup." },
+      { usage: "feynman model logout [id]", description: "Clear stored auth for a model provider." },
+      { usage: "feynman model set <provider/model>", description: "Set the default model (also accepts provider:model)." },
       { usage: "feynman model tier [value]", description: "View or set the request service tier override." },
     ],
   },
@@ -118,7 +118,7 @@ export const legacyFlags = [
   { usage: "--alpha-login", description: "Sign in to alphaXiv and exit." },
   { usage: "--alpha-logout", description: "Clear alphaXiv auth and exit." },
   { usage: "--alpha-status", description: "Show alphaXiv auth status and exit." },
-  { usage: "--model <provider:model>", description: "Force a specific model." },
+  { usage: "--model <provider/model|provider:model>", description: "Force a specific model." },
   { usage: "--service-tier <tier>", description: "Override request service tier for this run." },
   { usage: "--thinking <level>", description: "Set thinking level: off | minimal | low | medium | high | xhigh." },
   { usage: "--cwd <path>", description: "Set the working directory for tools." },
package-lock.json (generated)

@@ -1,18 +1,19 @@
 {
   "name": "@companion-ai/feynman",
-  "version": "0.2.17",
+  "version": "0.2.20",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "@companion-ai/feynman",
-      "version": "0.2.17",
+      "version": "0.2.20",
       "hasInstallScript": true,
       "license": "MIT",
       "dependencies": {
-        "@companion-ai/alpha-hub": "^0.1.2",
-        "@mariozechner/pi-ai": "^0.64.0",
-        "@mariozechner/pi-coding-agent": "^0.64.0",
+        "@clack/prompts": "^1.2.0",
+        "@companion-ai/alpha-hub": "^0.1.3",
+        "@mariozechner/pi-ai": "^0.66.1",
+        "@mariozechner/pi-coding-agent": "^0.66.1",
         "@sinclair/typebox": "^0.34.48",
         "dotenv": "^17.3.1"
       },
@@ -780,10 +781,32 @@
         "url": "https://github.com/sponsors/Borewit"
       }
     },
+    "node_modules/@clack/core": {
+      "version": "1.2.0",
+      "resolved": "https://registry.npmjs.org/@clack/core/-/core-1.2.0.tgz",
+      "integrity": "sha512-qfxof/3T3t9DPU/Rj3OmcFyZInceqj/NVtO9rwIuJqCUgh32gwPjpFQQp/ben07qKlhpwq7GzfWpST4qdJ5Drg==",
+      "license": "MIT",
+      "dependencies": {
+        "fast-wrap-ansi": "^0.1.3",
+        "sisteransi": "^1.0.5"
+      }
+    },
+    "node_modules/@clack/prompts": {
+      "version": "1.2.0",
+      "resolved": "https://registry.npmjs.org/@clack/prompts/-/prompts-1.2.0.tgz",
+      "integrity": "sha512-4jmztR9fMqPMjz6H/UZXj0zEmE43ha1euENwkckKKel4XpSfokExPo5AiVStdHSAlHekz4d0CA/r45Ok1E4D3w==",
+      "license": "MIT",
+      "dependencies": {
+        "@clack/core": "1.2.0",
+        "fast-string-width": "^1.1.0",
+        "fast-wrap-ansi": "^0.1.3",
+        "sisteransi": "^1.0.5"
+      }
+    },
     "node_modules/@companion-ai/alpha-hub": {
-      "version": "0.1.2",
-      "resolved": "https://registry.npmjs.org/@companion-ai/alpha-hub/-/alpha-hub-0.1.2.tgz",
-      "integrity": "sha512-YAFh4B6loo7lKRjW3UFsdoiW3ZRvLdSdP7liDsHhCxY1dzfbxNU8vDAloodiK4ieDVRqMBTmG9NYbnsb4NZUGw==",
+      "version": "0.1.3",
+      "resolved": "https://registry.npmjs.org/@companion-ai/alpha-hub/-/alpha-hub-0.1.3.tgz",
+      "integrity": "sha512-g/JoqeGDCoSvkgs1ZSTYJhbTak0zVanQyoYOvf2tDgfqJ09gfkqmSGFDmiP4PkTn1bocPqywZIABgmv25x1uYA==",
       "license": "MIT",
       "dependencies": {
         "@modelcontextprotocol/sdk": "^1.27.1",
@@ -1469,21 +1492,21 @@
       }
     },
     "node_modules/@mariozechner/pi-agent-core": {
-      "version": "0.64.0",
-      "resolved": "https://registry.npmjs.org/@mariozechner/pi-agent-core/-/pi-agent-core-0.64.0.tgz",
-      "integrity": "sha512-IN/sIxWOD0v1OFVXHB605SGiZhO5XdEWG5dO8EAV08n3jz/p12o4OuYGvhGXmHhU28WXa/FGWC+FO5xiIih8Uw==",
+      "version": "0.66.1",
+      "resolved": "https://registry.npmjs.org/@mariozechner/pi-agent-core/-/pi-agent-core-0.66.1.tgz",
+      "integrity": "sha512-Nj54A7SuB/EQi8r3Gs+glFOr9wz/a9uxYFf0pCLf2DE7VmzA9O7WSejrvArna17K6auftLSdNyRRe2bIO0qezg==",
       "license": "MIT",
       "dependencies": {
-        "@mariozechner/pi-ai": "^0.64.0"
+        "@mariozechner/pi-ai": "^0.66.1"
       },
       "engines": {
         "node": ">=20.0.0"
       }
     },
     "node_modules/@mariozechner/pi-ai": {
-      "version": "0.64.0",
-      "resolved": "https://registry.npmjs.org/@mariozechner/pi-ai/-/pi-ai-0.64.0.tgz",
-      "integrity": "sha512-Z/Jnf+JSVDPLRcxJsa8XhYTJKIqKekNueaCpBLGQHgizL1F9RQ1Rur3rIfZpfXkt2cLu/AIPtOs223ueuoWaWg==",
+      "version": "0.66.1",
+      "resolved": "https://registry.npmjs.org/@mariozechner/pi-ai/-/pi-ai-0.66.1.tgz",
+      "integrity": "sha512-7IZHvpsFdKEBkTmjNrdVL7JLUJVIpha6bwTr12cZ5XyDrxij06wP6Ncpnf4HT5BXAzD5w2JnoqTOSbMEIZj3dg==",
       "license": "MIT",
       "dependencies": {
         "@anthropic-ai/sdk": "^0.73.0",
@@ -1508,15 +1531,15 @@
       }
     },
     "node_modules/@mariozechner/pi-coding-agent": {
-      "version": "0.64.0",
-      "resolved": "https://registry.npmjs.org/@mariozechner/pi-coding-agent/-/pi-coding-agent-0.64.0.tgz",
-      "integrity": "sha512-Q4tcqSqFGQtOgCtRyIp1D80Nv2if13Q2pfbnrOlaT/mix90mLcZGML9jKVnT1jGSy5GMYudU1HsS7cx53kxb0g==",
+      "version": "0.66.1",
+      "resolved": "https://registry.npmjs.org/@mariozechner/pi-coding-agent/-/pi-coding-agent-0.66.1.tgz",
+      "integrity": "sha512-cNmatT+5HvYzQ78cRhRih00wCeUTH/fFx9ecJh5AbN7axgWU+bwiZYy0cjrTsGVgMGF4xMYlPRn/Nze9JEB+/w==",
       "license": "MIT",
       "dependencies": {
         "@mariozechner/jiti": "^2.6.2",
-        "@mariozechner/pi-agent-core": "^0.64.0",
-        "@mariozechner/pi-ai": "^0.64.0",
-        "@mariozechner/pi-tui": "^0.64.0",
+        "@mariozechner/pi-agent-core": "^0.66.1",
+        "@mariozechner/pi-ai": "^0.66.1",
+        "@mariozechner/pi-tui": "^0.66.1",
         "@silvia-odwyer/photon-node": "^0.3.4",
         "ajv": "^8.17.1",
         "chalk": "^5.5.0",
@@ -1545,9 +1568,9 @@
       }
     },
     "node_modules/@mariozechner/pi-tui": {
-      "version": "0.64.0",
-      "resolved": "https://registry.npmjs.org/@mariozechner/pi-tui/-/pi-tui-0.64.0.tgz",
-      "integrity": "sha512-W1qLry9MAuN/V3YJmMv/BJa0VaYv721NkXPg/DGItdqWxuDc+1VdNbyAnRwxblNkIpXVUWL26x64BlyFXpxmkg==",
+      "version": "0.66.1",
+      "resolved": "https://registry.npmjs.org/@mariozechner/pi-tui/-/pi-tui-0.66.1.tgz",
+      "integrity": "sha512-hNFN42ebjwtfGooqoUwM+QaPR1XCyqPuueuP3aLOWS1bZ2nZP/jq8MBuGNrmMw1cgiDcotvOlSNj3BatzEOGsw==",
       "license": "MIT",
       "dependencies": {
         "@types/mime-types": "^2.1.4",
@@ -2530,9 +2553,9 @@
       "license": "MIT"
     },
     "node_modules/basic-ftp": {
-      "version": "5.2.1",
-      "resolved": "https://registry.npmjs.org/basic-ftp/-/basic-ftp-5.2.1.tgz",
-      "integrity": "sha512-0yaL8JdxTknKDILitVpfYfV2Ob6yb3udX/hK97M7I3jOeznBNxQPtVvTUtnhUkyHlxFWyr5Lvknmgzoc7jf+1Q==",
+      "version": "5.2.2",
+      "resolved": "https://registry.npmjs.org/basic-ftp/-/basic-ftp-5.2.2.tgz",
+      "integrity": "sha512-1tDrzKsdCg70WGvbFss/ulVAxupNauGnOlgpyjKzeQxzyllBLS0CGLV7tjIXTK3ZQA9/FBEm9qyFFN1bciA6pw==",
       "license": "MIT",
       "engines": {
         "node": ">=10.0.0"
@@ -3206,6 +3229,21 @@
       "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==",
       "license": "MIT"
     },
+    "node_modules/fast-string-truncated-width": {
+      "version": "1.2.1",
+      "resolved": "https://registry.npmjs.org/fast-string-truncated-width/-/fast-string-truncated-width-1.2.1.tgz",
+      "integrity": "sha512-Q9acT/+Uu3GwGj+5w/zsGuQjh9O1TyywhIwAxHudtWrgF09nHOPrvTLhQevPbttcxjr/SNN7mJmfOw/B1bXgow==",
+      "license": "MIT"
+    },
+    "node_modules/fast-string-width": {
+      "version": "1.1.0",
+      "resolved": "https://registry.npmjs.org/fast-string-width/-/fast-string-width-1.1.0.tgz",
+      "integrity": "sha512-O3fwIVIH5gKB38QNbdg+3760ZmGz0SZMgvwJbA1b2TGXceKE6A2cOlfogh1iw8lr049zPyd7YADHy+B7U4W9bQ==",
+      "license": "MIT",
+      "dependencies": {
+        "fast-string-truncated-width": "^1.2.0"
+      }
+    },
     "node_modules/fast-uri": {
       "version": "3.1.0",
       "resolved": "https://registry.npmjs.org/fast-uri/-/fast-uri-3.1.0.tgz",
@@ -3222,6 +3260,15 @@
       ],
       "license": "BSD-3-Clause"
     },
+    "node_modules/fast-wrap-ansi": {
+      "version": "0.1.6",
+      "resolved": "https://registry.npmjs.org/fast-wrap-ansi/-/fast-wrap-ansi-0.1.6.tgz",
+      "integrity": "sha512-HlUwET7a5gqjURj70D5jl7aC3Zmy4weA1SHUfM0JFI0Ptq987NH2TwbBFLoERhfwk+E+eaq4EK3jXoT+R3yp3w==",
+      "license": "MIT",
+      "dependencies": {
+        "fast-string-width": "^1.1.0"
+      }
+    },
     "node_modules/fast-xml-builder": {
       "version": "1.1.4",
       "resolved": "https://registry.npmjs.org/fast-xml-builder/-/fast-xml-builder-1.1.4.tgz",
@@ -3844,9 +3891,9 @@
       }
     },
     "node_modules/koffi": {
-      "version": "2.15.2",
-      "resolved": "https://registry.npmjs.org/koffi/-/koffi-2.15.2.tgz",
-      "integrity": "sha512-r9tjJLVRSOhCRWdVyQlF3/Ugzeg13jlzS4czS82MAgLff4W+BcYOW7g8Y62t9O5JYjYOLAjAovAZDNlDfZNu+g==",
+      "version": "2.15.6",
+      "resolved": "https://registry.npmjs.org/koffi/-/koffi-2.15.6.tgz",
+      "integrity": "sha512-WQBpM5uo74UQ17UpsFN+PUOrQQg4/nYdey4SGVluQun2drYYfePziLLWdSmFb4wSdWlJC1aimXQnjhPCheRKuw==",
       "hasInstallScript": true,
       "license": "MIT",
       "optional": true,
@@ -4611,6 +4658,12 @@
       "integrity": "sha512-wnD2ZE+l+SPC/uoS0vXeE9L1+0wuaMqKlfz9AMUo38JsyLSBWSFcHR1Rri62LZc12vLr1gb3jl7iwQhgwpAbGQ==",
       "license": "ISC"
     },
+    "node_modules/sisteransi": {
+      "version": "1.0.5",
+      "resolved": "https://registry.npmjs.org/sisteransi/-/sisteransi-1.0.5.tgz",
+      "integrity": "sha512-bLGGlR1QxBcynn2d5YmDX4MGjlZvy2MRBDRNHLJ8VI6l6+9FUiyTFNJ0IveOSP0bcXgVDPRcfGqA0pjaqUpfVg==",
+      "license": "MIT"
+    },
     "node_modules/smart-buffer": {
       "version": "4.2.0",
       "resolved": "https://registry.npmjs.org/smart-buffer/-/smart-buffer-4.2.0.tgz",
package.json

@@ -1,11 +1,11 @@
 {
   "name": "@companion-ai/feynman",
-  "version": "0.2.17",
+  "version": "0.2.20",
   "description": "Research-first CLI agent built on Pi and alphaXiv",
   "license": "MIT",
   "type": "module",
   "engines": {
-    "node": ">=20.19.0"
+    "node": ">=20.19.0 <25"
   },
   "bin": {
     "feynman": "bin/feynman.js"
@@ -59,14 +59,15 @@
     ]
   },
   "dependencies": {
-    "@companion-ai/alpha-hub": "^0.1.2",
-    "@mariozechner/pi-ai": "^0.64.0",
-    "@mariozechner/pi-coding-agent": "^0.64.0",
+    "@clack/prompts": "^1.2.0",
+    "@companion-ai/alpha-hub": "^0.1.3",
+    "@mariozechner/pi-ai": "^0.66.1",
+    "@mariozechner/pi-coding-agent": "^0.66.1",
     "@sinclair/typebox": "^0.34.48",
     "dotenv": "^17.3.1"
   },
   "overrides": {
-    "basic-ftp": "5.2.1",
+    "basic-ftp": "5.2.2",
     "@modelcontextprotocol/sdk": {
       "@hono/node-server": "1.19.13",
       "hono": "4.12.12"
@@ -79,7 +80,7 @@
     "proxy-agent": {
       "pac-proxy-agent": {
         "get-uri": {
-          "basic-ftp": "5.2.1"
+          "basic-ftp": "5.2.2"
         }
       }
     },
@@ -9,7 +9,7 @@ Audit the paper and codebase for: $@
 Derive a short slug from the audit target (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.

 Requirements:
-- Before starting, outline the audit plan: which paper, which repo, which claims to check. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user and confirm before proceeding.
+- Before starting, outline the audit plan: which paper, which repo, which claims to check. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
 - Use the `researcher` subagent for evidence gathering and the `verifier` subagent to verify sources and add inline citations when the audit is non-trivial.
 - Compare claimed methods, defaults, metrics, and data handling against the actual code.
 - Call out missing code, mismatches, ambiguous defaults, and reproduction risks.
@@ -9,7 +9,7 @@ Compare sources for: $@
|
|||||||
Derive a short slug from the comparison topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
|
Derive a short slug from the comparison topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
|
||||||
|
|
||||||
Requirements:
|
Requirements:
|
||||||
- Before starting, outline the comparison plan: which sources to compare, which dimensions to evaluate, expected output structure. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user and confirm before proceeding.
|
- Before starting, outline the comparison plan: which sources to compare, which dimensions to evaluate, expected output structure. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
|
||||||
- Use the `researcher` subagent to gather source material when the comparison set is broad, and the `verifier` subagent to verify sources and add inline citations to the final matrix.
|
- Use the `researcher` subagent to gather source material when the comparison set is broad, and the `verifier` subagent to verify sources and add inline citations to the final matrix.
|
||||||
- Build a comparison matrix covering: source, key claim, evidence type, caveats, confidence.
|
- Build a comparison matrix covering: source, key claim, evidence type, caveats, confidence.
|
||||||
- Generate charts with `pi-charts` when the comparison involves quantitative metrics. Use Mermaid for method or architecture comparisons.
|
- Generate charts with `pi-charts` when the comparison involves quantitative metrics. Use Mermaid for method or architecture comparisons.
|
||||||
|
|||||||
@@ -51,7 +51,7 @@ If `CHANGELOG.md` exists, read the most recent relevant entries before finalizin
|
|||||||
|
|
||||||
Also save the plan with `memory_remember` (type: `fact`, key: `deepresearch.<slug>.plan`) so it survives context truncation.
|
Also save the plan with `memory_remember` (type: `fact`, key: `deepresearch.<slug>.plan`) so it survives context truncation.
|
||||||
|
|
||||||
Present the plan to the user and ask them to confirm before proceeding. If the user wants changes, revise the plan first.
|
Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting in the terminal, give them a brief chance to request plan changes before proceeding.
|
||||||
|
|
||||||
## 2. Scale decision
|
## 2. Scale decision
|
||||||
|
|
||||||
@@ -182,6 +182,15 @@ Write a provenance record alongside it as `<slug>.provenance.md`:
|
|||||||
- **Research files:** [list of intermediate <slug>-research-*.md files]
|
- **Research files:** [list of intermediate <slug>-research-*.md files]
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Before you stop, verify on disk that all of these exist:
|
||||||
|
- `outputs/.plans/<slug>.md`
|
||||||
|
- `outputs/.drafts/<slug>-draft.md`
|
||||||
|
- `<slug>-brief.md` intermediate cited brief
|
||||||
|
- `outputs/<slug>.md` or `papers/<slug>.md` final promoted deliverable
|
||||||
|
- `outputs/<slug>.provenance.md` or `papers/<slug>.provenance.md` provenance sidecar
|
||||||
|
|
||||||
|
Do not stop at `<slug>-brief.md` alone. If the cited brief exists but the promoted final output or provenance sidecar does not, create them before responding.
|
||||||
|
|
||||||
## Background execution
|
## Background execution
|
||||||
|
|
||||||
If the user wants unattended execution or the sweep will clearly take a while:
|
If the user wants unattended execution or the sweep will clearly take a while:
|
||||||
|
|||||||
@@ -9,11 +9,12 @@ Write a paper-style draft for: $@
|
|||||||
Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
|
Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
|
||||||
|
|
||||||
Requirements:
|
Requirements:
|
||||||
- Before writing, outline the draft structure: proposed title, sections, key claims to make, source material to draw from, and a verification log for the critical claims, figures, and calculations. Write the outline to `outputs/.plans/<slug>.md`. Present the outline to the user and confirm before proceeding.
|
- Before writing, outline the draft structure: proposed title, sections, key claims to make, source material to draw from, and a verification log for the critical claims, figures, and calculations. Write the outline to `outputs/.plans/<slug>.md`. Present the outline to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
|
||||||
- Use the `writer` subagent when the draft should be produced from already-collected notes, then use the `verifier` subagent to add inline citations and verify sources.
|
- Use the `writer` subagent when the draft should be produced from already-collected notes, then use the `verifier` subagent to add inline citations and verify sources.
|
||||||
- Include at minimum: title, abstract, problem statement, related work, method or synthesis, evidence or experiments, limitations, conclusion.
|
- Include at minimum: title, abstract, problem statement, related work, method or synthesis, evidence or experiments, limitations, conclusion.
|
||||||
- Use clean Markdown with LaTeX where equations materially help.
|
- Use clean Markdown with LaTeX where equations materially help.
|
||||||
- Generate charts with `pi-charts` for quantitative data, benchmarks, and comparisons. Use Mermaid for architectures and pipelines. Every figure needs a caption.
|
- Follow the system prompt's provenance rules for all results, figures, charts, images, tables, benchmarks, and quantitative comparisons. If evidence is missing, leave a placeholder or proposed experimental plan instead of claiming an outcome.
|
||||||
|
- Generate charts with `pi-charts` only for source-backed quantitative data, benchmarks, and comparisons. Use Mermaid for architectures and pipelines only when the structure is supported by sources. Every figure needs a provenance-bearing caption.
|
||||||
- Before delivery, sweep the draft for any claim that sounds stronger than its support. Mark tentative results as tentative and remove unsupported numerics instead of letting the verifier discover them later.
|
- Before delivery, sweep the draft for any claim that sounds stronger than its support. Mark tentative results as tentative and remove unsupported numerics instead of letting the verifier discover them later.
|
||||||
- Save exactly one draft to `papers/<slug>.md`.
|
- Save exactly one draft to `papers/<slug>.md`.
|
||||||
- End with a `Sources` appendix with direct URLs for all primary references.
|
- End with a `Sources` appendix with direct URLs for all primary references.
|
||||||
|
|||||||
@@ -10,9 +10,9 @@ Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 wo
|
|||||||
|
|
||||||
## Workflow
|
## Workflow
|
||||||
|
|
||||||
1. **Plan** — Outline the scope: key questions, source types to search (papers, web, repos), time period, expected sections, and a small task ledger plus verification log. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user and confirm before proceeding.
|
1. **Plan** — Outline the scope: key questions, source types to search (papers, web, repos), time period, expected sections, and a small task ledger plus verification log. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
|
||||||
2. **Gather** — Use the `researcher` subagent when the sweep is wide enough to benefit from delegated paper triage before synthesis. For narrow topics, search directly. Researcher outputs go to `<slug>-research-*.md`. Do not silently skip assigned questions; mark them `done`, `blocked`, or `superseded`.
|
2. **Gather** — Use the `researcher` subagent when the sweep is wide enough to benefit from delegated paper triage before synthesis. For narrow topics, search directly. Researcher outputs go to `<slug>-research-*.md`. Do not silently skip assigned questions; mark them `done`, `blocked`, or `superseded`.
|
||||||
3. **Synthesize** — Separate consensus, disagreements, and open questions. When useful, propose concrete next experiments or follow-up reading. Generate charts with `pi-charts` for quantitative comparisons across papers and Mermaid diagrams for taxonomies or method pipelines. Before finishing the draft, sweep every strong claim against the verification log and downgrade anything that is inferred or single-source critical.
|
3. **Synthesize** — Separate consensus, disagreements, and open questions. When useful, propose concrete next experiments or follow-up reading. Generate charts with `pi-charts` for quantitative comparisons across papers and Mermaid diagrams for taxonomies or method pipelines. Before finishing the draft, sweep every strong claim against the verification log and downgrade anything that is inferred or single-source critical.
|
||||||
4. **Cite** — Spawn the `verifier` agent to add inline citations and verify every source URL in the draft.
|
4. **Cite** — Spawn the `verifier` agent to add inline citations and verify every source URL in the draft.
|
||||||
5. **Verify** — Spawn the `reviewer` agent to check the cited draft for unsupported claims, logical gaps, zombie sections, and single-source critical findings. Fix FATAL issues before delivering. Note MAJOR issues in Open Questions. If FATAL issues were found, run one more verification pass after the fixes.
|
5. **Verify** — Spawn the `reviewer` agent to check the cited draft for unsupported claims, logical gaps, zombie sections, and single-source critical findings. Fix FATAL issues before delivering. Note MAJOR issues in Open Questions. If FATAL issues were found, run one more verification pass after the fixes.
|
||||||
6. **Deliver** — Save the final literature review to `outputs/<slug>.md`. Write a provenance record alongside it as `outputs/<slug>.provenance.md` listing: date, sources consulted vs. accepted vs. rejected, verification status, and intermediate research files used.
|
6. **Deliver** — Save the final literature review to `outputs/<slug>.md`. Write a provenance record alongside it as `outputs/<slug>.provenance.md` listing: date, sources consulted vs. accepted vs. rejected, verification status, and intermediate research files used. Before you stop, verify on disk that both files exist; do not stop at an intermediate cited draft alone.
|
||||||
|
|||||||
@@ -9,7 +9,7 @@ Review this AI research artifact: $@
|
|||||||
Derive a short slug from the artifact name (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
|
Derive a short slug from the artifact name (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
|
||||||
|
|
||||||
Requirements:
|
Requirements:
|
||||||
- Before starting, outline what will be reviewed, the review criteria (novelty, empirical rigor, baselines, reproducibility, etc.), and any verification-specific checks needed for claims, figures, and reported metrics. Present the plan to the user and confirm before proceeding.
|
- Before starting, outline what will be reviewed, the review criteria (novelty, empirical rigor, baselines, reproducibility, etc.), and any verification-specific checks needed for claims, figures, and reported metrics. Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
|
||||||
- Spawn a `researcher` subagent to gather evidence on the artifact — inspect the paper, code, cited work, and any linked experimental artifacts. Save to `<slug>-research.md`.
|
- Spawn a `researcher` subagent to gather evidence on the artifact — inspect the paper, code, cited work, and any linked experimental artifacts. Save to `<slug>-research.md`.
|
||||||
- Spawn a `reviewer` subagent with `<slug>-research.md` to produce the final peer review with inline annotations.
|
- Spawn a `reviewer` subagent with `<slug>-research.md` to produce the final peer review with inline annotations.
|
||||||
- For small or simple artifacts where evidence gathering is overkill, run the `reviewer` subagent directly instead.
|
- For small or simple artifacts where evidence gathering is overkill, run the `reviewer` subagent directly instead.
|
||||||
|
|||||||
165
prompts/summarize.md
Normal file
@@ -0,0 +1,165 @@
|
|||||||
|
---
|
||||||
|
description: Summarize any URL, local file, or PDF using the RLM pattern — source stored on disk, never injected raw into context.
|
||||||
|
args: <source>
|
||||||
|
section: Research Workflows
|
||||||
|
topLevelCli: true
|
||||||
|
---
|
||||||
|
Summarize the following source: $@
|
||||||
|
|
||||||
|
Derive a short slug from the source filename or URL domain (lowercase, hyphens, no filler words, ≤5 words — e.g. `attention-is-all-you-need`). Use this slug for all files in this run.
|
||||||
|
|
||||||
|
## Why this uses the RLM pattern
|
||||||
|
|
||||||
|
Standard summarization injects the full document into context. Above ~15k tokens, early content degrades as the window fills (context rot). This workflow keeps the document on disk as an external variable and reads only bounded windows — so context pressure is proportional to the window size, not the document size.
|
||||||
|
|
||||||
|
Tier 1 (< 8k chars) is a deliberate exception: direct injection is safe at ~2k tokens and windowed reading would add unnecessary friction.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Step 1 — Fetch, validate, measure
|
||||||
|
|
||||||
|
Run all guards before any tier logic. A failure here is cheap; a failure mid-Tier-3 is not.
|
||||||
|
|
||||||
|
- **GitHub repo URL** (`https://github.com/owner/repo` — exactly 4 slashes): fetch the raw README instead. Try `https://raw.githubusercontent.com/{owner}/{repo}/main/README.md`, then `/master/README.md`. A repo HTML page is not the document the user wants to summarize.
|
||||||
|
- **Remote URL**: fetch to disk with `curl -sL -o outputs/.notes/<slug>-raw.txt <url>`. Do NOT use fetch_content — its return value enters context directly, bypassing the RLM external-variable principle.
|
||||||
|
- **Local file or PDF**: copy or extract to `outputs/.notes/<slug>-raw.txt`. For PDFs, extract text via `pdftotext` or equivalent before measuring.
|
||||||
|
- **Empty or failed fetch**: if the file is < 50 bytes after fetching, stop and surface the error to the user — do not proceed to tier selection.
|
||||||
|
- **Binary content**: if the file is > 1 KB but contains < 100 readable text characters, stop and tell the user the content appears binary or unextracted.
|
||||||
|
- **Existing output**: if `outputs/<slug>-summary.md` already exists, ask the user whether to overwrite or use a different slug. Do not proceed until confirmed.
|
||||||
|
|
||||||
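The guards above can be sketched as two small helpers. This is a minimal illustration, not part of the workflow itself: the function names, regex, and fixture values are assumptions introduced here for clarity.

```python
import re

def raw_readme_candidates(url):
    # Matches exactly https://github.com/{owner}/{repo} — no extra path segments.
    m = re.fullmatch(r"https://github\.com/([^/]+)/([^/]+)/?", url)
    if not m:
        return []
    owner, repo = m.groups()
    return [
        f"https://raw.githubusercontent.com/{owner}/{repo}/{branch}/README.md"
        for branch in ("main", "master")
    ]

def looks_binary(data):
    # Guard from above: > 1 KB fetched but fewer than 100 readable text characters.
    printable = sum(1 for b in data if 32 <= b < 127 or b in (9, 10, 13))
    return len(data) > 1024 and printable < 100

print(raw_readme_candidates("https://github.com/octocat/hello-world")[0])
# → https://raw.githubusercontent.com/octocat/hello-world/main/README.md
```

A repo URL with extra path segments (e.g. `/issues`) falls through to the normal remote-URL path, which matches the "exactly 4 slashes" rule.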
|
Measure decoded text characters (not bytes — UTF-8 multi-byte chars would overcount). Log: `[summarize] source=<source> slug=<slug> chars=<count>`
|
||||||
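The character measurement can be sketched as follows; the fixture write at the top is only to make the example self-contained and stands in for the Step 1 fetch.

```python
import os

os.makedirs("outputs/.notes", exist_ok=True)
# Hypothetical fixture standing in for a fetched document.
with open("outputs/.notes/example-raw.txt", "w", encoding="utf-8") as f:
    f.write("héllo " * 3)

# Measure decoded characters, not bytes: "é" is 1 char but 2 UTF-8 bytes.
with open("outputs/.notes/example-raw.txt", encoding="utf-8") as f:
    chars = len(f.read())

print(f"[summarize] source=example slug=example chars={chars}")  # chars=18, not the 21 bytes on disk
```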
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Step 2 — Choose tier
|
||||||
|
|
||||||
|
| Chars | Tier | Strategy |
|
||||||
|
|---|---|---|
|
||||||
|
| < 8 000 | 1 | Direct read — full content enters context (safe at ~2k tokens) |
|
||||||
|
| 8 000 – 60 000 | 2 | RLM-lite — windowed bash extraction, progressive notes to disk |
|
||||||
|
| > 60 000 | 3 | Full RLM — bash chunking + parallel researcher subagents |
|
||||||
|
|
||||||
|
Log: `[summarize] tier=<N> chars=<count>`
|
||||||
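The thresholds above reduce to a tiny selector, sketched here for reference (boundary handling — 8 000 falls in Tier 2, 60 000 in Tier 2, 60 001 in Tier 3 — follows the table):

```python
def choose_tier(chars):
    if chars < 8_000:
        return 1   # direct read
    if chars <= 60_000:
        return 2   # RLM-lite windowed read
    return 3       # full RLM parallel chunks

print(f"[summarize] tier={choose_tier(45_000)} chars=45000")  # → tier=2 chars=45000
```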
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Tier 1 — Direct read
|
||||||
|
|
||||||
|
Read `outputs/.notes/<slug>-raw.txt` in full. Summarize directly using the output format. Write to `outputs/<slug>-summary.md`.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Tier 2 — RLM-lite windowed read
|
||||||
|
|
||||||
|
The document stays on disk. Extract 6 000-char windows with a short Python script:
|
||||||
|
|
||||||
|
```python
|
||||||
|
# WHY read-then-slice: the read tool uses line offsets, not char offsets.
|
||||||
|
# f.seek in text mode does not accept character offsets, so read the decoded string once and slice it.
|
||||||
|
with open("outputs/.notes/<slug>-raw.txt", encoding="utf-8") as f:
|
||||||
|
    text = f.read()
|
||||||
|
window = text[n * 6000 : (n + 1) * 6000]
|
||||||
|
```
|
||||||
|
|
||||||
|
For each window:
|
||||||
|
1. Extract key claims and evidence.
|
||||||
|
2. Append to `outputs/.notes/<slug>-notes.md` before reading the next window. This is the checkpoint: if the session is interrupted, processed windows survive.
|
||||||
|
3. Log: `[summarize] window <N>/<total> done`
|
||||||
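The per-window loop above, sketched end-to-end. The slug, fixture, and note text are placeholders; the point is the checkpoint order — notes are flushed to disk before the next window is read.

```python
import math
import os

os.makedirs("outputs/.notes", exist_ok=True)
# Hypothetical fixture so the loop is runnable; real runs read the Step 1 fetch.
with open("outputs/.notes/example-raw.txt", "w", encoding="utf-8") as f:
    f.write("x" * 14_000)

with open("outputs/.notes/example-raw.txt", encoding="utf-8") as f:
    text = f.read()

window_size = 6000
total = math.ceil(len(text) / window_size)  # 14 000 chars -> 3 windows
for n in range(total):
    window = text[n * window_size : (n + 1) * window_size]
    # Stand-in for "extract key claims and evidence" from this window.
    notes = f"## Window {n + 1}/{total} ({len(window)} chars)\n"
    # Checkpoint: append before the next window so an interrupt loses at most one window.
    with open("outputs/.notes/example-notes.md", "a", encoding="utf-8") as out:
        out.write(notes)
    print(f"[summarize] window {n + 1}/{total} done")
```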
|
|
||||||
|
Synthesize `outputs/.notes/<slug>-notes.md` into `outputs/<slug>-summary.md`.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Tier 3 — Full RLM parallel chunks
|
||||||
|
|
||||||
|
Each chunk gets a fresh researcher subagent context window — context rot is impossible because no subagent sees more than 6 000 chars.
|
||||||
|
|
||||||
|
WHY 500-char overlap: academic papers contain multi-sentence arguments that span chunk boundaries. 500 chars (~80 words) ensures a cross-boundary claim appears fully in at least one adjacent chunk.
|
||||||
|
|
||||||
|
### 3a. Chunk the document
|
||||||
|
|
||||||
|
```python
|
||||||
|
import os
|
||||||
|
os.makedirs("outputs/.notes", exist_ok=True)
|
||||||
|
|
||||||
|
with open("outputs/.notes/<slug>-raw.txt", encoding="utf-8") as f:
|
||||||
|
text = f.read()
|
||||||
|
|
||||||
|
chunk_size, overlap = 6000, 500
|
||||||
|
chunks, i = [], 0
|
||||||
|
while i < len(text):
|
||||||
|
chunks.append(text[i : i + chunk_size])
|
||||||
|
i += chunk_size - overlap
|
||||||
|
|
||||||
|
for n, chunk in enumerate(chunks):
|
||||||
|
# Zero-pad index so files sort correctly (chunk-002 before chunk-010)
|
||||||
|
with open(f"outputs/.notes/<slug>-chunk-{n:03d}.txt", "w", encoding="utf-8") as f:
|
||||||
|
f.write(chunk)
|
||||||
|
|
||||||
|
print(f"[summarize] chunks={len(chunks)} chunk_size={chunk_size} overlap={overlap}")
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3b. Confirm before spawning
|
||||||
|
|
||||||
|
If this is an unattended or one-shot run, continue automatically. Otherwise tell the user: "Source is ~<chars> chars -> <N> chunks -> <N> researcher subagents. This may take several minutes. Proceed?" Wait for confirmation before launching Tier 3.
|
||||||
|
|
||||||
|
### 3c. Dispatch researcher subagents
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"tasks": [{
|
||||||
|
"agent": "researcher",
|
||||||
|
"task": "Read ONLY `outputs/.notes/<slug>-chunk-NNN.txt`. Extract: (1) key claims, (2) methodology or technical approach, (3) cited evidence. Do NOT use web_search or fetch external URLs — this is single-source summarization. If a claim appears to start or end mid-sentence at the file boundary, mark it BOUNDARY PARTIAL. Write to `outputs/.notes/<slug>-summary-chunk-NNN.md`.",
|
||||||
|
"output": "outputs/.notes/<slug>-summary-chunk-NNN.md"
|
||||||
|
}],
|
||||||
|
"concurrency": 4,
|
||||||
|
"failFast": false
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3d. Aggregate
|
||||||
|
|
||||||
|
After all subagents return, verify every expected `outputs/.notes/<slug>-summary-chunk-NNN.md` exists. Note any missing chunk indices — they will appear in the Coverage gaps section of the output. Do not abort on partial coverage; a partial summary with gaps noted is more useful than no summary.
|
||||||
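A sketch of the coverage check. The chunk count and paths here are illustrative: it pretends five chunks were dispatched and chunk 2 never came back.

```python
import os

os.makedirs("outputs/.notes", exist_ok=True)
# Hypothetical: chunks 0, 1, 3, 4 returned; chunk 2 is missing.
for n in (0, 1, 3, 4):
    with open(f"outputs/.notes/example-summary-chunk-{n:03d}.md", "w", encoding="utf-8") as f:
        f.write(f"notes for chunk {n}\n")

expected = [f"outputs/.notes/example-summary-chunk-{n:03d}.md" for n in range(5)]
missing = [n for n, path in enumerate(expected) if not os.path.exists(path)]
# Missing indices feed the Coverage gaps section; do not abort on partial coverage.
print(missing)  # → [2]
```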
|
|
||||||
|
When synthesizing:
|
||||||
|
- **Deduplicate**: a claim in multiple chunks is one claim — keep the most complete formulation.
|
||||||
|
- **Resolve boundary conflicts**: for adjacent-chunk contradictions, prefer the version with more supporting context.
|
||||||
|
- **Remove BOUNDARY PARTIAL markers** where a complete version exists in a neighbouring chunk.
|
||||||
|
|
||||||
|
Write to `outputs/<slug>-summary.md`.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Output format
|
||||||
|
|
||||||
|
All tiers produce the same artifact at `outputs/<slug>-summary.md`:
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
# Summary: [document title or source filename]
|
||||||
|
|
||||||
|
**Source:** [URL or file path]
|
||||||
|
**Date:** [YYYY-MM-DD]
|
||||||
|
**Tier:** [1 / 2 (N windows) / 3 (N chunks)]
|
||||||
|
|
||||||
|
## Key Claims
|
||||||
|
[3-7 most important assertions, each as a bullet]
|
||||||
|
|
||||||
|
## Methodology
|
||||||
|
[Approach, dataset, evaluation, baselines — omit for non-research documents]
|
||||||
|
|
||||||
|
## Limitations
|
||||||
|
[What the source explicitly flags as weak, incomplete, or out of scope]
|
||||||
|
|
||||||
|
## Verdict
|
||||||
|
[One paragraph: what this document establishes, its credibility, who should read it]
|
||||||
|
|
||||||
|
## Sources
|
||||||
|
1. [Title or filename] — [URL or file path]
|
||||||
|
|
||||||
|
## Coverage gaps *(Tier 3 only — omit if all chunks succeeded)*
|
||||||
|
[Missing chunk indices and their approximate byte ranges]
|
||||||
|
```
|
||||||
|
|
||||||
|
Before you stop, verify on disk that `outputs/<slug>-summary.md` exists.
|
||||||
|
|
||||||
|
The `Sources` section contains only the single source confirmed reachable in Step 1. No verifier subagent is needed — there are no URLs constructed from memory to verify.
|
||||||
@@ -9,7 +9,7 @@ Create a research watch for: $@
|
|||||||
Derive a short slug from the watch topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
|
Derive a short slug from the watch topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
|
||||||
|
|
||||||
Requirements:
|
Requirements:
|
||||||
- Before starting, outline the watch plan: what to monitor, what signals matter, what counts as a meaningful change, and the check frequency. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user and confirm before proceeding.
|
- Before starting, outline the watch plan: what to monitor, what signals matter, what counts as a meaningful change, and the check frequency. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
|
||||||
- Start with a baseline sweep of the topic.
|
- Start with a baseline sweep of the topic.
|
||||||
- Use `schedule_prompt` to create the recurring or delayed follow-up instead of merely promising to check later.
|
- Use `schedule_prompt` to create the recurring or delayed follow-up instead of merely promising to check later.
|
||||||
- Save exactly one baseline artifact to `outputs/<slug>-baseline.md`.
|
- Save exactly one baseline artifact to `outputs/<slug>-baseline.md`.
|
||||||
|
|||||||
@@ -1,4 +1,6 @@
|
|||||||
const MIN_NODE_VERSION = "20.19.0";
|
const MIN_NODE_VERSION = "20.19.0";
|
||||||
|
const MAX_NODE_MAJOR = 24;
|
||||||
|
const PREFERRED_NODE_MAJOR = 22;
|
||||||
|
|
||||||
function parseNodeVersion(version) {
|
function parseNodeVersion(version) {
|
||||||
const [major = "0", minor = "0", patch = "0"] = version.replace(/^v/, "").split(".");
|
const [major = "0", minor = "0", patch = "0"] = version.replace(/^v/, "").split(".");
|
||||||
@@ -16,16 +18,20 @@ function compareNodeVersions(left, right) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
function isSupportedNodeVersion(version = process.versions.node) {
|
function isSupportedNodeVersion(version = process.versions.node) {
|
||||||
return compareNodeVersions(parseNodeVersion(version), parseNodeVersion(MIN_NODE_VERSION)) >= 0;
|
const parsed = parseNodeVersion(version);
|
||||||
|
return compareNodeVersions(parsed, parseNodeVersion(MIN_NODE_VERSION)) >= 0 && parsed.major <= MAX_NODE_MAJOR;
|
||||||
}
|
}
|
||||||
|
|
||||||
function getUnsupportedNodeVersionLines(version = process.versions.node) {
|
function getUnsupportedNodeVersionLines(version = process.versions.node) {
|
||||||
const isWindows = process.platform === "win32";
|
const isWindows = process.platform === "win32";
|
||||||
|
const parsed = parseNodeVersion(version);
|
||||||
return [
|
return [
|
||||||
`feynman requires Node.js ${MIN_NODE_VERSION} or later (detected ${version}).`,
|
`feynman supports Node.js ${MIN_NODE_VERSION} through ${MAX_NODE_MAJOR}.x (detected ${version}).`,
|
||||||
isWindows
|
parsed.major > MAX_NODE_MAJOR
|
||||||
? "Install a newer Node.js from https://nodejs.org, or use the standalone installer:"
|
? "This newer Node release is not supported yet because native Pi packages may fail to build."
|
||||||
: "Switch to Node 20 with `nvm install 20 && nvm use 20`, or use the standalone installer:",
|
: isWindows
|
||||||
|
? "Install a supported Node.js release from https://nodejs.org, or use the standalone installer:"
|
||||||
|
: `Switch to a supported Node release with \`nvm install ${PREFERRED_NODE_MAJOR} && nvm use ${PREFERRED_NODE_MAJOR}\`, or use the standalone installer:`,
|
||||||
isWindows
|
isWindows
|
||||||
? "irm https://feynman.is/install.ps1 | iex"
|
? "irm https://feynman.is/install.ps1 | iex"
|
||||||
: "curl -fsSL https://feynman.is/install | bash",
|
: "curl -fsSL https://feynman.is/install | bash",
|
||||||
|
|||||||
@@ -110,7 +110,7 @@ This usually means the release exists, but not all platform bundles were uploade
|
|||||||
Workarounds:
|
Workarounds:
|
||||||
- try again after the release finishes publishing
|
- try again after the release finishes publishing
|
||||||
- pass the latest published version explicitly, e.g.:
|
- pass the latest published version explicitly, e.g.:
|
||||||
& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.16
|
& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.20
|
||||||
"@
|
"@
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
@@ -261,7 +261,7 @@ This usually means the release exists, but not all platform bundles were uploade
|
|||||||
Workarounds:
|
Workarounds:
|
||||||
- try again after the release finishes publishing
|
- try again after the release finishes publishing
|
||||||
- pass the latest published version explicitly, e.g.:
|
- pass the latest published version explicitly, e.g.:
|
||||||
curl -fsSL https://feynman.is/install | bash -s -- 0.2.16
|
curl -fsSL https://feynman.is/install | bash -s -- 0.2.20
|
||||||
EOF
|
EOF
|
||||||
exit 1
|
exit 1
|
||||||
fi
|
fi
|
||||||
|
|||||||
1
scripts/lib/alpha-hub-auth-patch.d.mts
Normal file
@@ -0,0 +1 @@
|
|||||||
|
export declare function patchAlphaHubAuthSource(source: string): string;
|
||||||
66
scripts/lib/alpha-hub-auth-patch.mjs
Normal file
@@ -0,0 +1,66 @@
|
|||||||
|
const LEGACY_SUCCESS_HTML = "'<html><body><h2>Logged in to Alpha Hub</h2><p>You can close this tab.</p></body></html>'";
|
||||||
|
const LEGACY_ERROR_HTML = "'<html><body><h2>Login failed</h2><p>You can close this tab.</p></body></html>'";
|
||||||
|
|
||||||
|
const bodyAttr = 'style="font-family:system-ui,sans-serif;text-align:center;padding-top:20vh;background:#050a08;color:#f0f5f2"';
|
||||||
|
const logo = '<h1 style="font-family:monospace;font-size:48px;color:#34d399;margin:0">feynman</h1>';
|
||||||
|
|
||||||
|
const FEYNMAN_SUCCESS_HTML = `'<html><body ${bodyAttr}>${logo}<h2 style="color:#34d399;margin-top:16px">Logged in</h2><p style="color:#8aaa9a">You can close this tab.</p></body></html>'`;
|
||||||
|
const FEYNMAN_ERROR_HTML = `'<html><body ${bodyAttr}>${logo}<h2 style="color:#ef4444;margin-top:16px">Login failed</h2><p style="color:#8aaa9a">You can close this tab.</p></body></html>'`;
|
||||||
|
|
||||||
|
const CURRENT_OPEN_BROWSER = [
|
||||||
|
"function openBrowser(url) {",
|
||||||
|
" try {",
|
||||||
|
" const plat = platform();",
|
||||||
|
" if (plat === 'darwin') execSync(`open \"${url}\"`);",
|
||||||
|
" else if (plat === 'linux') execSync(`xdg-open \"${url}\"`);",
|
||||||
|
" else if (plat === 'win32') execSync(`start \"\" \"${url}\"`);",
|
||||||
|
" } catch {}",
|
||||||
|
"}",
|
||||||
|
].join("\n");
|
||||||
|
|
||||||
|
const PATCHED_OPEN_BROWSER = [
|
||||||
|
"function openBrowser(url) {",
|
||||||
|
" try {",
|
||||||
|
" const plat = platform();",
|
||||||
|
" const isWsl = plat === 'linux' && (Boolean(process.env.WSL_DISTRO_NAME) || Boolean(process.env.WSL_INTEROP));",
|
||||||
|
" if (plat === 'darwin') execSync(`open \"${url}\"`);",
|
||||||
|
" else if (isWsl) {",
|
||||||
|
" try {",
|
||||||
|
" execSync(`wslview \"${url}\"`);",
|
||||||
|
" } catch {",
|
||||||
|
" execSync(`cmd.exe /c start \"\" \"${url}\"`);",
|
||||||
|
" }",
|
||||||
|
" }",
|
||||||
|
" else if (plat === 'linux') execSync(`xdg-open \"${url}\"`);",
|
||||||
|
" else if (plat === 'win32') execSync(`cmd /c start \"\" \"${url}\"`);",
|
||||||
|
" } catch {}",
|
||||||
|
"}",
|
||||||
|
].join("\n");
|
||||||
|
|
||||||
|
const LEGACY_WIN_OPEN = "else if (plat === 'win32') execSync(`start \"${url}\"`);";
|
||||||
|
const FIXED_WIN_OPEN = "else if (plat === 'win32') execSync(`cmd /c start \"\" \"${url}\"`);";
|
||||||
|
|
||||||
|
const OPEN_BROWSER_LOG = "process.stderr.write('Opening browser for alphaXiv login...\\n');";
|
||||||
|
const OPEN_BROWSER_LOG_WITH_URL = "process.stderr.write(`Opening browser for alphaXiv login...\\nAuth URL: ${authUrl.toString()}\\n`);";
|
||||||
|
|
||||||
|
export function patchAlphaHubAuthSource(source) {
|
||||||
|
let patched = source;
|
||||||
|
|
||||||
|
if (patched.includes(LEGACY_SUCCESS_HTML)) {
|
||||||
|
patched = patched.replace(LEGACY_SUCCESS_HTML, FEYNMAN_SUCCESS_HTML);
|
||||||
|
}
|
||||||
|
if (patched.includes(LEGACY_ERROR_HTML)) {
|
||||||
|
patched = patched.replace(LEGACY_ERROR_HTML, FEYNMAN_ERROR_HTML);
|
||||||
|
}
|
||||||
|
if (patched.includes(CURRENT_OPEN_BROWSER)) {
|
||||||
|
patched = patched.replace(CURRENT_OPEN_BROWSER, PATCHED_OPEN_BROWSER);
|
||||||
|
}
|
||||||
|
if (patched.includes(LEGACY_WIN_OPEN)) {
|
||||||
|
patched = patched.replace(LEGACY_WIN_OPEN, FIXED_WIN_OPEN);
|
||||||
|
}
|
||||||
|
if (patched.includes(OPEN_BROWSER_LOG)) {
|
||||||
|
patched = patched.replace(OPEN_BROWSER_LOG, OPEN_BROWSER_LOG_WITH_URL);
|
||||||
|
}
|
||||||
|
|
||||||
|
return patched;
|
||||||
|
}
|
||||||
@@ -83,6 +83,66 @@ export function patchPiSubagentsSource(relativePath, source) {
 				'const userDir = path.join(os.homedir(), ".pi", "agent", "agents");',
 				'const userDir = path.join(resolvePiAgentDir(), "agents");',
 			);
+			patched = replaceAll(
+				patched,
+				[
+					'export function discoverAgents(cwd: string, scope: AgentScope): AgentDiscoveryResult {',
+					'\tconst userDirOld = path.join(os.homedir(), ".pi", "agent", "agents");',
+					'\tconst userDirNew = path.join(os.homedir(), ".agents");',
+				].join("\n"),
+				[
+					'export function discoverAgents(cwd: string, scope: AgentScope): AgentDiscoveryResult {',
+					'\tconst userDir = path.join(resolvePiAgentDir(), "agents");',
+				].join("\n"),
+			);
+			patched = replaceAll(
+				patched,
+				[
+					'\tconst userAgentsOld = scope === "project" ? [] : loadAgentsFromDir(userDirOld, "user");',
+					'\tconst userAgentsNew = scope === "project" ? [] : loadAgentsFromDir(userDirNew, "user");',
+					'\tconst userAgents = [...userAgentsOld, ...userAgentsNew];',
+				].join("\n"),
+				'\tconst userAgents = scope === "project" ? [] : loadAgentsFromDir(userDir, "user");',
+			);
+			patched = replaceAll(
+				patched,
+				[
+					'const userDirOld = path.join(os.homedir(), ".pi", "agent", "agents");',
+					'const userDirNew = path.join(os.homedir(), ".agents");',
+				].join("\n"),
+				'const userDir = path.join(resolvePiAgentDir(), "agents");',
+			);
+			patched = replaceAll(
+				patched,
+				[
+					'\tconst user = [',
+					'\t\t...loadAgentsFromDir(userDirOld, "user"),',
+					'\t\t...loadAgentsFromDir(userDirNew, "user"),',
+					'\t];',
+				].join("\n"),
+				'\tconst user = loadAgentsFromDir(userDir, "user");',
+			);
+			patched = replaceAll(
+				patched,
+				[
+					'\tconst chains = [',
+					'\t\t...loadChainsFromDir(userDirOld, "user"),',
+					'\t\t...loadChainsFromDir(userDirNew, "user"),',
+					'\t\t...(projectDir ? loadChainsFromDir(projectDir, "project") : []),',
+					'\t];',
+				].join("\n"),
+				[
+					'\tconst chains = [',
+					'\t\t...loadChainsFromDir(userDir, "user"),',
+					'\t\t...(projectDir ? loadChainsFromDir(projectDir, "project") : []),',
+					'\t];',
+				].join("\n"),
+			);
+			patched = replaceAll(
+				patched,
+				'\tconst userDir = fs.existsSync(userDirNew) ? userDirNew : userDirOld;',
+				'\tconst userDir = path.join(resolvePiAgentDir(), "agents");',
+			);
 			break;
 		case "artifacts.ts":
 			patched = replaceAll(
@@ -16,14 +16,30 @@ const PATCHED_CONFIG_EXPR =
 export function patchPiWebAccessSource(relativePath, source) {
 	let patched = source;
+	let changed = false;
 
-	if (patched.includes(PATCHED_CONFIG_EXPR)) {
-		return patched;
+	if (!patched.includes(PATCHED_CONFIG_EXPR)) {
+		patched = patched.split(LEGACY_CONFIG_EXPR).join(PATCHED_CONFIG_EXPR);
+		changed = patched !== source;
 	}
 
-	patched = patched.split(LEGACY_CONFIG_EXPR).join(PATCHED_CONFIG_EXPR);
+	if (relativePath === "index.ts") {
+		const workflowDefaultOriginal = 'const workflow = resolveWorkflow(params.workflow ?? configWorkflow, ctx?.hasUI !== false);';
+		const workflowDefaultPatched = 'const workflow = resolveWorkflow(params.workflow ?? configWorkflow ?? "none", ctx?.hasUI !== false);';
+		if (patched.includes(workflowDefaultOriginal)) {
+			patched = patched.replace(workflowDefaultOriginal, workflowDefaultPatched);
+			changed = true;
+		}
+		if (patched.includes('summary-review = open curator with auto summary draft (default)')) {
+			patched = patched.replace(
+				'summary-review = open curator with auto summary draft (default)',
+				'summary-review = open curator with auto summary draft (opt-in)',
+			);
+			changed = true;
+		}
+	}
 
-	if (relativePath === "index.ts" && patched !== source) {
+	if (relativePath === "index.ts" && changed) {
 		patched = patched.replace('import { join } from "node:path";', 'import { dirname, join } from "node:path";');
 		patched = patched.replace('const dir = join(homedir(), ".pi");', "const dir = dirname(WEB_SEARCH_CONFIG_PATH);");
 	}
@@ -1,10 +1,11 @@
 import { spawnSync } from "node:child_process";
-import { existsSync, mkdirSync, readFileSync, rmSync, writeFileSync } from "node:fs";
+import { existsSync, lstatSync, mkdirSync, readFileSync, readlinkSync, rmSync, symlinkSync, writeFileSync } from "node:fs";
 import { createRequire } from "node:module";
 import { homedir } from "node:os";
-import { dirname, resolve } from "node:path";
+import { delimiter, dirname, resolve } from "node:path";
 import { fileURLToPath } from "node:url";
 import { FEYNMAN_LOGO_HTML } from "../logo.mjs";
+import { patchAlphaHubAuthSource } from "./lib/alpha-hub-auth-patch.mjs";
 import { patchPiExtensionLoaderSource } from "./lib/pi-extension-loader-patch.mjs";
 import { patchPiGoogleLegacySchemaSource } from "./lib/pi-google-legacy-schema-patch.mjs";
 import { PI_WEB_ACCESS_PATCH_TARGETS, patchPiWebAccessSource } from "./lib/pi-web-access-patch.mjs";
@@ -87,7 +88,30 @@ const piMemoryPath = resolve(workspaceRoot, "@samfp", "pi-memory", "src", "index
 const settingsPath = resolve(appRoot, ".feynman", "settings.json");
 const workspaceDir = resolve(appRoot, ".feynman", "npm");
 const workspacePackageJsonPath = resolve(workspaceDir, "package.json");
+const workspaceManifestPath = resolve(workspaceDir, ".runtime-manifest.json");
 const workspaceArchivePath = resolve(appRoot, ".feynman", "runtime-workspace.tgz");
+const globalNodeModulesRoot = resolve(feynmanNpmPrefix, "lib", "node_modules");
+const PRUNE_VERSION = 3;
+const NATIVE_PACKAGE_SPECS = new Set([
+	"@kaiserlich-dev/pi-session-search",
+	"@samfp/pi-memory",
+]);
+const FILTERED_INSTALL_OUTPUT_PATTERNS = [
+	/npm warn deprecated node-domexception@1\.0\.0/i,
+	/npm notice/i,
+	/^(added|removed|changed) \d+ packages?( in .+)?$/i,
+	/^\d+ packages are looking for funding$/i,
+	/^run `npm fund` for details$/i,
+];
+
+function arraysMatch(left, right) {
+	return left.length === right.length && left.every((value, index) => value === right[index]);
+}
+
+function supportsNativePackageSources(version = process.versions.node) {
+	const [major = "0"] = version.replace(/^v/, "").split(".");
+	return (Number.parseInt(major, 10) || 0) <= 24;
+}
 
 function createInstallCommand(packageManager, packageSpecs) {
 	switch (packageManager) {
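The `supportsNativePackageSources` helper added in the hunk above gates native-package installs on the running Node major version. A standalone sketch of that parsing logic (reimplemented here for illustration, not imported from the project):

```javascript
// Node version gate, mirroring the helper added in the hunk above:
// strip a leading "v", take the major component, and fail open
// (an unparseable version yields NaN, which `|| 0` maps to 0 <= 24, i.e. supported).
function supportsNativePackageSources(version) {
  const [major = "0"] = version.replace(/^v/, "").split(".");
  return (Number.parseInt(major, 10) || 0) <= 24;
}

console.log(supportsNativePackageSources("v24.1.0")); // true
console.log(supportsNativePackageSources("25.0.0")); // false
```

Note the fail-open behavior: a garbage version string counts as supported, so the gate only ever skips installs when it can positively identify a too-new Node.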
@@ -99,6 +123,7 @@ function createInstallCommand(packageManager, packageSpecs) {
 				"--prefer-offline",
 				"--no-audit",
 				"--no-fund",
+				"--legacy-peer-deps",
 				"--loglevel",
 				"error",
 				...packageSpecs,
@@ -141,12 +166,24 @@ function installWorkspacePackages(packageSpecs) {
 
 	const result = spawnSync(packageManager, createInstallCommand(packageManager, packageSpecs), {
 		cwd: workspaceDir,
-		stdio: ["ignore", "ignore", "pipe"],
+		stdio: ["ignore", "pipe", "pipe"],
 		timeout: 300000,
+		env: {
+			...process.env,
+			PATH: getPathWithCurrentNode(process.env.PATH),
+		},
 	});
 
+	for (const stream of [result.stdout, result.stderr]) {
+		if (!stream?.length) continue;
+		for (const line of stream.toString().split(/\r?\n/)) {
+			if (!line.trim()) continue;
+			if (FILTERED_INSTALL_OUTPUT_PATTERNS.some((pattern) => pattern.test(line.trim()))) continue;
+			process.stderr.write(`${line}\n`);
+		}
+	}
+
 	if (result.status !== 0) {
-		if (result.stderr?.length) process.stderr.write(result.stderr);
 		process.stderr.write(`[feynman] ${packageManager} failed while setting up bundled packages.\n`);
 		return false;
 	}
@@ -159,6 +196,102 @@ function parsePackageName(spec) {
 	return match?.[1] ?? spec;
 }
 
+function filterUnsupportedPackageSpecs(packageSpecs) {
+	if (supportsNativePackageSources()) return packageSpecs;
+	return packageSpecs.filter((spec) => !NATIVE_PACKAGE_SPECS.has(parsePackageName(spec)));
+}
+
+function workspaceContainsPackages(packageSpecs) {
+	return packageSpecs.every((spec) => existsSync(resolve(workspaceRoot, parsePackageName(spec))));
+}
+
+function workspaceMatchesRuntime(packageSpecs) {
+	if (!existsSync(workspaceManifestPath)) return false;
+
+	try {
+		const manifest = JSON.parse(readFileSync(workspaceManifestPath, "utf8"));
+		if (!Array.isArray(manifest.packageSpecs)) {
+			return false;
+		}
+		if (!arraysMatch(manifest.packageSpecs, packageSpecs)) {
+			if (!(workspaceContainsPackages(packageSpecs) && packageSpecs.every((spec) => manifest.packageSpecs.includes(spec)))) {
+				return false;
+			}
+		}
+		if (!supportsNativePackageSources() && workspaceContainsPackages(packageSpecs)) {
+			return true;
+		}
+		if (
+			manifest.nodeAbi !== process.versions.modules ||
+			manifest.platform !== process.platform ||
+			manifest.arch !== process.arch ||
+			manifest.pruneVersion !== PRUNE_VERSION
+		) {
+			return false;
+		}
+
+		return packageSpecs.every((spec) => existsSync(resolve(workspaceRoot, parsePackageName(spec))));
+	} catch {
+		return false;
+	}
+}
+
+function writeWorkspaceManifest(packageSpecs) {
+	writeFileSync(
+		workspaceManifestPath,
+		JSON.stringify(
+			{
+				packageSpecs,
+				generatedAt: new Date().toISOString(),
+				nodeAbi: process.versions.modules,
+				nodeVersion: process.version,
+				platform: process.platform,
+				arch: process.arch,
+				pruneVersion: PRUNE_VERSION,
+			},
+			null,
+			2,
+		) + "\n",
+		"utf8",
+	);
+}
+
+function ensureParentDir(path) {
+	mkdirSync(dirname(path), { recursive: true });
+}
+
+function linkPointsTo(linkPath, targetPath) {
+	try {
+		if (!lstatSync(linkPath).isSymbolicLink()) return false;
+		return resolve(dirname(linkPath), readlinkSync(linkPath)) === targetPath;
+	} catch {
+		return false;
+	}
+}
+
+function ensureBundledPackageLinks(packageSpecs) {
+	if (!workspaceMatchesRuntime(packageSpecs)) return;
+
+	for (const spec of packageSpecs) {
+		const packageName = parsePackageName(spec);
+		const sourcePath = resolve(workspaceRoot, packageName);
+		const targetPath = resolve(globalNodeModulesRoot, packageName);
+		if (!existsSync(sourcePath)) continue;
+		if (linkPointsTo(targetPath, sourcePath)) continue;
+		try {
+			if (lstatSync(targetPath).isSymbolicLink()) {
+				rmSync(targetPath, { force: true });
+			}
+		} catch {}
+		if (existsSync(targetPath)) continue;
+
+		ensureParentDir(targetPath);
+		try {
+			symlinkSync(sourcePath, targetPath, process.platform === "win32" ? "junction" : "dir");
+		} catch {}
+	}
+}
+
 function restorePackagedWorkspace(packageSpecs) {
 	if (!existsSync(workspaceArchivePath)) return false;
 
@@ -184,24 +317,26 @@ function restorePackagedWorkspace(packageSpecs) {
 	return false;
 }
 
-function refreshPackagedWorkspace(packageSpecs) {
-	return installWorkspacePackages(packageSpecs);
-}
-
 function resolveExecutable(name, fallbackPaths = []) {
 	for (const candidate of fallbackPaths) {
 		if (existsSync(candidate)) return candidate;
 	}
 
 	const isWindows = process.platform === "win32";
+	const env = {
+		...process.env,
+		PATH: process.env.PATH ?? "",
+	};
 	const result = isWindows
 		? spawnSync("cmd", ["/c", `where ${name}`], {
 			encoding: "utf8",
 			stdio: ["ignore", "pipe", "ignore"],
+			env,
 		})
-		: spawnSync("sh", ["-lc", `command -v ${name}`], {
+		: spawnSync("sh", ["-c", `command -v ${name}`], {
 			encoding: "utf8",
 			stdio: ["ignore", "pipe", "ignore"],
+			env,
 		});
 	if (result.status === 0) {
 		const resolved = result.stdout.trim().split(/\r?\n/)[0];
@@ -210,6 +345,12 @@ function resolveExecutable(name, fallbackPaths = []) {
 	return null;
 }
 
+function getPathWithCurrentNode(pathValue = process.env.PATH ?? "") {
+	const nodeDir = dirname(process.execPath);
+	const parts = pathValue.split(delimiter).filter(Boolean);
+	return parts.includes(nodeDir) ? pathValue : `${nodeDir}${delimiter}${pathValue}`;
+}
+
 function ensurePackageWorkspace() {
 	if (!existsSync(settingsPath)) return;
 
@@ -219,10 +360,17 @@ function ensurePackageWorkspace() {
 		.filter((v) => typeof v === "string" && v.startsWith("npm:"))
 		.map((v) => v.slice(4))
 		: [];
+	const supportedPackageSpecs = filterUnsupportedPackageSpecs(packageSpecs);
 
-	if (packageSpecs.length === 0) return;
-	if (existsSync(resolve(workspaceRoot, parsePackageName(packageSpecs[0])))) return;
-	if (restorePackagedWorkspace(packageSpecs) && refreshPackagedWorkspace(packageSpecs)) return;
+	if (supportedPackageSpecs.length === 0) return;
+	if (workspaceMatchesRuntime(supportedPackageSpecs)) {
+		ensureBundledPackageLinks(supportedPackageSpecs);
+		return;
+	}
+	if (restorePackagedWorkspace(packageSpecs) && workspaceMatchesRuntime(supportedPackageSpecs)) {
+		ensureBundledPackageLinks(supportedPackageSpecs);
+		return;
+	}
 
 	mkdirSync(workspaceDir, { recursive: true });
 	writeFileSync(
@@ -239,7 +387,7 @@ function ensurePackageWorkspace() {
 		process.stderr.write(`\r${frames[frame++ % frames.length]} setting up feynman... ${elapsed}s`);
 	}, 80);
 
-	const result = installWorkspacePackages(packageSpecs);
+	const result = installWorkspacePackages(supportedPackageSpecs);
 
 	clearInterval(spinner);
 	const elapsed = Math.round((Date.now() - start) / 1000);
@@ -247,7 +395,9 @@ function ensurePackageWorkspace() {
 	if (!result) {
 		process.stderr.write(`\r✗ setup failed (${elapsed}s)\n`);
 	} else {
-		process.stderr.write(`\r✓ feynman ready (${elapsed}s)\n`);
+		process.stderr.write("\r\x1b[2K");
+		writeWorkspaceManifest(supportedPackageSpecs);
+		ensureBundledPackageLinks(supportedPackageSpecs);
 	}
 }
@@ -620,25 +770,11 @@ const alphaHubAuthPath = findPackageRoot("@companion-ai/alpha-hub")
 	: null;
 
 if (alphaHubAuthPath && existsSync(alphaHubAuthPath)) {
-	let source = readFileSync(alphaHubAuthPath, "utf8");
-	const oldSuccess = "'<html><body><h2>Logged in to Alpha Hub</h2><p>You can close this tab.</p></body></html>'";
-	const oldError = "'<html><body><h2>Login failed</h2><p>You can close this tab.</p></body></html>'";
-	const bodyAttr = `style="font-family:system-ui,sans-serif;text-align:center;padding-top:20vh;background:#050a08;color:#f0f5f2"`;
-	const logo = `<h1 style="font-family:monospace;font-size:48px;color:#34d399;margin:0">feynman</h1>`;
-	const newSuccess = `'<html><body ${bodyAttr}>${logo}<h2 style="color:#34d399;margin-top:16px">Logged in</h2><p style="color:#8aaa9a">You can close this tab.</p></body></html>'`;
-	const newError = `'<html><body ${bodyAttr}>${logo}<h2 style="color:#ef4444;margin-top:16px">Login failed</h2><p style="color:#8aaa9a">You can close this tab.</p></body></html>'`;
-	if (source.includes(oldSuccess)) {
-		source = source.replace(oldSuccess, newSuccess);
-	}
-	if (source.includes(oldError)) {
-		source = source.replace(oldError, newError);
-	}
-
-	const brokenWinOpen = "else if (plat === 'win32') execSync(`start \"${url}\"`);";
-	const fixedWinOpen = "else if (plat === 'win32') execSync(`cmd /c start \"\" \"${url}\"`);";
-	if (source.includes(brokenWinOpen)) {
-		source = source.replace(brokenWinOpen, fixedWinOpen);
-	}
-
-	writeFileSync(alphaHubAuthPath, source, "utf8");
+	const source = readFileSync(alphaHubAuthPath, "utf8");
+	const patched = patchAlphaHubAuthSource(source);
+	if (patched !== source) {
+		writeFileSync(alphaHubAuthPath, patched, "utf8");
+	}
 }
 
 if (existsSync(piMemoryPath)) {
src/cli.ts (128 changed lines)
@@ -1,6 +1,6 @@
 import "dotenv/config";
 
-import { readFileSync } from "node:fs";
+import { existsSync, readFileSync } from "node:fs";
 import { dirname, resolve } from "node:path";
 import { parseArgs } from "node:util";
 import { fileURLToPath } from "node:url";
@@ -11,11 +11,13 @@ import {
 	login as loginAlpha,
 	logout as logoutAlpha,
 } from "@companion-ai/alpha-hub/lib";
-import { DefaultPackageManager, SettingsManager } from "@mariozechner/pi-coding-agent";
+import { SettingsManager } from "@mariozechner/pi-coding-agent";
 
 import { syncBundledAssets } from "./bootstrap/sync.js";
 import { ensureFeynmanHome, getDefaultSessionDir, getFeynmanAgentDir, getFeynmanHome } from "./config/paths.js";
 import { launchPiChat } from "./pi/launch.js";
+import { installPackageSources, updateConfiguredPackages } from "./pi/package-ops.js";
+import { MAX_NATIVE_PACKAGE_NODE_MAJOR } from "./pi/package-presets.js";
 import { CORE_PACKAGE_SOURCES, getOptionalPackagePresetSources, listOptionalPackagePresets } from "./pi/package-presets.js";
 import { normalizeFeynmanSettings, normalizeThinkingLevel, parseModelSpec } from "./pi/settings.js";
 import { applyFeynmanPackageManagerEnv } from "./pi/runtime.js";
@@ -28,6 +30,7 @@ import {
 	printModelList,
 	setDefaultModelSpec,
 } from "./model/commands.js";
+import { buildModelStatusSnapshotFromRecords, getAvailableModelRecords, getSupportedModelRecords } from "./model/catalog.js";
 import { clearSearchConfig, printSearchStatus, setSearchProvider } from "./search/commands.js";
 import type { PiWebSearchProvider } from "./pi/web-access.js";
 import { runDoctor, runStatus } from "./setup/doctor.js";
@@ -130,7 +133,7 @@ async function handleModelCommand(subcommand: string | undefined, args: string[]
 
 	if (subcommand === "login") {
 		if (args[0]) {
-			// Specific provider given - use OAuth login directly
+			// Specific provider given - resolve OAuth vs API-key setup automatically
 			await loginModelProvider(feynmanAuthPath, args[0], feynmanSettingsPath);
 		} else {
 			// No provider specified - show auth method choice
@@ -147,7 +150,7 @@ async function handleModelCommand(subcommand: string | undefined, args: string[]
 	if (subcommand === "set") {
 		const spec = args[0];
 		if (!spec) {
-			throw new Error("Usage: feynman model set <provider/model>");
+			throw new Error("Usage: feynman model set <provider/model|provider:model>");
 		}
 		setDefaultModelSpec(feynmanSettingsPath, feynmanAuthPath, spec);
 		return;
@@ -180,27 +183,30 @@ async function handleModelCommand(subcommand: string | undefined, args: string[]
 }
 
 async function handleUpdateCommand(workingDir: string, feynmanAgentDir: string, source?: string): Promise<void> {
-	applyFeynmanPackageManagerEnv(feynmanAgentDir);
-	const settingsManager = SettingsManager.create(workingDir, feynmanAgentDir);
-	const packageManager = new DefaultPackageManager({
-		cwd: workingDir,
-		agentDir: feynmanAgentDir,
-		settingsManager,
-	});
-
-	packageManager.setProgressCallback((event) => {
-		if (event.type === "start") {
-			console.log(`Updating ${event.source}...`);
-		} else if (event.type === "complete") {
-			console.log(`Updated ${event.source}`);
-		} else if (event.type === "error") {
-			console.error(`Failed to update ${event.source}: ${event.message ?? "unknown error"}`);
-		}
-	});
-
-	await packageManager.update(source);
-	await settingsManager.flush();
-	console.log("All packages up to date.");
+	try {
+		const result = await updateConfiguredPackages(workingDir, feynmanAgentDir, source);
+		if (result.updated.length === 0) {
+			console.log("All packages up to date.");
+			return;
+		}
+
+		for (const updatedSource of result.updated) {
+			console.log(`Updated ${updatedSource}`);
+		}
+		for (const skippedSource of result.skipped) {
+			console.log(`Skipped ${skippedSource} on Node ${process.versions.node} (native packages are only supported through Node ${MAX_NATIVE_PACKAGE_NODE_MAJOR}.x).`);
+		}
+		console.log("All packages up to date.");
+	} catch (error) {
+		const message = error instanceof Error ? error.message : String(error);
+		if (message.includes("No supported package manager found")) {
+			console.log("No package manager is available for live package updates.");
+			console.log("If you installed the standalone app, rerun the installer to get newer bundled packages.");
+			return;
+		}
+
+		throw error;
+	}
 }
 
 async function handlePackagesCommand(subcommand: string | undefined, args: string[], workingDir: string, feynmanAgentDir: string): Promise<void> {
@@ -244,30 +250,44 @@ async function handlePackagesCommand(subcommand: string | undefined, args: strin
 		throw new Error(`Unknown package preset: ${target}`);
 	}
 
-	const packageManager = new DefaultPackageManager({
-		cwd: workingDir,
-		agentDir: feynmanAgentDir,
-		settingsManager,
-	});
-	packageManager.setProgressCallback((event) => {
-		if (event.type === "start") {
-			console.log(`Installing ${event.source}...`);
-		} else if (event.type === "complete") {
-			console.log(`Installed ${event.source}`);
-		} else if (event.type === "error") {
-			console.error(`Failed to install ${event.source}: ${event.message ?? "unknown error"}`);
-		}
-	});
+	const appRoot = resolve(dirname(fileURLToPath(import.meta.url)), "..");
+	const isStandaloneBundle = !existsSync(resolve(appRoot, ".feynman", "runtime-workspace.tgz")) && existsSync(resolve(appRoot, ".feynman", "npm"));
+	if (target === "generative-ui" && process.platform === "darwin" && isStandaloneBundle) {
+		console.log("The generative-ui preset is currently unavailable in the standalone macOS bundle.");
+		console.log("Its native glimpseui dependency fails to compile reliably in that environment.");
+		console.log("If you need generative-ui, install Feynman through npm instead of the standalone bundle.");
+		return;
+	}
 
+	const pendingSources = sources.filter((source) => !configuredSources.has(source));
 	for (const source of sources) {
 		if (configuredSources.has(source)) {
 			console.log(`${source} already installed`);
-			continue;
 		}
-		await packageManager.install(source);
 	}
-	await settingsManager.flush();
-	console.log("Optional packages installed.");
+	if (pendingSources.length === 0) {
+		console.log("Optional packages installed.");
+		return;
+	}
+
+	try {
+		const result = await installPackageSources(workingDir, feynmanAgentDir, pendingSources, { persist: true });
+		for (const skippedSource of result.skipped) {
+			console.log(`Skipped ${skippedSource} on Node ${process.versions.node} (native packages are only supported through Node ${MAX_NATIVE_PACKAGE_NODE_MAJOR}.x).`);
+		}
+		await settingsManager.flush();
+		console.log("Optional packages installed.");
+	} catch (error) {
+		const message = error instanceof Error ? error.message : String(error);
+		if (message.includes("No supported package manager found")) {
+			console.log("No package manager is available for optional package installs.");
+			console.log("Install npm, pnpm, or bun, or rerun the standalone installer for bundled package updates.");
+			return;
+		}
+
+		throw error;
+	}
 }
 
 function handleSearchCommand(subcommand: string | undefined, args: string[]): void {
@@ -326,6 +346,24 @@ export function resolveInitialPrompt(
 	return undefined;
 }
 
+export function shouldRunInteractiveSetup(
+	explicitModelSpec: string | undefined,
+	currentModelSpec: string | undefined,
+	isInteractiveTerminal: boolean,
+	authPath: string,
+): boolean {
+	if (explicitModelSpec || !isInteractiveTerminal) {
+		return false;
+	}
+
+	const status = buildModelStatusSnapshotFromRecords(
+		getSupportedModelRecords(authPath),
+		getAvailableModelRecords(authPath),
+		currentModelSpec,
+	);
+	return !status.currentValid;
+}
+
 export async function main(): Promise<void> {
 	const here = dirname(fileURLToPath(import.meta.url));
 	const appRoot = resolve(here, "..");
@@ -498,7 +536,13 @@ export async function main(): Promise<void> {
 		}
 	}
 
-	if (!explicitModelSpec && !getCurrentModelSpec(feynmanSettingsPath) && process.stdin.isTTY && process.stdout.isTTY) {
+	const currentModelSpec = getCurrentModelSpec(feynmanSettingsPath);
+	if (shouldRunInteractiveSetup(
+		explicitModelSpec,
+		currentModelSpec,
+		Boolean(process.stdin.isTTY && process.stdout.isTTY),
+		feynmanAuthPath,
+	)) {
 		await runSetup({
 			settingsPath: feynmanSettingsPath,
 			bundledSettingsPath,
@@ -4,7 +4,7 @@ import { exec as execCallback } from "node:child_process";
 import { promisify } from "node:util";

 import { readJson } from "../pi/settings.js";
-import { promptChoice, promptText } from "../setup/prompts.js";
+import { promptChoice, promptSelect, promptText, type PromptSelectOption } from "../setup/prompts.js";
 import { openUrl } from "../system/open-url.js";
 import { printInfo, printSection, printSuccess, printWarning } from "../ui/terminal.js";
 import {
@@ -55,13 +55,22 @@ async function selectOAuthProvider(authPath: string, action: "login" | "logout")
     return providers[0];
   }

-  const choices = providers.map((provider) => `${provider.id} — ${provider.name ?? provider.id}`);
-  choices.push("Cancel");
-  const selection = await promptChoice(`Choose an OAuth provider to ${action}:`, choices, 0);
-  if (selection >= providers.length) {
+  const selection = await promptSelect<OAuthProviderInfo | "cancel">(
+    `Choose an OAuth provider to ${action}:`,
+    [
+      ...providers.map((provider) => ({
+        value: provider,
+        label: provider.name ?? provider.id,
+        hint: provider.id,
+      })),
+      { value: "cancel", label: "Cancel" },
+    ],
+    providers[0],
+  );
+  if (selection === "cancel") {
     return undefined;
   }
-  return providers[selection];
+  return selection;
 }

 type ApiKeyProviderInfo = {
@@ -71,10 +80,11 @@ type ApiKeyProviderInfo = {
 };

 const API_KEY_PROVIDERS: ApiKeyProviderInfo[] = [
-  { id: "__custom__", label: "Custom provider (baseUrl + API key)" },
   { id: "openai", label: "OpenAI Platform API", envVar: "OPENAI_API_KEY" },
   { id: "anthropic", label: "Anthropic API", envVar: "ANTHROPIC_API_KEY" },
   { id: "google", label: "Google Gemini API", envVar: "GEMINI_API_KEY" },
+  { id: "__custom__", label: "Custom provider (local/self-hosted/proxy)" },
+  { id: "amazon-bedrock", label: "Amazon Bedrock (AWS credential chain)" },
   { id: "openrouter", label: "OpenRouter", envVar: "OPENROUTER_API_KEY" },
   { id: "zai", label: "Z.AI / GLM", envVar: "ZAI_API_KEY" },
   { id: "kimi-coding", label: "Kimi / Moonshot", envVar: "KIMI_API_KEY" },
@@ -91,16 +101,47 @@ const API_KEY_PROVIDERS: ApiKeyProviderInfo[] = [
   { id: "azure-openai-responses", label: "Azure OpenAI (Responses)", envVar: "AZURE_OPENAI_API_KEY" },
 ];

-async function selectApiKeyProvider(): Promise<ApiKeyProviderInfo | undefined> {
-  const choices = API_KEY_PROVIDERS.map(
-    (provider) => `${provider.id} — ${provider.label}${provider.envVar ? ` (${provider.envVar})` : ""}`,
-  );
-  choices.push("Cancel");
-  const selection = await promptChoice("Choose an API-key provider:", choices, 0);
-  if (selection >= API_KEY_PROVIDERS.length) {
+function resolveApiKeyProvider(input: string): ApiKeyProviderInfo | undefined {
+  const normalizedInput = normalizeProviderId(input);
+  if (!normalizedInput) {
     return undefined;
   }
-  return API_KEY_PROVIDERS[selection];
+  return API_KEY_PROVIDERS.find((provider) => provider.id === normalizedInput);
+}
+
+export function resolveModelProviderForCommand(
+  authPath: string,
+  input: string,
+): { kind: "oauth" | "api-key"; id: string } | undefined {
+  const oauthProvider = resolveOAuthProvider(authPath, input);
+  if (oauthProvider) {
+    return { kind: "oauth", id: oauthProvider.id };
+  }
+
+  const apiKeyProvider = resolveApiKeyProvider(input);
+  if (apiKeyProvider) {
+    return { kind: "api-key", id: apiKeyProvider.id };
+  }
+
+  return undefined;
+}
+
+async function selectApiKeyProvider(): Promise<ApiKeyProviderInfo | undefined> {
+  const options: PromptSelectOption<ApiKeyProviderInfo | "cancel">[] = API_KEY_PROVIDERS.map((provider) => ({
+    value: provider,
+    label: provider.label,
+    hint: provider.id === "__custom__"
+      ? "Ollama, vLLM, LM Studio, proxies"
+      : provider.envVar ?? provider.id,
+  }));
+  options.push({ value: "cancel", label: "Cancel" });
+
+  const defaultProvider = API_KEY_PROVIDERS.find((provider) => provider.id === "openai") ?? API_KEY_PROVIDERS[0];
+  const selection = await promptSelect("Choose an API-key provider:", options, defaultProvider);
+  if (selection === "cancel") {
+    return undefined;
+  }
+  return selection;
 }

 type CustomProviderSetup = {
@@ -447,13 +488,66 @@ async function verifyCustomProvider(setup: CustomProviderSetup, authPath: string
   printInfo("Verification: skipped network probe for this API mode.");
 }

-async function configureApiKeyProvider(authPath: string): Promise<boolean> {
-  const provider = await selectApiKeyProvider();
+async function verifyBedrockCredentialChain(): Promise<void> {
+  const { defaultProvider } = await import("@aws-sdk/credential-provider-node");
+  const credentials = await defaultProvider({})();
+  if (!credentials?.accessKeyId || !credentials?.secretAccessKey) {
+    throw new Error("AWS credential chain resolved without usable Bedrock credentials.");
+  }
+}
+
+async function configureBedrockProvider(authPath: string): Promise<boolean> {
+  printSection("AWS Credentials: Amazon Bedrock");
+  printInfo("Feynman will verify the AWS SDK credential chain used by Pi's Bedrock provider.");
+  printInfo("Supported sources include AWS_PROFILE, ~/.aws credentials/config, SSO, ECS/IRSA, and EC2 instance roles.");
+
+  try {
+    await verifyBedrockCredentialChain();
+    AuthStorage.create(authPath).set("amazon-bedrock", { type: "api_key", key: "<authenticated>" });
+    printSuccess("Verified AWS credential chain and marked Amazon Bedrock as configured.");
+    printInfo("Use `feynman model list` to see available Bedrock models.");
+    return true;
+  } catch (error) {
+    printWarning(`AWS credential verification failed: ${error instanceof Error ? error.message : String(error)}`);
+    printInfo("Configure AWS credentials first, for example:");
+    printInfo("  export AWS_PROFILE=default");
+    printInfo("  # or set AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY");
+    printInfo("  # or use an EC2/ECS/IRSA role with valid Bedrock access");
+    return false;
+  }
+}
+
+function maybeSetRecommendedDefaultModel(settingsPath: string | undefined, authPath: string): void {
+  if (!settingsPath) {
+    return;
+  }
+
+  const currentSpec = getCurrentModelSpec(settingsPath);
+  const available = getAvailableModelRecords(authPath);
+  const currentValid = currentSpec ? available.some((m) => `${m.provider}/${m.id}` === currentSpec) : false;
+
+  if ((!currentSpec || !currentValid) && available.length > 0) {
+    const recommended = chooseRecommendedModel(authPath);
+    if (recommended) {
+      setDefaultModelSpec(settingsPath, authPath, recommended.spec);
+    }
+  }
+}
+
+async function configureApiKeyProvider(authPath: string, providerId?: string): Promise<boolean> {
+  const provider = providerId ? resolveApiKeyProvider(providerId) : await selectApiKeyProvider();
   if (!provider) {
+    if (providerId) {
+      throw new Error(`Unknown API-key model provider: ${providerId}`);
+    }
     printInfo("API key setup cancelled.");
     return false;
   }

+  if (provider.id === "amazon-bedrock") {
+    return configureBedrockProvider(authPath);
+  }
+
   if (provider.id === "__custom__") {
     const setup = await promptCustomProviderSetup();
     if (!setup) {
@@ -512,7 +606,7 @@ async function configureApiKeyProvider(authPath: string): Promise<boolean> {
 }

 function resolveAvailableModelSpec(authPath: string, input: string): string | undefined {
-  const normalizedInput = input.trim().toLowerCase();
+  const normalizedInput = input.trim().replace(/^([^/:]+):(.+)$/, "$1/$2").toLowerCase();
   if (!normalizedInput) {
     return undefined;
   }
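The new normalization accepts both `provider/model` and `provider:model` spellings by rewriting a leading `name:` into `name/` before lowercasing. A standalone sketch of the same expression (the function name here is illustrative):

```typescript
// Accept both "provider/model" and "provider:model" spellings, then lowercase.
// Mirrors the normalization expression used in resolveAvailableModelSpec.
function normalizeModelSpec(input: string): string {
  return input.trim().replace(/^([^/:]+):(.+)$/, "$1/$2").toLowerCase();
}

console.log(normalizeModelSpec("OpenAI:GPT-4o"));    // "openai/gpt-4o"
console.log(normalizeModelSpec("anthropic/claude")); // slash form passes through, lowercased
console.log(normalizeModelSpec("  Zai:GLM-4.6  "));  // trims before matching
```

Note the regex only rewrites the first separator: `[^/:]+` stops at either `/` or `:`, so an input already in slash form never matches and is left intact.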
@@ -528,6 +622,17 @@ function resolveAvailableModelSpec(authPath: string, input: string): string | un
     return `${exactIdMatches[0]!.provider}/${exactIdMatches[0]!.id}`;
   }

+  // When multiple providers expose the same bare model ID, prefer providers the
+  // user explicitly configured in auth storage.
+  if (exactIdMatches.length > 1) {
+    const authData = readJson(authPath) as Record<string, unknown>;
+    const configuredProviders = new Set(Object.keys(authData));
+    const configuredMatches = exactIdMatches.filter((model) => configuredProviders.has(model.provider));
+    if (configuredMatches.length === 1) {
+      return `${configuredMatches[0]!.provider}/${configuredMatches[0]!.id}`;
+    }
+  }
+
   return undefined;
 }

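The tie-break added here only resolves a bare model ID when exactly one of the matching providers appears in auth storage. A minimal sketch of that rule, with illustrative types and names standing in for the real records:

```typescript
// Illustrative stand-in for the model records in resolveAvailableModelSpec.
type ModelRecord = { provider: string; id: string };

function pickConfiguredMatch(matches: ModelRecord[], configuredProviders: Set<string>): string | undefined {
  const configured = matches.filter((m) => configuredProviders.has(m.provider));
  // Resolve only when the auth-storage filter leaves a single candidate.
  return configured.length === 1 ? `${configured[0]!.provider}/${configured[0]!.id}` : undefined;
}

const matches = [
  { provider: "openrouter", id: "glm-4.6" },
  { provider: "zai", id: "glm-4.6" },
];
console.log(pickConfiguredMatch(matches, new Set(["zai"])));               // "zai/glm-4.6"
console.log(pickConfiguredMatch(matches, new Set(["zai", "openrouter"]))); // undefined: still ambiguous
```

When both candidate providers are configured, the ambiguity is preserved rather than guessed away.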
@@ -566,30 +671,22 @@ export function printModelList(settingsPath: string, authPath: string): void {

 export async function authenticateModelProvider(authPath: string, settingsPath?: string): Promise<boolean> {
   const choices = [
-    "API key (OpenAI, Anthropic, Google, custom provider, ...)",
-    "OAuth login (ChatGPT Plus/Pro, Claude Pro/Max, Copilot, ...)",
+    "OAuth login (recommended: ChatGPT Plus/Pro, Claude Pro/Max, Copilot, ...)",
+    "API key or custom provider (OpenAI, Anthropic, Google, local/self-hosted, ...)",
     "Cancel",
   ];
   const selection = await promptChoice("How do you want to authenticate?", choices, 0);

   if (selection === 0) {
-    const configured = await configureApiKeyProvider(authPath);
-    if (configured && settingsPath) {
-      const currentSpec = getCurrentModelSpec(settingsPath);
-      const available = getAvailableModelRecords(authPath);
-      const currentValid = currentSpec ? available.some((m) => `${m.provider}/${m.id}` === currentSpec) : false;
-      if ((!currentSpec || !currentValid) && available.length > 0) {
-        const recommended = chooseRecommendedModel(authPath);
-        if (recommended) {
-          setDefaultModelSpec(settingsPath, authPath, recommended.spec);
-        }
-      }
-    }
-    return configured;
+    return loginModelProvider(authPath, undefined, settingsPath);
   }

   if (selection === 1) {
-    return loginModelProvider(authPath, undefined, settingsPath);
+    const configured = await configureApiKeyProvider(authPath);
+    if (configured) {
+      maybeSetRecommendedDefaultModel(settingsPath, authPath);
+    }
+    return configured;
   }

   printInfo("Authentication cancelled.");
@@ -597,10 +694,24 @@ export async function authenticateModelProvider(authPath: string, settingsPath?:
 }

 export async function loginModelProvider(authPath: string, providerId?: string, settingsPath?: string): Promise<boolean> {
+  if (providerId) {
+    const resolvedProvider = resolveModelProviderForCommand(authPath, providerId);
+    if (!resolvedProvider) {
+      throw new Error(`Unknown model provider: ${providerId}`);
+    }
+    if (resolvedProvider.kind === "api-key") {
+      const configured = await configureApiKeyProvider(authPath, resolvedProvider.id);
+      if (configured) {
+        maybeSetRecommendedDefaultModel(settingsPath, authPath);
+      }
+      return configured;
+    }
+  }
+
   const provider = providerId ? resolveOAuthProvider(authPath, providerId) : await selectOAuthProvider(authPath, "login");
   if (!provider) {
     if (providerId) {
-      throw new Error(`Unknown OAuth model provider: ${providerId}`);
+      throw new Error(`Unknown model provider: ${providerId}`);
     }
     printInfo("Login cancelled.");
     return false;
@@ -637,35 +748,38 @@ export async function loginModelProvider(authPath: string, providerId?: string,

   printSuccess(`Model provider login complete: ${provider.id}`);

-  if (settingsPath) {
-    const currentSpec = getCurrentModelSpec(settingsPath);
-    const available = getAvailableModelRecords(authPath);
-    const currentValid = currentSpec
-      ? available.some((m) => `${m.provider}/${m.id}` === currentSpec)
-      : false;
-
-    if ((!currentSpec || !currentValid) && available.length > 0) {
-      const recommended = chooseRecommendedModel(authPath);
-      if (recommended) {
-        setDefaultModelSpec(settingsPath, authPath, recommended.spec);
-      }
-    }
-  }
+  maybeSetRecommendedDefaultModel(settingsPath, authPath);

   return true;
 }

 export async function logoutModelProvider(authPath: string, providerId?: string): Promise<void> {
-  const provider = providerId ? resolveOAuthProvider(authPath, providerId) : await selectOAuthProvider(authPath, "logout");
-  if (!provider) {
-    if (providerId) {
-      throw new Error(`Unknown OAuth model provider: ${providerId}`);
+  const authStorage = AuthStorage.create(authPath);
+  if (providerId) {
+    const resolvedProvider = resolveModelProviderForCommand(authPath, providerId);
+    if (resolvedProvider) {
+      authStorage.logout(resolvedProvider.id);
+      printSuccess(`Model provider logout complete: ${resolvedProvider.id}`);
+      return;
     }
+
+    const normalizedProviderId = normalizeProviderId(providerId);
+    if (authStorage.has(normalizedProviderId)) {
+      authStorage.logout(normalizedProviderId);
+      printSuccess(`Model provider logout complete: ${normalizedProviderId}`);
+      return;
+    }
+
+    throw new Error(`Unknown model provider: ${providerId}`);
+  }
+
+  const provider = await selectOAuthProvider(authPath, "logout");
+  if (!provider) {
     printInfo("Logout cancelled.");
     return;
   }

-  AuthStorage.create(authPath).logout(provider.id);
+  authStorage.logout(provider.id);
   printSuccess(`Model provider logout complete: ${provider.id}`);
 }

@@ -689,20 +803,20 @@ export async function runModelSetup(settingsPath: string, authPath: string): Pro

   while (status.availableModels.length === 0) {
     const choices = [
-      "API key (OpenAI, Anthropic, ZAI, Kimi, MiniMax, ...)",
-      "OAuth login (ChatGPT Plus/Pro, Claude Pro/Max, Copilot, ...)",
+      "OAuth login (recommended: ChatGPT Plus/Pro, Claude Pro/Max, Copilot, ...)",
+      "API key or custom provider (OpenAI, Anthropic, ZAI, Kimi, MiniMax, ...)",
       "Cancel",
     ];
     const selection = await promptChoice("Choose how to configure model access:", choices, 0);
     if (selection === 0) {
-      const configured = await configureApiKeyProvider(authPath);
-      if (!configured) {
+      const loggedIn = await loginModelProvider(authPath, undefined, settingsPath);
+      if (!loggedIn) {
         status = collectModelStatus(settingsPath, authPath);
         continue;
       }
     } else if (selection === 1) {
-      const loggedIn = await loginModelProvider(authPath, undefined, settingsPath);
-      if (!loggedIn) {
+      const configured = await configureApiKeyProvider(authPath);
+      if (!configured) {
         status = collectModelStatus(settingsPath, authPath);
         continue;
       }
src/pi/package-ops.ts (new file, 456 lines)
@@ -0,0 +1,456 @@
+import { spawn } from "node:child_process";
+import { cpSync, existsSync, lstatSync, mkdirSync, readlinkSync, rmSync, symlinkSync, writeFileSync } from "node:fs";
+import { fileURLToPath } from "node:url";
+import { dirname, join, resolve } from "node:path";
+
+import { DefaultPackageManager, SettingsManager } from "@mariozechner/pi-coding-agent";
+
+import { NATIVE_PACKAGE_SOURCES, supportsNativePackageSources } from "./package-presets.js";
+import { applyFeynmanPackageManagerEnv, getFeynmanNpmPrefixPath } from "./runtime.js";
+import { getPathWithCurrentNode, resolveExecutable } from "../system/executables.js";
+
+type PackageScope = "user" | "project";
+
+type ConfiguredPackage = {
+  source: string;
+  scope: PackageScope;
+  filtered: boolean;
+  installedPath?: string;
+};
+
+type NpmSource = {
+  name: string;
+  source: string;
+  spec: string;
+  pinned: boolean;
+};
+
+export type MissingConfiguredPackageSummary = {
+  missing: ConfiguredPackage[];
+  bundled: ConfiguredPackage[];
+};
+
+export type InstallPackageSourcesResult = {
+  installed: string[];
+  skipped: string[];
+};
+
+export type UpdateConfiguredPackagesResult = {
+  updated: string[];
+  skipped: string[];
+};
+
+const FILTERED_INSTALL_OUTPUT_PATTERNS = [
+  /npm warn deprecated node-domexception@1\.0\.0/i,
+  /npm notice/i,
+  /^(added|removed|changed) \d+ packages?( in .+)?$/i,
+  /^(\d+ )?packages are looking for funding$/i,
+  /^run `npm fund` for details$/i,
+];
+const APP_ROOT = resolve(dirname(fileURLToPath(import.meta.url)), "..", "..");
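The `FILTERED_INSTALL_OUTPUT_PATTERNS` list above drives line-level filtering of npm's chatter. The same idea in isolation, with the patterns and helper names reproduced illustratively rather than imported from the file:

```typescript
// Drop npm noise lines, keep real output. Patterns mirror a subset of
// FILTERED_INSTALL_OUTPUT_PATTERNS above.
const NOISE = [
  /npm notice/i,
  /^(added|removed|changed) \d+ packages?( in .+)?$/i,
  /^(\d+ )?packages are looking for funding$/i,
  /^run `npm fund` for details$/i,
];

function keepLine(line: string): boolean {
  const trimmed = line.trim();
  return trimmed.length > 0 && !NOISE.some((pattern) => pattern.test(trimmed));
}

const output = [
  "added 12 packages in 3s",
  "npm notice New minor version of npm available!",
  "pi-calculator installed",
  "",
];
console.log(output.filter(keepLine)); // ["pi-calculator installed"]
```

Anchored patterns (`^…$`) are matched against the trimmed line, so indented npm summaries are still caught.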
+
+function createPackageContext(workingDir: string, agentDir: string) {
+  applyFeynmanPackageManagerEnv(agentDir);
+  process.env.PATH = getPathWithCurrentNode(process.env.PATH);
+  const settingsManager = SettingsManager.create(workingDir, agentDir);
+  const packageManager = new DefaultPackageManager({
+    cwd: workingDir,
+    agentDir,
+    settingsManager,
+  });
+
+  return {
+    settingsManager,
+    packageManager,
+  };
+}
+
+function shouldSkipNativeSource(source: string, version = process.versions.node): boolean {
+  return !supportsNativePackageSources(version) && NATIVE_PACKAGE_SOURCES.includes(source as (typeof NATIVE_PACKAGE_SOURCES)[number]);
+}
+
+function filterUnsupportedSources(sources: string[], version = process.versions.node): { supported: string[]; skipped: string[] } {
+  const supported: string[] = [];
+  const skipped: string[] = [];
+
+  for (const source of sources) {
+    if (shouldSkipNativeSource(source, version)) {
+      skipped.push(source);
+      continue;
+    }
+    supported.push(source);
+  }
+
+  return { supported, skipped };
+}
+
+function relayFilteredOutput(chunk: Buffer | string, writer: NodeJS.WriteStream): void {
+  const text = chunk.toString();
+  for (const line of text.split(/\r?\n/)) {
+    if (!line.trim()) continue;
+    if (FILTERED_INSTALL_OUTPUT_PATTERNS.some((pattern) => pattern.test(line.trim()))) {
+      continue;
+    }
+    writer.write(`${line}\n`);
+  }
+}
+
+function parseNpmSource(source: string): NpmSource | undefined {
+  if (!source.startsWith("npm:")) {
+    return undefined;
+  }
+
+  const spec = source.slice("npm:".length).trim();
+  const match = spec.match(/^(@?[^@]+(?:\/[^@]+)?)(?:@(.+))?$/);
+  const name = match?.[1] ?? spec;
+  const version = match?.[2];
+
+  return {
+    name,
+    source,
+    spec,
+    pinned: Boolean(version),
+  };
+}
+
+function dedupeNpmSources(sources: string[], updateToLatest: boolean): string[] {
+  const specs = new Map<string, string>();
+
+  for (const source of sources) {
+    const parsed = parseNpmSource(source);
+    if (!parsed) continue;
+
+    specs.set(parsed.name, updateToLatest && !parsed.pinned ? `${parsed.name}@latest` : parsed.spec);
+  }
+
+  return [...specs.values()];
+}
+
+function ensureProjectInstallRoot(workingDir: string): string {
+  const installRoot = resolve(workingDir, ".feynman", "npm");
+  mkdirSync(installRoot, { recursive: true });
+
+  const ignorePath = join(installRoot, ".gitignore");
+  if (!existsSync(ignorePath)) {
+    writeFileSync(ignorePath, "*\n!.gitignore\n", "utf8");
+  }
+
+  const packageJsonPath = join(installRoot, "package.json");
+  if (!existsSync(packageJsonPath)) {
+    writeFileSync(packageJsonPath, JSON.stringify({ name: "feynman-packages", private: true }, null, 2) + "\n", "utf8");
+  }
+
+  return installRoot;
+}
+
+function resolveAdjacentNpmExecutable(): string | undefined {
+  const executableName = process.platform === "win32" ? "npm.cmd" : "npm";
+  const candidate = resolve(dirname(process.execPath), executableName);
+  return existsSync(candidate) ? candidate : undefined;
+}
+
+function resolvePackageManagerCommand(settingsManager: SettingsManager): { command: string; args: string[] } | undefined {
+  const configured = settingsManager.getNpmCommand();
+  if (!configured || configured.length === 0) {
+    const adjacentNpm = resolveAdjacentNpmExecutable() ?? resolveExecutable("npm");
+    return adjacentNpm ? { command: adjacentNpm, args: [] } : undefined;
+  }
+
+  const [command = "npm", ...args] = configured;
+  if (!command) {
+    return undefined;
+  }
+
+  const executable = resolveExecutable(command);
+  if (!executable) {
+    return undefined;
+  }
+
+  return { command: executable, args };
+}
+
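The spec-parsing regex in `parseNpmSource` splits `name[@version]` while keeping scoped names like `@scope/pkg` intact, because `[^@]+` runs past the `/` and only stops at a later `@`. A standalone sketch of just that split (the function name here is illustrative):

```typescript
// Split "name[@version]" using the same regex as parseNpmSource above.
// A leading "@" (scoped package) is consumed by the optional "@?" so the
// version capture only fires on a later "@".
function parseSpec(spec: string): { name: string; version?: string } {
  const match = spec.match(/^(@?[^@]+(?:\/[^@]+)?)(?:@(.+))?$/);
  return { name: match?.[1] ?? spec, version: match?.[2] };
}

console.log(parseSpec("pi-calculator"));       // { name: "pi-calculator", version: undefined }
console.log(parseSpec("pi-calculator@1.2.0")); // version "1.2.0" -> treated as pinned
console.log(parseSpec("@scope/tool@latest"));  // scoped name survives the split
```

`pinned: Boolean(version)` then distinguishes pinned specs from floating ones, which is what lets `dedupeNpmSources` upgrade only unpinned packages to `@latest`.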
+async function runPackageManagerInstall(
+  settingsManager: SettingsManager,
+  workingDir: string,
+  agentDir: string,
+  scope: PackageScope,
+  specs: string[],
+): Promise<void> {
+  if (specs.length === 0) {
+    return;
+  }
+
+  const packageManagerCommand = resolvePackageManagerCommand(settingsManager);
+  if (!packageManagerCommand) {
+    throw new Error("No supported package manager found. Install npm, pnpm, or bun, or configure `npmCommand`.");
+  }
+
+  const args = [
+    ...packageManagerCommand.args,
+    "install",
+    "--no-audit",
+    "--no-fund",
+    "--legacy-peer-deps",
+    "--loglevel",
+    "error",
+  ];
+
+  if (scope === "user") {
+    args.push("-g", "--prefix", getFeynmanNpmPrefixPath(agentDir));
+  } else {
+    args.push("--prefix", ensureProjectInstallRoot(workingDir));
+  }
+
+  args.push(...specs);
+
+  await new Promise<void>((resolvePromise, reject) => {
+    const child = spawn(packageManagerCommand.command, args, {
+      cwd: scope === "user" ? agentDir : workingDir,
+      stdio: ["ignore", "pipe", "pipe"],
+      env: {
+        ...process.env,
+        PATH: getPathWithCurrentNode(process.env.PATH),
+      },
+    });
+
+    child.stdout?.on("data", (chunk) => relayFilteredOutput(chunk, process.stdout));
+    child.stderr?.on("data", (chunk) => relayFilteredOutput(chunk, process.stderr));
+
+    child.on("error", reject);
+    child.on("exit", (code) => {
+      if ((code ?? 1) !== 0) {
+        const installingGenerativeUi = specs.some((spec) => spec.startsWith("pi-generative-ui"));
+        if (installingGenerativeUi && process.platform === "darwin") {
+          reject(
+            new Error(
+              "Installing pi-generative-ui failed. Its native glimpseui dependency did not compile against the current macOS/Xcode toolchain. Try the npm-installed Feynman path with your local Node toolchain or skip this optional preset for now.",
+            ),
+          );
+          return;
+        }
+        reject(new Error(`${packageManagerCommand.command} install failed with code ${code ?? 1}`));
+        return;
+      }
+
+      resolvePromise();
+    });
+  });
+}
+
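The argv construction in `runPackageManagerInstall` differs by scope: user-scope installs go through npm's global mode with a Feynman-owned prefix, project-scope installs use a local prefix. A pure-function sketch of that shape (function name and paths are illustrative):

```typescript
// Sketch of the argv the installer builds: global prefix for user scope,
// project-local prefix otherwise.
function buildInstallArgs(scope: "user" | "project", prefix: string, specs: string[]): string[] {
  const args = ["install", "--no-audit", "--no-fund", "--legacy-peer-deps", "--loglevel", "error"];
  if (scope === "user") {
    args.push("-g", "--prefix", prefix);
  } else {
    args.push("--prefix", prefix);
  }
  args.push(...specs);
  return args;
}

console.log(buildInstallArgs("user", "/home/me/.feynman/npm", ["pi-calculator@1.2.0"]).join(" "));
// install --no-audit --no-fund --legacy-peer-deps --loglevel error -g --prefix /home/me/.feynman/npm pi-calculator@1.2.0
```

Keeping the flag list in one place makes the only behavioral difference between the two scopes the `-g` flag and the prefix directory.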
+function groupConfiguredNpmSources(packages: ConfiguredPackage[]): Record<PackageScope, string[]> {
+  return {
+    user: packages.filter((entry) => entry.scope === "user").map((entry) => entry.source),
+    project: packages.filter((entry) => entry.scope === "project").map((entry) => entry.source),
+  };
+}
+
+function isBundledWorkspacePackagePath(installedPath: string | undefined, appRoot: string): boolean {
+  if (!installedPath) {
+    return false;
+  }
+
+  const bundledRoot = resolve(appRoot, ".feynman", "npm", "node_modules");
+  return installedPath.startsWith(bundledRoot);
+}
+
+export function getMissingConfiguredPackages(
+  workingDir: string,
+  agentDir: string,
+  appRoot: string,
+): MissingConfiguredPackageSummary {
+  const { packageManager } = createPackageContext(workingDir, agentDir);
+  const configured = packageManager.listConfiguredPackages();
+
+  return configured.reduce<MissingConfiguredPackageSummary>(
+    (summary, entry) => {
+      if (entry.installedPath) {
+        if (isBundledWorkspacePackagePath(entry.installedPath, appRoot)) {
+          summary.bundled.push(entry);
+        }
+        return summary;
+      }
+
+      summary.missing.push(entry);
+      return summary;
+    },
+    { missing: [], bundled: [] },
+  );
+}
+
+export async function installPackageSources(
+  workingDir: string,
+  agentDir: string,
+  sources: string[],
+  options?: { local?: boolean; persist?: boolean },
+): Promise<InstallPackageSourcesResult> {
+  const { settingsManager, packageManager } = createPackageContext(workingDir, agentDir);
+  const scope: PackageScope = options?.local ? "project" : "user";
+  const installed: string[] = [];
+
+  const bundledSeeded = scope === "user" ? seedBundledWorkspacePackages(agentDir, APP_ROOT, sources) : [];
+  installed.push(...bundledSeeded);
+  const remainingSources = sources.filter((source) => !bundledSeeded.includes(source));
+  const grouped = groupConfiguredNpmSources(
+    remainingSources.map((source) => ({
+      source,
+      scope,
+      filtered: false,
+    })),
+  );
+  const { supported: supportedUserSources, skipped } = filterUnsupportedSources(grouped.user);
+  const { supported: supportedProjectSources, skipped: skippedProject } = filterUnsupportedSources(grouped.project);
+  skipped.push(...skippedProject);
+
+  const supportedNpmSources = scope === "user" ? supportedUserSources : supportedProjectSources;
+  if (supportedNpmSources.length > 0) {
+    await runPackageManagerInstall(settingsManager, workingDir, agentDir, scope, dedupeNpmSources(supportedNpmSources, false));
+    installed.push(...supportedNpmSources);
+  }
+
+  for (const source of sources) {
+    if (parseNpmSource(source)) {
+      continue;
+    }
+
+    await packageManager.install(source, { local: options?.local });
+    installed.push(source);
+  }
+
+  if (options?.persist) {
+    for (const source of installed) {
+      if (packageManager.addSourceToSettings(source, { local: options?.local })) {
+        continue;
+      }
+      skipped.push(source);
+    }
+    await settingsManager.flush();
+  }
+
+  return { installed, skipped };
+}
+
|
export async function updateConfiguredPackages(
|
||||||
|
workingDir: string,
|
||||||
|
agentDir: string,
|
||||||
|
source?: string,
|
||||||
|
): Promise<UpdateConfiguredPackagesResult> {
|
||||||
|
const { settingsManager, packageManager } = createPackageContext(workingDir, agentDir);
|
||||||
|
|
||||||
|
if (source) {
|
||||||
|
await packageManager.update(source);
|
||||||
|
return { updated: [source], skipped: [] };
|
||||||
|
}
|
||||||
|
|
||||||
|
const availableUpdates = await packageManager.checkForAvailableUpdates();
|
||||||
|
if (availableUpdates.length === 0) {
|
||||||
|
return { updated: [], skipped: [] };
|
||||||
|
}
|
||||||
|
|
||||||
|
const npmUpdatesByScope: Record<PackageScope, string[]> = { user: [], project: [] };
|
||||||
|
const gitUpdates: string[] = [];
|
||||||
|
const skipped: string[] = [];
|
||||||
|
|
||||||
|
for (const entry of availableUpdates) {
|
||||||
|
if (entry.type === "npm") {
|
||||||
|
if (shouldSkipNativeSource(entry.source)) {
|
||||||
|
skipped.push(entry.source);
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
npmUpdatesByScope[entry.scope].push(entry.source);
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
gitUpdates.push(entry.source);
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const scope of ["user", "project"] as const) {
|
||||||
|
const sources = npmUpdatesByScope[scope];
|
||||||
|
if (sources.length === 0) continue;
|
||||||
|
|
||||||
|
await runPackageManagerInstall(settingsManager, workingDir, agentDir, scope, dedupeNpmSources(sources, true));
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const gitSource of gitUpdates) {
|
||||||
|
await packageManager.update(gitSource);
|
||||||
|
}
|
||||||
|
|
||||||
|
return {
|
||||||
|
updated: availableUpdates
|
||||||
|
.map((entry) => entry.source)
|
||||||
|
.filter((source) => !skipped.includes(source)),
|
||||||
|
skipped,
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
function ensureParentDir(path: string): void {
|
||||||
|
mkdirSync(dirname(path), { recursive: true });
|
||||||
|
}
|
||||||
|
|
||||||
|
function pathsMatchSymlinkTarget(linkPath: string, targetPath: string): boolean {
|
||||||
|
try {
|
||||||
|
if (!lstatSync(linkPath).isSymbolicLink()) {
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
return resolve(dirname(linkPath), readlinkSync(linkPath)) === targetPath;
|
||||||
|
} catch {
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function linkDirectory(linkPath: string, targetPath: string): void {
|
||||||
|
if (pathsMatchSymlinkTarget(linkPath, targetPath)) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
try {
|
||||||
|
if (existsSync(linkPath) && lstatSync(linkPath).isSymbolicLink()) {
|
||||||
|
rmSync(linkPath, { force: true });
|
||||||
|
}
|
||||||
|
} catch {}
|
||||||
|
|
||||||
|
if (existsSync(linkPath)) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
ensureParentDir(linkPath);
|
||||||
|
try {
|
||||||
|
symlinkSync(targetPath, linkPath, process.platform === "win32" ? "junction" : "dir");
|
||||||
|
} catch {
|
||||||
|
// Fallback for filesystems that do not allow symlinks.
|
||||||
|
if (!existsSync(linkPath)) {
|
||||||
|
cpSync(targetPath, linkPath, { recursive: true });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
export function seedBundledWorkspacePackages(
|
||||||
|
agentDir: string,
|
||||||
|
appRoot: string,
|
||||||
|
sources: string[],
|
||||||
|
): string[] {
|
||||||
|
const bundledNodeModulesRoot = resolve(appRoot, ".feynman", "npm", "node_modules");
|
||||||
|
if (!existsSync(bundledNodeModulesRoot)) {
|
||||||
|
return [];
|
||||||
|
}
|
||||||
|
|
||||||
|
const globalNodeModulesRoot = resolve(getFeynmanNpmPrefixPath(agentDir), "lib", "node_modules");
|
||||||
|
const seeded: string[] = [];
|
||||||
|
|
||||||
|
for (const source of sources) {
|
||||||
|
if (shouldSkipNativeSource(source)) continue;
|
||||||
|
|
||||||
|
const parsed = parseNpmSource(source);
|
||||||
|
if (!parsed) continue;
|
||||||
|
|
||||||
|
const bundledPackagePath = resolve(bundledNodeModulesRoot, parsed.name);
|
||||||
|
if (!existsSync(bundledPackagePath)) continue;
|
||||||
|
|
||||||
|
const targetPath = resolve(globalNodeModulesRoot, parsed.name);
|
||||||
|
if (!existsSync(targetPath)) {
|
||||||
|
linkDirectory(targetPath, bundledPackagePath);
|
||||||
|
seeded.push(source);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return seeded;
|
||||||
|
}
|
||||||
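The reduce at the top of this hunk partitions configured packages into "missing" and "bundled" buckets; a minimal self-contained sketch of that shape (the `Entry`/`Summary` types and the `isBundled` predicate are simplified stand-ins, not the real types from the diff):

```typescript
// Simplified stand-ins for the real entry/summary types.
type Entry = { source: string; installedPath?: string };
type Summary = { missing: Entry[]; bundled: Entry[] };

// Entries with no installedPath are missing; installed entries are kept only
// when their path resolves into the bundled workspace. Entries installed
// elsewhere need no action, so they land in neither bucket.
function partitionConfigured(configured: Entry[], isBundled: (path: string) => boolean): Summary {
  return configured.reduce<Summary>(
    (summary, entry) => {
      if (entry.installedPath) {
        if (isBundled(entry.installedPath)) {
          summary.bundled.push(entry);
        }
        return summary;
      }
      summary.missing.push(entry);
      return summary;
    },
    { missing: [], bundled: [] },
  );
}
```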
@@ -17,6 +17,13 @@ export const CORE_PACKAGE_SOURCES = [
   "npm:@tmustier/pi-ralph-wiggum",
 ] as const;
+
+export const NATIVE_PACKAGE_SOURCES = [
+  "npm:@kaiserlich-dev/pi-session-search",
+  "npm:@samfp/pi-memory",
+] as const;
+
+export const MAX_NATIVE_PACKAGE_NODE_MAJOR = 24;
 
 export const OPTIONAL_PACKAGE_PRESETS = {
   "generative-ui": {
     description: "Interactive Glimpse UI widgets.",
@@ -50,6 +57,24 @@ export function shouldPruneLegacyDefaultPackages(packages: PackageSource[] | und
   return arraysMatchAsSets(packages as string[], LEGACY_DEFAULT_PACKAGE_SOURCES);
 }
+
+function parseNodeMajor(version: string): number {
+  const [major = "0"] = version.replace(/^v/, "").split(".");
+  return Number.parseInt(major, 10) || 0;
+}
+
+export function supportsNativePackageSources(version = process.versions.node): boolean {
+  return parseNodeMajor(version) <= MAX_NATIVE_PACKAGE_NODE_MAJOR;
+}
+
+export function filterPackageSourcesForCurrentNode<T extends string>(sources: readonly T[], version = process.versions.node): T[] {
+  if (supportsNativePackageSources(version)) {
+    return [...sources];
+  }
+
+  const blocked = new Set<string>(NATIVE_PACKAGE_SOURCES);
+  return sources.filter((source) => !blocked.has(source));
+}
 
 export function getOptionalPackagePresetSources(name: string): string[] | undefined {
   const normalized = name.trim().toLowerCase();
   if (normalized === "ui") {
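The Node-version gate added in the hunk above can be exercised standalone. This sketch copies the parsing and filtering logic from the diff, with the constants inlined and the `process.versions.node` defaults replaced by an explicit parameter so it is testable in isolation:

```typescript
// Constants inlined from the hunk above.
const MAX_NATIVE_PACKAGE_NODE_MAJOR = 24;
const NATIVE_PACKAGE_SOURCES = [
  "npm:@kaiserlich-dev/pi-session-search",
  "npm:@samfp/pi-memory",
];

// "v25.3.0" -> 25; malformed input falls back to 0, i.e. treated as supported.
function parseNodeMajor(version: string): number {
  const [major = "0"] = version.replace(/^v/, "").split(".");
  return Number.parseInt(major, 10) || 0;
}

function supportsNativePackageSources(version: string): boolean {
  return parseNodeMajor(version) <= MAX_NATIVE_PACKAGE_NODE_MAJOR;
}

// On newer Node majors the native packages are dropped; everything else passes.
function filterPackageSourcesForCurrentNode(sources: readonly string[], version: string): string[] {
  if (supportsNativePackageSources(version)) {
    return [...sources];
  }
  const blocked = new Set<string>(NATIVE_PACKAGE_SOURCES);
  return sources.filter((source) => !blocked.has(source));
}
```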
@@ -3,7 +3,7 @@ import { dirname } from "node:path";
 
 import { ModelRegistry, type PackageSource } from "@mariozechner/pi-coding-agent";
 
-import { CORE_PACKAGE_SOURCES, shouldPruneLegacyDefaultPackages } from "./package-presets.js";
+import { CORE_PACKAGE_SOURCES, filterPackageSourcesForCurrentNode, shouldPruneLegacyDefaultPackages } from "./package-presets.js";
 import { createModelRegistry } from "../model/registry.js";
 
 export type ThinkingLevel = "off" | "minimal" | "low" | "medium" | "high" | "xhigh";
@@ -67,6 +67,23 @@ function choosePreferredModel(
   return availableModels[0];
 }
+
+function filterConfiguredPackagesForCurrentNode(packages: PackageSource[] | undefined): PackageSource[] {
+  if (!Array.isArray(packages)) {
+    return [];
+  }
+
+  const filteredStringSources = new Set(filterPackageSourcesForCurrentNode(
+    packages
+      .map((entry) => (typeof entry === "string" ? entry : entry.source))
+      .filter((entry): entry is string => typeof entry === "string"),
+  ));
+
+  return packages.filter((entry) => {
+    const source = typeof entry === "string" ? entry : entry.source;
+    return filteredStringSources.has(source);
+  });
+}
 
 export function readJson(path: string): Record<string, unknown> {
   if (!existsSync(path)) {
     return {};
@@ -110,10 +127,13 @@ export function normalizeFeynmanSettings(
   settings.theme = "feynman";
   settings.quietStartup = true;
   settings.collapseChangelog = true;
+  const supportedCorePackages = filterPackageSourcesForCurrentNode(CORE_PACKAGE_SOURCES);
   if (!Array.isArray(settings.packages) || settings.packages.length === 0) {
-    settings.packages = [...CORE_PACKAGE_SOURCES];
+    settings.packages = supportedCorePackages;
   } else if (shouldPruneLegacyDefaultPackages(settings.packages as PackageSource[])) {
-    settings.packages = [...CORE_PACKAGE_SOURCES];
+    settings.packages = supportedCorePackages;
+  } else {
+    settings.packages = filterConfiguredPackagesForCurrentNode(settings.packages as PackageSource[]);
   }
 
   const modelRegistry = createModelRegistry(authPath);
@@ -3,11 +3,13 @@ import { dirname, resolve } from "node:path";
 import { getFeynmanHome } from "../config/paths.js";
 
 export type PiWebSearchProvider = "auto" | "perplexity" | "exa" | "gemini";
+export type PiWebSearchWorkflow = "none" | "summary-review";
 
 export type PiWebAccessConfig = Record<string, unknown> & {
   route?: PiWebSearchProvider;
   provider?: PiWebSearchProvider;
   searchProvider?: PiWebSearchProvider;
+  workflow?: PiWebSearchWorkflow;
   perplexityApiKey?: string;
   exaApiKey?: string;
   geminiApiKey?: string;
@@ -18,6 +20,7 @@ export type PiWebAccessStatus = {
   configPath: string;
   searchProvider: PiWebSearchProvider;
   requestProvider: PiWebSearchProvider;
+  workflow: PiWebSearchWorkflow;
   perplexityConfigured: boolean;
   exaConfigured: boolean;
   geminiApiConfigured: boolean;
@@ -35,6 +38,10 @@ function normalizeProvider(value: unknown): PiWebSearchProvider | undefined {
   return value === "auto" || value === "perplexity" || value === "exa" || value === "gemini" ? value : undefined;
 }
+
+function normalizeWorkflow(value: unknown): PiWebSearchWorkflow | undefined {
+  return value === "none" || value === "summary-review" ? value : undefined;
+}
 
 function normalizeNonEmptyString(value: unknown): string | undefined {
   return typeof value === "string" && value.trim().length > 0 ? value.trim() : undefined;
 }
@@ -102,6 +109,7 @@ export function getPiWebAccessStatus(
   const searchProvider =
     normalizeProvider(config.searchProvider) ?? normalizeProvider(config.route) ?? normalizeProvider(config.provider) ?? "auto";
   const requestProvider = normalizeProvider(config.provider) ?? normalizeProvider(config.route) ?? searchProvider;
+  const workflow = normalizeWorkflow(config.workflow) ?? "none";
   const perplexityConfigured = Boolean(normalizeNonEmptyString(config.perplexityApiKey));
   const exaConfigured = Boolean(normalizeNonEmptyString(config.exaApiKey));
   const geminiApiConfigured = Boolean(normalizeNonEmptyString(config.geminiApiKey));
@@ -112,6 +120,7 @@ export function getPiWebAccessStatus(
     configPath,
     searchProvider,
     requestProvider,
+    workflow,
     perplexityConfigured,
     exaConfigured,
     geminiApiConfigured,
@@ -128,6 +137,7 @@ export function formatPiWebAccessDoctorLines(
     "web access: pi-web-access",
     ` search route: ${status.routeLabel}`,
     ` request route: ${status.requestProvider}`,
+    ` search workflow: ${status.workflow}`,
     ` perplexity api: ${status.perplexityConfigured ? "configured" : "not configured"}`,
     ` exa api: ${status.exaConfigured ? "configured" : "not configured"}`,
     ` gemini api: ${status.geminiApiConfigured ? "configured" : "not configured"}`,
@@ -18,6 +18,7 @@ export function printSearchStatus(): void {
   printInfo("Managed by: pi-web-access");
   printInfo(`Search route: ${status.routeLabel}`);
   printInfo(`Request route: ${status.requestProvider}`);
+  printInfo(`Search workflow: ${status.workflow}`);
   printInfo(`Perplexity API configured: ${status.perplexityConfigured ? "yes" : "no"}`);
   printInfo(`Exa API configured: ${status.exaConfigured ? "yes" : "no"}`);
   printInfo(`Gemini API configured: ${status.geminiApiConfigured ? "yes" : "no"}`);
@@ -36,6 +37,7 @@ export function setSearchProvider(provider: PiWebSearchProvider, apiKey?: string
   const updates: Partial<Record<keyof PiWebAccessConfig, unknown>> = {
     provider,
     searchProvider: provider,
+    workflow: "none",
     route: undefined,
   };
   const apiKeyField = PROVIDER_API_KEY_FIELDS[provider];
@@ -50,7 +52,7 @@ export function setSearchProvider(provider: PiWebSearchProvider, apiKey?: string
 }
 
 export function clearSearchConfig(): void {
-  savePiWebAccessConfig({ provider: undefined, searchProvider: undefined, route: undefined });
+  savePiWebAccessConfig({ provider: undefined, searchProvider: undefined, route: undefined, workflow: "none" });
 
   const status = getPiWebAccessStatus();
   console.log(`Web search provider reset to ${status.routeLabel}.`);
@@ -1,30 +1,130 @@
-import { stdin as input, stdout as output } from "node:process";
-import { createInterface } from "node:readline/promises";
+import {
+  confirm as clackConfirm,
+  intro as clackIntro,
+  isCancel,
+  multiselect as clackMultiselect,
+  outro as clackOutro,
+  select as clackSelect,
+  text as clackText,
+  type Option,
+} from "@clack/prompts";
 
-export async function promptText(question: string, defaultValue = ""): Promise<string> {
-  if (!input.isTTY || !output.isTTY) {
+export class SetupCancelledError extends Error {
+  constructor(message = "setup cancelled") {
+    super(message);
+    this.name = "SetupCancelledError";
+  }
+}
+
+export type PromptSelectOption<T = string> = {
+  value: T;
+  label: string;
+  hint?: string;
+};
+
+function ensureInteractiveTerminal(): void {
+  if (!process.stdin.isTTY || !process.stdout.isTTY) {
     throw new Error("feynman setup requires an interactive terminal.");
   }
-  const rl = createInterface({ input, output });
-  try {
-    const suffix = defaultValue ? ` [${defaultValue}]` : "";
-    const value = (await rl.question(`${question}${suffix}: `)).trim();
-    return value || defaultValue;
-  } finally {
-    rl.close();
-  }
+}
+
+function guardCancelled<T>(value: T | symbol): T {
+  if (isCancel(value)) {
+    throw new SetupCancelledError();
+  }
+
+  return value;
+}
+
+export function isInteractiveTerminal(): boolean {
+  return Boolean(process.stdin.isTTY && process.stdout.isTTY);
+}
+
+export async function promptIntro(title: string): Promise<void> {
+  ensureInteractiveTerminal();
+  clackIntro(title);
+}
+
+export async function promptOutro(message: string): Promise<void> {
+  ensureInteractiveTerminal();
+  clackOutro(message);
+}
+
+export async function promptText(question: string, defaultValue = "", placeholder?: string): Promise<string> {
+  ensureInteractiveTerminal();
+
+  const value = guardCancelled(
+    await clackText({
+      message: question,
+      initialValue: defaultValue || undefined,
+      placeholder: placeholder ?? (defaultValue || undefined),
+    }),
+  );
+
+  const normalized = String(value ?? "").trim();
+  return normalized || defaultValue;
+}
+
+export async function promptSelect<T>(
+  question: string,
+  options: PromptSelectOption<T>[],
+  initialValue?: T,
+): Promise<T> {
+  ensureInteractiveTerminal();
+
+  const selection = guardCancelled(
+    await clackSelect({
+      message: question,
+      options: options.map((option) => ({
+        value: option.value,
+        label: option.label,
+        hint: option.hint,
+      })) as Option<T>[],
+      initialValue,
+    }),
+  );
+
+  return selection;
 }
 
 export async function promptChoice(question: string, choices: string[], defaultIndex = 0): Promise<number> {
-  console.log(question);
-  for (const [index, choice] of choices.entries()) {
-    const marker = index === defaultIndex ? "*" : " ";
-    console.log(`  ${marker} ${index + 1}. ${choice}`);
-  }
-  const answer = await promptText("Select", String(defaultIndex + 1));
-  const parsed = Number(answer);
-  if (!Number.isFinite(parsed) || parsed < 1 || parsed > choices.length) {
-    return defaultIndex;
-  }
-  return parsed - 1;
+  const options = choices.map((choice, index) => ({
+    value: index,
+    label: choice,
+  }));
+  return promptSelect(question, options, Math.max(0, Math.min(defaultIndex, choices.length - 1)));
+}
+
+export async function promptConfirm(question: string, initialValue = true): Promise<boolean> {
+  ensureInteractiveTerminal();
+
+  return guardCancelled(
+    await clackConfirm({
+      message: question,
+      initialValue,
+    }),
+  );
+}
+
+export async function promptMultiSelect<T>(
+  question: string,
+  options: PromptSelectOption<T>[],
+  initialValues: T[] = [],
+): Promise<T[]> {
+  ensureInteractiveTerminal();
+
+  const selection = guardCancelled(
+    await clackMultiselect({
+      message: question,
+      options: options.map((option) => ({
+        value: option.value,
+        label: option.label,
+        hint: option.hint,
+      })) as Option<T>[],
+      initialValues,
+      required: false,
+    }),
+  );
+
+  return selection;
 }
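The prompts rewrite above routes every clack result through a cancel guard so that Ctrl-C anywhere in setup surfaces as one typed error. A minimal sketch of that pattern, using a local sentinel symbol as a stand-in for @clack/prompts' cancel value (assumption: the library marks cancellation with a unique value that `isCancel` detects, which is what this mimics):

```typescript
// Local stand-in for the library's cancel sentinel and isCancel() check.
const CANCEL: unique symbol = Symbol("clack:cancel");

function isCancel(value: unknown): value is typeof CANCEL {
  return value === CANCEL;
}

class SetupCancelledError extends Error {
  constructor(message = "setup cancelled") {
    super(message);
    this.name = "SetupCancelledError";
  }
}

// Narrow a prompt result: pass real values through, convert cancellation into
// a typed error that a single catch block at the setup entry point can report.
function guardCancelled<T>(value: T | typeof CANCEL): T {
  if (isCancel(value)) {
    throw new SetupCancelledError();
  }
  return value;
}
```

The payoff is in the caller: every prompt site stays a one-liner, and cancellation handling lives in exactly one `catch`.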
@@ -1,15 +1,24 @@
 import { isLoggedIn as isAlphaLoggedIn, login as loginAlpha } from "@companion-ai/alpha-hub/lib";
+import { dirname } from "node:path";
+
-import { getDefaultSessionDir, getFeynmanHome } from "../config/paths.js";
+import { getPiWebAccessStatus } from "../pi/web-access.js";
-import { getPiWebAccessStatus, getPiWebSearchConfigPath } from "../pi/web-access.js";
 import { normalizeFeynmanSettings } from "../pi/settings.js";
 import type { ThinkingLevel } from "../pi/settings.js";
+import { getMissingConfiguredPackages, installPackageSources } from "../pi/package-ops.js";
+import { listOptionalPackagePresets } from "../pi/package-presets.js";
 import { getCurrentModelSpec, runModelSetup } from "../model/commands.js";
 import { buildModelStatusSnapshotFromRecords, getAvailableModelRecords, getSupportedModelRecords } from "../model/catalog.js";
 import { PANDOC_FALLBACK_PATHS, resolveExecutable } from "../system/executables.js";
 import { setupPreviewDependencies } from "./preview.js";
-import { runDoctor } from "./doctor.js";
 import { printInfo, printSection, printSuccess } from "../ui/terminal.js";
+import {
+  isInteractiveTerminal,
+  promptConfirm,
+  promptIntro,
+  promptMultiSelect,
+  promptOutro,
+  SetupCancelledError,
+} from "./prompts.js";
 
 type SetupOptions = {
   settingsPath: string;
@@ -21,10 +30,6 @@ type SetupOptions = {
   defaultThinkingLevel?: ThinkingLevel;
 };
 
-function isInteractiveTerminal(): boolean {
-  return Boolean(process.stdin.isTTY && process.stdout.isTTY);
-}
-
 function printNonInteractiveSetupGuidance(): void {
   printInfo("Non-interactive terminal. Use explicit commands:");
   printInfo("  feynman model login <provider>");
@@ -34,37 +39,181 @@ function printNonInteractiveSetupGuidance(): void {
   printInfo("  feynman doctor");
 }
 
+function summarizePackageSources(sources: string[]): string {
+  if (sources.length <= 3) {
+    return sources.join(", ");
+  }
+
+  return `${sources.slice(0, 3).join(", ")} +${sources.length - 3} more`;
+}
+
+async function maybeInstallBundledPackages(options: SetupOptions): Promise<void> {
+  const agentDir = dirname(options.authPath);
+  const { missing, bundled } = getMissingConfiguredPackages(options.workingDir, agentDir, options.appRoot);
+  const userMissing = missing.filter((entry) => entry.scope === "user").map((entry) => entry.source);
+  const projectMissing = missing.filter((entry) => entry.scope === "project").map((entry) => entry.source);
+
+  printSection("Packages");
+  if (bundled.length > 0) {
+    printInfo(`Bundled research packages ready: ${summarizePackageSources(bundled.map((entry) => entry.source))}`);
+  }
+
+  if (missing.length === 0) {
+    printInfo("No additional package install required.");
+    return;
+  }
+
+  printInfo(`Missing packages: ${summarizePackageSources(missing.map((entry) => entry.source))}`);
+  const shouldInstall = await promptConfirm("Install missing Feynman packages now?", true);
+  if (!shouldInstall) {
+    printInfo("Skipping package install. Feynman may install missing packages later if needed.");
+    return;
+  }
+
+  if (userMissing.length > 0) {
+    try {
+      await installPackageSources(options.workingDir, agentDir, userMissing);
+      printSuccess(`Installed bundled packages: ${summarizePackageSources(userMissing)}`);
+    } catch (error) {
+      const message = error instanceof Error ? error.message : String(error);
+      printInfo(message.includes("No supported package manager found")
+        ? "No package manager available for additional installs. The standalone bundle can still run with its shipped packages."
+        : `Package install skipped: ${message}`);
+    }
+  }
+
+  if (projectMissing.length > 0) {
+    try {
+      await installPackageSources(options.workingDir, agentDir, projectMissing, { local: true });
+      printSuccess(`Installed project packages: ${summarizePackageSources(projectMissing)}`);
+    } catch (error) {
+      const message = error instanceof Error ? error.message : String(error);
+      printInfo(`Project package install skipped: ${message}`);
+    }
+  }
+}
+
+async function maybeInstallOptionalPackages(options: SetupOptions): Promise<void> {
+  const agentDir = dirname(options.authPath);
+  const presets = listOptionalPackagePresets();
+  if (presets.length === 0) {
+    return;
+  }
+
+  const selectedPresets = await promptMultiSelect(
+    "Optional packages",
+    presets.map((preset) => ({
+      value: preset.name,
+      label: preset.name,
+      hint: preset.description,
+    })),
+    [],
+  );
+
+  if (selectedPresets.length === 0) {
+    printInfo("No optional packages selected.");
+    return;
+  }
+
+  for (const presetName of selectedPresets) {
+    const preset = presets.find((entry) => entry.name === presetName);
+    if (!preset) continue;
+    try {
+      await installPackageSources(options.workingDir, agentDir, preset.sources, {
+        persist: true,
+      });
+      printSuccess(`Installed optional preset: ${preset.name}`);
+    } catch (error) {
+      const message = error instanceof Error ? error.message : String(error);
+      printInfo(message.includes("No supported package manager found")
+        ? `Skipped optional preset ${preset.name}: no package manager available.`
+        : `Skipped optional preset ${preset.name}: ${message}`);
+    }
+  }
+}
+
+async function maybeLoginAlpha(): Promise<void> {
+  if (isAlphaLoggedIn()) {
+    printInfo("alphaXiv already configured.");
+    return;
+  }
+
+  const shouldLogin = await promptConfirm("Connect alphaXiv now?", true);
+  if (!shouldLogin) {
+    printInfo("Skipping alphaXiv login for now.");
+    return;
+  }
+
+  try {
+    await loginAlpha();
+    printSuccess("alphaXiv login complete");
+  } catch (error) {
+    printInfo(`alphaXiv login skipped: ${error instanceof Error ? error.message : String(error)}`);
+  }
+}
+
+async function maybeInstallPreviewDependencies(): Promise<void> {
+  if (resolveExecutable("pandoc", PANDOC_FALLBACK_PATHS)) {
+    printInfo("Preview support already configured.");
+    return;
+  }
+
+  const shouldInstall = await promptConfirm("Install pandoc for preview/export support?", false);
+  if (!shouldInstall) {
+    printInfo("Skipping preview dependency install.");
+    return;
+  }
+
+  try {
+    const result = setupPreviewDependencies();
+    printSuccess(result.message);
+  } catch (error) {
+    printInfo(`Preview setup skipped: ${error instanceof Error ? error.message : String(error)}`);
+  }
+}
+
 export async function runSetup(options: SetupOptions): Promise<void> {
   if (!isInteractiveTerminal()) {
     printNonInteractiveSetupGuidance();
     return;
   }
 
-  await runModelSetup(options.settingsPath, options.authPath);
+  try {
+    await promptIntro("Feynman setup");
+    await runModelSetup(options.settingsPath, options.authPath);
+    await maybeInstallBundledPackages(options);
+    await maybeInstallOptionalPackages(options);
+    await maybeLoginAlpha();
+    await maybeInstallPreviewDependencies();
+
-  if (!isAlphaLoggedIn()) {
-    await loginAlpha();
-    printSuccess("alphaXiv login complete");
+    normalizeFeynmanSettings(
+      options.settingsPath,
+      options.bundledSettingsPath,
+      options.defaultThinkingLevel ?? "medium",
+      options.authPath,
+    );
+
+    const modelStatus = buildModelStatusSnapshotFromRecords(
+      getSupportedModelRecords(options.authPath),
+      getAvailableModelRecords(options.authPath),
+      getCurrentModelSpec(options.settingsPath),
+    );
+    printSection("Ready");
+    printInfo(`Model: ${getCurrentModelSpec(options.settingsPath) ?? "not set"}`);
+    printInfo(`alphaXiv: ${isAlphaLoggedIn() ? "configured" : "not configured"}`);
+    printInfo(`Preview: ${resolveExecutable("pandoc", PANDOC_FALLBACK_PATHS) ? "configured" : "not configured"}`);
+    printInfo(`Web: ${getPiWebAccessStatus().routeLabel}`);
+    if (modelStatus.recommended && !modelStatus.currentValid) {
+      printInfo(`Recommended model: ${modelStatus.recommended}`);
+    }
+
+    await promptOutro("Feynman is ready");
+  } catch (error) {
+    if (error instanceof SetupCancelledError) {
+      printInfo("Setup cancelled.");
+      return;
+    }
+
+    throw error;
   }
-
-  const result = setupPreviewDependencies();
-  printSuccess(result.message);
-
-  normalizeFeynmanSettings(
options.settingsPath,
|
|
||||||
options.bundledSettingsPath,
|
|
||||||
options.defaultThinkingLevel ?? "medium",
|
|
||||||
options.authPath,
|
|
||||||
);
|
|
||||||
|
|
||||||
const modelStatus = buildModelStatusSnapshotFromRecords(
|
|
||||||
getSupportedModelRecords(options.authPath),
|
|
||||||
getAvailableModelRecords(options.authPath),
|
|
||||||
getCurrentModelSpec(options.settingsPath),
|
|
||||||
);
|
|
||||||
printSection("Ready");
|
|
||||||
printInfo(`Model: ${getCurrentModelSpec(options.settingsPath) ?? "not set"}`);
|
|
||||||
printInfo(`alphaXiv: ${isAlphaLoggedIn() ? "configured" : "not configured"}`);
|
|
||||||
printInfo(`Preview: ${resolveExecutable("pandoc", PANDOC_FALLBACK_PATHS) ? "configured" : "not configured"}`);
|
|
||||||
printInfo(`Web: ${getPiWebAccessStatus().routeLabel}`);
|
|
||||||
}
|
}
|
||||||
|
@@ -1,5 +1,6 @@
 import { spawnSync } from "node:child_process";
 import { existsSync } from "node:fs";
+import { dirname, delimiter } from "node:path";
 
 const isWindows = process.platform === "win32";
 const programFiles = process.env.PROGRAMFILES ?? "C:\\Program Files";
@@ -40,14 +41,20 @@ export function resolveExecutable(name: string, fallbackPaths: string[] = []): s
   }
 
   const isWindows = process.platform === "win32";
+  const env = {
+    ...process.env,
+    PATH: process.env.PATH ?? "",
+  };
   const result = isWindows
     ? spawnSync("cmd", ["/c", `where ${name}`], {
         encoding: "utf8",
         stdio: ["ignore", "pipe", "ignore"],
+        env,
       })
-    : spawnSync("sh", ["-lc", `command -v ${name}`], {
+    : spawnSync("sh", ["-c", `command -v ${name}`], {
         encoding: "utf8",
         stdio: ["ignore", "pipe", "ignore"],
+        env,
       });
 
   if (result.status === 0) {
@@ -59,3 +66,9 @@ export function resolveExecutable(name: string, fallbackPaths: string[] = []): s
 
   return undefined;
 }
+
+export function getPathWithCurrentNode(pathValue = process.env.PATH ?? ""): string {
+  const nodeDir = dirname(process.execPath);
+  const parts = pathValue.split(delimiter).filter(Boolean);
+  return parts.includes(nodeDir) ? pathValue : `${nodeDir}${delimiter}${pathValue}`;
+}
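The `getPathWithCurrentNode` helper added in the hunk above prepends the running Node binary's directory to `PATH` only when it is not already present. A minimal, self-contained sketch of that prepend-if-missing logic, parameterized on the directory so it is deterministic (the `pathWithDir` name is illustrative, not part of the diff):

```typescript
import { delimiter } from "node:path";

// Prepend-if-missing: mirrors the getPathWithCurrentNode hunk above.
// `dir` stands in for dirname(process.execPath) so the example is deterministic.
function pathWithDir(pathValue: string, dir: string): string {
  const parts = pathValue.split(delimiter).filter(Boolean);
  return parts.includes(dir) ? pathValue : `${dir}${delimiter}${pathValue}`;
}

// Idempotent: applying it twice never duplicates the entry.
const once = pathWithDir(["/usr/bin", "/bin"].join(delimiter), "/opt/node/bin");
const twice = pathWithDir(once, "/opt/node/bin");
console.log(once === twice, once.startsWith("/opt/node/bin")); // → true true
```

The `filter(Boolean)` step drops empty segments, so a `PATH` with a trailing delimiter still compares correctly against the Node directory.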
@@ -1,4 +1,6 @@
 export const MIN_NODE_VERSION = "20.19.0";
+export const MAX_NODE_MAJOR = 24;
+export const PREFERRED_NODE_MAJOR = 22;
 
 type ParsedNodeVersion = {
   major: number;
@@ -22,16 +24,21 @@ function compareNodeVersions(left: ParsedNodeVersion, right: ParsedNodeVersion):
 }
 
 export function isSupportedNodeVersion(version = process.versions.node): boolean {
-  return compareNodeVersions(parseNodeVersion(version), parseNodeVersion(MIN_NODE_VERSION)) >= 0;
+  const parsed = parseNodeVersion(version);
+  return compareNodeVersions(parsed, parseNodeVersion(MIN_NODE_VERSION)) >= 0 && parsed.major <= MAX_NODE_MAJOR;
 }
 
 export function getUnsupportedNodeVersionLines(version = process.versions.node): string[] {
   const isWindows = process.platform === "win32";
+  const parsed = parseNodeVersion(version);
+  const rangeText = `Node.js ${MIN_NODE_VERSION} through ${MAX_NODE_MAJOR}.x`;
   return [
-    `feynman requires Node.js ${MIN_NODE_VERSION} or later (detected ${version}).`,
-    isWindows
-      ? "Install a newer Node.js from https://nodejs.org, or use the standalone installer:"
-      : "Switch to Node 20 with `nvm install 20 && nvm use 20`, or use the standalone installer:",
+    `feynman supports ${rangeText} (detected ${version}).`,
+    parsed.major > MAX_NODE_MAJOR
+      ? "This newer Node release is not supported yet because native Pi packages may fail to build."
+      : isWindows
+        ? "Install a supported Node.js release from https://nodejs.org, or use the standalone installer:"
+        : `Switch to a supported Node release with \`nvm install ${PREFERRED_NODE_MAJOR} && nvm use ${PREFERRED_NODE_MAJOR}\`, or use the standalone installer:`,
     isWindows
       ? "irm https://feynman.is/install.ps1 | iex"
       : "curl -fsSL https://feynman.is/install | bash",
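The hunk above turns a floor-only check (`>= MIN_NODE_VERSION`) into a floor-plus-ceiling range (`>= 20.19.0` and `major <= 24`). A minimal standalone sketch of that range check, assuming versions of the form `major.minor.patch` (the `supported` helper is illustrative, not the repo's `isSupportedNodeVersion`):

```typescript
// Floor-plus-ceiling version gate, as in the isSupportedNodeVersion hunk above.
const MIN = [20, 19, 0];
const MAX_MAJOR = 24;

function supported(version: string): boolean {
  const [major = 0, minor = 0, patch = 0] = version.split(".").map(Number);
  // Lexicographic compare against the minimum floor.
  const atLeastMin =
    major > MIN[0] ||
    (major === MIN[0] && (minor > MIN[1] || (minor === MIN[1] && patch >= MIN[2])));
  // Ceiling only constrains the major version.
  return atLeastMin && major <= MAX_MAJOR;
}

console.log(supported("20.19.0"), supported("24.9.9"), supported("25.0.0"), supported("20.18.1"));
// → true true false false
```

Note that the ceiling is major-only: every `24.x.y` passes, while `25.0.0` fails regardless of minor and patch.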
tests/alpha-hub-auth-patch.test.ts (new file, 51 lines)
@@ -0,0 +1,51 @@
+import test from "node:test";
+import assert from "node:assert/strict";
+
+import { patchAlphaHubAuthSource } from "../scripts/lib/alpha-hub-auth-patch.mjs";
+
+test("patchAlphaHubAuthSource fixes browser open logic for WSL and Windows", () => {
+  const input = [
+    "function openBrowser(url) {",
+    "  try {",
+    "    const plat = platform();",
+    "    if (plat === 'darwin') execSync(`open \"${url}\"`);",
+    "    else if (plat === 'linux') execSync(`xdg-open \"${url}\"`);",
+    "    else if (plat === 'win32') execSync(`start \"\" \"${url}\"`);",
+    "  } catch {}",
+    "}",
+  ].join("\n");
+
+  const patched = patchAlphaHubAuthSource(input);
+
+  assert.match(patched, /const isWsl = plat === 'linux'/);
+  assert.match(patched, /wslview/);
+  assert.match(patched, /cmd\.exe \/c start/);
+  assert.match(patched, /cmd \/c start/);
+});
+
+test("patchAlphaHubAuthSource includes the auth URL in login output", () => {
+  const input = "process.stderr.write('Opening browser for alphaXiv login...\\n');";
+
+  const patched = patchAlphaHubAuthSource(input);
+
+  assert.match(patched, /Auth URL: \$\{authUrl\.toString\(\)\}/);
+});
+
+test("patchAlphaHubAuthSource is idempotent", () => {
+  const input = [
+    "function openBrowser(url) {",
+    "  try {",
+    "    const plat = platform();",
+    "    if (plat === 'darwin') execSync(`open \"${url}\"`);",
+    "    else if (plat === 'linux') execSync(`xdg-open \"${url}\"`);",
+    "    else if (plat === 'win32') execSync(`start \"\" \"${url}\"`);",
+    "  } catch {}",
+    "}",
+    "process.stderr.write('Opening browser for alphaXiv login...\\n');",
+  ].join("\n");
+
+  const once = patchAlphaHubAuthSource(input);
+  const twice = patchAlphaHubAuthSource(once);
+
+  assert.equal(twice, once);
+});
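The idempotency test above relies on a common source-patching pattern: guard every rewrite with a "has this already been applied?" check so that `patch(patch(s)) === patch(s)`. A minimal sketch of that pattern, assuming a hypothetical `patchSource` and marker string (not the real `patchAlphaHubAuthSource` implementation):

```typescript
// Idempotent string patching: bail out if the rewrite is already present.
// MARKER and the inserted line are illustrative stand-ins for the real patch.
const MARKER = "const isWsl =";

function patchSource(source: string): string {
  if (source.includes(MARKER)) return source; // already patched, no-op
  return source.replace(
    "const plat = platform();",
    "const plat = platform();\n  const isWsl = plat === 'linux' && !!process.env.WSL_DISTRO_NAME;",
  );
}

const input = "const plat = platform();";
const once = patchSource(input);
console.log(once === patchSource(once)); // → true
```

The guard is what makes the patcher safe to re-run on an already-patched install, which is exactly what the `is idempotent` test pins down.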
@@ -30,3 +30,24 @@ test("bundled prompts and skills do not contain blocked promotional product cont
     }
   }
 });
+
+test("research writing prompts forbid fabricated results and unproven figures", () => {
+  const draftPrompt = readFileSync(join(repoRoot, "prompts", "draft.md"), "utf8");
+  const systemPrompt = readFileSync(join(repoRoot, ".feynman", "SYSTEM.md"), "utf8");
+  const writerPrompt = readFileSync(join(repoRoot, ".feynman", "agents", "writer.md"), "utf8");
+  const verifierPrompt = readFileSync(join(repoRoot, ".feynman", "agents", "verifier.md"), "utf8");
+
+  for (const [label, content] of [
+    ["system prompt", systemPrompt],
+    ["writer prompt", writerPrompt],
+    ["verifier prompt", verifierPrompt],
+  ] as const) {
+    assert.match(content, /Never (invent|fabricate)/i, `${label} must explicitly forbid invented or fabricated results`);
+    assert.match(content, /(figure|chart|image|table)/i, `${label} must cover visual/table provenance`);
+    assert.match(content, /(provenance|source|artifact|script|raw)/i, `${label} must require traceable support`);
+  }
+
+  assert.match(draftPrompt, /system prompt's provenance rules/i);
+  assert.match(draftPrompt, /placeholder or proposed experimental plan/i);
+  assert.match(draftPrompt, /source-backed quantitative data/i);
+});
@@ -4,9 +4,9 @@ import { mkdtempSync, readFileSync, writeFileSync } from "node:fs";
 import { tmpdir } from "node:os";
 import { join } from "node:path";
 
-import { resolveInitialPrompt } from "../src/cli.js";
+import { resolveInitialPrompt, shouldRunInteractiveSetup } from "../src/cli.js";
 import { buildModelStatusSnapshotFromRecords, chooseRecommendedModel } from "../src/model/catalog.js";
-import { setDefaultModelSpec } from "../src/model/commands.js";
+import { resolveModelProviderForCommand, setDefaultModelSpec } from "../src/model/commands.js";
 
 function createAuthPath(contents: Record<string, unknown>): string {
   const root = mkdtempSync(join(tmpdir(), "feynman-auth-"));
@@ -42,6 +42,56 @@ test("setDefaultModelSpec accepts a unique bare model id from authenticated mode
   assert.equal(settings.defaultModel, "gpt-5.4");
 });
+
+test("setDefaultModelSpec accepts provider:model syntax for authenticated models", () => {
+  const authPath = createAuthPath({
+    google: { type: "api_key", key: "google-test-key" },
+  });
+  const settingsPath = join(mkdtempSync(join(tmpdir(), "feynman-settings-")), "settings.json");
+
+  setDefaultModelSpec(settingsPath, authPath, "google:gemini-3-pro-preview");
+
+  const settings = JSON.parse(readFileSync(settingsPath, "utf8")) as {
+    defaultProvider?: string;
+    defaultModel?: string;
+  };
+  assert.equal(settings.defaultProvider, "google");
+  assert.equal(settings.defaultModel, "gemini-3-pro-preview");
+});
+
+test("resolveModelProviderForCommand falls back to API-key providers when OAuth is unavailable", () => {
+  const authPath = createAuthPath({});
+
+  const resolved = resolveModelProviderForCommand(authPath, "google");
+
+  assert.equal(resolved?.kind, "api-key");
+  assert.equal(resolved?.id, "google");
+});
+
+test("resolveModelProviderForCommand prefers OAuth when a provider supports both auth modes", () => {
+  const authPath = createAuthPath({});
+
+  const resolved = resolveModelProviderForCommand(authPath, "anthropic");
+
+  assert.equal(resolved?.kind, "oauth");
+  assert.equal(resolved?.id, "anthropic");
+});
+
+test("setDefaultModelSpec prefers the explicitly configured provider when a bare model id is ambiguous", () => {
+  const authPath = createAuthPath({
+    openai: { type: "api_key", key: "openai-test-key" },
+  });
+  const settingsPath = join(mkdtempSync(join(tmpdir(), "feynman-settings-")), "settings.json");
+
+  setDefaultModelSpec(settingsPath, authPath, "gpt-5.4");
+
+  const settings = JSON.parse(readFileSync(settingsPath, "utf8")) as {
+    defaultProvider?: string;
+    defaultModel?: string;
+  };
+  assert.equal(settings.defaultProvider, "openai");
+  assert.equal(settings.defaultModel, "gpt-5.4");
+});
+
 test("buildModelStatusSnapshotFromRecords flags an invalid current model and suggests a replacement", () => {
   const snapshot = buildModelStatusSnapshotFromRecords(
     [
@@ -68,10 +118,63 @@ test("chooseRecommendedModel prefers MiniMax M2.7 over highspeed when that is th
 });
 
 test("resolveInitialPrompt maps top-level research commands to Pi slash workflows", () => {
-  const workflows = new Set(["lit", "watch", "jobs", "deepresearch"]);
+  const workflows = new Set([
+    "lit",
+    "watch",
+    "jobs",
+    "deepresearch",
+    "review",
+    "audit",
+    "replicate",
+    "compare",
+    "draft",
+    "autoresearch",
+    "summarize",
+    "log",
+  ]);
   assert.equal(resolveInitialPrompt("lit", ["tool-using", "agents"], undefined, workflows), "/lit tool-using agents");
   assert.equal(resolveInitialPrompt("watch", ["openai"], undefined, workflows), "/watch openai");
   assert.equal(resolveInitialPrompt("jobs", [], undefined, workflows), "/jobs");
+  assert.equal(resolveInitialPrompt("deepresearch", ["scaling", "laws"], undefined, workflows), "/deepresearch scaling laws");
+  assert.equal(resolveInitialPrompt("review", ["paper.md"], undefined, workflows), "/review paper.md");
+  assert.equal(resolveInitialPrompt("audit", ["2401.12345"], undefined, workflows), "/audit 2401.12345");
+  assert.equal(resolveInitialPrompt("replicate", ["chain-of-thought"], undefined, workflows), "/replicate chain-of-thought");
+  assert.equal(resolveInitialPrompt("compare", ["tool", "use"], undefined, workflows), "/compare tool use");
+  assert.equal(resolveInitialPrompt("draft", ["mechanistic", "interp"], undefined, workflows), "/draft mechanistic interp");
+  assert.equal(resolveInitialPrompt("autoresearch", ["gsm8k"], undefined, workflows), "/autoresearch gsm8k");
+  assert.equal(resolveInitialPrompt("summarize", ["README.md"], undefined, workflows), "/summarize README.md");
+  assert.equal(resolveInitialPrompt("log", [], undefined, workflows), "/log");
   assert.equal(resolveInitialPrompt("chat", ["hello"], undefined, workflows), "hello");
   assert.equal(resolveInitialPrompt("unknown", ["topic"], undefined, workflows), "unknown topic");
 });
+
+test("shouldRunInteractiveSetup triggers on first run when no default model is configured", () => {
+  const authPath = createAuthPath({});
+
+  assert.equal(shouldRunInteractiveSetup(undefined, undefined, true, authPath), true);
+});
+
+test("shouldRunInteractiveSetup triggers when the configured default model is unavailable", () => {
+  const authPath = createAuthPath({
+    openai: { type: "api_key", key: "openai-test-key" },
+  });
+
+  assert.equal(shouldRunInteractiveSetup(undefined, "anthropic/claude-opus-4-6", true, authPath), true);
+});
+
+test("shouldRunInteractiveSetup skips onboarding when the configured default model is available", () => {
+  const authPath = createAuthPath({
+    openai: { type: "api_key", key: "openai-test-key" },
+  });
+
+  assert.equal(shouldRunInteractiveSetup(undefined, "openai/gpt-5.4", true, authPath), false);
+});
+
+test("shouldRunInteractiveSetup skips onboarding for explicit model overrides or non-interactive terminals", () => {
+  const authPath = createAuthPath({
+    openai: { type: "api_key", key: "openai-test-key" },
+  });
+
+  assert.equal(shouldRunInteractiveSetup("openai/gpt-5.4", undefined, true, authPath), false);
+  assert.equal(shouldRunInteractiveSetup(undefined, undefined, false, authPath), false);
+});
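The `resolveInitialPrompt` tests above all follow one shape: a known workflow command becomes a slash command with its arguments appended, `chat` passes its arguments through, and anything unknown is forwarded verbatim. A plausible sketch of that dispatch, assuming only the behavior the assertions pin down (the `resolvePrompt` name is illustrative, not the repo's implementation):

```typescript
// Set-based workflow dispatch, matching the behavior the tests above assert:
// workflow commands -> "/cmd args...", "chat" -> bare args, unknown -> passthrough.
function resolvePrompt(command: string, args: string[], workflows: Set<string>): string {
  if (workflows.has(command)) {
    return ["/" + command, ...args].join(" ");
  }
  if (command === "chat") {
    return args.join(" ");
  }
  return [command, ...args].join(" ");
}

const workflows = new Set(["lit", "jobs"]);
console.log(resolvePrompt("lit", ["tool-using", "agents"], workflows)); // → /lit tool-using agents
console.log(resolvePrompt("jobs", [], workflows)); // → /jobs
console.log(resolvePrompt("unknown", ["topic"], workflows)); // → unknown topic
```

Passing the workflow set in as a parameter (as the real tests do) keeps the dispatcher trivially testable as the set of slash workflows grows.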
@@ -2,6 +2,7 @@ import test from "node:test";
 import assert from "node:assert/strict";
 
 import {
+  MAX_NODE_MAJOR,
   MIN_NODE_VERSION,
   ensureSupportedNodeVersion,
   getUnsupportedNodeVersionLines,
@@ -12,6 +13,8 @@ test("isSupportedNodeVersion enforces the exact minimum floor", () => {
   assert.equal(isSupportedNodeVersion("20.19.0"), true);
   assert.equal(isSupportedNodeVersion("21.0.0"), true);
+  assert.equal(isSupportedNodeVersion(`${MAX_NODE_MAJOR}.9.9`), true);
+  assert.equal(isSupportedNodeVersion(`${MAX_NODE_MAJOR + 1}.0.0`), false);
   assert.equal(isSupportedNodeVersion("20.18.1"), false);
   assert.equal(isSupportedNodeVersion("18.17.0"), false);
 });
@@ -22,7 +25,7 @@ test("ensureSupportedNodeVersion throws a guided upgrade message", () => {
     (error: unknown) =>
       error instanceof Error &&
       error.message.includes(`Node.js ${MIN_NODE_VERSION}`) &&
-      error.message.includes("nvm install 20 && nvm use 20") &&
+      error.message.includes("nvm install 22 && nvm use 22") &&
       error.message.includes("https://feynman.is/install"),
   );
 });
@@ -30,6 +33,13 @@ test("ensureSupportedNodeVersion throws a guided upgrade message", () => {
 test("unsupported version guidance reports the detected version", () => {
   const lines = getUnsupportedNodeVersionLines("18.17.0");
 
-  assert.equal(lines[0], "feynman requires Node.js 20.19.0 or later (detected 18.17.0).");
+  assert.equal(lines[0], `feynman supports Node.js ${MIN_NODE_VERSION} through ${MAX_NODE_MAJOR}.x (detected 18.17.0).`);
   assert.ok(lines.some((line) => line.includes("curl -fsSL https://feynman.is/install | bash")));
 });
+
+test("unsupported version guidance explains upper-bound failures", () => {
+  const lines = getUnsupportedNodeVersionLines("25.1.0");
+
+  assert.equal(lines[0], `feynman supports Node.js ${MIN_NODE_VERSION} through ${MAX_NODE_MAJOR}.x (detected 25.1.0).`);
+  assert.ok(lines.some((line) => line.includes("native Pi packages may fail to build")));
+});
tests/package-ops.test.ts (new file, 255 lines)
@@ -0,0 +1,255 @@
+import test from "node:test";
+import assert from "node:assert/strict";
+import { appendFileSync, existsSync, lstatSync, mkdtempSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
+import { tmpdir } from "node:os";
+import { join, resolve } from "node:path";
+
+import { installPackageSources, seedBundledWorkspacePackages, updateConfiguredPackages } from "../src/pi/package-ops.js";
+
+function createBundledWorkspace(appRoot: string, packageNames: string[]): void {
+  for (const packageName of packageNames) {
+    const packageDir = resolve(appRoot, ".feynman", "npm", "node_modules", packageName);
+    mkdirSync(packageDir, { recursive: true });
+    writeFileSync(
+      join(packageDir, "package.json"),
+      JSON.stringify({ name: packageName, version: "1.0.0" }, null, 2) + "\n",
+      "utf8",
+    );
+  }
+}
+
+function createInstalledGlobalPackage(homeRoot: string, packageName: string, version = "1.0.0"): void {
+  const packageDir = resolve(homeRoot, "npm-global", "lib", "node_modules", packageName);
+  mkdirSync(packageDir, { recursive: true });
+  writeFileSync(
+    join(packageDir, "package.json"),
+    JSON.stringify({ name: packageName, version }, null, 2) + "\n",
+    "utf8",
+  );
+}
+
+function writeSettings(agentDir: string, settings: Record<string, unknown>): void {
+  mkdirSync(agentDir, { recursive: true });
+  writeFileSync(resolve(agentDir, "settings.json"), JSON.stringify(settings, null, 2) + "\n", "utf8");
+}
+
+function writeFakeNpmScript(root: string, body: string): string {
+  const scriptPath = resolve(root, "fake-npm.mjs");
+  writeFileSync(scriptPath, body, "utf8");
+  return scriptPath;
+}
+
+test("seedBundledWorkspacePackages links bundled packages into the Feynman npm prefix", () => {
+  const appRoot = mkdtempSync(join(tmpdir(), "feynman-bundle-"));
+  const homeRoot = mkdtempSync(join(tmpdir(), "feynman-home-"));
+  const agentDir = resolve(homeRoot, "agent");
+  mkdirSync(agentDir, { recursive: true });
+
+  createBundledWorkspace(appRoot, ["pi-subagents", "@samfp/pi-memory"]);
+
+  const seeded = seedBundledWorkspacePackages(agentDir, appRoot, [
+    "npm:pi-subagents",
+    "npm:@samfp/pi-memory",
+  ]);
+
+  assert.deepEqual(seeded.sort(), ["npm:@samfp/pi-memory", "npm:pi-subagents"]);
+  const globalRoot = resolve(homeRoot, "npm-global", "lib", "node_modules");
+  assert.equal(existsSync(resolve(globalRoot, "pi-subagents", "package.json")), true);
+  assert.equal(existsSync(resolve(globalRoot, "@samfp", "pi-memory", "package.json")), true);
+});
+
+test("seedBundledWorkspacePackages preserves existing installed packages", () => {
+  const appRoot = mkdtempSync(join(tmpdir(), "feynman-bundle-"));
+  const homeRoot = mkdtempSync(join(tmpdir(), "feynman-home-"));
+  const agentDir = resolve(homeRoot, "agent");
+  const existingPackageDir = resolve(homeRoot, "npm-global", "lib", "node_modules", "pi-subagents");
+
+  mkdirSync(agentDir, { recursive: true });
+  createBundledWorkspace(appRoot, ["pi-subagents"]);
+  mkdirSync(existingPackageDir, { recursive: true });
+  writeFileSync(resolve(existingPackageDir, "package.json"), '{"name":"pi-subagents","version":"user"}\n', "utf8");
+
+  const seeded = seedBundledWorkspacePackages(agentDir, appRoot, ["npm:pi-subagents"]);
+
+  assert.deepEqual(seeded, []);
+  assert.equal(readFileSync(resolve(existingPackageDir, "package.json"), "utf8"), '{"name":"pi-subagents","version":"user"}\n');
+  assert.equal(lstatSync(existingPackageDir).isSymbolicLink(), false);
+});
+
+test("installPackageSources filters noisy npm chatter but preserves meaningful output", async () => {
+  const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
+  const workingDir = resolve(root, "project");
+  const agentDir = resolve(root, "agent");
+  mkdirSync(workingDir, { recursive: true });
+
+  const scriptPath = writeFakeNpmScript(root, [
+    `console.log("npm warn deprecated node-domexception@1.0.0: Use your platform's native DOMException instead");`,
+    'console.log("changed 343 packages in 9s");',
+    'console.log("59 packages are looking for funding");',
+    'console.log("run `npm fund` for details");',
+    'console.error("visible stderr line");',
+    'console.log("visible stdout line");',
+    "process.exit(0);",
+  ].join("\n"));
+
+  writeSettings(agentDir, {
+    npmCommand: [process.execPath, scriptPath],
+  });
+
+  let stdout = "";
+  let stderr = "";
+  const originalStdoutWrite = process.stdout.write.bind(process.stdout);
+  const originalStderrWrite = process.stderr.write.bind(process.stderr);
+  (process.stdout.write as unknown as (chunk: string | Uint8Array) => boolean) = ((chunk: string | Uint8Array) => {
+    stdout += chunk.toString();
+    return true;
+  }) as typeof process.stdout.write;
+  (process.stderr.write as unknown as (chunk: string | Uint8Array) => boolean) = ((chunk: string | Uint8Array) => {
+    stderr += chunk.toString();
+    return true;
+  }) as typeof process.stderr.write;
+
+  try {
+    const result = await installPackageSources(workingDir, agentDir, ["npm:test-visible-package"]);
+    assert.deepEqual(result.installed, ["npm:test-visible-package"]);
+    assert.deepEqual(result.skipped, []);
+  } finally {
+    process.stdout.write = originalStdoutWrite;
+    process.stderr.write = originalStderrWrite;
+  }
+
+  const combined = `${stdout}\n${stderr}`;
+  assert.match(combined, /visible stdout line/);
+  assert.match(combined, /visible stderr line/);
+  assert.doesNotMatch(combined, /node-domexception/);
+  assert.doesNotMatch(combined, /changed 343 packages/);
+  assert.doesNotMatch(combined, /packages are looking for funding/);
+  assert.doesNotMatch(combined, /npm fund/);
+});
+
+test("installPackageSources skips native packages on unsupported Node majors before invoking npm", async () => {
+  const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
+  const workingDir = resolve(root, "project");
+  const agentDir = resolve(root, "agent");
+  const markerPath = resolve(root, "npm-invoked.txt");
+  mkdirSync(workingDir, { recursive: true });
+
+  const scriptPath = writeFakeNpmScript(root, [
+    `import { writeFileSync } from "node:fs";`,
+    `writeFileSync(${JSON.stringify(markerPath)}, "invoked\\n", "utf8");`,
+    "process.exit(0);",
+  ].join("\n"));
+
+  writeSettings(agentDir, {
+    npmCommand: [process.execPath, scriptPath],
+  });
+
+  const originalVersion = process.versions.node;
+  Object.defineProperty(process.versions, "node", { value: "25.0.0", configurable: true });
+  try {
+    const result = await installPackageSources(workingDir, agentDir, ["npm:@kaiserlich-dev/pi-session-search"]);
+    assert.deepEqual(result.installed, []);
+    assert.deepEqual(result.skipped, ["npm:@kaiserlich-dev/pi-session-search"]);
+    assert.equal(existsSync(markerPath), false);
+  } finally {
+    Object.defineProperty(process.versions, "node", { value: originalVersion, configurable: true });
+  }
+});
+
+test("updateConfiguredPackages batches multiple npm updates into a single install per scope", async () => {
+  const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
+  const workingDir = resolve(root, "project");
+  const agentDir = resolve(root, "agent");
+  const logPath = resolve(root, "npm-invocations.jsonl");
+  mkdirSync(workingDir, { recursive: true });
+
+  const scriptPath = writeFakeNpmScript(root, [
+    `import { appendFileSync } from "node:fs";`,
+    `import { resolve } from "node:path";`,
+    `const args = process.argv.slice(2);`,
+    `if (args.length === 2 && args[0] === "root" && args[1] === "-g") {`,
+    `  console.log(resolve(${JSON.stringify(root)}, "npm-global", "lib", "node_modules"));`,
+    `  process.exit(0);`,
+    `}`,
+    `appendFileSync(${JSON.stringify(logPath)}, JSON.stringify(args) + "\\n", "utf8");`,
+    "process.exit(0);",
+  ].join("\n"));
+
+  writeSettings(agentDir, {
+    npmCommand: [process.execPath, scriptPath],
+    packages: ["npm:test-one", "npm:test-two"],
+  });
+  createInstalledGlobalPackage(root, "test-one", "1.0.0");
+  createInstalledGlobalPackage(root, "test-two", "1.0.0");
+
+  const originalFetch = globalThis.fetch;
+  globalThis.fetch = (async () => ({
+    ok: true,
+    json: async () => ({ version: "2.0.0" }),
+  })) as typeof fetch;
+
+  try {
+    const result = await updateConfiguredPackages(workingDir, agentDir);
+    assert.deepEqual(result.skipped, []);
+    assert.deepEqual(result.updated.sort(), ["npm:test-one", "npm:test-two"]);
+  } finally {
+    globalThis.fetch = originalFetch;
+  }
+
+  const invocations = readFileSync(logPath, "utf8").trim().split("\n").map((line) => JSON.parse(line) as string[]);
+  assert.equal(invocations.length, 1);
+  assert.ok(invocations[0]?.includes("install"));
+  assert.ok(invocations[0]?.includes("test-one@latest"));
+  assert.ok(invocations[0]?.includes("test-two@latest"));
+});
+
+test("updateConfiguredPackages skips native package updates on unsupported Node majors", async () => {
+  const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
+  const workingDir = resolve(root, "project");
+  const agentDir = resolve(root, "agent");
+  const logPath = resolve(root, "npm-invocations.jsonl");
+  mkdirSync(workingDir, { recursive: true });
+
+  const scriptPath = writeFakeNpmScript(root, [
+    `import { appendFileSync } from "node:fs";`,
+    `import { resolve } from "node:path";`,
|
||||||
|
`const args = process.argv.slice(2);`,
|
||||||
|
`if (args.length === 2 && args[0] === "root" && args[1] === "-g") {`,
|
||||||
|
` console.log(resolve(${JSON.stringify(root)}, "npm-global", "lib", "node_modules"));`,
|
||||||
|
` process.exit(0);`,
|
||||||
|
`}`,
|
||||||
|
`appendFileSync(${JSON.stringify(logPath)}, JSON.stringify(args) + "\\n", "utf8");`,
|
||||||
|
"process.exit(0);",
|
||||||
|
].join("\n"));
|
||||||
|
|
||||||
|
writeSettings(agentDir, {
|
||||||
|
npmCommand: [process.execPath, scriptPath],
|
||||||
|
packages: ["npm:@kaiserlich-dev/pi-session-search", "npm:test-regular"],
|
||||||
|
});
|
||||||
|
createInstalledGlobalPackage(root, "@kaiserlich-dev/pi-session-search", "1.0.0");
|
||||||
|
createInstalledGlobalPackage(root, "test-regular", "1.0.0");
|
||||||
|
|
||||||
|
const originalFetch = globalThis.fetch;
|
||||||
|
const originalVersion = process.versions.node;
|
||||||
|
globalThis.fetch = (async () => ({
|
||||||
|
ok: true,
|
||||||
|
json: async () => ({ version: "2.0.0" }),
|
||||||
|
})) as typeof fetch;
|
||||||
|
Object.defineProperty(process.versions, "node", { value: "25.0.0", configurable: true });
|
||||||
|
|
||||||
|
try {
|
||||||
|
const result = await updateConfiguredPackages(workingDir, agentDir);
|
||||||
|
assert.deepEqual(result.updated, ["npm:test-regular"]);
|
||||||
|
assert.deepEqual(result.skipped, ["npm:@kaiserlich-dev/pi-session-search"]);
|
||||||
|
} finally {
|
||||||
|
globalThis.fetch = originalFetch;
|
||||||
|
Object.defineProperty(process.versions, "node", { value: originalVersion, configurable: true });
|
||||||
|
}
|
||||||
|
|
||||||
|
const invocations = existsSync(logPath)
|
||||||
|
? readFileSync(logPath, "utf8").trim().split("\n").filter(Boolean).map((line) => JSON.parse(line) as string[])
|
||||||
|
: [];
|
||||||
|
assert.equal(invocations.length, 1);
|
||||||
|
assert.ok(invocations[0]?.includes("test-regular@latest"));
|
||||||
|
assert.ok(!invocations[0]?.some((entry) => entry.includes("pi-session-search")));
|
||||||
|
});
|
||||||
@@ -4,7 +4,13 @@ import { tmpdir } from "node:os";
 import { join } from "node:path";
 import test from "node:test";

-import { CORE_PACKAGE_SOURCES, getOptionalPackagePresetSources, shouldPruneLegacyDefaultPackages } from "../src/pi/package-presets.js";
+import {
+  CORE_PACKAGE_SOURCES,
+  getOptionalPackagePresetSources,
+  NATIVE_PACKAGE_SOURCES,
+  shouldPruneLegacyDefaultPackages,
+  supportsNativePackageSources,
+} from "../src/pi/package-presets.js";
 import { normalizeFeynmanSettings, normalizeThinkingLevel } from "../src/pi/settings.js";

 test("normalizeThinkingLevel accepts the latest Pi thinking levels", () => {
@@ -71,3 +77,42 @@ test("optional package presets map friendly aliases", () => {
   assert.deepEqual(getOptionalPackagePresetSources("search"), undefined);
   assert.equal(shouldPruneLegacyDefaultPackages(["npm:custom"]), false);
 });
+
+test("supportsNativePackageSources disables sqlite-backed packages on Node 25+", () => {
+  assert.equal(supportsNativePackageSources("24.8.0"), true);
+  assert.equal(supportsNativePackageSources("25.0.0"), false);
+});
+
+test("normalizeFeynmanSettings prunes native core packages on unsupported Node majors", () => {
+  const root = mkdtempSync(join(tmpdir(), "feynman-settings-"));
+  const settingsPath = join(root, "settings.json");
+  const bundledSettingsPath = join(root, "bundled-settings.json");
+  const authPath = join(root, "auth.json");
+
+  writeFileSync(
+    settingsPath,
+    JSON.stringify(
+      {
+        packages: [...CORE_PACKAGE_SOURCES],
+      },
+      null,
+      2,
+    ) + "\n",
+    "utf8",
+  );
+  writeFileSync(bundledSettingsPath, "{}\n", "utf8");
+  writeFileSync(authPath, "{}\n", "utf8");
+
+  const originalVersion = process.versions.node;
+  Object.defineProperty(process.versions, "node", { value: "25.0.0", configurable: true });
+  try {
+    normalizeFeynmanSettings(settingsPath, bundledSettingsPath, "medium", authPath);
+  } finally {
+    Object.defineProperty(process.versions, "node", { value: originalVersion, configurable: true });
+  }
+
+  const settings = JSON.parse(readFileSync(settingsPath, "utf8")) as { packages?: string[] };
+  for (const source of NATIVE_PACKAGE_SOURCES) {
+    assert.equal(settings.packages?.includes(source), false);
+  }
+});
@@ -102,3 +102,41 @@ test("patchPiSubagentsSource is idempotent", () => {

   assert.equal(twice, once);
 });
+
+test("patchPiSubagentsSource rewrites modern agents.ts discovery paths", () => {
+  const input = [
+    'import * as fs from "node:fs";',
+    'import * as os from "node:os";',
+    'import * as path from "node:path";',
+    'export function discoverAgents(cwd: string, scope: AgentScope): AgentDiscoveryResult {',
+    '\tconst userDirOld = path.join(os.homedir(), ".pi", "agent", "agents");',
+    '\tconst userDirNew = path.join(os.homedir(), ".agents");',
+    '\tconst userAgentsOld = scope === "project" ? [] : loadAgentsFromDir(userDirOld, "user");',
+    '\tconst userAgentsNew = scope === "project" ? [] : loadAgentsFromDir(userDirNew, "user");',
+    '\tconst userAgents = [...userAgentsOld, ...userAgentsNew];',
+    '}',
+    'export function discoverAgentsAll(cwd: string) {',
+    '\tconst userDirOld = path.join(os.homedir(), ".pi", "agent", "agents");',
+    '\tconst userDirNew = path.join(os.homedir(), ".agents");',
+    '\tconst user = [',
+    '\t\t...loadAgentsFromDir(userDirOld, "user"),',
+    '\t\t...loadAgentsFromDir(userDirNew, "user"),',
+    '\t];',
+    '\tconst chains = [',
+    '\t\t...loadChainsFromDir(userDirOld, "user"),',
+    '\t\t...loadChainsFromDir(userDirNew, "user"),',
+    '\t\t...(projectDir ? loadChainsFromDir(projectDir, "project") : []),',
+    '\t];',
+    '\tconst userDir = fs.existsSync(userDirNew) ? userDirNew : userDirOld;',
+    '}',
+  ].join("\n");
+
+  const patched = patchPiSubagentsSource("agents.ts", input);
+
+  assert.match(patched, /function resolvePiAgentDir\(\): string \{/);
+  assert.match(patched, /const userDir = path\.join\(resolvePiAgentDir\(\), "agents"\);/);
+  assert.match(patched, /const userAgents = scope === "project" \? \[\] : loadAgentsFromDir\(userDir, "user"\);/);
+  assert.ok(!patched.includes('loadAgentsFromDir(userDirOld, "user")'));
+  assert.ok(!patched.includes('loadChainsFromDir(userDirNew, "user")'));
+  assert.ok(!patched.includes('fs.existsSync(userDirNew) ? userDirNew : userDirOld'));
+});
@@ -33,6 +33,30 @@ test("patchPiWebAccessSource updates index.ts directory handling", () => {
   assert.match(patched, /const dir = dirname\(WEB_SEARCH_CONFIG_PATH\);/);
 });
+
+test("patchPiWebAccessSource defaults workflow to none for index.ts without disabling explicit summary-review", () => {
+  const input = [
+    'function resolveWorkflow(input: unknown, hasUI: boolean): WebSearchWorkflow {',
+    '\tif (!hasUI) return "none";',
+    '\tif (typeof input === "string" && input.trim().toLowerCase() === "none") return "none";',
+    '\treturn "summary-review";',
+    '}',
+    'const configWorkflow = loadConfigForExtensionInit().workflow;',
+    'const workflow = resolveWorkflow(params.workflow ?? configWorkflow, ctx?.hasUI !== false);',
+    'workflow: Type.Optional(',
+    '\tStringEnum(["none", "summary-review"], {',
+    '\t\tdescription: "Search workflow mode: none = no curator, summary-review = open curator with auto summary draft (default)",',
+    '\t}),',
+    '),',
+    "",
+  ].join("\n");
+
+  const patched = patchPiWebAccessSource("index.ts", input);
+
+  assert.match(patched, /params\.workflow \?\? configWorkflow \?\? "none"/);
+  assert.match(patched, /return "summary-review";/);
+  assert.match(patched, /summary-review = open curator with auto summary draft \(opt-in\)/);
+});

 test("patchPiWebAccessSource is idempotent", () => {
   const input = [
     'import { join } from "node:path";',
@@ -62,6 +62,7 @@ test("getPiWebAccessStatus reads Pi web-access config directly", () => {
   const status = getPiWebAccessStatus(loadPiWebAccessConfig(configPath), configPath);
   assert.equal(status.routeLabel, "Exa");
   assert.equal(status.requestProvider, "exa");
+  assert.equal(status.workflow, "none");
   assert.equal(status.exaConfigured, true);
   assert.equal(status.geminiApiConfigured, true);
   assert.equal(status.perplexityConfigured, false);
@@ -86,6 +87,7 @@ test("getPiWebAccessStatus reads Gemini routes directly", () => {
   const status = getPiWebAccessStatus(loadPiWebAccessConfig(configPath), configPath);
   assert.equal(status.routeLabel, "Gemini");
   assert.equal(status.requestProvider, "gemini");
+  assert.equal(status.workflow, "none");
   assert.equal(status.exaConfigured, false);
   assert.equal(status.geminiApiConfigured, true);
   assert.equal(status.perplexityConfigured, false);
@@ -100,6 +102,7 @@ test("getPiWebAccessStatus supports the legacy route key", () => {

   assert.equal(status.routeLabel, "Perplexity");
   assert.equal(status.requestProvider, "perplexity");
+  assert.equal(status.workflow, "none");
   assert.equal(status.perplexityConfigured, true);
 });

@@ -112,5 +115,6 @@ test("formatPiWebAccessDoctorLines reports Pi-managed web access", () => {
   );

   assert.equal(lines[0], "web access: pi-web-access");
+  assert.ok(lines.some((line) => line.includes("search workflow: none")));
   assert.ok(lines.some((line) => line.includes("/tmp/pi-web-search.json")));
 });
website/package-lock.json (generated, 43 changes)
@@ -13,6 +13,7 @@
         "@tailwindcss/vite": "^4.2.1",
         "@types/react": "^19.2.14",
         "@types/react-dom": "^19.2.3",
+        "@vercel/analytics": "^2.0.1",
         "astro": "^5.18.1",
         "class-variance-authority": "^0.7.1",
         "clsx": "^2.1.1",
@@ -5120,6 +5121,48 @@
       "integrity": "sha512-WmoN8qaIAo7WTYWbAZuG8PYEhn5fkz7dZrqTBZ7dtt//lL2Gwms1IcnQ5yHqjDfX8Ft5j4YzDM23f87zBfDe9g==",
       "license": "ISC"
     },
+    "node_modules/@vercel/analytics": {
+      "version": "2.0.1",
+      "resolved": "https://registry.npmjs.org/@vercel/analytics/-/analytics-2.0.1.tgz",
+      "integrity": "sha512-MTQG6V9qQrt1tsDeF+2Uoo5aPjqbVPys1xvnIftXSJYG2SrwXRHnqEvVoYID7BTruDz4lCd2Z7rM1BdkUehk2g==",
+      "license": "MIT",
+      "peerDependencies": {
+        "@remix-run/react": "^2",
+        "@sveltejs/kit": "^1 || ^2",
+        "next": ">= 13",
+        "nuxt": ">= 3",
+        "react": "^18 || ^19 || ^19.0.0-rc",
+        "svelte": ">= 4",
+        "vue": "^3",
+        "vue-router": "^4"
+      },
+      "peerDependenciesMeta": {
+        "@remix-run/react": {
+          "optional": true
+        },
+        "@sveltejs/kit": {
+          "optional": true
+        },
+        "next": {
+          "optional": true
+        },
+        "nuxt": {
+          "optional": true
+        },
+        "react": {
+          "optional": true
+        },
+        "svelte": {
+          "optional": true
+        },
+        "vue": {
+          "optional": true
+        },
+        "vue-router": {
+          "optional": true
+        }
+      }
+    },
     "node_modules/@vitejs/plugin-react": {
       "version": "4.7.0",
       "resolved": "https://registry.npmjs.org/@vitejs/plugin-react/-/plugin-react-4.7.0.tgz",
@@ -21,6 +21,7 @@
         "@tailwindcss/vite": "^4.2.1",
         "@types/react": "^19.2.14",
         "@types/react-dom": "^19.2.3",
+        "@vercel/analytics": "^2.0.1",
         "astro": "^5.18.1",
         "class-variance-authority": "^0.7.1",
         "clsx": "^2.1.1",
@@ -261,7 +261,7 @@ This usually means the release exists, but not all platform bundles were uploaded
 Workarounds:
 - try again after the release finishes publishing
 - pass the latest published version explicitly, e.g.:
-curl -fsSL https://feynman.is/install | bash -s -- 0.2.16
+curl -fsSL https://feynman.is/install | bash -s -- 0.2.20
 EOF
 exit 1
 fi
@@ -110,7 +110,7 @@ This usually means the release exists, but not all platform bundles were uploaded
 Workarounds:
 - try again after the release finishes publishing
 - pass the latest published version explicitly, e.g.:
-& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.16
+& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.20
 "@
 }
@@ -22,17 +22,18 @@ The `settings.json` file is the primary configuration file. It is created by `fe

 ```json
 {
-  "defaultModel": "anthropic:claude-sonnet-4-20250514",
-  "thinkingLevel": "medium"
+  "defaultProvider": "anthropic",
+  "defaultModel": "claude-sonnet-4-20250514",
+  "defaultThinkingLevel": "medium"
 }
 ```

 ## Model configuration

-The `defaultModel` field sets which model is used when you launch Feynman without the `--model` flag. The format is `provider:model-name`. You can change it via the CLI:
+The `defaultProvider` and `defaultModel` fields set which model is used when you launch Feynman without the `--model` flag. You can change them via the CLI:

 ```bash
-feynman model set anthropic:claude-opus-4-20250514
+feynman model set anthropic/claude-opus-4-20250514
 ```

 To see all models you have configured:

@@ -48,6 +49,7 @@ To add another provider, authenticate it first:
 ```bash
 feynman model login anthropic
 feynman model login google
+feynman model login amazon-bedrock
 ```

 Then switch the default model:

@@ -56,6 +58,8 @@ Then switch the default model:
 feynman model set anthropic/claude-opus-4-6
 ```

+The `model set` command accepts both `provider/model` and `provider:model` formats. `feynman model login google` opens the API-key flow directly, while `feynman model login amazon-bedrock` verifies the AWS credential chain that Pi uses for Bedrock access.
+
 ## Subagent model overrides

 Feynman's bundled subagents inherit the main default model unless you override them explicitly. Inside the REPL, run:

@@ -90,7 +94,8 @@ Feynman respects the following environment variables, which take precedence over
 | `FEYNMAN_THINKING` | Override the thinking level |
 | `ANTHROPIC_API_KEY` | Anthropic API key |
 | `OPENAI_API_KEY` | OpenAI API key |
-| `GOOGLE_API_KEY` | Google AI API key |
+| `GEMINI_API_KEY` | Google Gemini API key |
+| `AWS_PROFILE` | Preferred AWS profile for Amazon Bedrock |
 | `TAVILY_API_KEY` | Tavily web search API key |
 | `SERPER_API_KEY` | Serper web search API key |
@@ -1,11 +1,11 @@
 ---
 title: Installation
-description: Install Feynman on macOS, Linux, or Windows using the standalone installer.
+description: Install Feynman on macOS, Linux, or Windows with curl or npm.
 section: Getting Started
 order: 1
 ---

-Feynman ships as a standalone runtime bundle for macOS, Linux, and Windows. The one-line installer downloads a prebuilt native bundle with zero external runtime dependencies.
+Feynman can be installed either as a standalone runtime bundle or as an npm package. For most users, the standalone installer is the simplest path because it downloads a prebuilt native bundle with zero external runtime dependencies.

 ## One-line installer (recommended)

@@ -27,6 +27,61 @@ irm https://feynman.is/install.ps1 | iex

 This installs the Windows runtime bundle under `%LOCALAPPDATA%\Programs\feynman`, adds its launcher to your user `PATH`, and lets you re-run the installer at any time to update.

+## Alternative: npm install
+
+If you prefer installing Feynman into an existing Node.js environment, use npm instead:
+
+```bash
+npm install -g @companion-ai/feynman
+```
+
+This path uses your local Node.js runtime instead of the bundled standalone runtime. It requires a compatible Node.js version that satisfies Feynman's current engine range: `>=20.19.0 <25`.
+
+## Updating the standalone app
+
+To update the standalone Feynman app on macOS, Linux, or Windows, rerun the installer you originally used. That replaces the downloaded runtime bundle with the latest tagged release.
+
+`feynman update` is different: it updates installed Pi packages inside Feynman's environment, not the standalone app bundle itself.
+
+If you installed Feynman with npm, upgrade it with:
+
+```bash
+npm install -g @companion-ai/feynman@latest
+```
+
+## Uninstalling
+
+Feynman does not currently ship a dedicated `uninstall` command. Remove the standalone launcher and runtime bundle directly, then optionally remove the Feynman home directory if you also want to delete settings, sessions, and installed package state. If you also want to clear alphaXiv login state, remove `~/.ahub`.
+
+If you installed Feynman with npm, uninstall it with:
+
+```bash
+npm uninstall -g @companion-ai/feynman
+```
+
+On macOS or Linux:
+
+```bash
+rm -f ~/.local/bin/feynman
+rm -rf ~/.local/share/feynman
+# optional: remove settings, sessions, and installed package state
+rm -rf ~/.feynman
+# optional: remove alphaXiv auth state
+rm -rf ~/.ahub
+```
+
+On Windows PowerShell:
+
+```powershell
+Remove-Item "$env:LOCALAPPDATA\\Programs\\feynman" -Recurse -Force
+# optional: remove settings, sessions, and installed package state
+Remove-Item "$HOME\\.feynman" -Recurse -Force
+# optional: remove alphaXiv auth state
+Remove-Item "$HOME\\.ahub" -Recurse -Force
+```
+
+If you added the launcher directory to `PATH` manually, remove that entry as well.
+
 ## Skills only

 If you only want Feynman's research skills and not the full terminal runtime, install the skill library separately.

@@ -62,13 +117,13 @@ These installers download the bundled `skills/` and `prompts/` trees plus the re
 The one-line installer already targets the latest tagged release. To pin an exact version, pass it explicitly:

 ```bash
-curl -fsSL https://feynman.is/install | bash -s -- 0.2.17
+curl -fsSL https://feynman.is/install | bash -s -- 0.2.20
 ```

 On Windows:

 ```powershell
-& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.17
+& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.20
 ```

 ## Post-install setup

@@ -90,15 +145,3 @@ feynman --version
 ```

 If you see a version number, you are ready to go. Run `feynman doctor` at any time to diagnose configuration issues, missing dependencies, or authentication problems.
-
-## Local development
-
-For contributing or running Feynman from source:
-
-```bash
-git clone https://github.com/getcompanion-ai/feynman.git
-cd feynman
-nvm use || nvm install
-npm install
-npm start
-```
@@ -28,7 +28,7 @@ Feynman supports multiple model providers. The setup wizard presents a list of a
 google:gemini-2.5-pro
 ```

-The model you choose here becomes the default for all sessions. You can override it per-session with the `--model` flag or change it later via `feynman model set <provider:model>`.
+The model you choose here becomes the default for all sessions. You can override it per-session with the `--model` flag or change it later via `feynman model set <provider/model>` or `feynman model set <provider:model>`.

 ## Stage 2: Authentication

@@ -42,6 +42,16 @@ For API key providers, you are prompted to paste your key directly:

 Keys are encrypted at rest and never sent anywhere except the provider's API endpoint.

+### Amazon Bedrock
+
+For Amazon Bedrock, choose:
+
+```text
+Amazon Bedrock (AWS credential chain)
+```
+
+Feynman verifies the same AWS credential chain Pi uses at runtime, including `AWS_PROFILE`, `~/.aws` credentials/config, SSO, ECS/IRSA, and EC2 instance roles. Once that check passes, Bedrock models become available in `feynman model list` without needing a traditional API key.
+
 ### Local models: Ollama, LM Studio, vLLM

 If you want to use a model running locally, choose the API-key flow and then select:
@@ -23,11 +23,11 @@ This page covers the dedicated Feynman CLI commands and flags. Workflow commands
 | Command | Description |
 | --- | --- |
 | `feynman model list` | List available models in Pi auth storage |
-| `feynman model login [id]` | Login to a Pi OAuth model provider |
-| `feynman model logout [id]` | Logout from a Pi OAuth model provider |
-| `feynman model set <provider:model>` | Set the default model for all sessions |
+| `feynman model login [id]` | Authenticate a model provider with OAuth or API-key setup |
+| `feynman model logout [id]` | Clear stored auth for a model provider |
+| `feynman model set <provider/model>` | Set the default model for all sessions |

-These commands manage your model provider configuration. The `model set` command updates `~/.feynman/settings.json` with the new default. The format is `provider:model-name`, for example `anthropic:claude-sonnet-4-20250514`.
+These commands manage your model provider configuration. The `model set` command updates `~/.feynman/settings.json` with the new default. It accepts either `provider/model-name` or `provider:model-name`, for example `anthropic/claude-sonnet-4-20250514` or `anthropic:claude-sonnet-4-20250514`. Running `feynman model login google` or `feynman model login amazon-bedrock` routes directly into the relevant API-key setup flow instead of requiring the interactive picker.

 ## AlphaXiv commands

@@ -76,7 +76,7 @@ These are equivalent to launching the REPL and typing the corresponding slash co
 | Flag | Description |
 | --- | --- |
 | `--prompt "<text>"` | Run one prompt and exit (one-shot mode) |
-| `--model <provider:model>` | Force a specific model for this session |
+| `--model <provider/model|provider:model>` | Force a specific model for this session |
 | `--thinking <level>` | Set thinking level: `off`, `minimal`, `low`, `medium`, `high`, `xhigh` |
 | `--cwd <path>` | Set the working directory for all file operations |
 | `--session-dir <path>` | Set the session storage directory |
@@ -74,3 +74,5 @@ feynman update pi-subagents
|
|||||||
```
|
```
|
||||||
|
|
||||||
Running `feynman update` without arguments updates everything. Pass a specific package name to update just that one. Updates are safe and preserve your configuration.
|
Running `feynman update` without arguments updates everything. Pass a specific package name to update just that one. Updates are safe and preserve your configuration.
|
||||||
|
|
||||||
|
This command updates Pi packages inside Feynman's environment. To upgrade the standalone Feynman app itself, rerun the installer from the [Installation guide](/docs/getting-started/installation).
|
||||||
|
|||||||
@@ -35,6 +35,8 @@ When working from existing session context (after a deep research or literature
 The writer pays attention to academic conventions: claims are attributed to their sources with inline citations, methodology sections describe procedures precisely, and limitations are discussed honestly. The draft includes placeholder sections for any content the writer cannot generate from available sources, clearly marking what needs human input.

+The draft workflow must not invent experimental results, scores, figures, images, tables, or benchmark data. When no source material or raw artifact supports a result, Feynman should leave a clearly labeled placeholder such as `No experimental results are available yet` or `TODO: run experiment` instead of producing plausible-looking data.
+
 ## Output format

 The draft follows standard academic structure:

@@ -1,6 +1,7 @@
 ---
 import "@/styles/global.css"
-import { ViewTransitions } from "astro:transitions"
+import { ClientRouter } from "astro:transitions"
+import Analytics from "@vercel/analytics/astro"

 interface Props {
   title?: string

@@ -25,7 +26,8 @@ const {
 <link rel="preconnect" href="https://fonts.googleapis.com" />
 <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
 <link href="https://fonts.googleapis.com/css2?family=VT323&display=swap" rel="stylesheet" />
-<ViewTransitions />
+<ClientRouter />
+<Analytics />
 <script is:inline>
 ;(function () {
 const theme = localStorage.getItem("theme")

@@ -45,6 +45,7 @@ const terminalCommands = [
 const installCommands = [
   { label: "curl", command: "curl -fsSL https://feynman.is/install | bash" },
+  { label: "npm", command: "npm install -g @companion-ai/feynman" },
 ]
 ---

@@ -102,6 +103,10 @@ const installCommands = [
 </a>
 </div>

+<p class="text-sm text-muted-foreground">
+Use curl for the bundled runtime, or npm if you already manage Node locally.
+</p>
+
 <p class="text-sm text-muted-foreground">
 Need just the skills? <a href="/docs/getting-started/installation" class="text-primary hover:underline">Install the skills-only bundle</a>.
 </p>