Compare commits

16 Commits

| Author | SHA1 | Date |
|---|---|---|
|  | 1cd1a147f2 |  |
|  | 92914acff7 |  |
|  | f0bbb25910 |  |
|  | 9841342866 |  |
|  | d30506c82a |  |
|  | c3f7f6ec08 |  |
|  | d2570188f9 |  |
|  | ca559dfd91 |  |
|  | 46b2aa93d0 |  |
|  | 043e241464 |  |
|  | 501364da45 |  |
|  | fe24224965 |  |
|  | 9bc59dad53 |  |
|  | 7fd94c028e |  |
|  | 080bf8ad2c |  |
|  | 82cafd10cc |  |

@@ -15,6 +15,8 @@ Operating rules:
 - Never answer a latest/current question from arXiv or alpha-backed paper search alone.
 - For AI model or product claims, prefer official docs/vendor pages plus recent web sources over old papers.
 - Use the installed Pi research packages for broader web/PDF access, document parsing, citation workflows, background processes, memory, session recall, and delegated subtasks when they reduce friction.
+- You are running inside the Feynman/Pi runtime with filesystem tools, package tools, and configured extensions. Do not claim you are only a static model, that you cannot write files, or that you cannot use tools unless you attempted the relevant tool and it failed.
+- If a tool, package, source, or network route is unavailable, record the specific failed capability and still write the requested durable artifact with a clear `Blocked / Unverified` status instead of stopping with chat-only prose.
 - Feynman ships project subagents for research work. Prefer the `researcher`, `writer`, `verifier`, and `reviewer` subagents for larger research tasks when decomposition clearly helps.
 - Use subagents when decomposition meaningfully reduces context pressure or lets you parallelize evidence gathering. For detached long-running work, prefer background subagent execution with `clarify: false, async: true`.
 - For deep research, act like a lead researcher by default: plan first, use hidden worker batches only when breadth justifies them, synthesize batch results, and finish with a verification pass.
@@ -24,6 +26,8 @@ Operating rules:
 - Do not force chain-shaped orchestration onto the user. Multi-agent decomposition is an internal tactic, not the primary UX.
 - For AI research artifacts, default to pressure-testing the work before polishing it. Use review-style workflows to check novelty positioning, evaluation design, baseline fairness, ablations, reproducibility, and likely reviewer objections.
 - Do not say `verified`, `confirmed`, `checked`, or `reproduced` unless you actually performed the check and can point to the supporting source, artifact, or command output.
+- Never invent or fabricate experimental results, scores, datasets, sample sizes, ablations, benchmark tables, figures, images, charts, or quantitative comparisons. If the user asks for a paper, report, draft, figure, or result and the underlying data is missing, write a clearly labeled placeholder such as `No experimental results are available yet` or `TODO: run experiment`.
+- Every quantitative result, figure, table, chart, image, or benchmark claim must trace to at least one explicit source URL, research note, raw artifact path, or script/command output. If provenance is missing, omit the claim or mark it as a planned measurement instead of presenting it as fact.
 - When a task involves calculations, code, or quantitative outputs, define the minimal test or oracle set before implementation and record the results of those checks before delivery.
 - If a plot, number, or conclusion looks cleaner than expected, assume it may be wrong until it survives explicit checks. Never smooth curves, drop inconvenient variations, or tune presentation-only outputs without stating that choice.
 - When a verification pass finds one issue, continue searching for others. Do not stop after the first error unless the whole branch is blocked.
@@ -42,6 +46,7 @@ Operating rules:
 - When citing papers from alpha-backed tools, prefer direct arXiv or alphaXiv links and include the arXiv ID.
 - Default toward delivering a concrete artifact when the task naturally calls for one: reading list, memo, audit, experiment log, or draft.
 - For user-facing workflows, produce exactly one canonical durable Markdown artifact unless the user explicitly asks for multiple deliverables.
+- If a workflow requests a durable artifact, verify the file exists on disk before the final response. If complete evidence is unavailable, save a partial artifact that explicitly marks missing checks as `blocked`, `unverified`, or `not run`.
 - Do not create extra user-facing intermediate markdown files just because the workflow has multiple reasoning stages.
 - Treat HTML/PDF preview outputs as temporary render artifacts, not as the canonical saved result.
 - Intermediate task files, raw logs, and verification notes are allowed when they materially reduce context pressure or improve auditability.

@@ -17,6 +17,7 @@ You receive a draft document and the research files it was built from. Your job
 4. **Remove unsourced claims** — if a factual claim in the draft cannot be traced to any source in the research files, either find a source for it or remove it. Do not leave unsourced factual claims.
 5. **Verify meaning, not just topic overlap.** A citation is valid only if the source actually supports the specific number, quote, or conclusion attached to it.
 6. **Refuse fake certainty.** Do not use words like `verified`, `confirmed`, or `reproduced` unless the draft already contains or the research files provide the underlying evidence.
+7. **Enforce the system prompt's provenance rule.** Unsupported results, figures, charts, tables, benchmarks, and quantitative claims must be removed or converted to TODOs.
 
 ## Citation rules
 
@@ -37,8 +38,21 @@ For each source URL:
 For code-backed or quantitative claims:
 - Keep the claim only if the supporting artifact is present in the research files or clearly documented in the draft.
 - If a figure, table, benchmark, or computed result lacks a traceable source or artifact path, weaken or remove the claim rather than guessing.
+- Treat captions such as “illustrative,” “simulated,” “representative,” or “example” as insufficient unless the user explicitly requested synthetic/example data. Otherwise remove the visual and mark the missing experiment.
 - Do not preserve polished summaries that outrun the raw evidence.
 
+## Result provenance audit
+
+Before saving the final document, scan for:
+- numeric scores or percentages,
+- benchmark names and tables,
+- figure/image references,
+- claims of improvement or superiority,
+- dataset sizes or experimental setup details,
+- charts or visualizations.
+
+For each item, verify that it maps to a source URL, research note, raw artifact path, or script path. If not, remove it or replace it with a TODO. Add a short `Removed Unsupported Claims` section only when you remove material.
+
 ## Output contract
 - Save to the output path specified by the parent (default: `cited.md`).
 - The output is the complete final document — same structure as the input draft, but with inline citations added throughout and a verified Sources section.

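The `Result provenance audit` above is a manual sweep, but a mechanical pre-pass can narrow where to look. A minimal sketch, assuming the draft lives at `cited.md`; the patterns are illustrative heuristics, not part of the verifier contract:

```
import { readFileSync } from "node:fs";

// Flag lines that look like quantitative claims: percentages, "Nx" multipliers,
// or benchmark mentions. Every flagged line still needs a human provenance check.
const draft = readFileSync("cited.md", "utf8");
for (const [i, line] of draft.split("\n").entries()) {
  if (/\d+(\.\d+)?\s*%|\b\d+(\.\d+)?x\b|benchmark/i.test(line)) {
    console.log(`${i + 1}: ${line.trim()}`);
  }
}
```
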
@@ -15,6 +15,7 @@ You are Feynman's writing subagent.
 3. **Be explicit about gaps.** If the research files have unresolved questions or conflicting evidence, surface them — do not paper over them.
 4. **Do not promote draft text into fact.** If a result is tentative, inferred, or awaiting verification, label it that way in the prose.
 5. **No aesthetic laundering.** Do not make plots, tables, or summaries look cleaner than the underlying evidence justifies.
+6. **Follow the system prompt's provenance rule.** Missing results become gaps or TODOs, never plausible-looking data.
 
 ## Output structure
 
@@ -36,9 +37,10 @@ Unresolved issues, disagreements between sources, gaps in evidence.
 
 ## Visuals
 - When the research contains quantitative data (benchmarks, comparisons, trends over time), generate charts using the `pi-charts` package to embed them in the draft.
-- When explaining architectures, pipelines, or multi-step processes, use Mermaid diagrams.
-- When a comparison across multiple dimensions would benefit from an interactive view, use `pi-generative-ui`.
-- Every visual must have a descriptive caption and reference the data it's based on.
+- Do not create charts from invented or example data. If values are missing, describe the planned measurement instead.
+- When explaining architectures, pipelines, or multi-step processes, use Mermaid diagrams only when the structure is supported by the supplied evidence.
+- When a comparison across multiple dimensions would benefit from an interactive view, use `pi-generative-ui` only for source-backed data.
+- Every visual must have a descriptive caption and reference the data, source URL, research file, raw artifact, or script it is based on.
 - Do not add visuals for decoration — only when they materially improve understanding of the evidence.
 
 ## Operating rules
@@ -48,6 +50,7 @@ Unresolved issues, disagreements between sources, gaps in evidence.
 - Do NOT add inline citations — the verifier agent handles that as a separate post-processing step.
 - Do NOT add a Sources section — the verifier agent builds that.
 - Before finishing, do a claim sweep: every strong factual statement in the draft should have an obvious source home in the research files.
+- Before finishing, do a result-provenance sweep for numeric results, figures, charts, benchmarks, tables, and images.
 
 ## Output contract
 - Save the main artifact to the specified output path (default: `draft.md`).

.github/workflows/publish.yml (vendored, 84 changed lines)

@@ -5,62 +5,64 @@ env:
 
 on:
   push:
-    tags:
-      - "v*"
+    branches: [main]
   workflow_dispatch:
-    inputs:
-      tag:
-        description: Existing git tag to publish and release (for example: v0.2.18)
-        required: true
-        type: string
 
 jobs:
-  verify:
+  version-check:
     runs-on: ubuntu-latest
     permissions:
       contents: read
     outputs:
-      tag: ${{ steps.meta.outputs.tag }}
-      version: ${{ steps.meta.outputs.version }}
+      version: ${{ steps.version.outputs.version }}
+      should_release: ${{ steps.version.outputs.should_release }}
     steps:
-      - name: Resolve release metadata
-        id: meta
+      - uses: actions/checkout@v6
+      - uses: actions/setup-node@v6
+        with:
+          node-version: 24
+          registry-url: "https://registry.npmjs.org"
+      - id: version
        shell: bash
         env:
-          INPUT_TAG: ${{ inputs.tag }}
-          REF_NAME: ${{ github.ref_name }}
+          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         run: |
-          TAG="${INPUT_TAG:-$REF_NAME}"
-          VERSION="${TAG#v}"
-          echo "tag=$TAG" >> "$GITHUB_OUTPUT"
-          echo "version=$VERSION" >> "$GITHUB_OUTPUT"
+          LOCAL=$(node -p "require('./package.json').version")
+          echo "version=$LOCAL" >> "$GITHUB_OUTPUT"
+          PUBLISHED=$(npm view @companion-ai/feynman version 2>/dev/null || true)
+          if [ "$PUBLISHED" = "$LOCAL" ] || gh release view "v$LOCAL" >/dev/null 2>&1; then
+            echo "should_release=false" >> "$GITHUB_OUTPUT"
+          else
+            echo "should_release=true" >> "$GITHUB_OUTPUT"
+          fi
+
+  verify:
+    needs: version-check
+    if: needs.version-check.outputs.should_release == 'true'
+    runs-on: ubuntu-latest
+    permissions:
+      contents: read
+    steps:
       - uses: actions/checkout@v6
-        with:
-          ref: refs/tags/${{ steps.meta.outputs.tag }}
       - uses: actions/setup-node@v6
         with:
           node-version: 24
           registry-url: "https://registry.npmjs.org"
       - run: npm ci
-      - name: Verify package version matches tag
-        shell: bash
-        run: |
-          ACTUAL="$(node -p "require('./package.json').version")"
-          EXPECTED="${{ steps.meta.outputs.version }}"
-          test "$ACTUAL" = "$EXPECTED"
       - run: npm test
       - run: npm pack
 
   publish-npm:
-    needs: verify
+    needs:
+      - version-check
+      - verify
+    if: needs.version-check.outputs.should_release == 'true' && needs.verify.result == 'success'
     runs-on: ubuntu-latest
     permissions:
       contents: read
       id-token: write
     steps:
       - uses: actions/checkout@v6
-        with:
-          ref: refs/tags/${{ needs.verify.outputs.tag }}
       - uses: actions/setup-node@v6
         with:
           node-version: 24
@@ -69,7 +71,8 @@ jobs:
       - run: npm publish --provenance --access public
 
   build-native-bundles:
-    needs: verify
+    needs: version-check
+    if: needs.version-check.outputs.should_release == 'true'
     strategy:
       fail-fast: false
       matrix:
@@ -87,8 +90,6 @@
       contents: read
     steps:
       - uses: actions/checkout@v6
-        with:
-          ref: refs/tags/${{ needs.verify.outputs.tag }}
       - uses: actions/setup-node@v6
         with:
           node-version: 24
@@ -121,8 +122,10 @@
 
   release-github:
     needs:
+      - version-check
       - publish-npm
       - build-native-bundles
+    if: needs.version-check.outputs.should_release == 'true' && needs.publish-npm.result == 'success' && needs.build-native-bundles.result == 'success'
     runs-on: ubuntu-latest
     permissions:
       contents: write
@@ -136,17 +139,18 @@
         env:
           GH_REPO: ${{ github.repository }}
           GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          TAG: ${{ needs.verify.outputs.tag }}
+          VERSION: ${{ needs.version-check.outputs.version }}
         run: |
-          if gh release view "$TAG" >/dev/null 2>&1; then
-            gh release upload "$TAG" release-assets/* --clobber
-            gh release edit "$TAG" \
-              --title "$TAG" \
+          if gh release view "v$VERSION" >/dev/null 2>&1; then
+            gh release upload "v$VERSION" release-assets/* --clobber
+            gh release edit "v$VERSION" \
+              --title "v$VERSION" \
               --notes "Standalone Feynman bundles for native installation." \
               --draft=false \
               --latest
           else
-            gh release create "$TAG" release-assets/* \
-              --title "$TAG" \
-              --notes "Standalone Feynman bundles for native installation."
+            gh release create "v$VERSION" release-assets/* \
+              --title "v$VERSION" \
+              --notes "Standalone Feynman bundles for native installation." \
+              --target "$GITHUB_SHA"
           fi

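For reference, the release gate the new `version-check` job computes can be sanity-checked locally. A minimal sketch, assuming `npm` and the `gh` CLI are on PATH and the working directory is the repo root (the filename is hypothetical; this mirrors the job's bash, it is not part of the workflow):

```
// check-release.mjs (hypothetical): local mirror of the version-check gate
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

const local = JSON.parse(readFileSync("./package.json", "utf8")).version;

// Version currently published to npm; empty if nothing is published yet.
let published = "";
try {
  published = execSync("npm view @companion-ai/feynman version", { encoding: "utf8" }).trim();
} catch {}

// Does a GitHub release for this version already exist?
let releaseExists = true;
try {
  execSync(`gh release view v${local}`, { stdio: "ignore" });
} catch {
  releaseExists = false;
}

const shouldRelease = published !== local && !releaseExists;
console.log({ local, published, shouldRelease });
```

The same skip condition is what makes pushes to `main` idempotent: a push that does not bump the `package.json` version yields `should_release=false`, so every downstream job stays skipped.
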
README.md (16 changed lines)

@@ -25,7 +25,7 @@ curl -fsSL https://feynman.is/install | bash
 irm https://feynman.is/install.ps1 | iex
 ```
 
-The one-line installer fetches the latest tagged release. To pin a version, pass it explicitly, for example `curl -fsSL https://feynman.is/install | bash -s -- 0.2.18`.
+The one-line installer fetches the latest tagged release. To pin a version, pass it explicitly, for example `curl -fsSL https://feynman.is/install | bash -s -- 0.2.28`.
 
 The installer downloads a standalone native bundle with its own Node.js runtime.
 
@@ -33,7 +33,7 @@ To upgrade the standalone app later, rerun the installer. `feynman update` only
 
 To uninstall the standalone app, remove the launcher and runtime bundle, then optionally remove `~/.feynman` if you also want to delete settings, sessions, and installed package state. If you also want to delete alphaXiv login state, remove `~/.ahub`. See the installation guide for platform-specific paths.
 
-Local models are supported through the custom-provider flow. For Ollama, run `feynman setup`, choose `Custom provider (baseUrl + API key)`, use `openai-completions`, and point it at `http://localhost:11434/v1`.
+Local models are supported through the setup flow. For LM Studio, run `feynman setup`, choose `LM Studio`, and keep the default `http://localhost:1234/v1` unless you changed the server port. For LiteLLM, choose `LiteLLM Proxy` and keep the default `http://localhost:4000/v1`. For Ollama or vLLM, choose `Custom provider (baseUrl + API key)`, use `openai-completions`, and point it at the local `/v1` endpoint.
 
 ### Skills Only
 
@@ -142,6 +142,18 @@ Built on [Pi](https://github.com/badlogic/pi-mono) for the agent runtime, [alpha
 
 ---
 
+### Star History
+
+<a href="https://www.star-history.com/?repos=getcompanion-ai%2Ffeynman&type=date&legend=top-left">
+  <picture>
+    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/chart?repos=getcompanion-ai/feynman&type=date&theme=dark&legend=top-left" />
+    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/chart?repos=getcompanion-ai/feynman&type=date&legend=top-left" />
+    <img alt="Star History Chart" src="https://api.star-history.com/chart?repos=getcompanion-ai/feynman&type=date&legend=top-left" />
+  </picture>
+</a>
+
+---
+
 ### Contributing
 
 See [CONTRIBUTING.md](CONTRIBUTING.md) for the full contributor guide.

package-lock.json (generated, 4 changed lines)

@@ -1,12 +1,12 @@
 {
   "name": "@companion-ai/feynman",
-  "version": "0.2.18",
+  "version": "0.2.28",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "@companion-ai/feynman",
-      "version": "0.2.18",
+      "version": "0.2.28",
       "hasInstallScript": true,
       "license": "MIT",
       "dependencies": {

package.json

@@ -1,6 +1,6 @@
 {
   "name": "@companion-ai/feynman",
-  "version": "0.2.18",
+  "version": "0.2.28",
   "description": "Research-first CLI agent built on Pi and alphaXiv",
   "license": "MIT",
   "type": "module",

@@ -9,7 +9,7 @@ Audit the paper and codebase for: $@
 Derive a short slug from the audit target (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
 
 Requirements:
-- Before starting, outline the audit plan: which paper, which repo, which claims to check. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
+- Before starting, outline the audit plan: which paper, which repo, which claims to check. Write the plan to `outputs/.plans/<slug>.md`. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
 - Use the `researcher` subagent for evidence gathering and the `verifier` subagent to verify sources and add inline citations when the audit is non-trivial.
 - Compare claimed methods, defaults, metrics, and data handling against the actual code.
 - Call out missing code, mismatches, ambiguous defaults, and reproduction risks.

@@ -9,7 +9,7 @@ Compare sources for: $@
 Derive a short slug from the comparison topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
 
 Requirements:
-- Before starting, outline the comparison plan: which sources to compare, which dimensions to evaluate, expected output structure. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
+- Before starting, outline the comparison plan: which sources to compare, which dimensions to evaluate, expected output structure. Write the plan to `outputs/.plans/<slug>.md`. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
 - Use the `researcher` subagent to gather source material when the comparison set is broad, and the `verifier` subagent to verify sources and add inline citations to the final matrix.
 - Build a comparison matrix covering: source, key claim, evidence type, caveats, confidence.
 - Generate charts with `pi-charts` when the comparison involves quantitative metrics. Use Mermaid for method or architecture comparisons.

@@ -51,7 +51,9 @@ If `CHANGELOG.md` exists, read the most recent relevant entries before finalizing
 
 Also save the plan with `memory_remember` (type: `fact`, key: `deepresearch.<slug>.plan`) so it survives context truncation.
 
-Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting in the terminal, give them a brief chance to request plan changes before proceeding.
+Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
+
+Do not stop after planning. If live search, subagents, web access, alphaXiv, or any other capability is unavailable, continue in degraded mode and write a durable blocked/partial report that records exactly which capabilities failed.
 
 ## 2. Scale decision
 
@@ -105,6 +107,13 @@ When the work spans multiple rounds, also append a concise chronological entry t
 
 Most topics need 1-2 rounds. Stop when additional rounds would not materially change conclusions.
 
+If no researcher files can be produced because tools, subagents, or network access failed, create `outputs/.drafts/<slug>-draft.md` yourself as a blocked report with:
+- what was requested,
+- which capabilities failed,
+- what evidence was and was not gathered,
+- a proposed source-gathering plan,
+- no invented sources or results.
+
 ## 5. Write the report
 
 Once evidence is sufficient, YOU write the full research brief directly. Do not delegate writing to another agent. Read the research files, synthesize the findings, and produce a complete document:
@@ -190,6 +199,7 @@ Before you stop, verify on disk that all of these exist:
 - `outputs/<slug>.provenance.md` or `papers/<slug>.provenance.md` provenance sidecar
 
 Do not stop at `<slug>-brief.md` alone. If the cited brief exists but the promoted final output or provenance sidecar does not, create them before responding.
+If full verification could not be completed, still create the final deliverable and provenance sidecar with `Verification: BLOCKED` or `PASS WITH NOTES` and list the missing checks. Never end with only an explanation in chat.
 
 ## Background execution
 

@@ -9,11 +9,12 @@ Write a paper-style draft for: $@
 Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
 
 Requirements:
-- Before writing, outline the draft structure: proposed title, sections, key claims to make, source material to draw from, and a verification log for the critical claims, figures, and calculations. Write the outline to `outputs/.plans/<slug>.md`. Present the outline to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
+- Before writing, outline the draft structure: proposed title, sections, key claims to make, source material to draw from, and a verification log for the critical claims, figures, and calculations. Write the outline to `outputs/.plans/<slug>.md`. Briefly summarize the outline to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested outline review.
 - Use the `writer` subagent when the draft should be produced from already-collected notes, then use the `verifier` subagent to add inline citations and verify sources.
 - Include at minimum: title, abstract, problem statement, related work, method or synthesis, evidence or experiments, limitations, conclusion.
 - Use clean Markdown with LaTeX where equations materially help.
-- Generate charts with `pi-charts` for quantitative data, benchmarks, and comparisons. Use Mermaid for architectures and pipelines. Every figure needs a caption.
+- Follow the system prompt's provenance rules for all results, figures, charts, images, tables, benchmarks, and quantitative comparisons. If evidence is missing, leave a placeholder or proposed experimental plan instead of claiming an outcome.
+- Generate charts with `pi-charts` only for source-backed quantitative data, benchmarks, and comparisons. Use Mermaid for architectures and pipelines only when the structure is supported by sources. Every figure needs a provenance-bearing caption.
 - Before delivery, sweep the draft for any claim that sounds stronger than its support. Mark tentative results as tentative and remove unsupported numerics instead of letting the verifier discover them later.
 - Save exactly one draft to `papers/<slug>.md`.
 - End with a `Sources` appendix with direct URLs for all primary references.

@@ -10,7 +10,7 @@ Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 words)
 
 ## Workflow
 
-1. **Plan** — Outline the scope: key questions, source types to search (papers, web, repos), time period, expected sections, and a small task ledger plus verification log. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
+1. **Plan** — Outline the scope: key questions, source types to search (papers, web, repos), time period, expected sections, and a small task ledger plus verification log. Write the plan to `outputs/.plans/<slug>.md`. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
 2. **Gather** — Use the `researcher` subagent when the sweep is wide enough to benefit from delegated paper triage before synthesis. For narrow topics, search directly. Researcher outputs go to `<slug>-research-*.md`. Do not silently skip assigned questions; mark them `done`, `blocked`, or `superseded`.
 3. **Synthesize** — Separate consensus, disagreements, and open questions. When useful, propose concrete next experiments or follow-up reading. Generate charts with `pi-charts` for quantitative comparisons across papers and Mermaid diagrams for taxonomies or method pipelines. Before finishing the draft, sweep every strong claim against the verification log and downgrade anything that is inferred or single-source critical.
 4. **Cite** — Spawn the `verifier` agent to add inline citations and verify every source URL in the draft.

@@ -9,7 +9,7 @@ Review this AI research artifact: $@
 Derive a short slug from the artifact name (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
 
 Requirements:
-- Before starting, outline what will be reviewed, the review criteria (novelty, empirical rigor, baselines, reproducibility, etc.), and any verification-specific checks needed for claims, figures, and reported metrics. Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
+- Before starting, outline what will be reviewed, the review criteria (novelty, empirical rigor, baselines, reproducibility, etc.), and any verification-specific checks needed for claims, figures, and reported metrics. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
 - Spawn a `researcher` subagent to gather evidence on the artifact — inspect the paper, code, cited work, and any linked experimental artifacts. Save to `<slug>-research.md`.
 - Spawn a `reviewer` subagent with `<slug>-research.md` to produce the final peer review with inline annotations.
 - For small or simple artifacts where evidence gathering is overkill, run the `reviewer` subagent directly instead.

@@ -101,7 +101,7 @@ print(f"[summarize] chunks={len(chunks)} chunk_size={chunk_size} overlap={overlap}
 
 ### 3b. Confirm before spawning
 
-If this is an unattended or one-shot run, continue automatically. Otherwise tell the user: "Source is ~<chars> chars -> <N> chunks -> <N> researcher subagents. This may take several minutes. Proceed?" Wait for confirmation before launching Tier 3.
+Briefly summarize: "Source is ~<chars> chars -> <N> chunks -> <N> researcher subagents. This may take several minutes." Then continue automatically. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested review before launching.
 
 ### 3c. Dispatch researcher subagents
 

@@ -9,7 +9,7 @@ Create a research watch for: $@
 Derive a short slug from the watch topic (lowercase, hyphens, no filler words, ≤5 words). Use this slug for all files in this run.
 
 Requirements:
-- Before starting, outline the watch plan: what to monitor, what signals matter, what counts as a meaningful change, and the check frequency. Write the plan to `outputs/.plans/<slug>.md`. Present the plan to the user. If this is an unattended or one-shot run, continue automatically. If the user is actively interacting, give them a brief chance to request changes before proceeding.
+- Before starting, outline the watch plan: what to monitor, what signals matter, what counts as a meaningful change, and the check frequency. Write the plan to `outputs/.plans/<slug>.md`. Briefly summarize the plan to the user and continue immediately. Do not ask for confirmation or wait for a proceed response unless the user explicitly requested plan review.
 - Start with a baseline sweep of the topic.
 - Use `schedule_prompt` to create the recurring or delayed follow-up instead of merely promising to check later.
 - Save exactly one baseline artifact to `outputs/<slug>-baseline.md`.

@@ -110,7 +110,7 @@ This usually means the release exists, but not all platform bundles were uploaded
 Workarounds:
 - try again after the release finishes publishing
 - pass the latest published version explicitly, e.g.:
-    & ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.18
+    & ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.28
 "@
 }
 

@@ -261,7 +261,7 @@ This usually means the release exists, but not all platform bundles were uploaded
 Workarounds:
 - try again after the release finishes publishing
 - pass the latest published version explicitly, e.g.:
-    curl -fsSL https://feynman.is/install | bash -s -- 0.2.18
+    curl -fsSL https://feynman.is/install | bash -s -- 0.2.28
 EOF
   exit 1
 fi

@@ -1,2 +1,3 @@
 export const PI_SUBAGENTS_PATCH_TARGETS: string[];
 export function patchPiSubagentsSource(relativePath: string, source: string): string;
+export function stripPiSubagentBuiltinModelSource(source: string): string;

@@ -66,6 +66,24 @@ function replaceAll(source, from, to) {
   return source.split(from).join(to);
 }
 
+export function stripPiSubagentBuiltinModelSource(source) {
+  if (!source.startsWith("---\n")) {
+    return source;
+  }
+
+  const endIndex = source.indexOf("\n---", 4);
+  if (endIndex === -1) {
+    return source;
+  }
+
+  const frontmatter = source.slice(4, endIndex);
+  const nextFrontmatter = frontmatter
+    .split("\n")
+    .filter((line) => !/^\s*model\s*:/.test(line))
+    .join("\n");
+  return `---\n${nextFrontmatter}${source.slice(endIndex)}`;
+}
+
 export function patchPiSubagentsSource(relativePath, source) {
   let patched = source;
 

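A quick usage sketch of the new frontmatter helper, assuming a built-in agent file whose frontmatter pins a model (the agent name and model value are hypothetical):

```
import { stripPiSubagentBuiltinModelSource } from "./lib/pi-subagents-patch.mjs";

// Hypothetical built-in agent definition with a pinned model.
const source = [
  "---",
  "name: researcher",
  "model: gpt-4o-mini",
  "---",
  "You are a research subagent.",
].join("\n");

// The `model:` line is dropped; everything else is preserved byte for byte.
console.log(stripPiSubagentBuiltinModelSource(source));
// ---
// name: researcher
// ---
// You are a research subagent.
```

Files without a leading `---` frontmatter block, or with an unterminated one, are returned unchanged.
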
@@ -1,5 +1,5 @@
 import { spawnSync } from "node:child_process";
-import { existsSync, lstatSync, mkdirSync, readFileSync, readlinkSync, rmSync, symlinkSync, writeFileSync } from "node:fs";
+import { existsSync, lstatSync, mkdirSync, readdirSync, readFileSync, readlinkSync, rmSync, symlinkSync, writeFileSync } from "node:fs";
 import { createRequire } from "node:module";
 import { homedir } from "node:os";
 import { delimiter, dirname, resolve } from "node:path";
@@ -9,7 +9,7 @@ import { patchAlphaHubAuthSource } from "./lib/alpha-hub-auth-patch.mjs";
 import { patchPiExtensionLoaderSource } from "./lib/pi-extension-loader-patch.mjs";
 import { patchPiGoogleLegacySchemaSource } from "./lib/pi-google-legacy-schema-patch.mjs";
 import { PI_WEB_ACCESS_PATCH_TARGETS, patchPiWebAccessSource } from "./lib/pi-web-access-patch.mjs";
-import { PI_SUBAGENTS_PATCH_TARGETS, patchPiSubagentsSource } from "./lib/pi-subagents-patch.mjs";
+import { PI_SUBAGENTS_PATCH_TARGETS, patchPiSubagentsSource, stripPiSubagentBuiltinModelSource } from "./lib/pi-subagents-patch.mjs";
 
 const here = dirname(fileURLToPath(import.meta.url));
 const appRoot = resolve(here, "..");
@@ -260,6 +260,23 @@ function ensureParentDir(path) {
   mkdirSync(dirname(path), { recursive: true });
 }
 
+function packageDependencyExists(packagePath, globalNodeModulesRoot, dependency) {
+  return existsSync(resolve(packagePath, "node_modules", dependency)) ||
+    existsSync(resolve(globalNodeModulesRoot, dependency));
+}
+
+function installedPackageLooksUsable(packagePath, globalNodeModulesRoot) {
+  if (!existsSync(resolve(packagePath, "package.json"))) return false;
+  try {
+    const pkg = JSON.parse(readFileSync(resolve(packagePath, "package.json"), "utf8"));
+    return Object.keys(pkg.dependencies ?? {}).every((dependency) =>
+      packageDependencyExists(packagePath, globalNodeModulesRoot, dependency)
+    );
+  } catch {
+    return false;
+  }
+}
+
 function linkPointsTo(linkPath, targetPath) {
   try {
     if (!lstatSync(linkPath).isSymbolicLink()) return false;
@@ -269,26 +286,53 @@ function linkPointsTo(linkPath, targetPath) {
   }
 }
 
+function listWorkspacePackageNames(root) {
+  if (!existsSync(root)) return [];
+  const names = [];
+  for (const entry of readdirSync(root, { withFileTypes: true })) {
+    if (!entry.isDirectory() && !entry.isSymbolicLink()) continue;
+    if (entry.name.startsWith(".")) continue;
+    if (entry.name.startsWith("@")) {
+      const scopeRoot = resolve(root, entry.name);
+      for (const scopedEntry of readdirSync(scopeRoot, { withFileTypes: true })) {
+        if (!scopedEntry.isDirectory() && !scopedEntry.isSymbolicLink()) continue;
+        names.push(`${entry.name}/${scopedEntry.name}`);
+      }
+      continue;
+    }
+    names.push(entry.name);
+  }
+  return names;
+}
+
+function linkBundledPackage(packageName) {
+  const sourcePath = resolve(workspaceRoot, packageName);
+  const targetPath = resolve(globalNodeModulesRoot, packageName);
+  if (!existsSync(sourcePath)) return false;
+  if (linkPointsTo(targetPath, sourcePath)) return false;
+  try {
+    if (lstatSync(targetPath).isSymbolicLink()) {
+      rmSync(targetPath, { force: true });
+    } else if (!installedPackageLooksUsable(targetPath, globalNodeModulesRoot)) {
+      rmSync(targetPath, { recursive: true, force: true });
+    }
+  } catch {}
+  if (existsSync(targetPath)) return false;
+
+  ensureParentDir(targetPath);
+  try {
+    symlinkSync(sourcePath, targetPath, process.platform === "win32" ? "junction" : "dir");
+    return true;
+  } catch {
+    return false;
+  }
+}
+
 function ensureBundledPackageLinks(packageSpecs) {
   if (!workspaceMatchesRuntime(packageSpecs)) return;
 
-  for (const spec of packageSpecs) {
-    const packageName = parsePackageName(spec);
-    const sourcePath = resolve(workspaceRoot, packageName);
-    const targetPath = resolve(globalNodeModulesRoot, packageName);
-    if (!existsSync(sourcePath)) continue;
-    if (linkPointsTo(targetPath, sourcePath)) continue;
-    try {
-      if (lstatSync(targetPath).isSymbolicLink()) {
-        rmSync(targetPath, { force: true });
-      }
-    } catch {}
-    if (existsSync(targetPath)) continue;
-
-    ensureParentDir(targetPath);
-    try {
-      symlinkSync(sourcePath, targetPath, process.platform === "win32" ? "junction" : "dir");
-    } catch {}
+  for (const packageName of listWorkspacePackageNames(workspaceRoot)) {
+    linkBundledPackage(packageName);
   }
 }
 
@@ -435,6 +479,19 @@ if (existsSync(piSubagentsRoot)) {
       writeFileSync(entryPath, patched, "utf8");
     }
   }
+
+  const builtinAgentsRoot = resolve(piSubagentsRoot, "agents");
+  if (existsSync(builtinAgentsRoot)) {
+    for (const entry of readdirSync(builtinAgentsRoot, { withFileTypes: true })) {
+      if (!entry.isFile() || !entry.name.endsWith(".md")) continue;
+      const entryPath = resolve(builtinAgentsRoot, entry.name);
+      const source = readFileSync(entryPath, "utf8");
+      const patched = stripPiSubagentBuiltinModelSource(source);
+      if (patched !== source) {
+        writeFileSync(entryPath, patched, "utf8");
+      }
+    }
+  }
 }
 
 if (packageJsonPath && existsSync(packageJsonPath)) {

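To see what the refactor changes in practice, here is a standalone sketch of the traversal that `ensureBundledPackageLinks` now relies on. The function body is copied from the diff above; the workspace layout is hypothetical:

```
import { existsSync, mkdirSync, mkdtempSync, readdirSync } from "node:fs";
import { tmpdir } from "node:os";
import { join, resolve } from "node:path";

// Copied from the installer above so the sketch runs standalone.
function listWorkspacePackageNames(root) {
  if (!existsSync(root)) return [];
  const names = [];
  for (const entry of readdirSync(root, { withFileTypes: true })) {
    if (!entry.isDirectory() && !entry.isSymbolicLink()) continue;
    if (entry.name.startsWith(".")) continue;
    if (entry.name.startsWith("@")) {
      const scopeRoot = resolve(root, entry.name);
      for (const scopedEntry of readdirSync(scopeRoot, { withFileTypes: true })) {
        if (!scopedEntry.isDirectory() && !scopedEntry.isSymbolicLink()) continue;
        names.push(`${entry.name}/${scopedEntry.name}`);
      }
      continue;
    }
    names.push(entry.name);
  }
  return names;
}

// Hypothetical workspace: one unscoped package, one scoped package,
// and a dot-directory that must be skipped.
const root = mkdtempSync(join(tmpdir(), "feynman-ws-"));
mkdirSync(resolve(root, "pi-charts"));
mkdirSync(resolve(root, "@companion-ai", "pi-generative-ui"), { recursive: true });
mkdirSync(resolve(root, ".cache"));

console.log(listWorkspacePackageNames(root));
// e.g. [ 'pi-charts', '@companion-ai/pi-generative-ui' ] (readdir order)
```

The practical difference from the old loop: every package physically present in the vendored workspace gets linked, not just those named in the current `packageSpecs`, and a non-symlink install that fails `installedPackageLooksUsable` is replaced rather than left broken.
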
@@ -1,7 +1,9 @@
-import { existsSync, mkdirSync, readFileSync, rmSync, statSync, writeFileSync } from "node:fs";
+import { existsSync, mkdirSync, readdirSync, readFileSync, rmSync, statSync, writeFileSync } from "node:fs";
 import { resolve } from "node:path";
 import { spawnSync } from "node:child_process";
 
+import { stripPiSubagentBuiltinModelSource } from "./lib/pi-subagents-patch.mjs";
+
 const appRoot = resolve(import.meta.dirname, "..");
 const settingsPath = resolve(appRoot, ".feynman", "settings.json");
 const feynmanDir = resolve(appRoot, ".feynman");
@@ -10,7 +12,7 @@ const workspaceNodeModulesDir = resolve(workspaceDir, "node_modules");
 const manifestPath = resolve(workspaceDir, ".runtime-manifest.json");
 const workspacePackageJsonPath = resolve(workspaceDir, "package.json");
 const workspaceArchivePath = resolve(feynmanDir, "runtime-workspace.tgz");
-const PRUNE_VERSION = 3;
+const PRUNE_VERSION = 4;
 
 function readPackageSpecs() {
   const settings = JSON.parse(readFileSync(settingsPath, "utf8"));
@@ -72,6 +74,17 @@ function writeWorkspacePackageJson() {
   );
 }
 
+function childNpmInstallEnv() {
+  return {
+    ...process.env,
+    // `npm pack --dry-run` exports dry-run config to lifecycle scripts. The
+    // vendored runtime workspace must still install real node_modules so the
+    // publish artifact can be validated without poisoning the archive.
+    npm_config_dry_run: "false",
+    NPM_CONFIG_DRY_RUN: "false",
+  };
+}
+
 function prepareWorkspace(packageSpecs) {
   rmSync(workspaceDir, { recursive: true, force: true });
   mkdirSync(workspaceDir, { recursive: true });
@@ -84,9 +97,9 @@ function prepareWorkspace(packageSpecs) {
   const result = spawnSync(
     process.env.npm_execpath ? process.execPath : "npm",
     process.env.npm_execpath
-      ? [process.env.npm_execpath, "install", "--prefer-offline", "--no-audit", "--no-fund", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs]
-      : ["install", "--prefer-offline", "--no-audit", "--no-fund", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs],
-    { stdio: "inherit" },
+      ? [process.env.npm_execpath, "install", "--prefer-offline", "--no-audit", "--no-fund", "--no-dry-run", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs]
+      : ["install", "--prefer-offline", "--no-audit", "--no-fund", "--no-dry-run", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs],
+    { stdio: "inherit", env: childNpmInstallEnv() },
   );
   if (result.status !== 0) {
     process.exit(result.status ?? 1);
@@ -122,6 +135,25 @@ function pruneWorkspace() {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
function stripBundledPiSubagentModelPins() {
|
||||||
|
const agentsRoot = resolve(workspaceNodeModulesDir, "pi-subagents", "agents");
|
||||||
|
if (!existsSync(agentsRoot)) {
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
let changed = false;
|
||||||
|
for (const entry of readdirSync(agentsRoot, { withFileTypes: true })) {
|
||||||
|
if (!entry.isFile() || !entry.name.endsWith(".md")) continue;
|
||||||
|
const entryPath = resolve(agentsRoot, entry.name);
|
||||||
|
const source = readFileSync(entryPath, "utf8");
|
||||||
|
const patched = stripPiSubagentBuiltinModelSource(source);
|
||||||
|
if (patched === source) continue;
|
||||||
|
writeFileSync(entryPath, patched, "utf8");
|
||||||
|
changed = true;
|
||||||
|
}
|
||||||
|
return changed;
|
||||||
|
}
|
||||||
|
|
||||||
function archiveIsCurrent() {
|
function archiveIsCurrent() {
|
||||||
if (!existsSync(workspaceArchivePath) || !existsSync(manifestPath)) {
|
if (!existsSync(workspaceArchivePath) || !existsSync(manifestPath)) {
|
||||||
return false;
|
return false;
|
||||||
@@ -145,6 +177,10 @@ const packageSpecs = readPackageSpecs();
|
|||||||
|
|
||||||
if (workspaceIsCurrent(packageSpecs)) {
|
if (workspaceIsCurrent(packageSpecs)) {
|
||||||
console.log("[feynman] vendored runtime workspace already up to date");
|
console.log("[feynman] vendored runtime workspace already up to date");
|
||||||
|
if (stripBundledPiSubagentModelPins()) {
|
||||||
|
writeManifest(packageSpecs);
|
||||||
|
console.log("[feynman] stripped bundled pi-subagents model pins");
|
||||||
|
}
|
||||||
if (archiveIsCurrent()) {
|
if (archiveIsCurrent()) {
|
||||||
process.exit(0);
|
process.exit(0);
|
||||||
}
|
}
|
||||||
@@ -157,6 +193,7 @@ if (workspaceIsCurrent(packageSpecs)) {
|
|||||||
console.log("[feynman] preparing vendored runtime workspace...");
|
console.log("[feynman] preparing vendored runtime workspace...");
|
||||||
prepareWorkspace(packageSpecs);
|
prepareWorkspace(packageSpecs);
|
||||||
pruneWorkspace();
|
pruneWorkspace();
|
||||||
|
stripBundledPiSubagentModelPins();
|
||||||
writeManifest(packageSpecs);
|
writeManifest(packageSpecs);
|
||||||
createWorkspaceArchive();
|
createWorkspaceArchive();
|
||||||
console.log("[feynman] vendored runtime workspace ready");
|
console.log("[feynman] vendored runtime workspace ready");
|
||||||
|
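A note on the dry-run override above, since it is easy to trip over when reusing this pattern: `npm pack --dry-run` exports its CLI config to lifecycle scripts as `npm_config_*` environment variables, so an `npm install` spawned from a `prepare`/`prepack` hook silently inherits dry-run mode and writes nothing. A minimal standalone sketch of the guard; the paths and package name here are hypothetical, not taken from this repository:

```typescript
import { spawnSync } from "node:child_process";

// Sketch: force a real install in a child npm process even when the parent
// npm invocation exported dry-run config (e.g. during `npm pack --dry-run`).
// Both spellings are set because npm reads `npm_config_dry_run` and some
// wrappers honor the uppercase form.
function childNpmInstallEnv(): NodeJS.ProcessEnv {
  return {
    ...process.env,
    npm_config_dry_run: "false",
    NPM_CONFIG_DRY_RUN: "false",
  };
}

// Hypothetical usage: vendor a package into ./workspace regardless of
// whatever config the surrounding npm lifecycle exported.
const result = spawnSync("npm", ["install", "--prefix", "workspace", "left-pad"], {
  stdio: "inherit",
  env: childNpmInstallEnv(),
});
process.exit(result.status ?? 1);
```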
@@ -48,6 +48,7 @@ const PROVIDER_LABELS: Record<string, string> = {
   huggingface: "Hugging Face",
   "amazon-bedrock": "Amazon Bedrock",
   "azure-openai-responses": "Azure OpenAI Responses",
+  litellm: "LiteLLM Proxy",
 };
 
 const RESEARCH_MODEL_PREFERENCES = [
@@ -83,6 +83,8 @@ const API_KEY_PROVIDERS: ApiKeyProviderInfo[] = [
   { id: "openai", label: "OpenAI Platform API", envVar: "OPENAI_API_KEY" },
   { id: "anthropic", label: "Anthropic API", envVar: "ANTHROPIC_API_KEY" },
   { id: "google", label: "Google Gemini API", envVar: "GEMINI_API_KEY" },
+  { id: "lm-studio", label: "LM Studio (local OpenAI-compatible server)" },
+  { id: "litellm", label: "LiteLLM Proxy (OpenAI-compatible gateway)" },
   { id: "__custom__", label: "Custom provider (local/self-hosted/proxy)" },
   { id: "amazon-bedrock", label: "Amazon Bedrock (AWS credential chain)" },
   { id: "openrouter", label: "OpenRouter", envVar: "OPENROUTER_API_KEY" },
@@ -126,13 +128,24 @@ export function resolveModelProviderForCommand(
   return undefined;
 }
 
+function apiKeyProviderHint(provider: ApiKeyProviderInfo): string {
+  if (provider.id === "__custom__") {
+    return "Ollama, vLLM, LM Studio, proxies";
+  }
+  if (provider.id === "lm-studio") {
+    return "http://localhost:1234/v1";
+  }
+  if (provider.id === "litellm") {
+    return "http://localhost:4000/v1";
+  }
+  return provider.envVar ?? provider.id;
+}
+
 async function selectApiKeyProvider(): Promise<ApiKeyProviderInfo | undefined> {
   const options: PromptSelectOption<ApiKeyProviderInfo | "cancel">[] = API_KEY_PROVIDERS.map((provider) => ({
     value: provider,
     label: provider.label,
-    hint: provider.id === "__custom__"
-      ? "Ollama, vLLM, LM Studio, proxies"
-      : provider.envVar ?? provider.id,
+    hint: apiKeyProviderHint(provider),
   }));
   options.push({ value: "cancel", label: "Cancel" });
 
@@ -362,6 +375,103 @@ async function promptCustomProviderSetup(): Promise<CustomProviderSetup | undefi
   return { providerId, modelIds, baseUrl, api, apiKeyConfig, authHeader };
 }
 
+async function promptLmStudioProviderSetup(): Promise<CustomProviderSetup | undefined> {
+  printSection("LM Studio");
+  printInfo("Start the LM Studio local server first, then load a model.");
+
+  const baseUrlRaw = await promptText("Base URL", "http://localhost:1234/v1");
+  const { baseUrl } = normalizeCustomProviderBaseUrl("openai-completions", baseUrlRaw);
+  if (!baseUrl) {
+    printWarning("Base URL is required.");
+    return undefined;
+  }
+
+  const detectedModelIds = await bestEffortFetchOpenAiModelIds(baseUrl, "lm-studio", false);
+  let modelIdsDefault = "local-model";
+  if (detectedModelIds && detectedModelIds.length > 0) {
+    const sample = detectedModelIds.slice(0, 10).join(", ");
+    printInfo(`Detected LM Studio models: ${sample}${detectedModelIds.length > 10 ? ", ..." : ""}`);
+    modelIdsDefault = detectedModelIds[0]!;
+  } else {
+    printInfo("No models detected from /models. Enter the exact model id shown in LM Studio.");
+  }
+
+  const modelIdsRaw = await promptText("Model id(s) (comma-separated)", modelIdsDefault);
+  const modelIds = normalizeModelIds(modelIdsRaw);
+  if (modelIds.length === 0) {
+    printWarning("At least one model id is required.");
+    return undefined;
+  }
+
+  return {
+    providerId: "lm-studio",
+    modelIds,
+    baseUrl,
+    api: "openai-completions",
+    apiKeyConfig: "lm-studio",
+    authHeader: false,
+  };
+}
+
+async function promptLiteLlmProviderSetup(): Promise<CustomProviderSetup | undefined> {
+  printSection("LiteLLM Proxy");
+  printInfo("Start the LiteLLM proxy first. Feynman uses the OpenAI-compatible chat-completions API.");
+
+  const baseUrlRaw = await promptText("Base URL", "http://localhost:4000/v1");
+  const { baseUrl } = normalizeCustomProviderBaseUrl("openai-completions", baseUrlRaw);
+  if (!baseUrl) {
+    printWarning("Base URL is required.");
+    return undefined;
+  }
+
+  const keyChoices = [
+    "Yes (use LITELLM_MASTER_KEY and send Authorization: Bearer <key>)",
+    "No (proxy runs without authentication)",
+    "Cancel",
+  ];
+  const keySelection = await promptChoice("Is the proxy protected by a master key?", keyChoices, 0);
+  if (keySelection >= 2) {
+    return undefined;
+  }
+
+  const hasKey = keySelection === 0;
+  const apiKeyConfig = hasKey ? "LITELLM_MASTER_KEY" : "local";
+  const authHeader = hasKey;
+  if (hasKey) {
+    printInfo("Set LITELLM_MASTER_KEY in your shell or .env before using Feynman.");
+  }
+
+  const resolvedKey = hasKey ? await resolveApiKeyConfig(apiKeyConfig) : apiKeyConfig;
+  const detectedModelIds = resolvedKey
+    ? await bestEffortFetchOpenAiModelIds(baseUrl, resolvedKey, authHeader)
+    : undefined;
+
+  let modelIdsDefault = "gpt-4";
+  if (detectedModelIds && detectedModelIds.length > 0) {
+    const sample = detectedModelIds.slice(0, 10).join(", ");
+    printInfo(`Detected LiteLLM models: ${sample}${detectedModelIds.length > 10 ? ", ..." : ""}`);
+    modelIdsDefault = detectedModelIds[0]!;
+  } else {
+    printInfo("No models detected from /models. Enter the model id(s) from your LiteLLM config.");
+  }
+
+  const modelIdsRaw = await promptText("Model id(s) (comma-separated)", modelIdsDefault);
+  const modelIds = normalizeModelIds(modelIdsRaw);
+  if (modelIds.length === 0) {
+    printWarning("At least one model id is required.");
+    return undefined;
+  }
+
+  return {
+    providerId: "litellm",
+    modelIds,
+    baseUrl,
+    api: "openai-completions",
+    apiKeyConfig,
+    authHeader,
+  };
+}
+
 async function verifyCustomProvider(setup: CustomProviderSetup, authPath: string): Promise<void> {
   const registry = createModelRegistry(authPath);
   const modelsError = registry.getError();
@@ -548,6 +658,56 @@ async function configureApiKeyProvider(authPath: string, providerId?: string): P
     return configureBedrockProvider(authPath);
   }
 
+  if (provider.id === "lm-studio") {
+    const setup = await promptLmStudioProviderSetup();
+    if (!setup) {
+      printInfo("LM Studio setup cancelled.");
+      return false;
+    }
+
+    const modelsJsonPath = getModelsJsonPath(authPath);
+    const result = upsertProviderConfig(modelsJsonPath, setup.providerId, {
+      baseUrl: setup.baseUrl,
+      apiKey: setup.apiKeyConfig,
+      api: setup.api,
+      authHeader: setup.authHeader,
+      models: setup.modelIds.map((id) => ({ id })),
+    });
+    if (!result.ok) {
+      printWarning(result.error);
+      return false;
+    }
+
+    printSuccess("Saved LM Studio provider.");
+    await verifyCustomProvider(setup, authPath);
+    return true;
+  }
+
+  if (provider.id === "litellm") {
+    const setup = await promptLiteLlmProviderSetup();
+    if (!setup) {
+      printInfo("LiteLLM setup cancelled.");
+      return false;
+    }
+
+    const modelsJsonPath = getModelsJsonPath(authPath);
+    const result = upsertProviderConfig(modelsJsonPath, setup.providerId, {
+      baseUrl: setup.baseUrl,
+      apiKey: setup.apiKeyConfig,
+      api: setup.api,
+      authHeader: setup.authHeader,
+      models: setup.modelIds.map((id) => ({ id })),
+    });
+    if (!result.ok) {
+      printWarning(result.error);
+      return false;
+    }
+
+    printSuccess("Saved LiteLLM provider.");
+    await verifyCustomProvider(setup, authPath);
+    return true;
+  }
+
   if (provider.id === "__custom__") {
     const setup = await promptCustomProviderSetup();
     if (!setup) {
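For orientation, the net effect of both new flows is one more entry under `providers` in `models.json`. A sketch of the object the LM Studio path persists, using only field names and defaults visible in the diff above; the surrounding file layout beyond these fields is an assumption:

```typescript
// Sketch of the provider entry written by upsertProviderConfig for the
// LM Studio flow above; the LiteLLM entry differs only in id, port, and
// the apiKey/authHeader pair when a master key is configured.
const lmStudioProviderEntry = {
  baseUrl: "http://localhost:1234/v1",
  apiKey: "lm-studio", // placeholder credential; the local server ignores it
  api: "openai-completions",
  authHeader: false, // no Authorization header is sent
  models: [{ id: "local-model" }],
};

console.log(JSON.stringify({ providers: { "lm-studio": lmStudioProviderEntry } }, null, 2));
```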
@@ -1,11 +1,41 @@
 import { dirname, resolve } from "node:path";
 
 import { AuthStorage, ModelRegistry } from "@mariozechner/pi-coding-agent";
+import { getModels } from "@mariozechner/pi-ai";
+import { anthropicOAuthProvider } from "@mariozechner/pi-ai/oauth";
 
 export function getModelsJsonPath(authPath: string): string {
   return resolve(dirname(authPath), "models.json");
 }
 
-export function createModelRegistry(authPath: string): ModelRegistry {
-  return ModelRegistry.create(AuthStorage.create(authPath), getModelsJsonPath(authPath));
+function registerFeynmanModelOverlays(modelRegistry: ModelRegistry): void {
+  const anthropicModels = getModels("anthropic");
+  if (anthropicModels.some((model) => model.id === "claude-opus-4-7")) {
+    return;
+  }
+
+  const opus46 = anthropicModels.find((model) => model.id === "claude-opus-4-6");
+  if (!opus46) {
+    return;
+  }
+
+  modelRegistry.registerProvider("anthropic", {
+    baseUrl: "https://api.anthropic.com",
+    api: "anthropic-messages",
+    oauth: anthropicOAuthProvider,
+    models: [
+      ...anthropicModels,
+      {
+        ...opus46,
+        id: "claude-opus-4-7",
+        name: "Claude Opus 4.7",
+      },
+    ],
+  });
+}
+
+export function createModelRegistry(authPath: string): ModelRegistry {
+  const registry = ModelRegistry.create(AuthStorage.create(authPath), getModelsJsonPath(authPath));
+  registerFeynmanModelOverlays(registry);
+  return registry;
 }
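The overlay is self-retiring: once the upstream `@mariozechner/pi-ai` catalog ships `claude-opus-4-7`, the early return makes `registerFeynmanModelOverlays` a no-op, and it also backs off when the `claude-opus-4-6` template it clones is absent. A hedged caller-side sketch; the auth path is hypothetical, and `find` is assumed to behave as exercised by the registry test further down:

```typescript
import { createModelRegistry } from "./registry.js";

// Sketch: the overlaid id resolves like any catalog model, so callers do not
// need to know whether it came from upstream pi-ai or from the overlay.
const registry = createModelRegistry("/home/user/.feynman/auth.json"); // hypothetical path
const model = registry.find("anthropic", "claude-opus-4-7");
console.log(model ? `resolvable: ${model.name}` : "catalog has no claude-opus-4-6 to clone from");
```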
@@ -1,5 +1,5 @@
 import { spawn } from "node:child_process";
-import { cpSync, existsSync, lstatSync, mkdirSync, readlinkSync, rmSync, symlinkSync, writeFileSync } from "node:fs";
+import { cpSync, existsSync, lstatSync, mkdirSync, readdirSync, readFileSync, readlinkSync, rmSync, symlinkSync, writeFileSync } from "node:fs";
 import { fileURLToPath } from "node:url";
 import { dirname, join, resolve } from "node:path";
 
@@ -169,6 +169,15 @@ function resolvePackageManagerCommand(settingsManager: SettingsManager): { comma
   return { command: executable, args };
 }
 
+function childPackageManagerEnv(): NodeJS.ProcessEnv {
+  return {
+    ...process.env,
+    PATH: getPathWithCurrentNode(process.env.PATH),
+    npm_config_dry_run: "false",
+    NPM_CONFIG_DRY_RUN: "false",
+  };
+}
+
 async function runPackageManagerInstall(
   settingsManager: SettingsManager,
   workingDir: string,
@@ -207,10 +216,7 @@ async function runPackageManagerInstall(
   const child = spawn(packageManagerCommand.command, args, {
     cwd: scope === "user" ? agentDir : workingDir,
     stdio: ["ignore", "pipe", "pipe"],
-    env: {
-      ...process.env,
-      PATH: getPathWithCurrentNode(process.env.PATH),
-    },
+    env: childPackageManagerEnv(),
   });
 
   child.stdout?.on("data", (chunk) => relayFilteredOutput(chunk, process.stdout));
@@ -423,6 +429,86 @@ function linkDirectory(linkPath: string, targetPath: string): void {
   }
 }
 
+function packageNameToPath(root: string, packageName: string): string {
+  return resolve(root, packageName);
+}
+
+function listBundledWorkspacePackageNames(root: string): string[] {
+  if (!existsSync(root)) {
+    return [];
+  }
+
+  const names: string[] = [];
+  for (const entry of readdirSync(root, { withFileTypes: true })) {
+    if (!entry.isDirectory() && !entry.isSymbolicLink()) continue;
+    if (entry.name.startsWith(".")) continue;
+    if (entry.name.startsWith("@")) {
+      const scopeRoot = resolve(root, entry.name);
+      for (const scopedEntry of readdirSync(scopeRoot, { withFileTypes: true })) {
+        if (!scopedEntry.isDirectory() && !scopedEntry.isSymbolicLink()) continue;
+        names.push(`${entry.name}/${scopedEntry.name}`);
+      }
+      continue;
+    }
+    names.push(entry.name);
+  }
+  return names;
+}
+
+function packageDependencyExists(packagePath: string, globalNodeModulesRoot: string, dependency: string): boolean {
+  return existsSync(packageNameToPath(resolve(packagePath, "node_modules"), dependency)) ||
+    existsSync(packageNameToPath(globalNodeModulesRoot, dependency));
+}
+
+function installedPackageLooksUsable(packagePath: string, globalNodeModulesRoot: string): boolean {
+  if (!existsSync(resolve(packagePath, "package.json"))) {
+    return false;
+  }
+
+  try {
+    const pkg = JSON.parse(readFileSync(resolve(packagePath, "package.json"), "utf8")) as {
+      dependencies?: Record<string, string>;
+    };
+    const dependencies = Object.keys(pkg.dependencies ?? {});
+    return dependencies.every((dependency) => packageDependencyExists(packagePath, globalNodeModulesRoot, dependency));
+  } catch {
+    return false;
+  }
+}
+
+function replaceBrokenPackageWithBundledCopy(targetPath: string, bundledPackagePath: string, globalNodeModulesRoot: string): boolean {
+  if (!existsSync(targetPath)) {
+    return false;
+  }
+  if (pathsMatchSymlinkTarget(targetPath, bundledPackagePath)) {
+    return false;
+  }
+  if (installedPackageLooksUsable(targetPath, globalNodeModulesRoot)) {
+    return false;
+  }
+
+  rmSync(targetPath, { recursive: true, force: true });
+  linkDirectory(targetPath, bundledPackagePath);
+  return true;
+}
+
+function seedBundledPackage(globalNodeModulesRoot: string, bundledNodeModulesRoot: string, packageName: string): boolean {
+  const bundledPackagePath = resolve(bundledNodeModulesRoot, packageName);
+  if (!existsSync(bundledPackagePath)) {
+    return false;
+  }
+
+  const targetPath = resolve(globalNodeModulesRoot, packageName);
+  if (replaceBrokenPackageWithBundledCopy(targetPath, bundledPackagePath, globalNodeModulesRoot)) {
+    return true;
+  }
+  if (!existsSync(targetPath)) {
+    linkDirectory(targetPath, bundledPackagePath);
+    return true;
+  }
+  return false;
+}
+
 export function seedBundledWorkspacePackages(
   agentDir: string,
   appRoot: string,
@@ -435,6 +521,10 @@ export function seedBundledWorkspacePackages(
 
   const globalNodeModulesRoot = resolve(getFeynmanNpmPrefixPath(agentDir), "lib", "node_modules");
   const seeded: string[] = [];
+  const bundledPackageNames = listBundledWorkspacePackageNames(bundledNodeModulesRoot);
+  for (const packageName of bundledPackageNames) {
+    seedBundledPackage(globalNodeModulesRoot, bundledNodeModulesRoot, packageName);
+  }
 
   for (const source of sources) {
     if (shouldSkipNativeSource(source)) continue;
@@ -442,12 +532,8 @@
     const parsed = parseNpmSource(source);
     if (!parsed) continue;
 
-    const bundledPackagePath = resolve(bundledNodeModulesRoot, parsed.name);
-    if (!existsSync(bundledPackagePath)) continue;
-
     const targetPath = resolve(globalNodeModulesRoot, parsed.name);
-    if (!existsSync(targetPath)) {
-      linkDirectory(targetPath, bundledPackagePath);
+    if (pathsMatchSymlinkTarget(targetPath, resolve(bundledNodeModulesRoot, parsed.name))) {
       seeded.push(source);
     }
   }
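The repair path above is deliberately conservative, and the probe it relies on is worth spelling out: a previously installed package is only swapped for a symlink to the bundled copy when its `package.json` is missing or unparseable, or when some declared dependency resolves neither inside the package itself nor in the shared global root. A standalone restatement of that probe, with hypothetical paths in the usage line:

```typescript
import { existsSync, readFileSync } from "node:fs";
import { resolve } from "node:path";

// Sketch of the usability probe: a package "looks usable" when its
// package.json parses and every declared dependency exists either in its own
// node_modules or in the shared global node_modules root.
function installedPackageLooksUsable(packagePath: string, globalRoot: string): boolean {
  const manifestPath = resolve(packagePath, "package.json");
  if (!existsSync(manifestPath)) return false;
  try {
    const pkg = JSON.parse(readFileSync(manifestPath, "utf8")) as { dependencies?: Record<string, string> };
    return Object.keys(pkg.dependencies ?? {}).every(
      (dep) =>
        existsSync(resolve(packagePath, "node_modules", dep)) ||
        existsSync(resolve(globalRoot, dep)),
    );
  } catch {
    return false;
  }
}

// Hypothetical usage against a possibly broken global install:
console.log(installedPackageLooksUsable(
  "/tmp/global/lib/node_modules/pi-markdown-preview",
  "/tmp/global/lib/node_modules",
));
```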
@@ -30,3 +30,70 @@ test("bundled prompts and skills do not contain blocked promotional product cont
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
});
|
});
|
||||||
|
|
||||||
|
test("research writing prompts forbid fabricated results and unproven figures", () => {
|
||||||
|
const draftPrompt = readFileSync(join(repoRoot, "prompts", "draft.md"), "utf8");
|
||||||
|
const systemPrompt = readFileSync(join(repoRoot, ".feynman", "SYSTEM.md"), "utf8");
|
||||||
|
const writerPrompt = readFileSync(join(repoRoot, ".feynman", "agents", "writer.md"), "utf8");
|
||||||
|
const verifierPrompt = readFileSync(join(repoRoot, ".feynman", "agents", "verifier.md"), "utf8");
|
||||||
|
|
||||||
|
for (const [label, content] of [
|
||||||
|
["system prompt", systemPrompt],
|
||||||
|
] as const) {
|
||||||
|
assert.match(content, /Never (invent|fabricate)/i, `${label} must explicitly forbid invented or fabricated results`);
|
||||||
|
assert.match(content, /(figure|chart|image|table)/i, `${label} must cover visual/table provenance`);
|
||||||
|
assert.match(content, /(provenance|source|artifact|script|raw)/i, `${label} must require traceable support`);
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const [label, content] of [
|
||||||
|
["writer prompt", writerPrompt],
|
||||||
|
["verifier prompt", verifierPrompt],
|
||||||
|
["draft prompt", draftPrompt],
|
||||||
|
] as const) {
|
||||||
|
assert.match(content, /system prompt.*provenance rule/i, `${label} must point back to the system provenance rule`);
|
||||||
|
}
|
||||||
|
|
||||||
|
assert.match(draftPrompt, /system prompt's provenance rules/i);
|
||||||
|
assert.match(draftPrompt, /placeholder or proposed experimental plan/i);
|
||||||
|
assert.match(draftPrompt, /source-backed quantitative data/i);
|
||||||
|
});
|
||||||
|
|
||||||
|
test("deepresearch workflow requires durable artifacts even when blocked", () => {
|
||||||
|
const systemPrompt = readFileSync(join(repoRoot, ".feynman", "SYSTEM.md"), "utf8");
|
||||||
|
const deepResearchPrompt = readFileSync(join(repoRoot, "prompts", "deepresearch.md"), "utf8");
|
||||||
|
|
||||||
|
assert.match(systemPrompt, /Do not claim you are only a static model/i);
|
||||||
|
assert.match(systemPrompt, /write the requested durable artifact/i);
|
||||||
|
assert.match(deepResearchPrompt, /Do not stop after planning/i);
|
||||||
|
assert.match(deepResearchPrompt, /degraded mode/i);
|
||||||
|
assert.match(deepResearchPrompt, /Verification: BLOCKED/i);
|
||||||
|
assert.match(deepResearchPrompt, /Never end with only an explanation in chat/i);
|
||||||
|
});
|
||||||
|
|
||||||
|
test("workflow prompts do not introduce implicit confirmation gates", () => {
|
||||||
|
const workflowPrompts = [
|
||||||
|
"audit.md",
|
||||||
|
"compare.md",
|
||||||
|
"deepresearch.md",
|
||||||
|
"draft.md",
|
||||||
|
"lit.md",
|
||||||
|
"review.md",
|
||||||
|
"summarize.md",
|
||||||
|
"watch.md",
|
||||||
|
];
|
||||||
|
const bannedConfirmationGates = [
|
||||||
|
/Do you want to proceed/i,
|
||||||
|
/Wait for confirmation/i,
|
||||||
|
/wait for user confirmation/i,
|
||||||
|
/give them a brief chance/i,
|
||||||
|
/request changes before proceeding/i,
|
||||||
|
];
|
||||||
|
|
||||||
|
for (const fileName of workflowPrompts) {
|
||||||
|
const content = readFileSync(join(repoRoot, "prompts", fileName), "utf8");
|
||||||
|
assert.match(content, /continue (immediately|automatically)/i, `${fileName} should keep running after planning`);
|
||||||
|
for (const pattern of bannedConfirmationGates) {
|
||||||
|
assert.doesNotMatch(content, pattern, `${fileName} contains confirmation gate ${pattern}`);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
});
|
||||||
|
@@ -7,6 +7,7 @@ import { join } from "node:path";
 import { resolveInitialPrompt, shouldRunInteractiveSetup } from "../src/cli.js";
 import { buildModelStatusSnapshotFromRecords, chooseRecommendedModel } from "../src/model/catalog.js";
 import { resolveModelProviderForCommand, setDefaultModelSpec } from "../src/model/commands.js";
+import { createModelRegistry } from "../src/model/registry.js";
 
 function createAuthPath(contents: Record<string, unknown>): string {
   const root = mkdtempSync(join(tmpdir(), "feynman-auth-"));
@@ -26,6 +27,17 @@ test("chooseRecommendedModel prefers the strongest authenticated research model"
   assert.equal(recommendation?.spec, "anthropic/claude-opus-4-6");
 });
 
+test("createModelRegistry overlays new Anthropic Opus model before upstream Pi updates", () => {
+  const authPath = createAuthPath({
+    anthropic: { type: "api_key", key: "anthropic-test-key" },
+  });
+
+  const registry = createModelRegistry(authPath);
+
+  assert.ok(registry.find("anthropic", "claude-opus-4-7"));
+  assert.equal(registry.getAvailable().some((model) => model.provider === "anthropic" && model.id === "claude-opus-4-7"), true);
+});
+
 test("setDefaultModelSpec accepts a unique bare model id from authenticated models", () => {
   const authPath = createAuthPath({
     openai: { type: "api_key", key: "openai-test-key" },
@@ -67,6 +79,24 @@ test("resolveModelProviderForCommand falls back to API-key providers when OAuth
   assert.equal(resolved?.id, "google");
 });
 
+test("resolveModelProviderForCommand supports LM Studio as a first-class local provider", () => {
+  const authPath = createAuthPath({});
+
+  const resolved = resolveModelProviderForCommand(authPath, "lm-studio");
+
+  assert.equal(resolved?.kind, "api-key");
+  assert.equal(resolved?.id, "lm-studio");
+});
+
+test("resolveModelProviderForCommand supports LiteLLM as a first-class proxy provider", () => {
+  const authPath = createAuthPath({});
+
+  const resolved = resolveModelProviderForCommand(authPath, "litellm");
+
+  assert.equal(resolved?.kind, "api-key");
+  assert.equal(resolved?.id, "litellm");
+});
+
 test("resolveModelProviderForCommand prefers OAuth when a provider supports both auth modes", () => {
   const authPath = createAuthPath({});
 
@@ -30,3 +30,45 @@ test("upsertProviderConfig creates models.json and merges provider config", () =
   assert.equal(parsed.providers.custom.authHeader, true);
   assert.deepEqual(parsed.providers.custom.models, [{ id: "llama3.1:8b" }]);
 });
+
+test("upsertProviderConfig writes LiteLLM proxy config with master key", () => {
+  const dir = mkdtempSync(join(tmpdir(), "feynman-litellm-"));
+  const modelsPath = join(dir, "models.json");
+
+  const result = upsertProviderConfig(modelsPath, "litellm", {
+    baseUrl: "http://localhost:4000/v1",
+    apiKey: "LITELLM_MASTER_KEY",
+    api: "openai-completions",
+    authHeader: true,
+    models: [{ id: "gpt-4o" }],
+  });
+  assert.deepEqual(result, { ok: true });
+
+  const parsed = JSON.parse(readFileSync(modelsPath, "utf8")) as any;
+  assert.equal(parsed.providers.litellm.baseUrl, "http://localhost:4000/v1");
+  assert.equal(parsed.providers.litellm.apiKey, "LITELLM_MASTER_KEY");
+  assert.equal(parsed.providers.litellm.api, "openai-completions");
+  assert.equal(parsed.providers.litellm.authHeader, true);
+  assert.deepEqual(parsed.providers.litellm.models, [{ id: "gpt-4o" }]);
+});
+
+test("upsertProviderConfig writes LiteLLM proxy config without master key", () => {
+  const dir = mkdtempSync(join(tmpdir(), "feynman-litellm-"));
+  const modelsPath = join(dir, "models.json");
+
+  const result = upsertProviderConfig(modelsPath, "litellm", {
+    baseUrl: "http://localhost:4000/v1",
+    apiKey: "local",
+    api: "openai-completions",
+    authHeader: false,
+    models: [{ id: "llama3" }],
+  });
+  assert.deepEqual(result, { ok: true });
+
+  const parsed = JSON.parse(readFileSync(modelsPath, "utf8")) as any;
+  assert.equal(parsed.providers.litellm.baseUrl, "http://localhost:4000/v1");
+  assert.equal(parsed.providers.litellm.apiKey, "local");
+  assert.equal(parsed.providers.litellm.api, "openai-completions");
+  assert.equal(parsed.providers.litellm.authHeader, false);
+  assert.deepEqual(parsed.providers.litellm.models, [{ id: "llama3" }]);
+});
@@ -6,13 +6,17 @@ import { join, resolve } from "node:path";
 
 import { installPackageSources, seedBundledWorkspacePackages, updateConfiguredPackages } from "../src/pi/package-ops.js";
 
-function createBundledWorkspace(appRoot: string, packageNames: string[]): void {
+function createBundledWorkspace(
+  appRoot: string,
+  packageNames: string[],
+  dependenciesByPackage: Record<string, Record<string, string>> = {},
+): void {
   for (const packageName of packageNames) {
     const packageDir = resolve(appRoot, ".feynman", "npm", "node_modules", packageName);
     mkdirSync(packageDir, { recursive: true });
     writeFileSync(
       join(packageDir, "package.json"),
-      JSON.stringify({ name: packageName, version: "1.0.0" }, null, 2) + "\n",
+      JSON.stringify({ name: packageName, version: "1.0.0", dependencies: dependenciesByPackage[packageName] }, null, 2) + "\n",
       "utf8",
     );
   }
@@ -76,6 +80,34 @@ test("seedBundledWorkspacePackages preserves existing installed packages", () =>
   assert.equal(lstatSync(existingPackageDir).isSymbolicLink(), false);
 });
 
+test("seedBundledWorkspacePackages repairs broken existing bundled packages", () => {
+  const appRoot = mkdtempSync(join(tmpdir(), "feynman-bundle-"));
+  const homeRoot = mkdtempSync(join(tmpdir(), "feynman-home-"));
+  const agentDir = resolve(homeRoot, "agent");
+  const existingPackageDir = resolve(homeRoot, "npm-global", "lib", "node_modules", "pi-markdown-preview");
+
+  mkdirSync(agentDir, { recursive: true });
+  createBundledWorkspace(appRoot, ["pi-markdown-preview", "puppeteer-core"], {
+    "pi-markdown-preview": { "puppeteer-core": "^24.0.0" },
+  });
+  mkdirSync(existingPackageDir, { recursive: true });
+  writeFileSync(
+    resolve(existingPackageDir, "package.json"),
+    JSON.stringify({ name: "pi-markdown-preview", version: "broken", dependencies: { "puppeteer-core": "^24.0.0" } }) + "\n",
+    "utf8",
+  );
+
+  const seeded = seedBundledWorkspacePackages(agentDir, appRoot, ["npm:pi-markdown-preview"]);
+
+  assert.deepEqual(seeded, ["npm:pi-markdown-preview"]);
+  assert.equal(lstatSync(existingPackageDir).isSymbolicLink(), true);
+  assert.equal(lstatSync(resolve(homeRoot, "npm-global", "lib", "node_modules", "puppeteer-core")).isSymbolicLink(), true);
+  assert.equal(
+    readFileSync(resolve(existingPackageDir, "package.json"), "utf8").includes('"version": "1.0.0"'),
+    true,
+  );
+});
+
 test("installPackageSources filters noisy npm chatter but preserves meaningful output", async () => {
   const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
   const workingDir = resolve(root, "project");
@@ -156,6 +188,46 @@ test("installPackageSources skips native packages on unsupported Node majors bef
   }
 });
 
+test("installPackageSources disables inherited npm dry-run config for child installs", async () => {
+  const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
+  const workingDir = resolve(root, "project");
+  const agentDir = resolve(root, "agent");
+  const markerPath = resolve(root, "install-env-ok.txt");
+  mkdirSync(workingDir, { recursive: true });
+
+  const scriptPath = writeFakeNpmScript(root, [
+    `import { writeFileSync } from "node:fs";`,
+    `if (process.env.npm_config_dry_run !== "false" || process.env.NPM_CONFIG_DRY_RUN !== "false") process.exit(42);`,
+    `writeFileSync(${JSON.stringify(markerPath)}, "ok\\n", "utf8");`,
+    "process.exit(0);",
+  ].join("\n"));
+
+  writeSettings(agentDir, {
+    npmCommand: [process.execPath, scriptPath],
+  });
+
+  const originalLower = process.env.npm_config_dry_run;
+  const originalUpper = process.env.NPM_CONFIG_DRY_RUN;
+  process.env.npm_config_dry_run = "true";
+  process.env.NPM_CONFIG_DRY_RUN = "true";
+  try {
+    const result = await installPackageSources(workingDir, agentDir, ["npm:test-package"]);
+    assert.deepEqual(result.installed, ["npm:test-package"]);
+    assert.equal(existsSync(markerPath), true);
+  } finally {
+    if (originalLower === undefined) {
+      delete process.env.npm_config_dry_run;
+    } else {
+      process.env.npm_config_dry_run = originalLower;
+    }
+    if (originalUpper === undefined) {
+      delete process.env.NPM_CONFIG_DRY_RUN;
+    } else {
+      process.env.NPM_CONFIG_DRY_RUN = originalUpper;
    }
+  }
+});
+
 test("updateConfiguredPackages batches multiple npm updates into a single install per scope", async () => {
   const root = mkdtempSync(join(tmpdir(), "feynman-package-ops-"));
   const workingDir = resolve(root, "project");
@@ -186,7 +258,7 @@ test("updateConfiguredPackages batches multiple npm updates into a single instal
   globalThis.fetch = (async () => ({
     ok: true,
     json: async () => ({ version: "2.0.0" }),
-  })) as typeof fetch;
+  })) as unknown as typeof fetch;
 
   try {
     const result = await updateConfiguredPackages(workingDir, agentDir);
@@ -234,7 +306,7 @@ test("updateConfiguredPackages skips native package updates on unsupported Node
   globalThis.fetch = (async () => ({
     ok: true,
    json: async () => ({ version: "2.0.0" }),
-  })) as typeof fetch;
+  })) as unknown as typeof fetch;
   Object.defineProperty(process.versions, "node", { value: "25.0.0", configurable: true });
 
   try {
@@ -1,7 +1,7 @@
 import test from "node:test";
 import assert from "node:assert/strict";
 
-import { patchPiSubagentsSource } from "../scripts/lib/pi-subagents-patch.mjs";
+import { patchPiSubagentsSource, stripPiSubagentBuiltinModelSource } from "../scripts/lib/pi-subagents-patch.mjs";
 
 const CASES = [
   {
@@ -140,3 +140,22 @@ test("patchPiSubagentsSource rewrites modern agents.ts discovery paths", () => {
   assert.ok(!patched.includes('loadChainsFromDir(userDirNew, "user")'));
   assert.ok(!patched.includes('fs.existsSync(userDirNew) ? userDirNew : userDirOld'));
 });
+
+test("stripPiSubagentBuiltinModelSource removes built-in model pins", () => {
+  const input = [
+    "---",
+    "name: researcher",
+    "description: Web researcher",
+    "model: anthropic/claude-sonnet-4-6",
+    "tools: read, web_search",
+    "---",
+    "",
+    "Body",
+  ].join("\n");
+
+  const patched = stripPiSubagentBuiltinModelSource(input);
+
+  assert.ok(!patched.includes("model: anthropic/claude-sonnet-4-6"));
+  assert.match(patched, /name: researcher/);
+  assert.match(patched, /tools: read, web_search/);
+});
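`stripPiSubagentBuiltinModelSource` itself is not shown in this diff; based on the test above it only needs to drop the `model:` line from an agent file's YAML frontmatter while leaving the other keys intact. One way it could plausibly be written — an assumption for illustration, not the shipped implementation:

```typescript
// Hypothetical sketch: remove a `model: ...` pin from the leading YAML
// frontmatter block of a pi-subagents agent markdown file, leaving the
// remaining keys and the body untouched.
export function stripPiSubagentBuiltinModelSource(source: string): string {
  const match = source.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return source;
  const frontmatter = match[1] ?? "";
  const stripped = frontmatter
    .split("\n")
    .filter((line) => !/^model:\s/.test(line))
    .join("\n");
  return source.replace(match[0], `---\n${stripped}\n---`);
}
```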
@@ -261,7 +261,7 @@ This usually means the release exists, but not all platform bundles were uploade
 Workarounds:
 - try again after the release finishes publishing
 - pass the latest published version explicitly, e.g.:
-    curl -fsSL https://feynman.is/install | bash -s -- 0.2.18
+    curl -fsSL https://feynman.is/install | bash -s -- 0.2.28
 EOF
   exit 1
 fi
@@ -110,7 +110,7 @@ This usually means the release exists, but not all platform bundles were uploade
 Workarounds:
 - try again after the release finishes publishing
 - pass the latest published version explicitly, e.g.:
-    & ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.18
+    & ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.28
 "@
 }
 
@@ -117,13 +117,13 @@ These installers download the bundled `skills/` and `prompts/` trees plus the re
 The one-line installer already targets the latest tagged release. To pin an exact version, pass it explicitly:
 
 ```bash
-curl -fsSL https://feynman.is/install | bash -s -- 0.2.18
+curl -fsSL https://feynman.is/install | bash -s -- 0.2.28
 ```
 
 On Windows:
 
 ```powershell
-& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.18
+& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.28
 ```
 
 ## Post-install setup
@@ -52,9 +52,41 @@ Amazon Bedrock (AWS credential chain)
 
 Feynman verifies the same AWS credential chain Pi uses at runtime, including `AWS_PROFILE`, `~/.aws` credentials/config, SSO, ECS/IRSA, and EC2 instance roles. Once that check passes, Bedrock models become available in `feynman model list` without needing a traditional API key.
 
-### Local models: Ollama, LM Studio, vLLM
+### Local models: LM Studio, LiteLLM, Ollama, vLLM
 
-If you want to use a model running locally, choose the API-key flow and then select:
+If you want to use LM Studio, start the LM Studio local server, load a model, choose the API-key flow, and then select:
 
+```text
+LM Studio (local OpenAI-compatible server)
+```
+
+The default settings are:
+
+```text
+Base URL: http://localhost:1234/v1
+Authorization header: No
+API key: lm-studio
+```
+
+Feynman attempts to read LM Studio's `/models` endpoint and prefill the loaded model id.
+
+For LiteLLM, start the proxy, choose the API-key flow, and then select:
+
+```text
+LiteLLM Proxy (OpenAI-compatible gateway)
+```
+
+The default settings are:
+
+```text
+Base URL: http://localhost:4000/v1
+API mode: openai-completions
+Master key: optional, read from LITELLM_MASTER_KEY
+```
+
+Feynman attempts to read LiteLLM's `/models` endpoint and prefill model ids from the proxy config.
+
+For Ollama, vLLM, or another OpenAI-compatible local server, choose:
+
 ```text
 Custom provider (baseUrl + API key)
@@ -70,7 +102,7 @@ Model ids: llama3.1:8b
 API key: local
 ```
 
-That same custom-provider flow also works for other OpenAI-compatible local servers such as LM Studio or vLLM. After saving the provider, run:
+After saving the provider, run:
 
 ```bash
 feynman model list
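Both "Feynman attempts to read ..." steps in the docs above go through the standard OpenAI-compatible `GET /models` route, so you can check in advance what setup will detect. A small standalone probe (Node 18+ for global `fetch`); the URLs are the documented defaults above, and the bearer key is only needed for a master-key-protected LiteLLM proxy:

```typescript
// Sketch: list model ids from an OpenAI-compatible local server the same way
// a best-effort /models probe would.
async function listModelIds(baseUrl: string, apiKey?: string): Promise<string[]> {
  const response = await fetch(`${baseUrl.replace(/\/$/, "")}/models`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
  });
  if (!response.ok) throw new Error(`GET /models failed: ${response.status}`);
  const body = (await response.json()) as { data?: Array<{ id: string }> };
  return (body.data ?? []).map((model) => model.id);
}

// LM Studio default; for LiteLLM use http://localhost:4000/v1 and pass
// process.env.LITELLM_MASTER_KEY when the proxy is protected.
listModelIds("http://localhost:1234/v1").then((ids) => console.log(ids));
```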
@@ -35,6 +35,8 @@ When working from existing session context (after a deep research or literature
 
 The writer pays attention to academic conventions: claims are attributed to their sources with inline citations, methodology sections describe procedures precisely, and limitations are discussed honestly. The draft includes placeholder sections for any content the writer cannot generate from available sources, clearly marking what needs human input.
 
+Drafts follow Feynman's system-wide provenance rules: unsupported results, figures, images, tables, or benchmark data should become clearly labeled gaps or TODOs, not plausible-looking claims.
+
 ## Output format
 
 The draft follows standard academic structure: