Compare commits

7 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 40939859b9 | |
| | 6f3eeea75b | |
| | 1b53e3b7f1 | |
| | ec4cbfb57e | |
| | 1cd1a147f2 | |
| | 92914acff7 | |
| | f0bbb25910 | |
@@ -25,7 +25,7 @@ curl -fsSL https://feynman.is/install | bash
 irm https://feynman.is/install.ps1 | iex
 ```

-The one-line installer fetches the latest tagged release. To pin a version, pass it explicitly, for example `curl -fsSL https://feynman.is/install | bash -s -- 0.2.25`.
+The one-line installer fetches the latest tagged release. To pin a version, pass it explicitly, for example `curl -fsSL https://feynman.is/install | bash -s -- 0.2.31`.

 The installer downloads a standalone native bundle with its own Node.js runtime.
1105  package-lock.json  (generated)
File diff suppressed because it is too large

21  package.json
@@ -1,6 +1,6 @@
 {
 	"name": "@companion-ai/feynman",
-	"version": "0.2.25",
+	"version": "0.2.32",
 	"description": "Research-first CLI agent built on Pi and alphaXiv",
 	"license": "MIT",
 	"type": "module",
@@ -61,16 +61,16 @@
 	"dependencies": {
 		"@clack/prompts": "^1.2.0",
 		"@companion-ai/alpha-hub": "^0.1.3",
-		"@mariozechner/pi-ai": "^0.66.1",
-		"@mariozechner/pi-coding-agent": "^0.66.1",
-		"@sinclair/typebox": "^0.34.48",
-		"dotenv": "^17.3.1"
+		"@mariozechner/pi-ai": "^0.67.6",
+		"@mariozechner/pi-coding-agent": "^0.67.6",
+		"@sinclair/typebox": "^0.34.49",
+		"dotenv": "^17.4.2"
 	},
 	"overrides": {
-		"basic-ftp": "5.2.2",
+		"basic-ftp": "5.3.0",
 		"@modelcontextprotocol/sdk": {
-			"@hono/node-server": "1.19.13",
-			"hono": "4.12.12"
+			"@hono/node-server": "1.19.14",
+			"hono": "4.12.14"
 		},
 		"express": {
 			"router": {
@@ -80,16 +80,17 @@
 		"proxy-agent": {
 			"pac-proxy-agent": {
 				"get-uri": {
-					"basic-ftp": "5.2.2"
+					"basic-ftp": "5.3.0"
 				}
 			}
 		},
+		"protobufjs": "7.5.5",
 		"minimatch": {
 			"brace-expansion": "5.0.5"
 		}
 	},
 	"devDependencies": {
-		"@types/node": "^25.5.0",
+		"@types/node": "^25.6.0",
 		"tsx": "^4.21.0",
 		"typescript": "^5.9.3"
 	},
@@ -6,6 +6,8 @@ topLevelCli: true
 ---
 Run a deep research workflow for: $@

+This is an execution request, not a request to explain or implement the workflow instructions. Carry out the workflow with tools and durable files. Do not answer by describing the protocol, converting it into programming steps, or saying how someone could implement it.
+
 You are the Lead Researcher. You plan, delegate, evaluate, verify, write, and cite. Internal orchestration is invisible to the user unless they ask.

 ## 1. Plan
@@ -17,6 +19,8 @@ Analyze the research question using extended thinking. Develop a research strate
 - Source types and time periods that matter
 - Acceptance criteria: what evidence would make the answer "sufficient"

+Make the scale decision before assigning owners in the plan. If the topic is a narrow "what is X" explainer, the plan must use lead-owned direct search tasks only; do not allocate researcher subagents in the task ledger.
+
 Derive a short slug from the topic (lowercase, hyphens, no filler words, ≤5 words — e.g. "cloud-sandbox-pricing" not "deepresearch-plan"). Write the plan to `outputs/.plans/<slug>.md` as a self-contained artifact. Use this same slug for all artifacts in this run.
 If `CHANGELOG.md` exists, read the most recent relevant entries before finalizing the plan. Once the workflow becomes multi-round or spans enough work to merit resume support, append concise entries to `CHANGELOG.md` after meaningful progress and before stopping.

@@ -59,15 +63,19 @@ Do not stop after planning. If live search, subagents, web access, alphaXiv, or

 | Query type | Execution |
 |---|---|
-| Single fact or narrow question | Search directly yourself, no subagents, 3-10 tool calls |
+| Single fact or narrow question, including "what is X" explainers | Search directly yourself, no subagents, 3-10 tool calls |
 | Direct comparison (2-3 items) | 2 parallel `researcher` subagents |
 | Broad survey or multi-faceted topic | 3-4 parallel `researcher` subagents |
 | Complex multi-domain research | 4-6 parallel `researcher` subagents |

 Never spawn subagents for work you can do in 5 tool calls.
+For "what is X" explainer topics, you MUST NOT spawn researcher subagents unless the user explicitly asks for comprehensive coverage, current landscape, benchmarks, or production deployment.
+Do not inflate a simple explainer into a multi-agent survey.

 ## 3. Spawn researchers

+Skip this section entirely when the scale decision chose direct search/no subagents. In that case, gather evidence yourself with search/fetch/paper tools, write notes directly to `<slug>-research-direct.md`, and continue to Section 4.
+
 Launch parallel `researcher` subagents via `subagent`. Each gets a structured brief with:
 - **Objective:** what to find
 - **Output format:** numbered sources, evidence table, inline source references
@@ -76,12 +84,16 @@ Launch parallel `researcher` subagents via `subagent`. Each gets a structured br
 - **Task IDs:** the specific ledger rows they own and must report back on

 Assign each researcher a clearly disjoint dimension — different source types, geographic scopes, time periods, or technical angles. Never duplicate coverage.
+Keep `subagent` tool-call JSON small and valid. For detailed task instructions, write a per-researcher brief first, e.g. `outputs/.plans/<slug>-T1.md`, then pass a short task string that points to that brief and the required output file. Do not place multi-paragraph instructions inside the `subagent` JSON.
+Use only supported `subagent` keys. Do not add extra keys such as `artifacts` unless the tool schema explicitly exposes them.
+When using parallel researchers, always set `failFast: false` so one blocked researcher does not abort the whole workflow.
+Do not name exact tool commands in subagent tasks unless those tool names are visible in the current tool set. Prefer broad guidance such as "use paper search and web search"; if a PDF parser or paper fetch fails, the researcher must continue from metadata, abstracts, and web sources and mark PDF parsing as blocked.

 ```
 {
 	tasks: [
-		{ agent: "researcher", task: "...", output: "<slug>-research-web.md" },
-		{ agent: "researcher", task: "...", output: "<slug>-research-papers.md" }
+		{ agent: "researcher", task: "Read outputs/.plans/<slug>-T1.md and write <slug>-research-web.md.", output: "<slug>-research-web.md" },
+		{ agent: "researcher", task: "Read outputs/.plans/<slug>-T2.md and write <slug>-research-papers.md.", output: "<slug>-research-papers.md" }
 	],
 	concurrency: 4,
 	failFast: false
@@ -148,25 +160,29 @@ Save this draft to `outputs/.drafts/<slug>-draft.md`.
 Spawn the `verifier` agent to post-process YOUR draft. The verifier agent adds inline citations, verifies every source URL, and produces the final output:

 ```
-{ agent: "verifier", task: "Add inline citations to <slug>-draft.md using the research files as source material. Verify every URL.", output: "<slug>-brief.md" }
+{ agent: "verifier", task: "Add inline citations to outputs/.drafts/<slug>-draft.md using the research files as source material. Verify every URL. Write the complete cited brief to outputs/.drafts/<slug>-cited.md.", output: "outputs/.drafts/<slug>-cited.md" }
 ```

 The verifier agent does not rewrite the report — it only anchors claims to sources and builds the numbered Sources section.
+This step is mandatory and must complete before any reviewer runs. Do not run the `verifier` and `reviewer` in the same parallel `subagent` call.
+After the verifier returns, verify on disk that `outputs/.drafts/<slug>-cited.md` exists. If the verifier wrote to a different path, find the cited file, move or copy it to `outputs/.drafts/<slug>-cited.md`, and use that path from this point forward.

 ## 7. Verify

-Spawn the `reviewer` agent against the cited draft. The reviewer checks for:
+Only after `outputs/.drafts/<slug>-cited.md` exists, spawn the `reviewer` agent against that cited draft. The reviewer checks for:
 - Unsupported claims that slipped past citation
 - Logical gaps or contradictions between sections
 - Single-source claims on critical findings
 - Overstated confidence relative to evidence quality

 ```
-{ agent: "reviewer", task: "Verify <slug>-brief.md — flag any claims that lack sufficient source backing, identify logical gaps, and check that confidence levels match evidence strength. This is a verification pass, not a peer review.", output: "<slug>-verification.md" }
+{ agent: "reviewer", task: "Verify outputs/.drafts/<slug>-cited.md — flag any claims that lack sufficient source backing, identify logical gaps, and check that confidence levels match evidence strength. This is a verification pass, not a peer review.", output: "<slug>-verification.md" }
 ```

 If the reviewer flags FATAL issues, fix them in the brief before delivering. MAJOR issues get noted in the Open Questions section. MINOR issues are accepted.
 After fixes, run at least one more review-style verification pass if any FATAL issues were found. Do not assume one fix solved everything.
+When applying reviewer fixes, do not issue one giant `edit` tool call with many replacements. Use small localized edits only when there are 1-3 simple corrections. For section rewrites, table rewrites, or more than 3 substantive fixes, read the cited draft and write a corrected full file to `outputs/.drafts/<slug>-revised.md` instead. Then run the follow-up review against `outputs/.drafts/<slug>-revised.md`.
+The final candidate is `outputs/.drafts/<slug>-revised.md` if it exists; otherwise it is `outputs/.drafts/<slug>-cited.md`.

 ## 8. Deliver

@@ -194,11 +210,11 @@ Write a provenance record alongside it as `<slug>.provenance.md`:
 Before you stop, verify on disk that all of these exist:
 - `outputs/.plans/<slug>.md`
 - `outputs/.drafts/<slug>-draft.md`
-- `<slug>-brief.md` intermediate cited brief
+- `outputs/.drafts/<slug>-cited.md` intermediate cited brief
 - `outputs/<slug>.md` or `papers/<slug>.md` final promoted deliverable
 - `outputs/<slug>.provenance.md` or `papers/<slug>.provenance.md` provenance sidecar

-Do not stop at `<slug>-brief.md` alone. If the cited brief exists but the promoted final output or provenance sidecar does not, create them before responding.
+Do not stop at the cited or revised draft alone. If the cited/revised brief exists but the promoted final output or provenance sidecar does not, create them before responding.
 If full verification could not be completed, still create the final deliverable and provenance sidecar with `Verification: BLOCKED` or `PASS WITH NOTES` and list the missing checks. Never end with only an explanation in chat.

 ## Background execution
@@ -110,7 +110,7 @@ This usually means the release exists, but not all platform bundles were uploade
 Workarounds:
 - try again after the release finishes publishing
 - pass the latest published version explicitly, e.g.:
-  & ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.25
+  & ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.31
 "@
 }

@@ -261,7 +261,7 @@ This usually means the release exists, but not all platform bundles were uploade
 Workarounds:
 - try again after the release finishes publishing
 - pass the latest published version explicitly, e.g.:
-  curl -fsSL https://feynman.is/install | bash -s -- 0.2.25
+  curl -fsSL https://feynman.is/install | bash -s -- 0.2.31
 EOF
 exit 1
 fi
@@ -1,2 +1,3 @@
 export const PI_SUBAGENTS_PATCH_TARGETS: string[];
 export function patchPiSubagentsSource(relativePath: string, source: string): string;
+export function stripPiSubagentBuiltinModelSource(source: string): string;
@@ -5,11 +5,13 @@ export const PI_SUBAGENTS_PATCH_TARGETS = [
 	"run-history.ts",
 	"skills.ts",
 	"chain-clarify.ts",
+	"subagent-executor.ts",
+	"schemas.ts",
 ];

 const RESOLVE_PI_AGENT_DIR_HELPER = [
 	"function resolvePiAgentDir(): string {",
-	' const configured = process.env.PI_CODING_AGENT_DIR?.trim();',
+	' const configured = process.env.FEYNMAN_CODING_AGENT_DIR?.trim() || process.env.PI_CODING_AGENT_DIR?.trim();',
 	' if (!configured) return path.join(os.homedir(), ".pi", "agent");',
 	' return configured.startsWith("~/") ? path.join(os.homedir(), configured.slice(2)) : configured;',
 	"}",
@@ -94,6 +96,11 @@ export function patchPiSubagentsSource(relativePath, source) {
 			'const configPath = path.join(os.homedir(), ".pi", "agent", "extensions", "subagent", "config.json");',
 			'const configPath = path.join(resolvePiAgentDir(), "extensions", "subagent", "config.json");',
 		);
+		patched = replaceAll(
+			patched,
+			"• PARALLEL: { tasks: [{agent,task,count?}, ...], concurrency?: number, worktree?: true } - concurrent execution (worktree: isolate each task in a git worktree)",
+			"• PARALLEL: { tasks: [{agent,task,count?,output?}, ...], concurrency?: number, worktree?: true } - concurrent execution (output: per-task file target, worktree: isolate each task in a git worktree)",
+		);
 		break;
 	case "agents.ts":
 		patched = replaceAll(
@@ -190,6 +197,138 @@ export function patchPiSubagentsSource(relativePath, source) {
 			'const dir = path.join(resolvePiAgentDir(), "agents");',
 		);
 		break;
+	case "subagent-executor.ts":
+		patched = replaceAll(
+			patched,
+			[
+				"\tcwd?: string;",
+				"\tcount?: number;",
+				"\tmodel?: string;",
+				"\tskill?: string | string[] | boolean;",
+			].join("\n"),
+			[
+				"\tcwd?: string;",
+				"\tcount?: number;",
+				"\tmodel?: string;",
+				"\tskill?: string | string[] | boolean;",
+				"\toutput?: string | false;",
+			].join("\n"),
+		);
+		patched = replaceAll(
+			patched,
+			[
+				"\t\t\tcwd: task.cwd,",
+				"\t\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
+			].join("\n"),
+			[
+				"\t\t\tcwd: task.cwd,",
+				"\t\t\toutput: task.output,",
+				"\t\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
+			].join("\n"),
+		);
+		patched = replaceAll(
+			patched,
+			[
+				"\t\tcwd: task.cwd,",
+				"\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
+			].join("\n"),
+			[
+				"\t\tcwd: task.cwd,",
+				"\t\toutput: task.output,",
+				"\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
+			].join("\n"),
+		);
+		patched = replaceAll(
+			patched,
+			[
+				"\t\t\t\tcwd: t.cwd,",
+				"\t\t\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
+			].join("\n"),
+			[
+				"\t\t\t\tcwd: t.cwd,",
+				"\t\t\t\toutput: t.output,",
+				"\t\t\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
+			].join("\n"),
+		);
+		patched = replaceAll(
+			patched,
+			[
+				"\t\tcwd: t.cwd,",
+				"\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
+			].join("\n"),
+			[
+				"\t\tcwd: t.cwd,",
+				"\t\toutput: t.output,",
+				"\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
+			].join("\n"),
+		);
+		patched = replaceAll(
+			patched,
+			[
+				"\t\tconst behaviors = agentConfigs.map((c, i) =>",
+				"\t\t\tresolveStepBehavior(c, { skills: skillOverrides[i] }),",
+				"\t\t);",
+			].join("\n"),
+			[
+				"\t\tconst behaviors = agentConfigs.map((c, i) =>",
+				"\t\t\tresolveStepBehavior(c, { output: tasks[i]?.output, skills: skillOverrides[i] }),",
+				"\t\t);",
+			].join("\n"),
+		);
+		patched = replaceAll(
+			patched,
+			"\tconst behaviors = agentConfigs.map((config) => resolveStepBehavior(config, {}));",
+			"\tconst behaviors = agentConfigs.map((config, i) => resolveStepBehavior(config, { output: tasks[i]?.output, skills: skillOverrides[i] }));",
+		);
+		patched = replaceAll(
+			patched,
+			[
+				"\t\tconst taskCwd = resolveParallelTaskCwd(task, input.paramsCwd, input.worktreeSetup, index);",
+				"\t\treturn runSync(input.ctx.cwd, input.agents, task.agent, input.taskTexts[index]!, {",
+			].join("\n"),
+			[
+				"\t\tconst taskCwd = resolveParallelTaskCwd(task, input.paramsCwd, input.worktreeSetup, index);",
+				"\t\tconst outputPath = typeof input.behaviors[index]?.output === \"string\"",
+				"\t\t\t? resolveSingleOutputPath(input.behaviors[index]?.output, input.ctx.cwd, taskCwd)",
+				"\t\t\t: undefined;",
+				"\t\tconst taskText = injectSingleOutputInstruction(input.taskTexts[index]!, outputPath);",
+				"\t\treturn runSync(input.ctx.cwd, input.agents, task.agent, taskText, {",
+			].join("\n"),
+		);
+		patched = replaceAll(
+			patched,
+			[
+				"\t\t\tmaxOutput: input.maxOutput,",
+				"\t\t\tmaxSubagentDepth: input.maxSubagentDepths[index],",
+			].join("\n"),
+			[
+				"\t\t\tmaxOutput: input.maxOutput,",
+				"\t\t\toutputPath,",
+				"\t\t\tmaxSubagentDepth: input.maxSubagentDepths[index],",
+			].join("\n"),
+		);
+		break;
+	case "schemas.ts":
+		patched = replaceAll(
+			patched,
+			[
+				"\tcwd: Type.Optional(Type.String()),",
+				'\tcount: Type.Optional(Type.Integer({ minimum: 1, description: "Repeat this parallel task N times with the same settings." })),',
+				'\tmodel: Type.Optional(Type.String({ description: "Override model for this task (e.g. \'google/gemini-3-pro\')" })),',
+			].join("\n"),
+			[
+				"\tcwd: Type.Optional(Type.String()),",
+				'\tcount: Type.Optional(Type.Integer({ minimum: 1, description: "Repeat this parallel task N times with the same settings." })),',
+				'\toutput: Type.Optional(Type.Any({ description: "Output file for this parallel task (string), or false to disable. Relative paths resolve against cwd." })),',
+				'\tmodel: Type.Optional(Type.String({ description: "Override model for this task (e.g. \'google/gemini-3-pro\')" })),',
+			].join("\n"),
+		);
+		patched = replaceAll(
+			patched,
+			'tasks: Type.Optional(Type.Array(TaskItem, { description: "PARALLEL mode: [{agent, task, count?}, ...]" })),',
+			'tasks: Type.Optional(Type.Array(TaskItem, { description: "PARALLEL mode: [{agent, task, count?, output?}, ...]" })),',
+		);
+		break;
 	default:
 		return source;
 }
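The `subagent-executor.ts` and `schemas.ts` cases above all lean on one technique: exact multi-line string replacement against upstream source, with old and new snippets kept as line arrays joined with `\n` so tabs and line boundaries stay explicit. A minimal standalone sketch of that technique — this `replaceAll` is a hypothetical stand-in for the module's own helper, which this diff does not show:

```javascript
// Exact multi-line source patching. Throwing when the target is missing
// surfaces upstream drift instead of silently skipping the patch.
function replaceAll(source, search, replacement) {
	if (!source.includes(search)) {
		throw new Error(`patch target not found: ${JSON.stringify(search.slice(0, 40))}`);
	}
	return source.split(search).join(replacement);
}

// Insert an `output` field between two known interface lines,
// mirroring the subagent-executor.ts interface patch above.
const source = ["\tcwd?: string;", "\tcount?: number;", "\tmodel?: string;"].join("\n");
const patched = replaceAll(
	source,
	["\tcount?: number;", "\tmodel?: string;"].join("\n"),
	["\tcount?: number;", "\toutput?: string | false;", "\tmodel?: string;"].join("\n"),
);
```

Joining line arrays rather than embedding one long template string keeps the expected indentation visible character by character, which matters when the upstream file is tab-indented.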
@@ -198,5 +337,5 @@ export function patchPiSubagentsSource(relativePath, source) {
 		return source;
 	}

-	return injectResolvePiAgentDirHelper(patched);
+	return patched.includes("resolvePiAgentDir()") ? injectResolvePiAgentDirHelper(patched) : patched;
 }
@@ -1,4 +1,5 @@
 import { existsSync, mkdirSync, readdirSync, readFileSync, rmSync, statSync, writeFileSync } from "node:fs";
+import { createHash } from "node:crypto";
 import { resolve } from "node:path";
 import { spawnSync } from "node:child_process";

@@ -6,6 +7,8 @@ import { stripPiSubagentBuiltinModelSource } from "./lib/pi-subagents-patch.mjs"

 const appRoot = resolve(import.meta.dirname, "..");
 const settingsPath = resolve(appRoot, ".feynman", "settings.json");
+const packageJsonPath = resolve(appRoot, "package.json");
+const packageLockPath = resolve(appRoot, "package-lock.json");
 const feynmanDir = resolve(appRoot, ".feynman");
 const workspaceDir = resolve(appRoot, ".feynman", "npm");
 const workspaceNodeModulesDir = resolve(workspaceDir, "node_modules");
@@ -13,16 +16,29 @@ const manifestPath = resolve(workspaceDir, ".runtime-manifest.json");
 const workspacePackageJsonPath = resolve(workspaceDir, "package.json");
 const workspaceArchivePath = resolve(feynmanDir, "runtime-workspace.tgz");
 const PRUNE_VERSION = 4;
+const PINNED_RUNTIME_PACKAGES = [
+	"@mariozechner/pi-agent-core",
+	"@mariozechner/pi-ai",
+	"@mariozechner/pi-coding-agent",
+	"@mariozechner/pi-tui",
+];
+
 function readPackageSpecs() {
 	const settings = JSON.parse(readFileSync(settingsPath, "utf8"));
-	if (!Array.isArray(settings.packages)) {
-		return [];
-	}
-	return settings.packages
-		.filter((value) => typeof value === "string" && value.startsWith("npm:"))
-		.map((value) => value.slice(4));
+	const packageSpecs = Array.isArray(settings.packages)
+		? settings.packages
+			.filter((value) => typeof value === "string" && value.startsWith("npm:"))
+			.map((value) => value.slice(4))
+		: [];
+
+	for (const packageName of PINNED_RUNTIME_PACKAGES) {
+		const version = readLockedPackageVersion(packageName);
+		if (version) {
+			packageSpecs.push(`${packageName}@${version}`);
+		}
+	}
+
+	return Array.from(new Set(packageSpecs));
 }

 function parsePackageName(spec) {
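The new `readPackageSpecs` flow above — normalize `npm:` specs, append lockfile-pinned versions, deduplicate — can be sketched in isolation. The lockfile shape assumed here is npm's v7+ `packages` map keyed by `node_modules/<name>`; `readLockedVersion` and `mergeSpecs` are illustrative names, not the script's own:

```javascript
import { readFileSync, writeFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Read an exact installed version out of package-lock.json.
function readLockedVersion(lockPath, packageName) {
	try {
		const lockfile = JSON.parse(readFileSync(lockPath, "utf8"));
		const entry = lockfile.packages?.[`node_modules/${packageName}`];
		return typeof entry?.version === "string" ? entry.version : undefined;
	} catch {
		return undefined; // missing or malformed lockfile: no pin
	}
}

// Append `name@version` specs for pinned packages, then deduplicate.
function mergeSpecs(settingsSpecs, pinnedPackages, lockPath) {
	const specs = [...settingsSpecs];
	for (const name of pinnedPackages) {
		const version = readLockedVersion(lockPath, name);
		if (version) specs.push(`${name}@${version}`);
	}
	return Array.from(new Set(specs));
}
```

Pinning from the lockfile rather than from `package.json` ranges means the runtime workspace installs exactly what the app itself was built and tested against.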
@@ -30,10 +46,41 @@ function parsePackageName(spec) {
 	return match?.[1] ?? spec;
 }

+function readLockedPackageVersion(packageName) {
+	if (!existsSync(packageLockPath)) {
+		return undefined;
+	}
+	try {
+		const lockfile = JSON.parse(readFileSync(packageLockPath, "utf8"));
+		const entry = lockfile.packages?.[`node_modules/${packageName}`];
+		return typeof entry?.version === "string" ? entry.version : undefined;
+	} catch {
+		return undefined;
+	}
+}
+
 function arraysMatch(left, right) {
 	return left.length === right.length && left.every((value, index) => value === right[index]);
 }

+function hashFile(path) {
+	if (!existsSync(path)) {
+		return null;
+	}
+	return createHash("sha256").update(readFileSync(path)).digest("hex");
+}
+
+function getRuntimeInputHash() {
+	const hash = createHash("sha256");
+	for (const path of [packageJsonPath, packageLockPath, settingsPath]) {
+		hash.update(path);
+		hash.update("\0");
+		hash.update(hashFile(path) ?? "missing");
+		hash.update("\0");
+	}
+	return hash.digest("hex");
+}
+
 function workspaceIsCurrent(packageSpecs) {
 	if (!existsSync(manifestPath) || !existsSync(workspaceNodeModulesDir)) {
 		return false;
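The `hashFile`/`getRuntimeInputHash` pair added above is a content-based staleness check: hash each input file's path and contents, NUL-separated so path/content boundaries cannot collide, and invalidate the cached workspace whenever the digest changes. A self-contained sketch of the same idea — function names mirror the diff, but this is an illustration against arbitrary paths, not the script itself:

```javascript
import { createHash } from "node:crypto";
import { existsSync, readFileSync, writeFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hash one file's contents; a deleted file is represented by null so the
// caller can substitute a sentinel and still change the combined digest.
function hashFile(path) {
	if (!existsSync(path)) return null;
	return createHash("sha256").update(readFileSync(path)).digest("hex");
}

// Combine path + content hashes into one digest over all input files.
function runtimeInputHash(paths) {
	const hash = createHash("sha256");
	for (const path of paths) {
		hash.update(path);
		hash.update("\0");
		hash.update(hashFile(path) ?? "missing");
		hash.update("\0");
	}
	return hash.digest("hex");
}
```

Storing this digest in the manifest (as `writeManifest` does below) means editing `package.json`, `package-lock.json`, or `settings.json` forces a reinstall even when the spec list itself is unchanged.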
@@ -44,6 +91,9 @@ function workspaceIsCurrent(packageSpecs) {
 	if (!Array.isArray(manifest.packageSpecs) || !arraysMatch(manifest.packageSpecs, packageSpecs)) {
 		return false;
 	}
+	if (manifest.runtimeInputHash !== getRuntimeInputHash()) {
+		return false;
+	}
 	if (
 		manifest.nodeAbi !== process.versions.modules ||
 		manifest.platform !== process.platform ||
@@ -97,8 +147,8 @@ function prepareWorkspace(packageSpecs) {
 	const result = spawnSync(
 		process.env.npm_execpath ? process.execPath : "npm",
 		process.env.npm_execpath
-			? [process.env.npm_execpath, "install", "--prefer-offline", "--no-audit", "--no-fund", "--no-dry-run", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs]
-			: ["install", "--prefer-offline", "--no-audit", "--no-fund", "--no-dry-run", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs],
+			? [process.env.npm_execpath, "install", "--prefer-online", "--no-audit", "--no-fund", "--no-dry-run", "--legacy-peer-deps", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs]
+			: ["install", "--prefer-online", "--no-audit", "--no-fund", "--no-dry-run", "--legacy-peer-deps", "--loglevel", "error", "--prefix", workspaceDir, ...packageSpecs],
 		{ stdio: "inherit", env: childNpmInstallEnv() },
 	);
 	if (result.status !== 0) {
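The hunk above switches the bootstrap's npm invocation to `--prefer-online` and adds `--legacy-peer-deps`, re-invoking the current Node binary with npm's entry script when `npm_execpath` is set and shelling out to plain `npm` otherwise. A minimal sketch of that argument construction, with a hypothetical helper name not taken from the repo:

```javascript
// Hypothetical helper mirroring the spawnSync call in prepareWorkspace.
// When npm launched us, npm_execpath points at npm's JS entry script and
// process.execPath is the running Node binary; otherwise fall back to "npm".
function buildNpmInstallCommand(env, workspaceDir, packageSpecs) {
	const flags = [
		"--prefer-online", "--no-audit", "--no-fund", "--no-dry-run",
		"--legacy-peer-deps", "--loglevel", "error", "--prefix", workspaceDir,
	];
	if (env.npm_execpath) {
		return { command: process.execPath, args: [env.npm_execpath, "install", ...flags, ...packageSpecs] };
	}
	return { command: "npm", args: ["install", ...flags, ...packageSpecs] };
}
```

Re-using `npm_execpath` keeps the install on the exact npm version that launched the bootstrap instead of whatever `npm` happens to be on PATH.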
@@ -110,15 +160,16 @@ function writeManifest(packageSpecs) {
 	writeFileSync(
 		manifestPath,
 		JSON.stringify(
 			{
 				packageSpecs,
+				runtimeInputHash: getRuntimeInputHash(),
 				generatedAt: new Date().toISOString(),
 				nodeAbi: process.versions.modules,
 				nodeVersion: process.version,
 				platform: process.platform,
 				arch: process.arch,
 				pruneVersion: PRUNE_VERSION,
 			},
 			null,
 			2,
 		) + "\n",
@@ -558,6 +558,7 @@ export async function main(): Promise<void> {
 		normalizeFeynmanSettings(feynmanSettingsPath, bundledSettingsPath, thinkingLevel, feynmanAuthPath);
 	}
 
+	const workflowCommandNames = new Set(readPromptSpecs(appRoot).filter((s) => s.topLevelCli).map((s) => s.name));
 	await launchPiChat({
 		appRoot,
 		workingDir,
@@ -568,6 +569,6 @@ export async function main(): Promise<void> {
 		thinkingLevel,
 		explicitModelSpec,
 		oneShotPrompt: values.prompt,
-		initialPrompt: resolveInitialPrompt(command, rest, values.prompt, new Set(readPromptSpecs(appRoot).filter((s) => s.topLevelCli).map((s) => s.name))),
+		initialPrompt: resolveInitialPrompt(command, rest, values.prompt, workflowCommandNames),
 	});
 }
@@ -123,6 +123,8 @@ export function buildPiEnv(options: PiRuntimeOptions): NodeJS.ProcessEnv {
 		FEYNMAN_BIN_PATH: resolve(options.appRoot, "bin", "feynman.js"),
 		FEYNMAN_NPM_PREFIX: feynmanNpmPrefixPath,
 		// Ensure the Pi child process uses Feynman's agent dir for auth/models/settings.
+		// Patched Pi uses FEYNMAN_CODING_AGENT_DIR; upstream Pi uses PI_CODING_AGENT_DIR.
+		FEYNMAN_CODING_AGENT_DIR: options.feynmanAgentDir,
 		PI_CODING_AGENT_DIR: options.feynmanAgentDir,
 		PANDOC_PATH: process.env.PANDOC_PATH ?? resolveExecutable("pandoc", PANDOC_FALLBACK_PATHS),
 		PI_HARDWARE_CURSOR: process.env.PI_HARDWARE_CURSOR ?? "1",
@@ -65,11 +65,42 @@ test("deepresearch workflow requires durable artifacts even when blocked", () =>
 	assert.match(systemPrompt, /Do not claim you are only a static model/i);
 	assert.match(systemPrompt, /write the requested durable artifact/i);
 	assert.match(deepResearchPrompt, /Do not stop after planning/i);
+	assert.match(deepResearchPrompt, /not a request to explain or implement/i);
+	assert.match(deepResearchPrompt, /Do not answer by describing the protocol/i);
 	assert.match(deepResearchPrompt, /degraded mode/i);
 	assert.match(deepResearchPrompt, /Verification: BLOCKED/i);
 	assert.match(deepResearchPrompt, /Never end with only an explanation in chat/i);
 });
 
+test("deepresearch citation and review stages are sequential and avoid giant edits", () => {
+	const deepResearchPrompt = readFileSync(join(repoRoot, "prompts", "deepresearch.md"), "utf8");
+
+	assert.match(deepResearchPrompt, /must complete before any reviewer runs/i);
+	assert.match(deepResearchPrompt, /Do not run the `verifier` and `reviewer` in the same parallel `subagent` call/i);
+	assert.match(deepResearchPrompt, /outputs\/\.drafts\/<slug>-cited\.md/i);
+	assert.match(deepResearchPrompt, /do not issue one giant `edit` tool call/i);
+	assert.match(deepResearchPrompt, /outputs\/\.drafts\/<slug>-revised\.md/i);
+	assert.match(deepResearchPrompt, /The final candidate is `outputs\/\.drafts\/<slug>-revised\.md` if it exists/i);
+});
+
+test("deepresearch keeps subagent tool calls small and skips subagents for narrow explainers", () => {
+	const deepResearchPrompt = readFileSync(join(repoRoot, "prompts", "deepresearch.md"), "utf8");
+
+	assert.match(deepResearchPrompt, /including "what is X" explainers/i);
+	assert.match(deepResearchPrompt, /Make the scale decision before assigning owners/i);
+	assert.match(deepResearchPrompt, /lead-owned direct search tasks only/i);
+	assert.match(deepResearchPrompt, /MUST NOT spawn researcher subagents/i);
+	assert.match(deepResearchPrompt, /Do not inflate a simple explainer into a multi-agent survey/i);
+	assert.match(deepResearchPrompt, /Skip this section entirely when the scale decision chose direct search\/no subagents/i);
+	assert.match(deepResearchPrompt, /<slug>-research-direct\.md/i);
+	assert.match(deepResearchPrompt, /Keep `subagent` tool-call JSON small and valid/i);
+	assert.match(deepResearchPrompt, /write a per-researcher brief first/i);
+	assert.match(deepResearchPrompt, /Do not place multi-paragraph instructions inside the `subagent` JSON/i);
+	assert.match(deepResearchPrompt, /Do not add extra keys such as `artifacts`/i);
+	assert.match(deepResearchPrompt, /always set `failFast: false`/i);
+	assert.match(deepResearchPrompt, /if a PDF parser or paper fetch fails/i);
+});
+
 test("workflow prompts do not introduce implicit confirmation gates", () => {
 	const workflowPrompts = [
 		"audit.md",
@@ -243,6 +243,10 @@ test("updateConfiguredPackages batches multiple npm updates into a single instal
 		` console.log(resolve(${JSON.stringify(root)}, "npm-global", "lib", "node_modules"));`,
 		` process.exit(0);`,
 		`}`,
+		`if (args.length >= 4 && args[0] === "view" && args[2] === "version" && args[3] === "--json") {`,
+		` console.log(JSON.stringify("2.0.0"));`,
+		` process.exit(0);`,
+		`}`,
 		`appendFileSync(${JSON.stringify(logPath)}, JSON.stringify(args) + "\\n", "utf8");`,
 		"process.exit(0);",
 	].join("\n"));
@@ -258,7 +262,7 @@ test("updateConfiguredPackages batches multiple npm updates into a single instal
 	globalThis.fetch = (async () => ({
 		ok: true,
 		json: async () => ({ version: "2.0.0" }),
-	})) as typeof fetch;
+	})) as unknown as typeof fetch;
 
 	try {
 		const result = await updateConfiguredPackages(workingDir, agentDir);
@@ -290,6 +294,10 @@ test("updateConfiguredPackages skips native package updates on unsupported Node
 		` console.log(resolve(${JSON.stringify(root)}, "npm-global", "lib", "node_modules"));`,
 		` process.exit(0);`,
 		`}`,
+		`if (args.length >= 4 && args[0] === "view" && args[2] === "version" && args[3] === "--json") {`,
+		` console.log(JSON.stringify("2.0.0"));`,
+		` process.exit(0);`,
+		`}`,
 		`appendFileSync(${JSON.stringify(logPath)}, JSON.stringify(args) + "\\n", "utf8");`,
 		"process.exit(0);",
 	].join("\n"));
@@ -306,7 +314,7 @@ test("updateConfiguredPackages skips native package updates on unsupported Node
 	globalThis.fetch = (async () => ({
 		ok: true,
 		json: async () => ({ version: "2.0.0" }),
-	})) as typeof fetch;
+	})) as unknown as typeof fetch;
 	Object.defineProperty(process.versions, "node", { value: "25.0.0", configurable: true });
 
 	try {
@@ -54,6 +54,7 @@ test("buildPiEnv wires Feynman paths into the Pi environment", () => {
 	assert.equal(env.FEYNMAN_NPM_PREFIX, "/home/.feynman/npm-global");
 	assert.equal(env.NPM_CONFIG_PREFIX, "/home/.feynman/npm-global");
 	assert.equal(env.npm_config_prefix, "/home/.feynman/npm-global");
+	assert.equal(env.FEYNMAN_CODING_AGENT_DIR, "/home/.feynman/agent");
 	assert.equal(env.PI_CODING_AGENT_DIR, "/home/.feynman/agent");
 	assert.ok(
 		env.PATH?.startsWith(
@@ -83,7 +83,7 @@ for (const scenario of CASES) {
 	const patched = patchPiSubagentsSource(scenario.file, scenario.input);
 
 	assert.match(patched, /function resolvePiAgentDir\(\): string \{/);
-	assert.match(patched, /process\.env\.PI_CODING_AGENT_DIR\?\.trim\(\)/);
+	assert.match(patched, /process\.env\.FEYNMAN_CODING_AGENT_DIR\?\.trim\(\) \|\| process\.env\.PI_CODING_AGENT_DIR\?\.trim\(\)/);
 	assert.ok(patched.includes(scenario.expected));
 	assert.ok(!patched.includes(scenario.original));
 });
@@ -141,6 +141,139 @@ test("patchPiSubagentsSource rewrites modern agents.ts discovery paths", () => {
 	assert.ok(!patched.includes('fs.existsSync(userDirNew) ? userDirNew : userDirOld'));
 });
 
+test("patchPiSubagentsSource preserves output on top-level parallel tasks", () => {
+	const input = [
+		"interface TaskParam {",
+		"\tagent: string;",
+		"\ttask: string;",
+		"\tcwd?: string;",
+		"\tcount?: number;",
+		"\tmodel?: string;",
+		"\tskill?: string | string[] | boolean;",
+		"}",
+		"function run(params: { tasks: TaskParam[] }) {",
+		"\tconst modelOverrides = params.tasks.map(() => undefined);",
+		"\tconst skillOverrides = params.tasks.map(() => undefined);",
+		"\tconst parallelTasks = params.tasks.map((task, index) => ({",
+		"\t\tagent: task.agent,",
+		"\t\ttask: params.context === \"fork\" ? wrapForkTask(task.task) : task.task,",
+		"\t\tcwd: task.cwd,",
+		"\t\t...(modelOverrides[index] ? { model: modelOverrides[index] } : {}),",
+		"\t\t...(skillOverrides[index] !== undefined ? { skill: skillOverrides[index] } : {}),",
+		"\t}));",
+		"}",
+	].join("\n");
+
+	const patched = patchPiSubagentsSource("subagent-executor.ts", input);
+
+	assert.match(patched, /output\?: string \| false;/);
+	assert.match(patched, /\n\t\toutput: task\.output,/);
+	assert.doesNotMatch(patched, /resolvePiAgentDir/);
+});
+
+test("patchPiSubagentsSource preserves output in async parallel task handoff", () => {
+	const input = [
+		"function run(tasks: TaskParam[]) {",
+		"\tconst modelOverrides = tasks.map(() => undefined);",
+		"\tconst skillOverrides = tasks.map(() => undefined);",
+		"\tconst parallelTasks = tasks.map((t, i) => ({",
+		"\t\tagent: t.agent,",
+		"\t\ttask: params.context === \"fork\" ? wrapForkTask(taskTexts[i]!) : taskTexts[i]!,",
+		"\t\tcwd: t.cwd,",
+		"\t\t...(modelOverrides[i] ? { model: modelOverrides[i] } : {}),",
+		"\t\t...(skillOverrides[i] !== undefined ? { skill: skillOverrides[i] } : {}),",
+		"\t}));",
+		"}",
+	].join("\n");
+
+	const patched = patchPiSubagentsSource("subagent-executor.ts", input);
+
+	assert.match(patched, /\n\t\toutput: t\.output,/);
+});
+
+test("patchPiSubagentsSource uses task output when resolving foreground parallel behavior", () => {
+	const input = [
+		"async function run(tasks: TaskParam[]) {",
+		"\tconst skillOverrides = tasks.map((t) => normalizeSkillInput(t.skill));",
+		"\tif (params.clarify === true && ctx.hasUI) {",
+		"\t\tconst behaviors = agentConfigs.map((c, i) =>",
+		"\t\t\tresolveStepBehavior(c, { skills: skillOverrides[i] }),",
+		"\t\t);",
+		"\t}",
+		"\tconst behaviors = agentConfigs.map((config) => resolveStepBehavior(config, {}));",
+		"}",
+	].join("\n");
+
+	const patched = patchPiSubagentsSource("subagent-executor.ts", input);
+
+	assert.match(patched, /resolveStepBehavior\(c, \{ output: tasks\[i\]\?\.output, skills: skillOverrides\[i\] \}\)/);
+	assert.match(patched, /resolveStepBehavior\(config, \{ output: tasks\[i\]\?\.output, skills: skillOverrides\[i\] \}\)/);
+	assert.doesNotMatch(patched, /resolveStepBehavior\(config, \{\}\)/);
+});
+
+test("patchPiSubagentsSource passes foreground parallel output paths into runSync", () => {
+	const input = [
+		"async function runForegroundParallelTasks(input: ForegroundParallelRunInput): Promise<SingleResult[]> {",
+		"\treturn mapConcurrent(input.tasks, input.concurrencyLimit, async (task, index) => {",
+		"\t\tconst overrideSkills = input.skillOverrides[index];",
+		"\t\tconst effectiveSkills = overrideSkills === undefined ? input.behaviors[index]?.skills : overrideSkills;",
+		"\t\tconst taskCwd = resolveParallelTaskCwd(task, input.paramsCwd, input.worktreeSetup, index);",
+		"\t\treturn runSync(input.ctx.cwd, input.agents, task.agent, input.taskTexts[index]!, {",
+		"\t\t\tcwd: taskCwd,",
+		"\t\t\tsignal: input.signal,",
+		"\t\t\tmaxOutput: input.maxOutput,",
+		"\t\t\tmaxSubagentDepth: input.maxSubagentDepths[index],",
+		"\t\t});",
+		"\t});",
+		"}",
+	].join("\n");
+
+	const patched = patchPiSubagentsSource("subagent-executor.ts", input);
+
+	assert.match(patched, /const outputPath = typeof input\.behaviors\[index\]\?\.output === "string"/);
+	assert.match(patched, /const taskText = injectSingleOutputInstruction\(input\.taskTexts\[index\]!, outputPath\)/);
+	assert.match(patched, /runSync\(input\.ctx\.cwd, input\.agents, task\.agent, taskText, \{/);
+	assert.match(patched, /\n\t\t\toutputPath,/);
+});
+
+test("patchPiSubagentsSource documents output in top-level task schema", () => {
+	const input = [
+		"export const TaskItem = Type.Object({ ",
+		"\tagent: Type.String(), ",
+		"\ttask: Type.String(), ",
+		"\tcwd: Type.Optional(Type.String()),",
+		"\tcount: Type.Optional(Type.Integer({ minimum: 1, description: \"Repeat this parallel task N times with the same settings.\" })),",
+		"\tmodel: Type.Optional(Type.String({ description: \"Override model for this task (e.g. 'google/gemini-3-pro')\" })),",
+		"\tskill: Type.Optional(SkillOverride),",
+		"});",
+		"export const SubagentParams = Type.Object({",
+		"\ttasks: Type.Optional(Type.Array(TaskItem, { description: \"PARALLEL mode: [{agent, task, count?}, ...]\" })),",
+		"});",
+	].join("\n");
+
+	const patched = patchPiSubagentsSource("schemas.ts", input);
+
+	assert.match(patched, /output: Type\.Optional\(Type\.Any/);
+	assert.match(patched, /count\?, output\?/);
+	assert.doesNotMatch(patched, /resolvePiAgentDir/);
+});
+
+test("patchPiSubagentsSource documents output in top-level parallel help", () => {
+	const input = [
+		'import * as os from "node:os";',
+		'import * as path from "node:path";',
+		"const help = `",
+		"• PARALLEL: { tasks: [{agent,task,count?}, ...], concurrency?: number, worktree?: true } - concurrent execution (worktree: isolate each task in a git worktree)",
+		"`;",
+	].join("\n");
+
+	const patched = patchPiSubagentsSource("index.ts", input);
+
+	assert.match(patched, /output\?/);
+	assert.match(patched, /per-task file target/);
+	assert.doesNotMatch(patched, /function resolvePiAgentDir/);
+});
+
 test("stripPiSubagentBuiltinModelSource removes built-in model pins", () => {
 	const input = [
 		"---",
website/package-lock.json (generated)
@@ -1544,9 +1544,9 @@
 			}
 		},
 		"node_modules/@hono/node-server": {
-			"version": "1.19.13",
-			"resolved": "https://registry.npmjs.org/@hono/node-server/-/node-server-1.19.13.tgz",
-			"integrity": "sha512-TsQLe4i2gvoTtrHje625ngThGBySOgSK3Xo2XRYOdqGN1teR8+I7vchQC46uLJi8OF62YTYA3AhSpumtkhsaKQ==",
+			"version": "1.19.14",
+			"resolved": "https://registry.npmjs.org/@hono/node-server/-/node-server-1.19.14.tgz",
+			"integrity": "sha512-GwtvgtXxnWsucXvbQXkRgqksiH2Qed37H9xHZocE5sA3N8O8O8/8FA3uclQXxXVzc9XBZuEOMK7+r02FmSpHtw==",
 			"license": "MIT",
 			"engines": {
 				"node": ">=18.14.1"
@@ -7998,9 +7998,9 @@
 			}
 		},
 		"node_modules/hono": {
-			"version": "4.12.12",
-			"resolved": "https://registry.npmjs.org/hono/-/hono-4.12.12.tgz",
-			"integrity": "sha512-p1JfQMKaceuCbpJKAPKVqyqviZdS0eUxH9v82oWo1kb9xjQ5wA6iP3FNVAPDFlz5/p7d45lO+BpSk1tuSZMF4Q==",
+			"version": "4.12.14",
+			"resolved": "https://registry.npmjs.org/hono/-/hono-4.12.14.tgz",
+			"integrity": "sha512-am5zfg3yu6sqn5yjKBNqhnTX7Cv+m00ox+7jbaKkrLMRJ4rAdldd1xPd/JzbBWspqaQv6RSTrgFN95EsfhC+7w==",
 			"license": "MIT",
 			"engines": {
 				"node": ">=16.9.0"
@@ -36,8 +36,8 @@
 	},
 	"overrides": {
 		"@modelcontextprotocol/sdk": {
-			"@hono/node-server": "1.19.13",
-			"hono": "4.12.12"
+			"@hono/node-server": "1.19.14",
+			"hono": "4.12.14"
 		},
 		"router": {
 			"path-to-regexp": "8.4.2"
@@ -261,7 +261,7 @@ This usually means the release exists, but not all platform bundles were uploade
 Workarounds:
 - try again after the release finishes publishing
 - pass the latest published version explicitly, e.g.:
-    curl -fsSL https://feynman.is/install | bash -s -- 0.2.25
+    curl -fsSL https://feynman.is/install | bash -s -- 0.2.31
 EOF
   exit 1
 fi
@@ -110,7 +110,7 @@ This usually means the release exists, but not all platform bundles were uploade
 Workarounds:
 - try again after the release finishes publishing
 - pass the latest published version explicitly, e.g.:
-    & ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.25
+    & ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.31
 "@
 }
@@ -117,13 +117,13 @@ These installers download the bundled `skills/` and `prompts/` trees plus the re
 The one-line installer already targets the latest tagged release. To pin an exact version, pass it explicitly:
 
 ```bash
-curl -fsSL https://feynman.is/install | bash -s -- 0.2.25
+curl -fsSL https://feynman.is/install | bash -s -- 0.2.31
 ```
 
 On Windows:
 
 ```powershell
-& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.25
+& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.31
 ```
 
 ## Post-install setup
@@ -22,7 +22,9 @@ These are installed by default with every Feynman installation. They provide the
 | `pi-mermaid` | Render Mermaid diagrams in the terminal UI |
 | `@aliou/pi-processes` | Manage long-running experiments, background tasks, and log tailing |
 | `pi-zotero` | Integration with Zotero for citation library management |
+| `@kaiserlich-dev/pi-session-search` | Indexed session recall with summarize and resume UI. Powers session lookup |
 | `pi-schedule-prompt` | Schedule recurring and deferred research jobs. Powers the `/watch` workflow |
+| `@samfp/pi-memory` | Pi-managed preference and correction memory across sessions |
 | `@tmustier/pi-ralph-wiggum` | Long-running agent loops for iterative development. Powers `/autoresearch` |
 
 These packages are updated together when you run `feynman update`. You do not need to install them individually.
@@ -34,8 +36,6 @@ Install on demand with `feynman packages install <preset>`. These extend Feynman
 | Package | Preset | Purpose |
 | --- | --- | --- |
 | `pi-generative-ui` | `generative-ui` | Interactive HTML-style widgets for rich output |
-| `@kaiserlich-dev/pi-session-search` | `session-search` | Indexed session recall with summarize and resume UI. Powers `/search` |
-| `@samfp/pi-memory` | `memory` | Automatic preference and correction memory across sessions |
 
 ## Installing and managing packages
 
@@ -48,17 +48,9 @@ feynman packages list
 Install a specific optional preset:
 
 ```bash
-feynman packages install session-search
-feynman packages install memory
 feynman packages install generative-ui
 ```
 
-Install all optional packages at once:
-
-```bash
-feynman packages install all-extras
-```
-
 ## Updating packages
 
 Update all installed packages to their latest versions: