Add first-class LM Studio setup

Advait Paliwal
2026-04-16 15:34:32 -07:00
parent ca559dfd91
commit d2570188f9
11 changed files with 105 additions and 14 deletions


@@ -261,7 +261,7 @@ This usually means the release exists, but not all platform bundles were uploade
Workarounds:
- try again after the release finishes publishing
- pass the latest published version explicitly, e.g.:
-curl -fsSL https://feynman.is/install | bash -s -- 0.2.21
+curl -fsSL https://feynman.is/install | bash -s -- 0.2.22
EOF
exit 1
fi


@@ -110,7 +110,7 @@ This usually means the release exists, but not all platform bundles were uploade
Workarounds:
- try again after the release finishes publishing
- pass the latest published version explicitly, e.g.:
-& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.21
+& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.22
"@
}


@@ -117,13 +117,13 @@ These installers download the bundled `skills/` and `prompts/` trees plus the re
The one-line installer already targets the latest tagged release. To pin an exact version, pass it explicitly:
```bash
-curl -fsSL https://feynman.is/install | bash -s -- 0.2.21
+curl -fsSL https://feynman.is/install | bash -s -- 0.2.22
```
On Windows:
```powershell
-& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.21
+& ([scriptblock]::Create((irm https://feynman.is/install.ps1))) -Version 0.2.22
```
## Post-install setup


@@ -52,9 +52,25 @@ Amazon Bedrock (AWS credential chain)
Feynman verifies the same AWS credential chain Pi uses at runtime, including `AWS_PROFILE`, `~/.aws` credentials/config, SSO, ECS/IRSA, and EC2 instance roles. Once that check passes, Bedrock models become available in `feynman model list` without needing a traditional API key.
-### Local models: Ollama, LM Studio, vLLM
+### Local models: LM Studio, Ollama, vLLM
-If you want to use a model running locally, choose the API-key flow and then select:
+If you want to use LM Studio, start the LM Studio local server, load a model, choose the API-key flow, and then select:
```text
LM Studio (local OpenAI-compatible server)
```
+The default settings are:
+```text
+Base URL: http://localhost:1234/v1
+Authorization header: No
+API key: lm-studio
+```
+Feynman attempts to read LM Studio's `/models` endpoint and prefill the loaded model id.
For Ollama, vLLM, or another OpenAI-compatible local server, choose:
```text
Custom provider (baseUrl + API key)
@@ -70,7 +86,7 @@ Model ids: llama3.1:8b
API key: local
```
-That same custom-provider flow also works for other OpenAI-compatible local servers such as LM Studio or vLLM. After saving the provider, run:
+After saving the provider, run:
```bash
feynman model list
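
The model-id prefill added in this commit can be sketched by hand: LM Studio's local server exposes an OpenAI-compatible `/models` endpoint, and the first entry's `id` is what gets prefilled. A minimal shell sketch of that extraction — the JSON payload below is a hypothetical sample in the OpenAI-compatible list shape, not captured from a real server:

```shell
# Hypothetical sample of the OpenAI-compatible response served at
# http://localhost:1234/v1/models; Feynman reads the first "id" to
# prefill the model field.
response='{"object":"list","data":[{"id":"llama-3.2-3b-instruct","object":"model"}]}'

# Pull out the first "id" value with sed (no jq dependency).
model_id=$(printf '%s' "$response" | sed -n 's/.*"id" *: *"\([^"]*\)".*/\1/p')
echo "$model_id"
```

Against a live server, the same payload comes from `curl -s http://localhost:1234/v1/models`.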