---
title: Setup
description: Walk through the guided setup wizard to configure Feynman.
section: Getting Started
order: 3
---

The `feynman setup` wizard configures your model provider, API keys, and optional packages. It runs automatically on first launch, but you can re-run it at any time to change your configuration.

## Running setup

```bash
feynman setup
```

The wizard walks you through three stages: model configuration, authentication, and optional package installation.

## Stage 1: Model selection

Feynman supports multiple model providers. The setup wizard presents a list of available providers and models. Select your preferred default model using the arrow keys:

```
? Select your default model:
  anthropic:claude-sonnet-4-20250514
> anthropic:claude-opus-4-20250514
  openai:gpt-4o
  openai:o3
  google:gemini-2.5-pro
```

The model you choose here becomes the default for all sessions. You can override it per session with the `--model` flag, or change it later with `feynman model set`, which accepts either `<provider>/<model>` or `<provider>:<model>`.
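For example, using illustrative model ids (substitute one from the wizard's list):

```bash
# One-off override for a single session; the stored default is unchanged
feynman --model openai:gpt-4o

# Change the stored default for all future sessions
feynman model set anthropic/claude-opus-4-20250514
```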

## Stage 2: Authentication

Depending on your chosen provider, setup prompts you for an API key or walks you through an OAuth login. For providers that support Pi OAuth (such as Anthropic and OpenAI), Feynman opens a browser window to complete the sign-in flow. Your credentials are stored securely in the Pi auth storage at `~/.feynman/`.

For API key providers, you are prompted to paste your key directly:

```
? Enter your API key: sk-ant-...
```

Keys are encrypted at rest and never sent anywhere except the provider's API endpoint.

### Amazon Bedrock

For Amazon Bedrock, choose:

```
Amazon Bedrock (AWS credential chain)
```

Feynman verifies the same AWS credential chain Pi uses at runtime, including `AWS_PROFILE`, `~/.aws` credentials/config, SSO, ECS/IRSA, and EC2 instance roles. Once that check passes, Bedrock models become available in `feynman model list` without needing a traditional API key.
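If you have the AWS CLI installed, you can check that your credential chain resolves before running setup; this is independent of Feynman:

```bash
# Shows the account and identity the default credential chain resolves to
aws sts get-caller-identity

# Or check a specific profile
AWS_PROFILE=my-profile aws sts get-caller-identity
```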

### Local models: LM Studio, Ollama, vLLM

If you want to use LM Studio, start the LM Studio local server, load a model, choose the API-key flow, and then select:

```
LM Studio (local OpenAI-compatible server)
```

The default settings are:

- Base URL: `http://localhost:1234/v1`
- Authorization header: No
- API key: `lm-studio`

Feynman attempts to read LM Studio's `/models` endpoint and prefill the loaded model id.
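You can query that endpoint yourself to confirm the server is up and see which model is loaded (assumes LM Studio is listening on its default port):

```bash
# Lists the models the local OpenAI-compatible server currently exposes
curl http://localhost:1234/v1/models
```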

For Ollama, vLLM, or another OpenAI-compatible local server, choose:

```
Custom provider (baseUrl + API key)
```

For Ollama, the typical settings are:

- API mode: `openai-completions`
- Base URL: `http://localhost:11434/v1`
- Authorization header: No
- Model ids: `llama3.1:8b`
- API key: `local`

After saving the provider, confirm the local model is available and make it the default:

```bash
feynman model list
feynman model set <provider>/<model-id>
```

## Stage 3: Optional packages

Feynman's core ships with the essentials, but some features require additional packages. The wizard asks if you want to install optional presets:

- `session-search`: enables searching prior session transcripts for past research
- `memory`: automatic preference and correction memory across sessions
- `generative-ui`: interactive HTML-style widgets for rich output

You can skip this step and install packages later with `feynman packages install <preset>`.
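For example, to add transcript search after the fact, using one of the preset names listed above:

```bash
feynman packages install session-search
```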

## Re-running setup

Configuration is stored in `~/.feynman/settings.json`. Running `feynman setup` again overwrites previous settings. If you only need to change a specific value, edit the config file directly or use the targeted commands like `feynman model set` or `feynman alpha login`.
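As a rough sketch, `~/.feynman/settings.json` might look something like the following — the field names here are illustrative assumptions, not a documented schema, so prefer the wizard or the targeted commands over hand-editing:

```json
{
  "defaultModel": "anthropic:claude-opus-4-20250514",
  "packages": ["session-search", "memory"]
}
```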