Learn to run your Goose agent with local AI models powered by Ollama

> Run models with Ollama locally and point Goose at Ollama's local provider. Keep coding prompts on-device and avoid API spend.
Goose supports an Ollama provider for local models running on http://localhost:11434. That lets you keep the same Goose workflow, approvals, extensions, and repo context while running a local model instead of paying for a cloud API on every prompt.
Why Ollama for Goose?
Running against a local Ollama server keeps prompts and repo context on your own machine and avoids per-prompt API spend, while Goose's approvals and extensions keep working the same way. First, confirm Ollama is installed, start the server, and pull a model:
ollama --version
ollama serve
ollama pull qwen3.5:4b
Quick checks:
curl http://localhost:11434/api/version
ollama ls
If ollama serve is already managed by the Ollama desktop app or a background service, you only need to confirm the server is listening on localhost:11434.
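One quick way to check where the server is actually bound (commands assume macOS or Linux; use whichever of lsof or ss your system has):
# Show the process listening on port 11434 and the address it is bound to
lsof -nP -iTCP:11434 -sTCP:LISTEN
# or, on Linux
ss -ltnp | grep 11434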
Option A: Goose Desktop
Open Goose, go to provider setup, choose Ollama, leave the host as http://localhost:11434, and select an installed local model such as qwen3.5:4b.
Option B: Goose CLI
Run the interactive setup once:
goose configure
Or launch a session with explicit local-provider environment variables:
export GOOSE_PROVIDER=ollama
export OLLAMA_HOST=http://localhost:11434
export GOOSE_MODEL=qwen3.5:4b
export GOOSE_MODE=approve
goose session --name local-ollama
Goose stores shared Desktop and CLI configuration under ~/.config/goose/config.yaml, so you only need to configure this once per machine.
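For scripted, non-interactive use, the same environment variables can drive a one-shot run. The goose run -t flag below follows the current Goose CLI docs and may differ between versions:
# Assumes the exports above are still set in this shell
goose run -t "Summarize the open TODO comments in this repo"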
Before the first session, double-check:
- curl http://localhost:11434/api/version
- ollama ls
- approve mode if the repo has write access

Quick first run
# Terminal 1
ollama serve
# Terminal 2
cd your-project
goose session --name local-ollama
Inside Goose:
- Summarize this codebase and point out the highest-risk area
- Propose one small safe refactor and tell me how to verify it

If you want Ollama to stay your default Goose provider, keep it in ~/.config/goose/config.yaml:
GOOSE_PROVIDER: ollama
OLLAMA_HOST: http://localhost:11434
GOOSE_MODEL: qwen3.5:4b
GOOSE_MODE: approve
This gives you a durable local default for future Desktop and CLI sessions on the same machine.
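If you occasionally want a different model for a single session without editing config.yaml, an environment variable set on the command line should take precedence over the stored default (worth verifying on your Goose version). The llama3.1:8b tag here is just an example of another model you have already pulled:
# One-off session with a different local model, leaving config.yaml untouched
GOOSE_MODEL=llama3.1:8b goose session --name local-llama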
Goose works best with models that support tool calling. If you want to experiment with a model that does not, Goose has an experimental Ollama tool shim:
ollama pull mistral-nemo
export GOOSE_TOOLSHIM=true
export GOOSE_TOOLSHIM_OLLAMA_MODEL=mistral-nemo
goose session --name local-toolshim
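Before pointing Goose at the shim model, a quick smoke test outside Goose confirms Ollama can serve it at all (plain ollama run, not part of Goose's own flow):
# Ask the shim model for a trivial reply to confirm it loads and responds
ollama run mistral-nemo "Reply with the single word: ready"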
If tool use feels unreliable with local models, increase the Ollama server context window before starting Goose:
OLLAMA_CONTEXT_LENGTH=32768 ollama serve
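If Ollama runs as a systemd service rather than in a foreground terminal, the same variable can be set in the service environment instead; a sketch assuming the stock ollama.service unit on Linux:
# Add an override for the Ollama service, then restart it
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_CONTEXT_LENGTH=32768"
sudo systemctl restart ollama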
A few more local-model tips:
- Use .gooseignore to fence secrets, generated files, or infrastructure paths you do not want Goose to touch.
- Limit how much input is sent per request if a small model struggles: export GOOSE_INPUT_LIMIT=32000
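A minimal .gooseignore sketch, assuming gitignore-style patterns; the paths are placeholders for whatever your repo actually needs fenced off:
# .gooseignore (gitignore-style patterns)
.env
secrets/
terraform/
**/*.pem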
Security notes:
- Keep Ollama bound to 127.0.0.1; do not expose port 11434 to the public internet.
- Use approve or smart_approve mode when working in sensitive repos.
- Use .gooseignore to keep secrets, deployment files, and credentials out of the agent's working set.

Quick refs
ollama ls
ollama show qwen3.5:4b
curl http://localhost:11434/api/version
goose configure
goose session --name local-ollama