Learn to run the OpenCode coding agent with local AI models powered by Ollama.

> Run models with Ollama locally and point OpenCode at Ollama's OpenAI-compatible API. Keep coding prompts on-device and avoid API spend.
OpenCode can connect to any OpenAI-compatible provider. Ollama exposes one at http://localhost:11434/v1, so you can run OpenCode against a fully local model while keeping the same TUI workflow, project rules, and file-editing flow.
Why Ollama for OpenCode?
Prompts, code context, and model weights stay on your machine, and there is no per-token API spend. Before wiring up OpenCode, make sure Ollama is installed, the server is running, and a model is pulled:
ollama --version
ollama serve
ollama pull qwen3.5:3b
Quick API checks:
curl http://localhost:11434/api/version
curl http://localhost:11434/v1/models
If ollama serve is already managed by the desktop app or a service, you only need to make sure the server is listening on localhost:11434.
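For an end-to-end check of the OpenAI-compatible endpoint, you can send a single chat-completion request; the model tag assumes the qwen3.5:3b pull above:
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3.5:3b", "messages": [{"role": "user", "content": "Reply with one word: ready"}]}'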
Option A: Global config
Create or edit ~/.config/opencode/opencode.json:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen3.5:3b": {
          "name": "Qwen 3.5 3B (local)"
        }
      }
    }
  },
  "model": "ollama/qwen3.5:3b",
  "small_model": "ollama/qwen3.5:3b"
}
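Hand-edited JSON is easy to break (a trailing comma is the classic mistake). If you have jq installed, a quick syntax check against the path above catches that before OpenCode ever reads the file:
jq . ~/.config/opencode/opencode.json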
Option B: Project-local config
Create ./opencode.json in the repo you want OpenCode to use. OpenCode merges it on top of your global config, which is useful when only some projects should default to Ollama.
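Because of that merge, a project file only needs to override what differs. Assuming the Ollama provider is already defined in your global config, a minimal project-local override could be as small as this sketch:
{
  "$schema": "https://opencode.ai/config.json",
  "model": "ollama/qwen3.5:3b"
}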
TUI flow
- Start opencode in your project.
- Use /models to pick ollama/qwen3.5:3b.
- If no Ollama models appear, check curl http://localhost:11434/api/version and ollama ls.
- Back in opencode, run /init if the project does not already have AGENTS.md, then use /models to select your Ollama-backed model.

Quick first run
# Terminal 1
ollama serve
# Terminal 2
cd your-project
opencode
Inside OpenCode:
- /models → choose ollama/qwen3.5:3b
- Try a first prompt such as "Summarize this codebase and point out the highest-risk area."

Use a per-project opencode.json when you want one repo pinned to local models without changing your global default:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen3.5:3b": { "name": "Qwen 3.5 3B (local)" },
        "qwen3.5:7b": { "name": "Qwen 3.5 7B (local)" }
      }
    }
  },
  "model": "ollama/qwen3.5:7b",
  "small_model": "ollama/qwen3.5:3b",
  "share": "disabled"
}
This gives you a larger default local model for coding and a smaller one for lightweight tasks.
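Both tags need to exist locally before OpenCode can route requests to them; pulling them up front avoids a failing first request (the tags match the config above):
ollama pull qwen3.5:3b
ollama pull qwen3.5:7b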
Keep a cloud model available for difficult refactors while making Ollama your cheap default:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen3.5:3b": { "name": "Qwen 3.5 3B (local)" }
      }
    }
  },
  "model": "anthropic/claude-sonnet-4-5",
  "small_model": "ollama/qwen3.5:3b"
}
Use /models to switch when a task needs more reasoning depth.
Tips
- If models do not show up, re-check http://localhost:11434/api/version and http://localhost:11434/v1/models.
- Point small_model at your Ollama model so cheap background tasks stay local.
- Set "share": "disabled" if you do not want session sharing available; combined with disabled_providers it keeps the setup fully local:
{
  "share": "disabled",
  "disabled_providers": ["openai", "anthropic", "gemini"]
}
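On the network side, Ollama binds to 127.0.0.1:11434 by default. To confirm nothing is listening on a public interface, a quick socket check helps; the lsof and ss invocations below are sketches, so pick whichever your platform has:
# macOS or Linux with lsof
lsof -nP -iTCP:11434 -sTCP:LISTEN
# Linux with ss
ss -ltnp | grep 11434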
- Keep Ollama bound to 127.0.0.1; do not expose port 11434 to the public internet.
- Confirm the active model with /models before making broad repo changes.

Quick refs
ollama ls
ollama show qwen3.5:3b
curl http://localhost:11434/v1/models
opencode
See also: ./ollama-cli.md, ./ollama-desktop.md, ../opencode/opencode.md, and ../llamacpp/llamacpp-opencode.md.