SearXNG is a self-hosted metasearch engine that lets you run private web search on your own machine. For local AI workflows, it pairs especially well with coding agents and local model hosts because it exposes a simple JSON search API.

What it is. SearXNG is a metasearch engine that aggregates results from other search engines without storing user information or building profiles. It is a strong fit for privacy-conscious AI workflows because you can run it yourself and expose JSON search results to local tools. (SearXNG, SearXNG)
How this fits the stack
SearXNG handles web search, Ollama handles local inference, and an agent such as Goose or OpenCode ties the two together over MCP. That means there is no special "SearXNG-to-Ollama" integration by itself: the agent uses Ollama for inference and SearXNG for web search. (SearXNG, Goose Custom Extensions, OpenCode MCP)
Set up SearXNG
Create the config directory and a minimal settings.yml before starting the container. This keeps defaults, enables JSON output for MCP integrations, and binds to all container interfaces so Docker port-forwarding works. The settings docs show use_default_settings plus server secret_key, and the search settings list default formats (HTML only by default). (SearXNG, SearXNG, SearXNG)
```bash
mkdir -p ./searxng/config/ ./searxng/data/

cat > ./searxng/config/settings.yml << 'EOF'
use_default_settings: true

search:
  # Enable JSON output for /search?format=json
  formats:
    - html
    - json

server:
  # Bind to all interfaces so Docker port-forwarding works
  # (host-side restriction is handled by -p 127.0.0.1:8080:8080)
  bind_address: "0.0.0.0"
  # Change this before exposing beyond localhost
  secret_key: "change-me-please"
EOF
```
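Optionally, confirm the file parses before starting the container. A minimal sketch, assuming python3 with PyYAML is available:

```bash
# Optional sanity check: settings.yml must parse as YAML (assumes python3 + PyYAML)
python3 -c "import yaml; yaml.safe_load(open('./searxng/config/settings.yml')); print('settings.yml OK')"
```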
The official container guide documents Docker-based setup; here is a minimal local example that binds to localhost:8080 and persists config/cache to local volumes. (SearXNG)
```bash
cd ./searxng/
docker run --name searxng -d \
  -p 127.0.0.1:8080:8080 \
  -v "./config/:/etc/searxng/" \
  -v "./data/:/var/cache/searxng/" \
  docker.io/searxng/searxng:latest
```
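A quick way to confirm the container came up (the exact log output varies by version):

```bash
# Tail the startup logs and confirm the web server answers
docker logs --tail 5 searxng
curl -I http://localhost:8080
```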
The Search API supports format=json when json is present in search.formats. (SearXNG, SearXNG)
curl "http://localhost:8080/search?q=searxng&format=json"
If you want a full local stack, start Ollama so Goose or OpenCode can use local models while SearXNG handles search.
```bash
ollama serve
ollama pull qwen2.5:3b
```
Quick checks:
```bash
curl http://localhost:11434/api/version
ollama ls
```
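To make sure the model actually answers, a minimal non-streaming generation call works as a smoke test:

```bash
# One-off, non-streaming generation request against the local Ollama API
curl -s http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5:3b", "prompt": "Reply with the word ready.", "stream": false}'
```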
Goose supports custom MCP extensions, which makes SearXNG straightforward to add as a local search tool. (Goose Custom Extensions)
Start the MCP server:
```bash
npx mcp-searxng
```
Add a Custom Extension in Goose with:
```bash
# Extension command
npx mcp-searxng

# Extension env
SEARXNG_URL=http://localhost:8080
```
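Before adding it to Goose, you can launch the server by hand to confirm it starts cleanly. It speaks MCP over stdio, so it will simply wait for a client:

```bash
# Manual launch as a smoke test; Ctrl+C to exit once it starts without errors
SEARXNG_URL=http://localhost:8080 npx mcp-searxng
```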
If you also want Goose to use Ollama for inference, point it at the default endpoint, http://localhost:11434:

```bash
export GOOSE_PROVIDER=ollama
export OLLAMA_HOST=http://localhost:11434
export GOOSE_MODEL=qwen2.5:3b

goose session --name private-search
```
OpenCode supports MCP servers for tools and OpenAI-compatible providers for local inference. That makes SearXNG + Ollama + OpenCode a clean single-machine workflow. (OpenCode MCP, OpenCode Config, OpenCode Providers)
Add the SearXNG MCP server and optional Ollama provider to ~/.config/opencode/opencode.json or a project-local ./opencode.json:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen2.5:3b": {
          "name": "Qwen 2.5 3B (local)"
        }
      }
    }
  },
  "model": "ollama/qwen2.5:3b",
  "small_model": "ollama/qwen2.5:3b",
  "mcp": {
    "searxng": {
      "type": "local",
      "command": ["uvx", "mcp-searxng"],
      "enabled": true,
      "environment": {
        "SEARXNG_URL": "http://localhost:8080"
      }
    }
  },
  "tools": {
    "websearch": false
  }
}
```
This tells OpenCode to use Ollama for model calls and SearXNG for web search, while keeping the built-in hosted websearch tool off.
Try it

- Open http://localhost:8080 and search in the browser.
- Query /search?format=json directly and pass q, categories, or engines. (SearXNG)
- Run opencode, select your Ollama-backed model with /models, and ask for a search-based summary or research task.

Privacy and hardening tips

- Trim engines with engines.remove or engines.keep_only; the settings docs show how to override and filter engines with use_default_settings. (SearXNG) See the sketch after this list.
- Disable OpenCode's built-in hosted websearch tool. (OpenCode Tools)
- Keep SearXNG bound to 127.0.0.1, or place it behind a VPN/reverse proxy only if you truly need remote access. (SearXNG)
- Use .gooseignore when working in sensitive repos. (Goose Allowlists, Goose Ignore)
- Bind the container port to localhost only (-p 127.0.0.1:8080:8080). (SearXNG)
- Change secret_key before exposing the instance beyond your machine. (SearXNG, SearXNG)
- Keep use_default_settings: true and override only what you need to reduce misconfiguration risk. (SearXNG)
- Do not expose SearXNG beyond 127.0.0.1 unless you add firewall rules and authentication in front of it.
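For engine filtering, a minimal sketch that rewrites settings.yml to keep only two engines (the engine names are illustrative; restart the container after editing):

```bash
cat > ./searxng/config/settings.yml << 'EOF'
# keep_only filters the default engine list down to the engines named here
use_default_settings:
  engines:
    keep_only:
      - duckduckgo
      - wikipedia
search:
  formats:
    - html
    - json
server:
  bind_address: "0.0.0.0"
  secret_key: "change-me-please"
EOF
docker restart searxng
```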
Minimal settings.yml

```yaml
use_default_settings: true
search:
  formats:
    - html
    - json
server:
  bind_address: "0.0.0.0"
  secret_key: "change-me-please"
```
Quick checklist
- SearXNG running at http://localhost:8080
- settings.yml has json in search.formats
- Ollama running at http://localhost:11434 if you want local inference
- MCP server started with SEARXNG_URL=http://localhost:8080
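If you want these checks in one place, a small script sketch (assumes curl and jq are installed):

```bash
#!/usr/bin/env bash
# Fail fast if any part of the stack is down
set -e
curl -sf "http://localhost:8080/search?q=ping&format=json" | jq -e '.results' > /dev/null \
  && echo "SearXNG OK (JSON format enabled)"
curl -sf http://localhost:11434/api/version > /dev/null && echo "Ollama OK"
```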
Two common local stacks

```bash
# Goose + Ollama + SearXNG
ollama serve
npx mcp-searxng
goose session --name private-search
```

```bash
# OpenCode + Ollama + SearXNG
ollama serve
opencode
```