Configuration File
8gent reads its configuration from .8gent/config.json in your project root. A global configuration can also be placed at ~/.8gent/config.json. Project-level settings override global ones.
```json
{
  "provider": "8gent",
  "model": "eight-1.0-q3:14b",
  "temperature": 0.7,
  "trainingProxy": {
    "enabled": false,
    "proxyUrl": "http://localhost:30000",
    "autoStart": false
  }
}
```

This matches the out-of-the-box default: the 8gent local provider running eight-1.0-q3:14b. No Ollama dependency, no API key.
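Because project-level settings override global ones, a project config only needs the fields you want to change. As a sketch, assuming settings merge per field, a .8gent/config.json containing only a model override would pick up everything else from ~/.8gent/config.json (the model name here is just an example):

```json
{
  "model": "qwen3:14b"
}
```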
Switch to Ollama
If you would rather route through Ollama (for example, to share a model already pulled for other tools), point the provider at it and pick a pulled model:
```json
{
  "provider": "ollama",
  "model": "qwen3.5",
  "ollamaUrl": "http://localhost:11434"
}
```

Model Selection
8gent supports any model available through Ollama or OpenRouter. Use the /model command in the TUI to switch models interactively, or set the default in your config:
```json
{
  "model": "eight-1.0-q3:14b"
}
```

Recommended Models
| Model | Size | Use Case |
|---|---|---|
| eight-1.0-q3:14b | ~12GB VRAM | Default active local model for the 8gent provider |
| qwen3.5 | ~18GB VRAM | Recommended (Ollama): strong upstream coding performance |
| qwen3:14b | ~12GB VRAM | Alternative (Ollama): stronger reasoning, larger model |
The /model command accepts any model identifier. If the model is available in your Ollama instance, 8gent will use it.
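For example, switching to a model already pulled into Ollama from inside the TUI (the model name is only an illustration):

```
/model qwen3:14b
```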
Provider Configuration
8gent ships with an adaptive router. Eleven providers are wired into the registry: 8gent, ollama, openrouter, groq, grok, openai, anthropic, mistral, together, fireworks, replicate. Out of the box, 8gent (local) is the active provider and ollama is also enabled; everything beyond those two is opt-in via API key. The default failover chain for the text channel is eight:latest (Ollama), then qwen3.5:latest (Ollama), then meta-llama/llama-3-8b-instruct:free (OpenRouter).
```json
{
  "provider": "8gent"
}
```

8gent (Default, local)
Runs the default local model (eight-1.0-q3:14b). No API key needed. No Ollama dependency.
Ollama (local)
Runs models locally through Ollama. No API key needed. Ollama must be running at the configured URL.
```json
{
  "provider": "ollama",
  "ollamaUrl": "http://localhost:11434"
}
```

OpenRouter
Access free and paid cloud models. Set your API key as an environment variable:
```bash
export OPENROUTER_API_KEY=sk-or-your-key-here
```

Then configure the provider:
```json
{
  "provider": "openrouter"
}
```

8gent's dynamic model router can automatically select the best available free model:
```json
{
  "provider": "openrouter",
  "model": "auto:free"
}
```

See the OpenRouter guide for details on available free models.
LM Studio
If you prefer LM Studio over Ollama:
```json
{
  "provider": "lmstudio",
  "lmStudioUrl": "http://localhost:1234"
}
```

Permissions
Permissions control which shell commands 8gent can execute without asking. Stored at ~/.8gent/permissions.json:
```json
{
  "allowedPatterns": ["npm *", "bun *", "git *", "ls *", "cat *"],
  "deniedPatterns": ["rm -rf /", "sudo rm -rf"],
  "autoApprove": false,
  "logPath": "~/.8gent/logs/permissions.log"
}
```

Dangerous commands (like rm -rf, sudo, git push --force) always require explicit approval regardless of your allowed patterns. Safe patterns like git status, bun run, and eslint are auto-approved by default.
Use /permissions in the TUI to view current settings, /allow <pattern> to add safe patterns, and /deny <pattern> to block patterns.
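For example (the patterns shown here are illustrations, not defaults):

```
/allow docker *
/deny npm publish *
```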
Hooks
Hooks run custom code at key points in the agent workflow. Configure them in ~/.8gent/hooks.json:
```json
{
  "hooks": [
    {
      "type": "onComplete",
      "name": "Voice Notification",
      "mode": "shell",
      "command": "say -v Moira 'Task completed'",
      "enabled": true,
      "async": true
    }
  ],
  "globalTimeout": 30000,
  "enabled": true
}
```

Hook types include beforeTool, afterTool, beforeCommand, afterCommand, onError, onComplete, onStart, and onExit. See the Hooks reference for full documentation.
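The other hook types follow the same shape. As a sketch, an onError entry added to the hooks array could log failures to a file (the command and log path are placeholders, not defaults):

```json
{
  "type": "onError",
  "name": "Error Log",
  "mode": "shell",
  "command": "echo \"8gent error at $(date)\" >> ~/.8gent/logs/errors.log",
  "enabled": true,
  "async": false
}
```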
MCP Servers
Connect external tools via the Model Context Protocol. Store MCP server configurations in .8gent/mcp.json:
```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token-here"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    },
    "sqlite": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sqlite", "--db-path", "./data.db"]
    }
  }
}
```

See the MCP Integration guide for setup instructions.
Kernel Fine-Tuning
The RL fine-tuning pipeline is off by default. Enable it to let 8gent continuously improve its local model from your coding sessions:
```json
{
  "trainingProxy": {
    "enabled": true,
    "proxyUrl": "http://localhost:30000",
    "autoStart": false
  }
}
```

This routes Ollama calls through the training proxy, which collects training data and scores responses with a judge model. See the Kernel Fine-Tuning architecture page for details.
Environment Variables
| Variable | Purpose |
|---|---|
| OPENROUTER_API_KEY | API key for OpenRouter cloud models |
| TRAINING_PROXY_URL | Override training proxy URL |
| OLLAMA_HOST | Override default Ollama URL |
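For example, to point 8gent at a remote Ollama host and supply an OpenRouter key for the current shell session (the host address is illustrative):

```bash
export OLLAMA_HOST=http://192.168.1.50:11434
export OPENROUTER_API_KEY=sk-or-your-key-here
```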