# Install 8gent

```bash
npm install -g @8gi-foundation/8gent-code
8gent
```

That's it. Two commands. 8gent will guide you through provider setup on first launch if needed.

Three binary aliases are installed: `8gent`, `8gent-code`, and `8` (shortcut).
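To confirm all three aliases landed on your PATH, a plain shell check works (no 8gent-specific flags assumed):

```bash
# Each alias should resolve to an executable in your global npm bin directory
which 8gent 8gent-code 8
```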
## Prerequisites (automatic)

8gent is free and local-first by default. The active provider out of the box is 8gent local (`eight-1.0-q3:14b`), with `ollama` also enabled. The adaptive router wires 11 providers (`8gent`, `ollama`, `openrouter`, `groq`, `grok`, `openai`, `anthropic`, `mistral`, `together`, `fireworks`, `replicate`); everything past `ollama` is opt-in via API key.

Default failover chain (text channel):

1. `eight:latest` (Ollama)
2. `qwen3.5:latest` (Ollama)
3. `meta-llama/llama-3-8b-instruct:free` (OpenRouter)
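As an illustration only (this is not 8gent's actual router code), the failover amounts to probing each backend in chain order and settling on the first one that answers. The Ollama endpoint below is its public REST API; `OPENROUTER_API_KEY` is OpenRouter's standard credential variable:

```bash
#!/usr/bin/env bash
# Sketch of the default chain: local Ollama first (both eight:latest and
# qwen3.5:latest live there), then OpenRouter's free tier as the last hop.
pick_backend() {
  # Ollama exposes /api/tags on its default port; a success means it's up.
  if curl -sf http://localhost:11434/api/tags > /dev/null; then
    echo "ollama"
    return 0
  fi
  # OpenRouter is opt-in: only usable once an API key has been configured.
  if [ -n "$OPENROUTER_API_KEY" ]; then
    echo "openrouter"
    return 0
  fi
  echo "no backend reachable" >&2
  return 1
}

pick_backend
```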
If you want to run Ollama-backed local models, install Ollama manually.

### Ollama (optional local backend)

Download from ollama.ai or install via Homebrew on macOS:

```bash
brew install ollama
ollama serve
```
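With the server running, you can confirm it is reachable through Ollama's REST API:

```bash
# Ollama answers on port 11434 by default; /api/tags lists installed models
curl -sf http://localhost:11434/api/tags
```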
### Pull a Model

```bash
# Recommended starting model
ollama pull qwen3.5

# Alternative: larger, stronger reasoning
ollama pull qwen3:14b
```

If you do not have a local GPU, see the OpenRouter guide for free cloud model access.
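To confirm the pulls completed, list what Ollama has on disk:

```bash
# Both models should appear in this listing once the pulls finish
ollama list
```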
## Verify Installation

```bash
8gent
```

You should see the 8gent TUI appear with its splash animation. If your chosen backend (local 8gent, Ollama, etc.) is not running, 8gent will show a connection error in the status bar and fall back through the router chain.
## Install from Source (Contributors)

```bash
git clone https://github.com/8gi-foundation/8gent-code.git
cd 8gent-code
bun install
bun run tui
```
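These steps assume Bun is already installed. If the `bun` command is missing, Bun's official one-line installer will fetch it:

```bash
# Print the Bun version, or install it if the command is not found
bun --version || curl -fsSL https://bun.sh/install | bash
```

## Troubleshooting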
### Ollama connection refused

Make sure the Ollama service is running:

```bash
ollama serve
```

By default, Ollama listens on http://localhost:11434. If you have changed the port, update your 8gent configuration accordingly.
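If you do run Ollama on a custom address, its standard `OLLAMA_HOST` environment variable controls where it binds; the port below is arbitrary, and the matching 8gent configuration key is not covered here:

```bash
# Example only: bind Ollama to a non-default port (11500 is arbitrary)
OLLAMA_HOST=127.0.0.1:11500 ollama serve

# From another shell, confirm the custom port answers
curl -sf http://127.0.0.1:11500/api/tags
```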
### Model not found

If 8gent reports a missing model, pull it explicitly:

```bash
ollama pull qwen3.5
```
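A common cause is a name mismatch rather than a missing pull; comparing against what Ollama actually holds can narrow it down:

```bash
# List installed models; the name 8gent asks for must match exactly
ollama list
# Inspect a specific model; this errors if the model is absent
ollama show qwen3.5
```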
### Permission denied on 8gent command

Ensure the symlink target is executable:

```bash
chmod +x ~/.local/bin/8gent
```
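If the error persists, check where the symlink actually points (the path below is the same one used above):

```bash
# Show the symlink entry, then resolve its full target chain
ls -l ~/.local/bin/8gent
readlink -f ~/.local/bin/8gent
```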
## Next Steps

Head to the Quick Start guide to run your first coding session.