# OpenCode Setup Guide

> Configure OpenCode CLI as the AI brain for Aetheel.

---

## Table of Contents

1. [Overview](#overview)
2. [Install OpenCode](#step-1-install-opencode)
3. [Configure a Provider](#step-2-configure-a-provider)
4. [Choose a Runtime Mode](#step-3-choose-a-runtime-mode)
5. [Configure Aetheel](#step-4-configure-aetheel)
6. [Test the Integration](#step-5-test-the-integration)
7. [Architecture](#architecture)
8. [Troubleshooting](#troubleshooting)

---
## Overview

Aetheel uses [OpenCode](https://opencode.ai) as its AI runtime — the "brain" that
generates responses to Slack messages. OpenCode is a terminal-native AI coding agent
that supports multiple LLM providers (Anthropic, OpenAI, Google, etc.).

### How It Works

```
Slack Message → Slack Adapter → OpenCode Runtime → LLM → Response → Slack Reply
```

Two runtime modes are available:

| Mode | Description | Best For |
|------|-------------|----------|
| **CLI** (default) | Runs `opencode run` as a subprocess per request | Simple setup, no persistent server |
| **SDK** | Talks to `opencode serve` via HTTP API | Lower latency, persistent sessions |

### Relationship to OpenClaw

This architecture is inspired by OpenClaw's `cli-runner.ts`:

- OpenClaw spawns CLI agents (Claude CLI, Codex CLI) as subprocesses
- Each CLI call gets: model args, session ID, system prompt, timeout
- Output is parsed from JSON/JSONL to extract the response text
- Sessions are mapped per-thread for conversation isolation

We replicate this pattern in Python, adapted for OpenCode's API.

---
## Step 1: Install OpenCode

### macOS / Linux (recommended)

```bash
curl -fsSL https://opencode.ai/install | bash
```

### npm (all platforms)

```bash
npm install -g opencode-ai
```

### Homebrew (macOS)

```bash
brew install anomalyco/tap/opencode
```

### Verify

```bash
opencode --version
```

---
## Step 2: Configure a Provider

OpenCode needs at least one LLM provider configured. Run:

```bash
opencode auth login
```

This will guide you through connecting to a provider. Options include:

| Provider | Auth Method |
|----------|-------------|
| **OpenCode Zen** | Token-based (opencode.ai account) |
| **Anthropic** | API key (`ANTHROPIC_API_KEY`) |
| **OpenAI** | API key (`OPENAI_API_KEY`) |
| **Google** | API key (`GEMINI_API_KEY`) |

### Using Environment Variables

Alternatively, set provider API keys in your `.env`:

```env
# Anthropic
ANTHROPIC_API_KEY=sk-ant-...

# OpenAI
OPENAI_API_KEY=sk-...

# Google Gemini
GEMINI_API_KEY=AI...
```

### Verify models are available

```bash
opencode models
```

---
## Step 3: Choose a Runtime Mode

### CLI Mode (Default — Recommended to Start)

CLI mode spawns `opencode run` for each message. No persistent server needed.

**Pros:**

- ✅ Simple — just install OpenCode and go
- ✅ No server to manage
- ✅ Isolated — each request is independent

**Cons:**

- ⚠️ Higher latency (cold start per request)
- ⚠️ Limited session continuity (uses `--continue` flag)

```env
OPENCODE_MODE=cli
```

### SDK Mode (Advanced — Lower Latency)

SDK mode talks to a running `opencode serve` instance via HTTP.

**Pros:**

- ✅ Lower latency (warm server, no cold start)
- ✅ Better session management
- ✅ Full API access

**Cons:**

- ⚠️ Requires running `opencode serve` separately
- ⚠️ Needs the `opencode-ai` Python package

```env
OPENCODE_MODE=sdk
```

#### Start the OpenCode server:

```bash
# Terminal 1: Start the headless server
opencode serve --port 4096

# Optional: with authentication
OPENCODE_SERVER_PASSWORD=my-secret opencode serve
```

#### Install the Python SDK:

```bash
pip install opencode-ai
```

---
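Before committing to SDK mode, it can help to confirm the server port is actually reachable. The sketch below only tests the TCP connection (it does not speak the OpenCode API), and the host/port defaults simply mirror the `opencode serve --port 4096` example above:

```python
# Probe the `opencode serve` port before enabling SDK mode. A failed
# connect means the server is not running (or is on another port).
import socket

def server_reachable(host: str = "localhost", port: int = 4096,
                     timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    mode = "sdk" if server_reachable() else "cli"
    print(f"Suggested OPENCODE_MODE={mode}")
```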
## Step 4: Configure Aetheel

Edit your `.env` file:

```env
# --- Slack (see docs/slack-setup.md) ---
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...

# --- OpenCode Runtime ---
OPENCODE_MODE=cli
# OPENCODE_MODEL=anthropic/claude-sonnet-4-20250514
OPENCODE_TIMEOUT=120

# --- SDK mode only ---
# OPENCODE_SERVER_URL=http://localhost:4096
# OPENCODE_SERVER_PASSWORD=

LOG_LEVEL=INFO
```

### Model Selection

You can specify a model explicitly, or let OpenCode use its default:

```env
# Anthropic Claude
OPENCODE_MODEL=anthropic/claude-sonnet-4-20250514

# OpenAI GPT-5.1
OPENCODE_MODEL=openai/gpt-5.1

# Google Gemini
OPENCODE_MODEL=google/gemini-3-pro

# OpenCode Zen (pay-as-you-go)
OPENCODE_MODEL=opencode/claude-opus-4-6
```

Or override at launch:

```bash
python main.py --model anthropic/claude-sonnet-4-20250514
```

---
## Step 5: Test the Integration

### 1. Verify OpenCode works standalone

```bash
# Quick test
opencode run "What is Python?"

# With a specific model
opencode run --model anthropic/claude-sonnet-4-20250514 "Hello"
```

### 2. Test the runtime directly

```bash
# Quick Python test
python -c "
from agent.opencode_runtime import OpenCodeRuntime
runtime = OpenCodeRuntime()
print(runtime.get_status())
response = runtime.chat('Hello, what are you?')
print(f'Response: {response.text[:200]}')
print(f'OK: {response.ok}, Duration: {response.duration_ms}ms')
"
```

### 3. Test via Slack

```bash
# Start in test mode first (echo only, no AI)
python main.py --test

# Then start with AI
python main.py

# Or force a specific mode
python main.py --cli
python main.py --sdk
```

### 4. In Slack

- Send `status` — see the runtime status
- Send `help` — see available commands
- Send any question — get an AI response
- Reply in a thread — conversation continues in context

---
## Architecture

### Component Diagram

```
┌─────────────────────┐
│        Slack        │
│      (messages)     │
└──────┬──────────────┘
       │ WebSocket
       │
┌──────▼──────────────┐
│    Slack Adapter    │
│ (slack_adapter.py)  │
│                     │
│ • Socket Mode       │
│ • Event handling    │
│ • Thread isolation  │
└──────┬──────────────┘
       │ ai_handler()
       │
┌──────▼──────────────┐
│  OpenCode Runtime   │
│ (opencode_runtime)  │
│                     │
│ • Session store     │
│ • System prompt     │
│ • Mode routing      │
└──────┬──────────────┘
       │
   ┌───┴────┐
   │        │
   ▼        ▼
CLI Mode  SDK Mode

┌──────────┐  ┌──────────────┐
│ opencode │  │  opencode    │
│   run    │  │  serve API   │
│ (subproc)│  │  (HTTP/SDK)  │
└──────────┘  └──────────────┘
      │              │
      └──────┬───────┘
             │
      ┌──────▼──────┐
      │     LLM     │
      │ (Anthropic, │
      │  OpenAI,    │
      │  Gemini)    │
      └─────────────┘
```

### How OpenClaw Inspired This

| OpenClaw Pattern | Aetheel Implementation |
|------------------|----------------------|
| `cli-runner.ts` → `runCliAgent()` | `opencode_runtime.py` → `OpenCodeRuntime.chat()` |
| `cli-backends.ts` → `CliBackendConfig` | `OpenCodeConfig` dataclass |
| `buildCliArgs()` | `_build_cli_args()` |
| `runCommandWithTimeout()` | `subprocess.run(timeout=...)` |
| `parseCliJson()` / `collectText()` | `_parse_cli_output()` / `_collect_text()` |
| `pickSessionId()` | `_extract_session_id()` |
| `buildSystemPrompt()` | `build_aetheel_system_prompt()` |
| Session per thread | `SessionStore` mapping conversation_id → session_id |

### File Map

| File | Purpose |
|------|---------|
| `agent/__init__.py` | Agent package init |
| `agent/opencode_runtime.py` | OpenCode runtime (CLI + SDK modes) |
| `adapters/slack_adapter.py` | Slack Socket Mode adapter |
| `main.py` | Entry point with AI handler |
| `docs/opencode-setup.md` | This setup guide |
| `docs/slack-setup.md` | Slack bot setup guide |

---
## Troubleshooting

### ❌ "opencode not found in PATH"

**Fix:** Install OpenCode:

```bash
curl -fsSL https://opencode.ai/install | bash
```

Then verify:

```bash
opencode --version
```

### ❌ "CLI command failed" or empty responses

**Check:**

1. Verify OpenCode works standalone: `opencode run "Hello"`
2. Check that a provider is configured: `opencode auth login`
3. Check that the model is available: `opencode models`
4. Check that your API key is set (e.g., `ANTHROPIC_API_KEY`)

### ❌ "Request timed out"

**Fix:** Increase the timeout:

```env
OPENCODE_TIMEOUT=300
```

Or simplify your prompt — complex prompts take longer.

### ❌ SDK mode: "connection test failed"

**Fix:**

1. Make sure `opencode serve` is running: `opencode serve --port 4096`
2. Check the URL in `.env`: `OPENCODE_SERVER_URL=http://localhost:4096`
3. If using auth, set `OPENCODE_SERVER_PASSWORD` both in `.env` and when starting the server

### ❌ "opencode-ai SDK not installed"

**Fix:**

```bash
pip install opencode-ai
```

If you don't want to install the SDK, switch to CLI mode:

```env
OPENCODE_MODE=cli
```

### ❌ Responses are cut off or garbled

This usually means output-format parsing failed.

**Fix:** Try setting the format to text:

```env
OPENCODE_FORMAT=text
```

---
## Next Steps

1. **Memory System** — Add conversation persistence (SQLite)
2. **Heartbeat** — Proactive messages via cron/scheduler
3. **Skills** — Loadable skill modules (like OpenClaw's skills/)
4. **Multi-Channel** — Discord, Telegram adapters