feat: openclaw-style secrets (env.vars + ${VAR} substitution) and per-task model routing

- Replace python-dotenv with a config.json `env.vars` block plus `${VAR}` substitution
- Add a `models` section for per-task model routing (heartbeat, subagent, default)
- Heartbeat/subagent tasks can use different models/providers than the main chat
- Remove python-dotenv from dependencies
- Update all docs to reflect the new config approach
- Reorganize docs into project/ and research/ subdirectories
docs/project/opencode-setup.md (new file, 418 lines)
# OpenCode Setup Guide

> Configure OpenCode CLI as the AI brain for Aetheel.

---

## Table of Contents

1. [Overview](#overview)
2. [Install OpenCode](#step-1-install-opencode)
3. [Configure a Provider](#step-2-configure-a-provider)
4. [Choose a Runtime Mode](#step-3-choose-a-runtime-mode)
5. [Configure Aetheel](#step-4-configure-aetheel)
6. [Test the Integration](#step-5-test-the-integration)
7. [Architecture](#architecture)
8. [Troubleshooting](#troubleshooting)

---

## Overview

Aetheel uses [OpenCode](https://opencode.ai) as its AI runtime — the "brain" that
generates responses to Slack messages. OpenCode is a terminal-native AI coding agent
that supports multiple LLM providers (Anthropic, OpenAI, Google, etc.).

### How It Works

```
Slack Message → Slack Adapter → OpenCode Runtime → LLM → Response → Slack Reply
```

Two runtime modes are available:

| Mode | Description | Best For |
|------|-------------|----------|
| **CLI** (default) | Runs `opencode run` as a subprocess per request | Simple setup, no persistent server |
| **SDK** | Talks to `opencode serve` via HTTP API | Lower latency, persistent sessions |

### Relationship to OpenClaw

This architecture is inspired by OpenClaw's `cli-runner.ts`:

- OpenClaw spawns CLI agents (Claude CLI, Codex CLI) as subprocesses
- Each CLI call gets: model args, session ID, system prompt, timeout
- Output is parsed from JSON/JSONL to extract the response text
- Sessions are mapped per-thread for conversation isolation

We replicate this pattern in Python, adapted for OpenCode's API.
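The JSON/JSONL parsing step can be sketched as follows: a minimal `collect_text` in the spirit of OpenClaw's `collectText()`. Note the `"text"` key is an assumption for illustration; the actual event shape depends on the output format OpenCode emits.

```python
import json

def collect_text(raw_output: str) -> str:
    """Collect response text from JSONL output (one JSON object per line).

    The field name "text" is illustrative -- the real keys depend on the
    output format OpenCode emits for the chosen --format.
    """
    parts = []
    for line in raw_output.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            # Not JSON at all: treat the line as plain text output.
            parts.append(line)
            continue
        if isinstance(event, dict) and isinstance(event.get("text"), str):
            parts.append(event["text"])
    return "".join(parts)
```

Falling back to plain text on a parse failure mirrors the "Responses are cut off or garbled" troubleshooting advice below: text output always survives even when structured parsing does not.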
---

## Step 1: Install OpenCode

### macOS / Linux (recommended)

```bash
curl -fsSL https://opencode.ai/install | bash
```

### npm (all platforms)

```bash
npm install -g opencode-ai
```

### Homebrew (macOS)

```bash
brew install anomalyco/tap/opencode
```

### Verify

```bash
opencode --version
```

---

## Step 2: Configure a Provider

OpenCode needs at least one LLM provider configured. Run:

```bash
opencode auth login
```

This will guide you through connecting to a provider. Options include:

| Provider | Auth Method |
|----------|-------------|
| **OpenCode Zen** | Token-based (opencode.ai account) |
| **Anthropic** | API key (`ANTHROPIC_API_KEY`) |
| **OpenAI** | API key (`OPENAI_API_KEY`) |
| **Google** | API key (`GEMINI_API_KEY`) |

### Using Environment Variables

Alternatively, set provider API keys in your `config.json` → `env.vars` block:

```json
{
  "env": {
    "vars": {
      "ANTHROPIC_API_KEY": "sk-ant-...",
      "OPENAI_API_KEY": "sk-...",
      "GEMINI_API_KEY": "AI..."
    }
  }
}
```

Or as process environment variables:

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```

### Verify models are available

```bash
opencode models
```

---

## Step 3: Choose a Runtime Mode

### CLI Mode (Default — Recommended to Start)

CLI mode spawns `opencode run` for each message. No persistent server needed.

**Pros:**

- ✅ Simple — just install OpenCode and go
- ✅ No server to manage
- ✅ Isolated — each request is independent

**Cons:**

- ⚠️ Higher latency (cold start per request)
- ⚠️ Limited session continuity (uses `--continue` flag)

Select it in `config.json`:

```json
{
  "runtime": { "mode": "cli" }
}
```
### SDK Mode (Advanced — Lower Latency)

SDK mode talks to a running `opencode serve` instance via HTTP.

**Pros:**

- ✅ Lower latency (warm server, no cold start)
- ✅ Better session management
- ✅ Full API access

**Cons:**

- ⚠️ Requires running `opencode serve` separately
- ⚠️ Needs the `opencode-ai` Python package

Select it in `config.json`:

```json
{
  "runtime": { "mode": "sdk" }
}
```

#### Start the OpenCode server

```bash
# Terminal 1: Start the headless server
opencode serve --port 4096

# Optional: with authentication
OPENCODE_SERVER_PASSWORD=my-secret opencode serve
```

#### Install the Python SDK

```bash
pip install opencode-ai
```
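Before switching to SDK mode it can help to confirm the server port is actually reachable. A minimal TCP-level check; this only verifies something is listening on the port, not that it speaks the OpenCode API:

```python
import socket

def server_reachable(host: str = "127.0.0.1", port: int = 4096,
                     timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns `False`, `opencode serve` is not running (or is on a different port), which is the most common cause of the "connection test failed" error covered in Troubleshooting.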
---

## Step 4: Configure Aetheel

Edit your `~/.aetheel/config.json`:

```json
{
  "env": {
    "vars": {
      "SLACK_BOT_TOKEN": "xoxb-...",
      "SLACK_APP_TOKEN": "xapp-..."
    }
  },
  "slack": {
    "bot_token": "${SLACK_BOT_TOKEN}",
    "app_token": "${SLACK_APP_TOKEN}"
  },
  "runtime": {
    "mode": "cli"
  }
}
```
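The `${VAR}` references above are resolved against the `env.vars` block. A minimal sketch of that substitution (the recursive helper here is illustrative, not Aetheel's actual implementation; unknown variables are left as-is):

```python
import re

_VAR_RE = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def expand_vars(value, env_vars: dict):
    """Recursively replace ${NAME} references in strings with env.vars entries."""
    if isinstance(value, str):
        # Unknown names stay as-is (m.group(0)) rather than expanding to "".
        return _VAR_RE.sub(lambda m: env_vars.get(m.group(1), m.group(0)), value)
    if isinstance(value, dict):
        return {k: expand_vars(v, env_vars) for k, v in value.items()}
    if isinstance(value, list):
        return [expand_vars(v, env_vars) for v in value]
    return value

config = {
    "env": {"vars": {"SLACK_BOT_TOKEN": "xoxb-123"}},
    "slack": {"bot_token": "${SLACK_BOT_TOKEN}"},
}
resolved = expand_vars(config["slack"], config["env"]["vars"])
```

Keeping secrets in `env.vars` and referencing them by name means the rest of the config can be shared or committed without leaking tokens.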
### Model Selection

You can specify a model in `config.json` or via the process environment:

```json
{
  "runtime": {
    "model": "anthropic/claude-sonnet-4-20250514"
  }
}
```

Or override at launch:

```bash
python main.py --model anthropic/claude-sonnet-4-20250514
```

---
## Step 5: Test the Integration

### 1. Verify OpenCode works standalone

```bash
# Quick test
opencode run "What is Python?"

# With a specific model
opencode run --model anthropic/claude-sonnet-4-20250514 "Hello"
```

### 2. Test the runtime directly

```bash
# Quick Python test
python -c "
from agent.opencode_runtime import OpenCodeRuntime
runtime = OpenCodeRuntime()
print(runtime.get_status())
response = runtime.chat('Hello, what are you?')
print(f'Response: {response.text[:200]}')
print(f'OK: {response.ok}, Duration: {response.duration_ms}ms')
"
```

### 3. Test via Slack

```bash
# Start in test mode first (echo only, no AI)
python main.py --test

# Then start with AI
python main.py

# Or force a specific mode
python main.py --cli
python main.py --sdk
```

### 4. In Slack

- Send `status` — see the runtime status
- Send `help` — see available commands
- Send any question — get an AI response
- Reply in a thread — conversation continues in context

---
## Architecture

### Component Diagram

```
┌─────────────────────┐
│        Slack        │
│     (messages)      │
└──────┬──────────────┘
       │ WebSocket
       │
┌──────▼──────────────┐
│    Slack Adapter    │
│ (slack_adapter.py)  │
│                     │
│ • Socket Mode       │
│ • Event handling    │
│ • Thread isolation  │
└──────┬──────────────┘
       │ ai_handler()
       │
┌──────▼──────────────┐
│  OpenCode Runtime   │
│ (opencode_runtime)  │
│                     │
│ • Session store     │
│ • System prompt     │
│ • Mode routing      │
└──────┬──────────────┘
       │
   ┌───┴────┐
   │        │
   ▼        ▼
CLI Mode  SDK Mode

┌──────────┐   ┌──────────────┐
│ opencode │   │   opencode   │
│   run    │   │  serve API   │
│ (subproc)│   │  (HTTP/SDK)  │
└──────────┘   └──────────────┘
      │               │
      └───────┬───────┘
              │
       ┌──────▼──────┐
       │     LLM     │
       │ (Anthropic, │
       │  OpenAI,    │
       │  Gemini)    │
       └─────────────┘
```

### How OpenClaw Inspired This

| OpenClaw Pattern | Aetheel Implementation |
|------------------|------------------------|
| `cli-runner.ts` → `runCliAgent()` | `opencode_runtime.py` → `OpenCodeRuntime.chat()` |
| `cli-backends.ts` → `CliBackendConfig` | `OpenCodeConfig` dataclass |
| `buildCliArgs()` | `_build_cli_args()` |
| `runCommandWithTimeout()` | `subprocess.run(timeout=...)` |
| `parseCliJson()` / `collectText()` | `_parse_cli_output()` / `_collect_text()` |
| `pickSessionId()` | `_extract_session_id()` |
| `buildSystemPrompt()` | `build_aetheel_system_prompt()` |
| Session per thread | `SessionStore` mapping conversation_id → session_id |
### File Map

| File | Purpose |
|------|---------|
| `agent/__init__.py` | Agent package init |
| `agent/opencode_runtime.py` | OpenCode runtime (CLI + SDK modes) |
| `adapters/slack_adapter.py` | Slack Socket Mode adapter |
| `main.py` | Entry point with AI handler |
| `docs/project/opencode-setup.md` | This setup guide |
| `docs/project/slack-setup.md` | Slack bot setup guide |

---

## Troubleshooting

### ❌ "opencode not found in PATH"

**Fix:** Install OpenCode:

```bash
curl -fsSL https://opencode.ai/install | bash
```

Then verify:

```bash
opencode --version
```

### ❌ "CLI command failed" or empty responses

**Check:**

1. Verify OpenCode works standalone: `opencode run "Hello"`
2. Check that a provider is configured: `opencode auth login`
3. Check that the model is available: `opencode models`
4. Check that your API key is set (e.g., `ANTHROPIC_API_KEY`)

### ❌ "Request timed out"

**Fix:** Increase the timeout in `config.json`:

```json
{
  "runtime": { "timeout_seconds": 300 }
}
```

Or simplify your prompt — complex prompts take longer.
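On the runtime side, CLI-mode timeouts come from `subprocess.run(timeout=...)` (see the OpenClaw table above). A sketch of how a timed-out request can degrade gracefully instead of crashing the handler; the fallback message is illustrative:

```python
import subprocess

def run_with_timeout(cmd: list[str], timeout_seconds: float) -> tuple[bool, str]:
    """Run a command; return (ok, text). On timeout, return a fallback instead of raising."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=timeout_seconds)
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child before raising, so nothing leaks.
        return False, f"Request timed out after {timeout_seconds:.0f}s"
    return result.returncode == 0, result.stdout
```

Returning a `(ok, text)` pair lets the Slack handler post the fallback text as a normal reply rather than leaving the thread silent.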
### ❌ SDK mode: "connection test failed"

**Fix:**

1. Make sure `opencode serve` is running: `opencode serve --port 4096`
2. Check the URL in `config.json`: `runtime.server_url`
3. If using auth, set `OPENCODE_SERVER_PASSWORD` in `env.vars` and when starting the server

### ❌ "opencode-ai SDK not installed"

**Fix:**

```bash
pip install opencode-ai
```

If you don't want to install the SDK, switch to CLI mode in `config.json`:

```json
{
  "runtime": { "mode": "cli" }
}
```

### ❌ Responses are cut off or garbled

This usually means the output format parsing failed.

**Fix:** Try setting the format to text:

```env
OPENCODE_FORMAT=text
```

---

## Next Steps

1. **Memory System** — Add conversation persistence (SQLite)
2. **Heartbeat** — Proactive messages via cron/scheduler
3. **Skills** — Loadable skill modules (like OpenClaw's skills/)
4. **Multi-Channel** — Discord, Telegram adapters