
OpenCode Setup Guide

Configure OpenCode CLI as the AI brain for Aetheel.


Table of Contents

  1. Overview
  2. Install OpenCode
  3. Configure a Provider
  4. Choose a Runtime Mode
  5. Configure Aetheel
  6. Test the Integration
  7. Architecture
  8. Troubleshooting

Overview

Aetheel uses OpenCode as its AI runtime — the "brain" that generates responses to Slack messages. OpenCode is a terminal-native AI coding agent that supports multiple LLM providers (Anthropic, OpenAI, Google, etc.).

How It Works

Slack Message → Slack Adapter → OpenCode Runtime → LLM → Response → Slack Reply

Two runtime modes are available:

Mode Description Best For
CLI (default) Runs opencode run as a subprocess per request Simple setup, no persistent server
SDK Talks to opencode serve via HTTP API Lower latency, persistent sessions

Relationship to OpenClaw

This architecture is inspired by OpenClaw's cli-runner.ts:

  • OpenClaw spawns CLI agents (Claude CLI, Codex CLI) as subprocesses
  • Each CLI call gets: model args, session ID, system prompt, timeout
  • Output is parsed from JSON/JSONL to extract the response text
  • Sessions are mapped per-thread for conversation isolation

We replicate this pattern in Python, adapted for OpenCode's API.


Step 1: Install OpenCode

Install script (macOS / Linux)

curl -fsSL https://opencode.ai/install | bash

npm (all platforms)

npm install -g opencode-ai

Homebrew (macOS)

brew install anomalyco/tap/opencode

Verify

opencode --version

Step 2: Configure a Provider

OpenCode needs at least one LLM provider configured. Run:

opencode auth login

This will guide you through connecting to a provider. Options include:

Provider Auth Method
OpenCode Zen Token-based (opencode.ai account)
Anthropic API key (ANTHROPIC_API_KEY)
OpenAI API key (OPENAI_API_KEY)
Google API key (GEMINI_API_KEY)

Using Environment Variables

Alternatively, set provider API keys in your config.json env.vars block:

{
  "env": {
    "vars": {
      "ANTHROPIC_API_KEY": "sk-ant-...",
      "OPENAI_API_KEY": "sk-...",
      "GEMINI_API_KEY": "AI..."
    }
  }
}

Or as process environment variables:

export ANTHROPIC_API_KEY="sk-ant-..."

Verify models are available

opencode models

Step 3: Choose a Runtime Mode

CLI Mode (Default — Simple)

CLI mode spawns opencode run for each message. No persistent server needed.

Pros:

  • Simple — just install OpenCode and go
  • No server to manage
  • Isolated — each request is independent

Cons:

  • ⚠️ Higher latency (cold start per request)
  • ⚠️ Limited session continuity (uses --continue flag)

Enable it:

OPENCODE_MODE=cli

SDK Mode (Advanced — Lower Latency)

SDK mode talks to a running opencode serve instance via HTTP.

Pros:

  • Lower latency (warm server, no cold start)
  • Better session management
  • Full API access

Cons:

  • ⚠️ Requires running opencode serve separately
  • ⚠️ Needs the opencode-ai Python package

Enable it:

OPENCODE_MODE=sdk

Start the OpenCode server:

# Terminal 1: Start the headless server
opencode serve --port 4096

# Optional: with authentication
OPENCODE_SERVER_PASSWORD=my-secret opencode serve

Install the Python SDK:

pip install opencode-ai
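Before pointing Aetheel at the server, you can sanity-check that opencode serve is actually listening. A minimal sketch using only the standard library (port 4096 is the default from the command above):

```python
import socket


def server_reachable(host: str = "127.0.0.1", port: int = 4096,
                     timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections on host:port.

    A quick pre-flight check before enabling SDK mode; it does not verify
    that the listener is really opencode serve.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False, start the server first, then retry.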

Step 4: Configure Aetheel

Edit your ~/.aetheel/config.json:

{
  "env": {
    "vars": {
      "SLACK_BOT_TOKEN": "xoxb-...",
      "SLACK_APP_TOKEN": "xapp-..."
    }
  },
  "slack": {
    "bot_token": "${SLACK_BOT_TOKEN}",
    "app_token": "${SLACK_APP_TOKEN}"
  },
  "runtime": {
    "mode": "cli"
  }
}
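The ${VAR} references above are resolved against the env.vars block. A minimal sketch of how that substitution could work (illustrative only, not Aetheel's actual config loader, which may also fall back to process environment variables):

```python
import re


def substitute_env(value: str, env_vars: dict[str, str]) -> str:
    """Replace ${NAME} placeholders with values from the env.vars block.

    Unknown placeholders are left untouched so misconfigurations are visible.
    """
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: env_vars.get(m.group(1), m.group(0)),
        value,
    )


env_vars = {"SLACK_BOT_TOKEN": "xoxb-..."}
substitute_env("${SLACK_BOT_TOKEN}", env_vars)  # resolves to "xoxb-..."
```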

Model Selection

You can specify a model in config.json:

{
  "runtime": {
    "model": "anthropic/claude-sonnet-4-20250514"
  }
}

Or override at launch:

python main.py --model anthropic/claude-sonnet-4-20250514

Step 5: Test the Integration

1. Verify OpenCode works standalone

# Quick test
opencode run "What is Python?"

# With a specific model
opencode run --model anthropic/claude-sonnet-4-20250514 "Hello"

2. Test the runtime directly

# Quick Python test
python -c "
from agent.opencode_runtime import OpenCodeRuntime
runtime = OpenCodeRuntime()
print(runtime.get_status())
response = runtime.chat('Hello, what are you?')
print(f'Response: {response.text[:200]}')
print(f'OK: {response.ok}, Duration: {response.duration_ms}ms')
"

3. Test via Slack

# Start in test mode first (echo only, no AI)
python main.py --test

# Then start with AI
python main.py

# Or force a specific mode
python main.py --cli
python main.py --sdk

4. In Slack

  • Send status — see the runtime status
  • Send help — see available commands
  • Send any question — get an AI response
  • Reply in a thread — conversation continues in context

Architecture

Component Diagram

┌─────────────────────┐
│     Slack           │
│     (messages)      │
└──────┬──────────────┘
       │ WebSocket
       │
┌──────▼──────────────┐
│  Slack Adapter      │
│  (slack_adapter.py) │
│                     │
│  • Socket Mode      │
│  • Event handling   │
│  • Thread isolation │
└──────┬──────────────┘
       │ ai_handler()
       │
┌──────▼──────────────┐
│  OpenCode Runtime   │
│  (opencode_runtime) │
│                     │
│  • Session store    │
│  • System prompt    │
│  • Mode routing     │
└──────┬──────────────┘
       │
  ┌────┴────┐
  │         │
  ▼         ▼
CLI Mode  SDK Mode

┌──────────┐  ┌──────────────┐
│ opencode │  │ opencode     │
│ run      │  │ serve API    │
│ (subproc)│  │ (HTTP/SDK)   │
└──────────┘  └──────────────┘
       │              │
       └──────┬───────┘
              │
       ┌──────▼──────┐
       │  LLM        │
       │ (Anthropic, │
       │   OpenAI,   │
       │   Gemini)   │
       └─────────────┘

How OpenClaw Inspired This

OpenClaw Pattern                   Aetheel Implementation
cli-runner.ts runCliAgent()        opencode_runtime.py OpenCodeRuntime.chat()
cli-backends.ts CliBackendConfig   OpenCodeConfig dataclass
buildCliArgs()                     _build_cli_args()
runCommandWithTimeout()            subprocess.run(timeout=...)
parseCliJson() / collectText()     _parse_cli_output() / _collect_text()
pickSessionId()                    _extract_session_id()
buildSystemPrompt()                build_aetheel_system_prompt()
Session per thread                 SessionStore mapping conversation_id → session_id

File Map

File Purpose
agent/__init__.py Agent package init
agent/opencode_runtime.py OpenCode runtime (CLI + SDK modes)
adapters/slack_adapter.py Slack Socket Mode adapter
main.py Entry point with AI handler
docs/project/opencode-setup.md This setup guide
docs/project/slack-setup.md Slack bot setup guide

Troubleshooting

"opencode not found in PATH"

Fix: Install OpenCode:

curl -fsSL https://opencode.ai/install | bash

Then verify:

opencode --version

"CLI command failed" or empty responses

Check:

  1. Verify OpenCode works standalone: opencode run "Hello"
  2. Check that a provider is configured: opencode auth login
  3. Check that the model is available: opencode models
  4. Check your API key is set (e.g., ANTHROPIC_API_KEY)

"Request timed out"

Fix: Increase the timeout in config.json:

{
  "runtime": { "timeout_seconds": 300 }
}

Or simplify your prompt — complex prompts take longer.

SDK mode: "connection test failed"

Fix:

  1. Make sure opencode serve is running: opencode serve --port 4096
  2. Check the URL in config.json: runtime.server_url
  3. If using auth, set OPENCODE_SERVER_PASSWORD both in env.vars and in the environment where you start opencode serve

"opencode-ai SDK not installed"

Fix:

pip install opencode-ai

If you don't want to install the SDK, switch to CLI mode in config.json:

{
  "runtime": { "mode": "cli" }
}

Responses are cut off or garbled

This usually means the output format parsing failed.

Fix: Try setting the format to text:

OPENCODE_FORMAT=text
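A tolerant parser avoids this class of failure by trying JSON first and falling back to plain text. A sketch of that fallback (assumed output shapes, not the actual _parse_cli_output() implementation):

```python
import json


def parse_cli_output(raw: str) -> str:
    """Extract response text from CLI output: JSON if possible, else raw text."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # Text format, or unparseable JSON: return the output as-is.
        return raw.strip()
    if isinstance(data, dict):
        # Assumed key names for illustration only.
        for key in ("text", "response", "content"):
            if isinstance(data.get(key), str):
                return data[key]
    return raw.strip()
```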

Next Steps

  1. Memory System — Add conversation persistence (SQLite)
  2. Heartbeat — Proactive messages via cron/scheduler
  3. Skills — Loadable skill modules (like OpenClaw's skills/)
  4. Multi-Channel — Discord, Telegram adapters