# OpenCode Runtime Integration — Summary

Integration of OpenCode CLI as the agent runtime for Aetheel. Completed: 2026-02-13.
## Overview

OpenCode CLI has been integrated as the AI "brain" for Aetheel, replacing the placeholder `smart_handler` with a full agent runtime. The architecture is directly inspired by OpenClaw's `cli-runner.ts` and `cli-backends.ts`, adapted for OpenCode's API and Python.
## Files Created & Modified

### New Files

| File | Purpose |
|---|---|
| `agent/__init__.py` | Package init for the agent module |
| `agent/opencode_runtime.py` | Core runtime — ~750 lines covering both CLI and SDK modes |
| `docs/opencode-setup.md` | Comprehensive setup guide |
| `docs/opencode-integration-summary.md` | This summary document |
### Modified Files

| File | Change |
|---|---|
| `main.py` | Rewired to use `ai_handler` backed by `OpenCodeRuntime` instead of the placeholder `smart_handler` |
| `.env.example` | Added all OpenCode config variables |
| `requirements.txt` | Added optional `opencode-ai` SDK dependency note |
## Architecture

```
Slack Message → ai_handler() → OpenCodeRuntime.chat() → OpenCode → LLM → Response
```
### Two Runtime Modes

1. **CLI Mode (default)** — Spawns `opencode run` as a subprocess per request. Direct port of OpenClaw's `runCliAgent()` → `runCommandWithTimeout()` pattern from `cli-runner.ts`.
2. **SDK Mode** — Connects to `opencode serve` via the official Python SDK (`opencode-ai`). Uses `client.session.create()` → `client.session.chat()` for lower latency and better session management.
### Component Diagram

```
┌─────────────────────┐
│       Slack         │
│     (messages)      │
└──────┬──────────────┘
       │ WebSocket
       │
┌──────▼──────────────┐
│   Slack Adapter     │
│ (slack_adapter.py)  │
│                     │
│ • Socket Mode       │
│ • Event handling    │
│ • Thread isolation  │
└──────┬──────────────┘
       │ ai_handler()
       │
┌──────▼──────────────┐
│  OpenCode Runtime   │
│ (opencode_runtime)  │
│                     │
│ • Session store     │
│ • System prompt     │
│ • Mode routing      │
└──────┬──────────────┘
       │
  ┌────┴────┐
  │         │
  ▼         ▼
CLI Mode   SDK Mode
┌──────────┐ ┌──────────────┐
│ opencode │ │  opencode    │
│   run    │ │  serve API   │
│ (subproc)│ │  (HTTP/SDK)  │
└──────────┘ └──────────────┘
     │              │
     └──────┬───────┘
            │
     ┌──────▼──────┐
     │     LLM     │
     │ (Anthropic, │
     │  OpenAI,    │
     │  Gemini)    │
     └─────────────┘
```
## Key Components (OpenClaw → Aetheel Mapping)

| OpenClaw (`cli-runner.ts`) | Aetheel (`opencode_runtime.py`) |
|---|---|
| `CliBackendConfig` | `OpenCodeConfig` dataclass |
| `runCliAgent()` | `OpenCodeRuntime.chat()` |
| `buildCliArgs()` | `_build_cli_args()` |
| `runCommandWithTimeout()` | `subprocess.run(timeout=...)` |
| `parseCliJson()` / `collectText()` | `_parse_cli_output()` / `_collect_text()` |
| `pickSessionId()` | `_extract_session_id()` |
| `buildSystemPrompt()` | `build_aetheel_system_prompt()` |
| Session per thread | `SessionStore` (`thread_ts` → `session_id`) |
## Key Design Decisions

### 1. Dual-Mode Runtime (CLI + SDK)

- CLI mode is the default because it requires no persistent server — just `opencode` in PATH.
- SDK mode is preferred for production because it avoids cold-start latency and provides better session management.
- The runtime gracefully falls back from SDK → CLI if the server is unreachable or the SDK is not installed.
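The fallback policy can be captured in a few lines. This is a hypothetical helper, not the runtime's actual code, but it states the rule above precisely:

```python
def choose_mode(configured: str, sdk_installed: bool, server_reachable: bool) -> str:
    """Pick the effective runtime mode, degrading from SDK to CLI.

    SDK mode requires both the opencode-ai package and a reachable
    `opencode serve` instance; anything less falls back to CLI mode.
    """
    if configured == "sdk" and sdk_installed and server_reachable:
        return "sdk"
    return "cli"
```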
### 2. Session Isolation per Thread

- Each Slack thread (`thread_ts`) maps to a unique OpenCode session via the `SessionStore`.
- New threads get new sessions; replies within a thread reuse the same session.
- Stale sessions are cleaned up after `session_ttl_hours` (default 24h).
### 3. System Prompt Injection

- `build_aetheel_system_prompt()` constructs a per-message system prompt with the bot's identity, guidelines, and context (user name, channel, DM vs. mention).
- This mirrors OpenClaw's `buildAgentSystemPrompt()` from `cli-runner/helpers.ts`.
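An illustrative stand-in for this builder — the identity text and guideline wording here are assumptions, not the actual prompt:

```python
def build_system_prompt(user_name: str, channel: str, is_dm: bool) -> str:
    """Compose a per-message system prompt from identity + context.

    Sketch only: the real build_aetheel_system_prompt() carries the
    bot's actual identity and guidelines.
    """
    context = "a direct message" if is_dm else f"a mention in #{channel}"
    return (
        "You are Aetheel, a helpful Slack assistant.\n"
        f"You are replying to {user_name} in {context}.\n"
        "Keep answers concise and formatted for Slack."
    )
```

Rebuilding the prompt per message keeps it current: the same session can span a DM and a channel mention without carrying stale context.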
### 4. Output Parsing (from OpenClaw)

- The `_parse_cli_output()` method tries JSON → JSONL → raw text, matching OpenClaw's `parseCliJson()` and `parseCliJsonl()`.
- The `_collect_text()` method recursively traverses JSON objects to find text content, a direct port of OpenClaw's `collectText()`.
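The recursive traversal is in the spirit of OpenClaw's `collectText()`; the key names (`"text"`) in this sketch are assumptions about the CLI's JSON shape:

```python
def collect_text(node) -> str:
    """Recursively gather human-readable text from parsed CLI JSON."""
    if isinstance(node, str):
        return node
    if isinstance(node, dict):
        # Prefer an explicit text-bearing key; otherwise walk all values.
        if isinstance(node.get("text"), str):
            return node["text"]
        return "".join(collect_text(v) for v in node.values())
    if isinstance(node, list):
        return "".join(collect_text(item) for item in node)
    return ""  # numbers, booleans, None carry no text
```

Because it recurses through any mix of dicts and lists, the parser keeps working even if the CLI's output nesting changes between versions.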
### 5. Built-in Commands Bypass AI

- Commands like `status`, `help`, `time`, and `sessions` are handled directly without calling the AI, for instant responses.
## Configuration Reference

All settings go in `.env`:

```bash
# Runtime mode
OPENCODE_MODE=cli                  # "cli" or "sdk"

# Model (optional — uses OpenCode default if not set)
OPENCODE_MODEL=anthropic/claude-sonnet-4-20250514

# CLI mode settings
OPENCODE_COMMAND=opencode          # path to the opencode binary
OPENCODE_TIMEOUT=120               # seconds before timeout

# SDK mode settings (only needed when OPENCODE_MODE=sdk)
OPENCODE_SERVER_URL=http://localhost:4096
OPENCODE_SERVER_PASSWORD=          # optional HTTP basic auth
OPENCODE_SERVER_USERNAME=opencode  # default username

# Workspace directory for OpenCode
OPENCODE_WORKSPACE=/path/to/project

# Output format
OPENCODE_FORMAT=text               # "text" or "json"
```
CLI flags can override config:

```bash
python main.py --cli     # force CLI mode
python main.py --sdk     # force SDK mode
python main.py --model anthropic/claude-sonnet-4-20250514
python main.py --test    # echo-only (no AI)
```
## OpenCode Research Summary

### OpenCode CLI

- **What:** Go-based AI coding agent for the terminal
- **Install:** `curl -fsSL https://opencode.ai/install | bash` or `npm install -g opencode-ai`
- **Key commands:**
  - `opencode` — TUI mode
  - `opencode run "prompt"` — non-interactive, returns output
  - `opencode serve` — headless HTTP server (OpenAPI 3.1 spec)
  - `opencode auth login` — configure LLM providers
  - `opencode models` — list available models
  - `opencode init` — generate `AGENTS.md` for a project
### OpenCode Server API (via `opencode serve`)

- **Default:** `http://localhost:4096`
- **Auth:** HTTP basic auth via `OPENCODE_SERVER_PASSWORD`
- **Key endpoints:**
  - `GET /session` — list sessions
  - `POST /session` — create session
  - `POST /session/:id/message` — send message (returns `AssistantMessage`)
  - `POST /session/:id/abort` — abort in-progress request
  - `GET /event` — SSE event stream
### OpenCode Python SDK (`opencode-ai`)

- **Install:** `pip install opencode-ai`
- **Key methods:**
  - `client.session.create()` → `Session`
  - `client.session.chat(id, parts=[...])` → `AssistantMessage`
  - `client.session.list()` → `Session[]`
  - `client.session.abort(id)` → abort
  - `client.app.get()` → app info
  - `client.app.providers()` → available providers
## Quick Start

1. Install OpenCode: `curl -fsSL https://opencode.ai/install | bash`
2. Configure a provider: `opencode auth login`
3. Test standalone: `opencode run "Hello, what are you?"`
4. Configure `.env` (copy from `.env.example`)
5. Run Aetheel: `python main.py`
6. In Slack: send a message to the bot and get an AI response
## Next Steps

- **Memory System** — Add conversation persistence (SQLite) so sessions survive restarts
- **Heartbeat** — Proactive messages via cron/scheduler
- **Skills** — Loadable skill modules (like OpenClaw's `skills/` directory)
- **Multi-Channel** — Discord, Telegram adapters
- **Streaming** — Use SSE events from `opencode serve` for real-time streaming responses