first commit
This commit is contained in:

docs/memory-system.md (new file, 371 lines)
# Aetheel Memory System

> **Date:** 2026-02-13
> **Inspired by:** OpenClaw's `src/memory/` (49 files, 2,300+ LOC manager)
> **Implementation:** ~600 lines of Python across 6 modules

---

## Table of Contents

1. [Overview](#overview)
2. [Architecture](#architecture)
3. [File Structure](#file-structure)
4. [Identity Files](#identity-files)
5. [How It Works](#how-it-works)
6. [Configuration](#configuration)
7. [API Reference](#api-reference)
8. [Dependencies](#dependencies)
9. [Testing](#testing)
10. [OpenClaw Mapping](#openclaw-mapping)

---

## 1. Overview

The memory system gives Aetheel **persistent, searchable memory** using a combination of markdown files and SQLite. It follows the same design as OpenClaw's memory architecture:

- **Markdown IS the database** — identity files (`SOUL.md`, `USER.md`, `MEMORY.md`) are human-readable and editable in any text editor or Obsidian
- **Hybrid search** — combines vector similarity (cosine, 0.7 weight) with BM25 keyword search (0.3 weight) for accurate retrieval
- **Fully local** — uses fastembed ONNX embeddings (384-dim), zero API calls
- **Incremental sync** — only re-indexes files that have changed (SHA-256 hash comparison)
- **Session logging** — conversation transcripts stored in `daily/` and indexed for search
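
The hybrid weighting in the second bullet reduces to a linear blend. A minimal sketch, assuming both scores are already normalized to the 0 to 1 range (the function name is illustrative, not the module's actual API):

```python
def hybrid_score(vector_score: float, keyword_score: float,
                 vector_weight: float = 0.7, text_weight: float = 0.3) -> float:
    """Blend a normalized cosine similarity with a normalized BM25 score."""
    return vector_weight * vector_score + text_weight * keyword_score

# A chunk that matches strongly by meaning but weakly by keyword:
score = hybrid_score(0.9, 0.2)  # 0.7*0.9 + 0.3*0.2 = 0.69
```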

---

## 2. Architecture

```
          ┌──────────────────────────┐
          │      MemoryManager       │
          │   (memory/manager.py)    │
          ├──────────────────────────┤
          │ • sync()                 │
          │ • search()               │
          │ • log_session()          │
          │ • read/update identity   │
          │ • file watching          │
          └────────────┬─────────────┘
                       │
       ┌───────────────┼───────────────┐
       ▼               ▼               ▼
┌──────────────┐ ┌─────────────┐ ┌──────────────┐
│  Workspace   │ │   SQLite    │ │  fastembed   │
│  (.md files) │ │  Database   │ │   (ONNX)     │
├──────────────┤ ├─────────────┤ ├──────────────┤
│ SOUL.md      │ │ files       │ │ bge-small    │
│ USER.md      │ │ chunks      │ │ 384-dim      │
│ MEMORY.md    │ │ chunks_fts  │ │ L2-normalized│
│ memory/      │ │ emb_cache   │ │ local only   │
│ daily/       │ │ session_logs│ │              │
└──────────────┘ └─────────────┘ └──────────────┘
```

### Search Flow

```
Query: "what are my preferences?"
                │
                ▼
┌──────────────────┐    ┌──────────────────┐
│  Vector Search   │    │  Keyword Search  │
│   (cosine sim)   │    │   (FTS5 / BM25)  │
│   weight: 0.7    │    │   weight: 0.3    │
└────────┬─────────┘    └────────┬─────────┘
         │                       │
         └───────────┬───────────┘
                     ▼
             ┌───────────────┐
             │  Hybrid Merge │
             │  dedupe by ID │
             │ sort by score │
             └───────┬───────┘
                     ▼
             Top-N results with
             score ≥ min_score
```

---

## 3. File Structure

### Source Code

```
memory/
├── __init__.py     # Package exports (MemoryManager, MemorySearchResult, MemorySource)
├── types.py        # Data classes: MemoryConfig, MemorySearchResult, MemoryChunk, etc.
├── internal.py     # Utilities: hashing, chunking, file discovery, cosine similarity
├── hybrid.py       # Hybrid search merging (0.7 vector + 0.3 BM25)
├── schema.py       # SQLite schema (files, chunks, FTS5, embedding cache)
├── embeddings.py   # Local fastembed ONNX embeddings (384-dim)
└── manager.py      # Main MemoryManager orchestrator (~400 LOC)
```
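
The cosine-similarity utility attributed to `internal.py` above has a standard shape; this is an illustrative sketch, not the actual implementation:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0  # avoid dividing by zero for empty/zero vectors
    return dot / (norm_a * norm_b)
```

Since the embeddings are L2-normalized, the norms are 1 in practice and a plain dot product gives the same result.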

### Workspace (Created Automatically)

```
~/.aetheel/workspace/
├── SOUL.md      # Personality & values — "who you are"
├── USER.md      # User profile — "who I am"
├── MEMORY.md    # Long-term memory — decisions, lessons, context
├── memory/      # Additional markdown memory files (optional)
│   └── *.md
└── daily/       # Session logs by date
    ├── 2026-02-13.md
    ├── 2026-02-14.md
    └── ...
```

---

## 4. Identity Files

Inspired by OpenClaw's template system (`docs/reference/templates/SOUL.md`).

### SOUL.md — Who You Are

The agent's personality, values, and behavioral guidelines. Created with sensible defaults:

- Core truths (be helpful, have opinions, be resourceful)
- Boundaries (privacy, external actions)
- Continuity rules (files ARE the memory)

### USER.md — Who I Am

The user's profile — name, role, timezone, preferences, current focus, tools. Fill this in to personalize the agent.

### MEMORY.md — Long-Term Memory

Persistent decisions, lessons learned, and context that carries across sessions. The agent appends entries with timestamps:

```markdown
### [2026-02-13 12:48]

Learned that the user prefers concise responses with code examples.
```
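
Appending in the format above takes only a few lines; this is an illustrative helper, not the manager's actual `append_to_memory` implementation:

```python
from datetime import datetime
from pathlib import Path

def append_memory_entry(workspace: Path, text: str) -> None:
    """Append an entry to MEMORY.md in the timestamped format shown above."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with open(workspace / "MEMORY.md", "a", encoding="utf-8") as f:
        f.write(f"\n### [{stamp}]\n\n{text}\n")
```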

---

## 5. How It Works

### Sync (`await manager.sync()`)

1. **Discover files** — scans `SOUL.md`, `USER.md`, `MEMORY.md`, `memory/*.md`
2. **Check hashes** — compares SHA-256 content hash against stored hash in `files` table
3. **Skip unchanged** — files with matching hashes are skipped (incremental sync)
4. **Chunk** — splits changed files into overlapping text chunks (~512 tokens, 50-token overlap)
5. **Embed** — generates 384-dim vectors via fastembed (checks embedding cache first)
6. **Store** — inserts chunks + embeddings into SQLite, updates FTS5 index
7. **Clean** — removes stale entries for deleted files
8. **Sessions** — repeats for `daily/*.md` session log files

### Search (`await manager.search("query")`)

1. **Auto-sync** — triggers sync if workspace is dirty (configurable)
2. **Keyword search** — runs FTS5 `MATCH` query with BM25 ranking
3. **Vector search** — embeds query, computes cosine similarity against all chunk embeddings
4. **Hybrid merge** — combines results: `score = 0.7 × vector + 0.3 × keyword`
5. **Deduplicate** — merges chunks found by both methods (by chunk ID)
6. **Filter & rank** — removes results below `min_score`, returns top-N sorted by score
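
The FTS5 leg of step 2 can be sketched with the stdlib `sqlite3` module, assuming an FTS5-enabled build (table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE chunks_fts USING fts5(text)")
con.execute("INSERT INTO chunks_fts VALUES ('the user prefers concise responses')")
con.execute("INSERT INTO chunks_fts VALUES ('core truths and boundaries')")

# bm25() returns lower-is-better ranks; ORDER BY rank puts the best match first
rows = con.execute(
    "SELECT text, bm25(chunks_fts) FROM chunks_fts "
    "WHERE chunks_fts MATCH ? ORDER BY rank",
    ("prefers",),
).fetchall()
```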

### Session Logging (`manager.log_session(content)`)

1. Creates/appends to `daily/YYYY-MM-DD.md`
2. Adds timestamped entry with channel label
3. Marks index as dirty for next sync
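
The three steps reduce to a dated append; an illustrative sketch, not the manager's actual code:

```python
from datetime import datetime
from pathlib import Path

def log_session(sessions_dir: Path, content: str, channel: str = "slack") -> Path:
    """Append a channel-labeled, timestamped entry to today's daily log."""
    sessions_dir.mkdir(parents=True, exist_ok=True)
    path = sessions_dir / f"{datetime.now():%Y-%m-%d}.md"
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"\n## [{datetime.now():%H:%M}] ({channel})\n\n{content}\n")
    # a real implementation would also mark the index dirty here
    return path
```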

---

## 6. Configuration

```python
from memory.types import MemoryConfig

config = MemoryConfig(
    # Workspace directory containing identity files
    workspace_dir="~/.aetheel/workspace",

    # SQLite database path
    db_path="~/.aetheel/memory.db",

    # Chunking parameters
    chunk_tokens=512,        # ~2048 characters per chunk
    chunk_overlap=50,        # ~200-character overlap between chunks

    # Search parameters
    max_results=10,          # maximum results per search
    min_score=0.1,           # minimum hybrid score threshold
    vector_weight=0.7,       # weight for vector similarity
    text_weight=0.3,         # weight for BM25 keyword score

    # Embedding model (local ONNX)
    embedding_model="BAAI/bge-small-en-v1.5",
    embedding_dims=384,

    # Sync behavior
    watch=True,              # enable file watching via watchdog
    watch_debounce_ms=2000,  # debounce file change events
    sync_on_search=True,     # auto-sync before search if dirty

    # Session logs directory (defaults to workspace_dir/daily/)
    sessions_dir=None,

    # Sources to index
    sources=["memory", "sessions"],
)
```

---

## 7. API Reference

### `MemoryManager`

```python
from memory import MemoryManager
from memory.types import MemoryConfig

# Create with custom config (or defaults)
mgr = MemoryManager(config=MemoryConfig(...))

# Sync workspace → index
stats = await mgr.sync(force=False)
# Returns: {"files_found": 4, "files_indexed": 4, "chunks_created": 5, ...}

# Hybrid search
results = await mgr.search("what are my preferences?", max_results=5, min_score=0.1)
# Returns: list[MemorySearchResult]
#   .path       — relative file path (e.g., "USER.md")
#   .start_line — chunk start line
#   .end_line   — chunk end line
#   .score      — hybrid score (0.0 - 1.0)
#   .snippet    — text snippet (max 700 chars)
#   .source     — MemorySource.MEMORY or MemorySource.SESSIONS

# Identity files
soul = mgr.read_soul()                    # Read SOUL.md
user = mgr.read_user()                    # Read USER.md
memory = mgr.read_long_term_memory()      # Read MEMORY.md
mgr.append_to_memory("learned X")         # Append timestamped entry to MEMORY.md
mgr.update_identity_file("USER.md", new_content)  # Overwrite a file

# Session logging
path = mgr.log_session("User: hi\nAssistant: hello", channel="slack")

# File reading
data = mgr.read_file("SOUL.md", from_line=1, num_lines=10)

# Status
status = mgr.status()
# Returns: {"files": 5, "chunks": 5, "cached_embeddings": 4, ...}

# File watching
mgr.start_watching()   # auto-mark dirty on workspace changes
mgr.stop_watching()

# Cleanup
mgr.close()
```
### `MemorySearchResult`

```python
@dataclass
class MemorySearchResult:
    path: str              # Relative path to the markdown file
    start_line: int        # First line of the matching chunk
    end_line: int          # Last line of the matching chunk
    score: float           # Hybrid score (0.0 - 1.0)
    snippet: str           # Text snippet (max 700 characters)
    source: MemorySource   # "memory" or "sessions"
    citation: str | None = None
```

---

## 8. Dependencies

| Package | Version | Purpose |
|---------|---------|---------|
| `fastembed` | 0.7.4 | Local ONNX embeddings (BAAI/bge-small-en-v1.5, 384-dim) |
| `watchdog` | 6.0.0 | File system watching for auto re-indexing |
| `sqlite3` | (stdlib) | Database engine with FTS5 full-text search |

Added to `pyproject.toml`:

```toml
dependencies = [
    "fastembed>=0.7.4",
    "watchdog>=6.0.0",
    # ... existing deps
]
```
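
Because FTS5 is a compile-time option of SQLite rather than a separate package, a quick capability probe (illustrative) confirms whether the stdlib build supports it:

```python
import sqlite3

def fts5_available() -> bool:
    """FTS5 ships with most stdlib sqlite3 builds, but not all."""
    try:
        sqlite3.connect(":memory:").execute("CREATE VIRTUAL TABLE t USING fts5(x)")
        return True
    except sqlite3.OperationalError:
        return False
```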

---

## 9. Testing

Run the smoke test:

```bash
uv run python test_memory.py
```

### Test Results (2026-02-13)

| Test | Result |
|------|--------|
| `hash_text()` | ✅ SHA-256 produces 64-char hex string |
| `chunk_markdown()` | ✅ Splits text into overlapping chunks with correct line numbers |
| Identity file creation | ✅ SOUL.md (793 chars), USER.md (417 chars), MEMORY.md (324 chars) |
| Append to MEMORY.md | ✅ Content grows with timestamped entry |
| Session logging | ✅ Creates `daily/2026-02-13.md` with channel + timestamp |
| Sync (first run) | ✅ 4 files found, 4 indexed, 5 chunks, 1 session |
| Search "personality values" | ✅ 5 results — top: SOUL.md (score 0.595) |
| Search "preferences" | ✅ 5 results — top: USER.md (score 0.583) |
| FTS5 keyword search | ✅ Available |
| Embedding cache | ✅ 4 entries cached (skip re-computation on next sync) |
| Status report | ✅ All fields populated correctly |

---

## 10. OpenClaw Mapping

How our Python implementation maps to OpenClaw's TypeScript source:

| OpenClaw File | Aetheel File | Description |
|---------------|--------------|-------------|
| `src/memory/types.ts` | `memory/types.py` | Core types (MemorySearchResult, MemorySource, etc.) |
| `src/memory/internal.ts` | `memory/internal.py` | hashText, chunkMarkdown, listMemoryFiles, cosineSimilarity |
| `src/memory/hybrid.ts` | `memory/hybrid.py` | buildFtsQuery, bm25RankToScore, mergeHybridResults |
| `src/memory/memory-schema.ts` | `memory/schema.py` | ensureMemoryIndexSchema → ensure_schema |
| `src/memory/embeddings.ts` | `memory/embeddings.py` | createEmbeddingProvider → embed_query/embed_batch (fastembed) |
| `src/memory/manager.ts` (2,300 LOC) | `memory/manager.py` (~400 LOC) | MemoryIndexManager → MemoryManager |
| `src/memory/sync-memory-files.ts` | Inlined in `manager.py` | syncMemoryFiles → _run_sync |
| `src/memory/session-files.ts` | Inlined in `manager.py` | buildSessionEntry → _sync_session_files |
| `docs/reference/templates/SOUL.md` | Auto-created by manager | Default identity file templates |

### Key Simplifications vs. OpenClaw

| Feature | OpenClaw | Aetheel |
|---------|----------|---------|
| **Embedding providers** | OpenAI, Voyage, Gemini, local ONNX (4 providers) | fastembed only (local ONNX, zero API calls) |
| **Vector storage** | sqlite-vec extension (C library) | JSON-serialized in chunks table (pure Python) |
| **File watching** | chokidar (Node.js) | watchdog (Python) |
| **Batch embedding** | OpenAI/Voyage batch APIs, concurrency pools | fastembed batch (single-threaded, local) |
| **Config system** | JSON5 + TypeBox + Zod schemas (100k+ LOC) | Simple Python dataclass |
| **Codebase** | 49 files, 2,300+ LOC manager alone | 6 files, ~600 LOC total |

### What We Kept

- ✅ Same identity file pattern (SOUL.md, USER.md, MEMORY.md)
- ✅ Same hybrid search algorithm (0.7 vector + 0.3 BM25)
- ✅ Same chunking approach (token-based with overlap)
- ✅ Same incremental sync (hash-based change detection)
- ✅ Same FTS5 full-text search with BM25 ranking
- ✅ Same embedding cache (avoids re-computing unchanged chunks)
- ✅ Same session log pattern (daily/ directory)

---

*This memory system is Phase 1 of the Aetheel build process as outlined in `openclaw-analysis.md`.*

docs/opencode-integration-summary.md (new file, 232 lines)

# OpenCode Runtime Integration — Summary

> Integration of OpenCode CLI as the agent runtime for Aetheel.
> Completed: 2026-02-13

---

## Overview

OpenCode CLI has been integrated as the AI "brain" for Aetheel, replacing the placeholder `smart_handler` with a full agent runtime. The architecture is directly inspired by OpenClaw's `cli-runner.ts` and `cli-backends.ts`, adapted for OpenCode's API and Python.

---

## Files Created & Modified

### New Files

| File | Purpose |
|------|---------|
| `agent/__init__.py` | Package init for the agent module |
| `agent/opencode_runtime.py` | Core runtime — ~750 lines covering both CLI and SDK modes |
| `docs/opencode-setup.md` | Comprehensive setup guide |
| `docs/opencode-integration-summary.md` | This summary document |

### Modified Files

| File | Change |
|------|--------|
| `main.py` | Rewired to use `ai_handler` backed by `OpenCodeRuntime` instead of placeholder `smart_handler` |
| `.env.example` | Added all OpenCode config variables |
| `requirements.txt` | Added optional `opencode-ai` SDK dependency note |

---

## Architecture

```
Slack Message → ai_handler() → OpenCodeRuntime.chat() → OpenCode → LLM → Response
```

### Two Runtime Modes

1. **CLI Mode** (default) — Spawns `opencode run` as a subprocess per request. A direct port of OpenClaw's `runCliAgent()` → `runCommandWithTimeout()` pattern from `cli-runner.ts`.

2. **SDK Mode** — Connects to `opencode serve` via the official Python SDK (`opencode-ai`). Uses `client.session.create()` → `client.session.chat()` for lower latency and better session management.
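
Stripped to its core, CLI mode is a subprocess call with a timeout; a sketch under assumed flags and argument order, not the actual `_build_cli_args()` output:

```python
import subprocess

def run_cli(prompt: str, command: str = "opencode", timeout_s: int = 120) -> str:
    """Spawn the CLI once per request and capture its output."""
    try:
        proc = subprocess.run(
            [command, "run", prompt],
            capture_output=True, text=True, timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return "(request timed out)"
    # prefer stdout; fall back to stderr so error text is not lost
    return proc.stdout.strip() or proc.stderr.strip()
```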

### Component Diagram

```
┌─────────────────────┐
│        Slack        │
│     (messages)      │
└──────┬──────────────┘
       │ WebSocket
       │
┌──────▼──────────────┐
│   Slack Adapter     │
│ (slack_adapter.py)  │
│                     │
│ • Socket Mode       │
│ • Event handling    │
│ • Thread isolation  │
└──────┬──────────────┘
       │ ai_handler()
       │
┌──────▼──────────────┐
│  OpenCode Runtime   │
│ (opencode_runtime)  │
│                     │
│ • Session store     │
│ • System prompt     │
│ • Mode routing      │
└──────┬──────────────┘
       │
  ┌────┴────┐
  │         │
  ▼         ▼
CLI Mode   SDK Mode

┌──────────┐   ┌──────────────┐
│ opencode │   │   opencode   │
│   run    │   │  serve API   │
│ (subproc)│   │  (HTTP/SDK)  │
└──────────┘   └──────────────┘
      │               │
      └───────┬───────┘
              │
       ┌──────▼──────┐
       │     LLM     │
       │ (Anthropic, │
       │  OpenAI,    │
       │  Gemini)    │
       └─────────────┘
```

---

## Key Components (OpenClaw → Aetheel Mapping)

| OpenClaw (`cli-runner.ts`) | Aetheel (`opencode_runtime.py`) |
|---|---|
| `CliBackendConfig` | `OpenCodeConfig` dataclass |
| `runCliAgent()` | `OpenCodeRuntime.chat()` |
| `buildCliArgs()` | `_build_cli_args()` |
| `runCommandWithTimeout()` | `subprocess.run(timeout=...)` |
| `parseCliJson()` / `collectText()` | `_parse_cli_output()` / `_collect_text()` |
| `pickSessionId()` | `_extract_session_id()` |
| `buildSystemPrompt()` | `build_aetheel_system_prompt()` |
| Session per thread | `SessionStore` (thread_ts → session_id) |

---

## Key Design Decisions

### 1. Dual-Mode Runtime (CLI + SDK)
- **CLI mode** is the default because it requires no persistent server — just `opencode` in PATH.
- **SDK mode** is preferred for production because it avoids cold-start latency and provides better session management.
- The runtime gracefully falls back from SDK → CLI if the server is unreachable or the SDK is not installed.
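
The fallback amounts to a guarded call; `sdk_chat` and `cli_chat` here are hypothetical callables standing in for the two modes, not the runtime's real API:

```python
def chat_with_fallback(prompt: str, sdk_chat, cli_chat) -> str:
    """Prefer SDK mode; fall back to CLI mode when the server call fails."""
    try:
        return sdk_chat(prompt)
    except (ConnectionError, ImportError, OSError):
        # server unreachable or SDK not installed: degrade to the subprocess path
        return cli_chat(prompt)
```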

### 2. Session Isolation per Thread
- Each Slack thread (`thread_ts`) maps to a unique OpenCode session via the `SessionStore`.
- New threads get new sessions; replies within a thread reuse the same session.
- Stale sessions are cleaned up after `session_ttl_hours` (default 24h).

### 3. System Prompt Injection
- `build_aetheel_system_prompt()` constructs a per-message system prompt with the bot's identity, guidelines, and context (user name, channel, DM vs. mention).
- This mirrors OpenClaw's `buildAgentSystemPrompt()` from `cli-runner/helpers.ts`.

### 4. Output Parsing (from OpenClaw)
- The `_parse_cli_output()` method tries JSON → JSONL → raw text, matching OpenClaw's `parseCliJson()` and `parseCliJsonl()`.
- The `_collect_text()` method recursively traverses JSON objects to find text content, a direct port of OpenClaw's `collectText()`.
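
The two parsing helpers can be sketched as follows (illustrative; the actual `_parse_cli_output()` / `_collect_text()` may differ):

```python
import json

def parse_cli_output(raw: str):
    """Try whole-output JSON, then JSONL (one object per line), then raw text."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    objects = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            objects.append(json.loads(line))
        except json.JSONDecodeError:
            return raw  # not JSONL either: treat the output as plain text
    return objects or raw

def collect_text(node) -> str:
    """Recursively gather string values stored under 'text' keys."""
    if isinstance(node, dict):
        return "".join(
            value if key == "text" and isinstance(value, str) else collect_text(value)
            for key, value in node.items()
        )
    if isinstance(node, list):
        return "".join(collect_text(item) for item in node)
    return ""
```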

### 5. Built-in Commands Bypass AI
- Commands like `status`, `help`, `time`, and `sessions` are handled directly without calling the AI, for instant responses.

---

## Configuration Reference

All settings go in `.env`:

```env
# Runtime mode
OPENCODE_MODE=cli                 # "cli" or "sdk"

# Model (optional — uses OpenCode default if not set)
OPENCODE_MODEL=anthropic/claude-sonnet-4-20250514

# CLI mode settings
OPENCODE_COMMAND=opencode         # path to the opencode binary
OPENCODE_TIMEOUT=120              # seconds before timeout

# SDK mode settings (only needed when OPENCODE_MODE=sdk)
OPENCODE_SERVER_URL=http://localhost:4096
OPENCODE_SERVER_PASSWORD=         # optional HTTP basic auth
OPENCODE_SERVER_USERNAME=opencode # default username

# Workspace directory for OpenCode
OPENCODE_WORKSPACE=/path/to/project

# Output format
OPENCODE_FORMAT=text              # "text" or "json"
```

CLI flags can override config:

```bash
python main.py --cli      # force CLI mode
python main.py --sdk      # force SDK mode
python main.py --model anthropic/claude-sonnet-4-20250514
python main.py --test     # echo-only (no AI)
```

---

## OpenCode Research Summary

### OpenCode CLI
- **What:** Go-based AI coding agent for the terminal
- **Install:** `curl -fsSL https://opencode.ai/install | bash` or `npm install -g opencode-ai`
- **Key commands:**
  - `opencode` — TUI mode
  - `opencode run "prompt"` — non-interactive, returns output
  - `opencode serve` — headless HTTP server (OpenAPI 3.1 spec)
  - `opencode auth login` — configure LLM providers
  - `opencode models` — list available models
  - `opencode init` — generate `AGENTS.md` for a project

### OpenCode Server API (via `opencode serve`)
- Default: `http://localhost:4096`
- Auth: HTTP basic auth via `OPENCODE_SERVER_PASSWORD`
- Key endpoints:
  - `GET /session` — list sessions
  - `POST /session` — create session
  - `POST /session/:id/message` — send message (returns `AssistantMessage`)
  - `POST /session/:id/abort` — abort in-progress request
  - `GET /event` — SSE event stream

### OpenCode Python SDK (`opencode-ai`)
- Install: `pip install opencode-ai`
- Key methods:
  - `client.session.create()` → `Session`
  - `client.session.chat(id, parts=[...])` → `AssistantMessage`
  - `client.session.list()` → `Session[]`
  - `client.session.abort(id)` → abort
  - `client.app.get()` → app info
  - `client.app.providers()` → available providers

---

## Quick Start

1. Install OpenCode: `curl -fsSL https://opencode.ai/install | bash`
2. Configure a provider: `opencode auth login`
3. Test standalone: `opencode run "Hello, what are you?"`
4. Configure `.env` (copy from `.env.example`)
5. Run Aetheel: `python main.py`
6. In Slack: send a message to the bot and get an AI response

---

## Next Steps

1. **Memory System** — Add conversation persistence (SQLite) so sessions survive restarts
2. **Heartbeat** — Proactive messages via cron/scheduler
3. **Skills** — Loadable skill modules (like OpenClaw's `skills/` directory)
4. **Multi-Channel** — Discord, Telegram adapters
5. **Streaming** — Use SSE events from `opencode serve` for real-time streaming responses

docs/opencode-setup.md (new file, 412 lines)

# OpenCode Setup Guide

> Configure OpenCode CLI as the AI brain for Aetheel.

---

## Table of Contents

1. [Overview](#overview)
2. [Install OpenCode](#step-1-install-opencode)
3. [Configure a Provider](#step-2-configure-a-provider)
4. [Choose a Runtime Mode](#step-3-choose-a-runtime-mode)
5. [Configure Aetheel](#step-4-configure-aetheel)
6. [Test the Integration](#step-5-test-the-integration)
7. [Architecture](#architecture)
8. [Troubleshooting](#troubleshooting)

---

## Overview

Aetheel uses [OpenCode](https://opencode.ai) as its AI runtime — the "brain" that generates responses to Slack messages. OpenCode is a terminal-native AI coding agent that supports multiple LLM providers (Anthropic, OpenAI, Google, etc.).

### How It Works

```
Slack Message → Slack Adapter → OpenCode Runtime → LLM → Response → Slack Reply
```

Two runtime modes are available:

| Mode | Description | Best For |
|------|-------------|----------|
| **CLI** (default) | Runs `opencode run` as a subprocess per request | Simple setup, no persistent server |
| **SDK** | Talks to `opencode serve` via HTTP API | Lower latency, persistent sessions |

### Relationship to OpenClaw

This architecture is inspired by OpenClaw's `cli-runner.ts`:

- OpenClaw spawns CLI agents (Claude CLI, Codex CLI) as subprocesses
- Each CLI call gets: model args, session ID, system prompt, timeout
- Output is parsed from JSON/JSONL to extract the response text
- Sessions are mapped per-thread for conversation isolation

We replicate this pattern in Python, adapted for OpenCode's API.

---

## Step 1: Install OpenCode

### macOS / Linux (recommended)

```bash
curl -fsSL https://opencode.ai/install | bash
```

### npm (all platforms)

```bash
npm install -g opencode-ai
```

### Homebrew (macOS)

```bash
brew install anomalyco/tap/opencode
```

### Verify

```bash
opencode --version
```

---

## Step 2: Configure a Provider

OpenCode needs at least one LLM provider configured. Run:

```bash
opencode auth login
```

This will guide you through connecting to a provider. Options include:

| Provider | Auth Method |
|----------|-------------|
| **OpenCode Zen** | Token-based (opencode.ai account) |
| **Anthropic** | API key (`ANTHROPIC_API_KEY`) |
| **OpenAI** | API key (`OPENAI_API_KEY`) |
| **Google** | API key (`GEMINI_API_KEY`) |

### Using Environment Variables

Alternatively, set provider API keys in your `.env`:

```env
# Anthropic
ANTHROPIC_API_KEY=sk-ant-...

# OpenAI
OPENAI_API_KEY=sk-...

# Google Gemini
GEMINI_API_KEY=AI...
```

### Verify models are available

```bash
opencode models
```

---

## Step 3: Choose a Runtime Mode

### CLI Mode (Default — Recommended to Start)

CLI mode spawns `opencode run` for each message. No persistent server needed.

**Pros:**
- ✅ Simple — just install OpenCode and go
- ✅ No server to manage
- ✅ Isolated — each request is independent

**Cons:**
- ⚠️ Higher latency (cold start per request)
- ⚠️ Limited session continuity (uses the `--continue` flag)

```env
OPENCODE_MODE=cli
```

### SDK Mode (Advanced — Lower Latency)

SDK mode talks to a running `opencode serve` instance via HTTP.

**Pros:**
- ✅ Lower latency (warm server, no cold start)
- ✅ Better session management
- ✅ Full API access

**Cons:**
- ⚠️ Requires running `opencode serve` separately
- ⚠️ Needs the `opencode-ai` Python package

```env
OPENCODE_MODE=sdk
```

#### Start the OpenCode server:

```bash
# Terminal 1: Start the headless server
opencode serve --port 4096

# Optional: with authentication
OPENCODE_SERVER_PASSWORD=my-secret opencode serve
```

#### Install the Python SDK:

```bash
pip install opencode-ai
```

---

## Step 4: Configure Aetheel

Edit your `.env` file:

```env
# --- Slack (see docs/slack-setup.md) ---
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...

# --- OpenCode Runtime ---
OPENCODE_MODE=cli
# OPENCODE_MODEL=anthropic/claude-sonnet-4-20250514
OPENCODE_TIMEOUT=120

# --- SDK mode only ---
# OPENCODE_SERVER_URL=http://localhost:4096
# OPENCODE_SERVER_PASSWORD=

LOG_LEVEL=INFO
```

### Model Selection

You can specify a model explicitly, or let OpenCode use its default:

```env
# Anthropic Claude
OPENCODE_MODEL=anthropic/claude-sonnet-4-20250514

# OpenAI GPT-5
OPENCODE_MODEL=openai/gpt-5.1

# Google Gemini
OPENCODE_MODEL=google/gemini-3-pro

# OpenCode Zen (pay-as-you-go)
OPENCODE_MODEL=opencode/claude-opus-4-6
```

Or override at launch:

```bash
python main.py --model anthropic/claude-sonnet-4-20250514
```

---

## Step 5: Test the Integration

### 1. Verify OpenCode works standalone

```bash
# Quick test
opencode run "What is Python?"

# With a specific model
opencode run --model anthropic/claude-sonnet-4-20250514 "Hello"
```

### 2. Test the runtime directly

```bash
# Quick Python test
python -c "
from agent.opencode_runtime import OpenCodeRuntime
runtime = OpenCodeRuntime()
print(runtime.get_status())
response = runtime.chat('Hello, what are you?')
print(f'Response: {response.text[:200]}')
print(f'OK: {response.ok}, Duration: {response.duration_ms}ms')
"
```

### 3. Test via Slack

```bash
# Start in test mode first (echo only, no AI)
python main.py --test

# Then start with AI
python main.py

# Or force a specific mode
python main.py --cli
python main.py --sdk
```

### 4. In Slack

- Send `status` — see the runtime status
- Send `help` — see available commands
- Send any question — get an AI response
- Reply in a thread — conversation continues in context

---
## Architecture
|
||||
|
||||
### Component Diagram
|
||||
|
||||
```
|
||||
┌─────────────────────┐
|
||||
│ Slack │
|
||||
│ (messages) │
|
||||
└──────┬──────────────┘
|
||||
│ WebSocket
|
||||
│
|
||||
┌──────▼──────────────┐
|
||||
│ Slack Adapter │
|
||||
│ (slack_adapter.py) │
|
||||
│ │
|
||||
│ • Socket Mode │
|
||||
│ • Event handling │
|
||||
│ • Thread isolation │
|
||||
└──────┬──────────────┘
|
||||
│ ai_handler()
|
||||
│
|
||||
┌──────▼──────────────┐
|
||||
│ OpenCode Runtime │
|
||||
│ (opencode_runtime) │
|
||||
│ │
|
||||
│ • Session store │
|
||||
│ • System prompt │
|
||||
│ • Mode routing │
|
||||
└──────┬──────────────┘
|
||||
│
|
||||
┌────┴────┐
|
||||
│ │
|
||||
▼ ▼
|
||||
CLI Mode SDK Mode
|
||||
|
||||
┌──────────┐ ┌──────────────┐
|
||||
│ opencode │ │ opencode │
|
||||
│ run │ │ serve API │
|
||||
│ (subproc)│ │ (HTTP/SDK) │
|
||||
└──────────┘ └──────────────┘
|
||||
│ │
|
||||
└──────┬───────┘
|
||||
│
|
||||
┌──────▼──────┐
|
||||
│ LLM │
|
||||
│ (Anthropic, │
|
||||
│ OpenAI, │
|
||||
│ Gemini) │
|
||||
└─────────────┘
|
||||
```

### How OpenClaw Inspired This

| OpenClaw Pattern | Aetheel Implementation |
|------------------|----------------------|
| `cli-runner.ts` → `runCliAgent()` | `opencode_runtime.py` → `OpenCodeRuntime.chat()` |
| `cli-backends.ts` → `CliBackendConfig` | `OpenCodeConfig` dataclass |
| `buildCliArgs()` | `_build_cli_args()` |
| `runCommandWithTimeout()` | `subprocess.run(timeout=...)` |
| `parseCliJson()` / `collectText()` | `_parse_cli_output()` / `_collect_text()` |
| `pickSessionId()` | `_extract_session_id()` |
| `buildSystemPrompt()` | `build_aetheel_system_prompt()` |
| Session per thread | `SessionStore` mapping conversation_id → session_id |

### File Map

| File | Purpose |
|------|---------|
| `agent/__init__.py` | Agent package init |
| `agent/opencode_runtime.py` | OpenCode runtime (CLI + SDK modes) |
| `adapters/slack_adapter.py` | Slack Socket Mode adapter |
| `main.py` | Entry point with AI handler |
| `docs/opencode-setup.md` | This setup guide |
| `docs/slack-setup.md` | Slack bot setup guide |

---

## Troubleshooting

### ❌ "opencode not found in PATH"

**Fix:** Install OpenCode:

```bash
curl -fsSL https://opencode.ai/install | bash
```

Then verify:

```bash
opencode --version
```

### ❌ "CLI command failed" or empty responses

**Check:**

1. Verify OpenCode works standalone: `opencode run "Hello"`
2. Check that a provider is configured: `opencode auth login`
3. Check that the model is available: `opencode models`
4. Check that your API key is set (e.g., `ANTHROPIC_API_KEY`)
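
The first and last checks can be automated with a short preflight helper. A sketch, not actual Aetheel code; the required environment variables depend on your provider:

```python
import os
import shutil

def preflight(required_env: tuple[str, ...] = ("ANTHROPIC_API_KEY",)) -> list[str]:
    """Return a list of problems found before starting the bot."""
    problems = []
    # Same check as the "not found in PATH" error above.
    if shutil.which("opencode") is None:
        problems.append("opencode not found in PATH")
    for key in required_env:
        if not os.environ.get(key):
            problems.append(f"{key} is not set")
    return problems
```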

### ❌ "Request timed out"

**Fix:** Increase the timeout:

```env
OPENCODE_TIMEOUT=300
```

Or simplify your prompt — complex prompts take longer.

### ❌ SDK mode: "connection test failed"

**Fix:**

1. Make sure `opencode serve` is running: `opencode serve --port 4096`
2. Check the URL in `.env`: `OPENCODE_SERVER_URL=http://localhost:4096`
3. If auth is enabled, set `OPENCODE_SERVER_PASSWORD` both in `.env` and when starting the server

### ❌ "opencode-ai SDK not installed"

**Fix:**

```bash
pip install opencode-ai
```

If you don't want to install the SDK, switch to CLI mode:

```env
OPENCODE_MODE=cli
```

### ❌ Responses are cut off or garbled

This usually means output-format parsing failed.

**Fix:** Try setting the format to text:

```env
OPENCODE_FORMAT=text
```

---

## Next Steps

1. **Memory System** — Add conversation persistence (SQLite)
2. **Heartbeat** — Proactive messages via cron/scheduler
3. **Skills** — Loadable skill modules (like OpenClaw's skills/)
4. **Multi-Channel** — Discord, Telegram adapters

363
docs/slack-setup.md
Normal file
@@ -0,0 +1,363 @@
# Slack Bot Setup Guide

> Complete guide to creating a Slack bot and connecting it to Aetheel.

---

## Table of Contents

1. [Overview](#overview)
2. [Create a Slack App](#step-1-create-a-slack-app)
3. [Configure Bot Permissions](#step-2-configure-bot-permissions)
4. [Enable Socket Mode](#step-3-enable-socket-mode)
5. [Enable Event Subscriptions](#step-4-enable-event-subscriptions)
6. [Install the App to Your Workspace](#step-5-install-the-app-to-your-workspace)
7. [Get Your Tokens](#step-6-get-your-tokens)
8. [Configure Aetheel](#step-7-configure-aetheel)
9. [Run and Test](#step-8-run-and-test)
10. [Troubleshooting](#troubleshooting)
11. [Architecture Reference](#architecture-reference)

---

## Overview

Aetheel connects to Slack using **Socket Mode**, which means:
- ✅ **No public URL needed** — works behind firewalls and NAT
- ✅ **No webhook setup** — Slack pushes events via WebSocket
- ✅ **Real-time** — instant message delivery
- ✅ **Secure** — encrypted WebSocket connection

This is the same approach used by [OpenClaw](https://github.com/openclaw/openclaw) (see `src/slack/monitor/provider.ts`), which uses `@slack/bolt` with `socketMode: true`.
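
For reference, the minimal `slack_bolt` Socket Mode program looks like the sketch below; Aetheel's adapter adds chunking, thread isolation, and name resolution on top. The thread-selection helper mirrors the adapter's behavior as described in this guide:

```python
import os

def reply_thread_ts(event: dict) -> str:
    """Reply in the existing thread if any, else thread off the message itself."""
    return event.get("thread_ts") or event["ts"]

def main() -> None:
    # Deferred import so the helper above is testable without slack_bolt installed.
    from slack_bolt import App
    from slack_bolt.adapter.socket_mode import SocketModeHandler

    app = App(token=os.environ["SLACK_BOT_TOKEN"])  # xoxb- token for API calls

    @app.event("app_mention")
    def handle_mention(event, say):
        say(text=f"You said: {event['text']}", thread_ts=reply_thread_ts(event))

    # xapp- token opens the Socket Mode WebSocket.
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()

if __name__ == "__main__":
    main()
```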

### What You'll Need

| Item | Description |
|------|-------------|
| **Slack Workspace** | A Slack workspace where you have admin permissions |
| **Bot Token** | `xoxb-...` — for API calls (sending messages, reading info) |
| **App Token** | `xapp-...` — for the Socket Mode connection |
| **Python 3.10+** | Runtime for the Aetheel service |

---

## Step 1: Create a Slack App

1. Go to [https://api.slack.com/apps](https://api.slack.com/apps)
2. Click **"Create New App"**
3. Choose **"From scratch"**
4. Fill in:
   - **App Name:** `Aetheel` (or any name you prefer)
   - **Workspace:** Select your workspace
5. Click **"Create App"**

You'll be taken to your app's **Basic Information** page.

---

## Step 2: Configure Bot Permissions

Navigate to **OAuth & Permissions** in the left sidebar.

Scroll down to **Scopes** → **Bot Token Scopes** and add the following:

### Required Scopes

| Scope | Purpose |
|-------|---------|
| `app_mentions:read` | Receive @mentions in channels |
| `channels:history` | Read messages in public channels |
| `channels:read` | View basic channel info |
| `chat:write` | Send messages |
| `groups:history` | Read messages in private channels |
| `groups:read` | View private channel info |
| `im:history` | Read direct messages |
| `im:read` | View DM info |
| `im:write` | Open DM conversations |
| `mpim:history` | Read group DMs |
| `mpim:read` | View group DM info |
| `users:read` | Look up user info (for display names) |

### Optional Scopes (for future features)

| Scope | Purpose |
|-------|---------|
| `files:read` | Read files shared in messages |
| `files:write` | Upload files |
| `reactions:read` | Read emoji reactions |
| `reactions:write` | Add emoji reactions |

> **Tip:** You can always add more scopes later, but you'll need to reinstall the app.

---

## Step 3: Enable Socket Mode

1. Navigate to **Socket Mode** in the left sidebar
2. Toggle **"Enable Socket Mode"** to **ON**
3. You'll be prompted to create an **App-Level Token**:
   - **Token Name:** `aetheel-socket` (or any name)
   - **Scopes:** Add `connections:write`
4. Click **"Generate"**
5. **⚠️ Copy the `xapp-...` token now!** You won't be able to see it again.
   - Save it somewhere safe — you'll need it in Step 6.

---

## Step 4: Enable Event Subscriptions

1. Navigate to **Event Subscriptions** in the left sidebar
2. Toggle **"Enable Events"** to **ON**
3. Under **Subscribe to bot events**, add:

| Event | Description |
|-------|-------------|
| `message.channels` | Messages in public channels the bot is in |
| `message.groups` | Messages in private channels the bot is in |
| `message.im` | Direct messages to the bot |
| `message.mpim` | Group DMs that include the bot |
| `app_mention` | When someone @mentions the bot |

4. Click **"Save Changes"**

> **Note:** With Socket Mode enabled, you do NOT need a Request URL.

---

## Step 5: Install the App to Your Workspace

1. Navigate to **Install App** in the left sidebar
2. Click **"Install to Workspace"**
3. Review the permissions and click **"Allow"**
4. You'll see the **Bot User OAuth Token** (`xoxb-...`) — copy it!

> After installation, invite the bot to any channels where you want it to respond:
> - In Slack, go to the channel
> - Type `/invite @Aetheel` (or your bot's name)

---

## Step 6: Get Your Tokens

After completing the steps above, you should have two tokens:

| Token | Format | Where to Find |
|-------|--------|---------------|
| **Bot Token** | `xoxb-1234-5678-abc...` | **OAuth & Permissions** → Bot User OAuth Token |
| **App Token** | `xapp-1-A0123-456...` | **Basic Information** → App-Level Tokens (or from Step 3) |

---

## Step 7: Configure Aetheel

### Option A: Using a `.env` file (recommended)

```bash
# Copy the example env file
cp .env.example .env

# Edit .env with your tokens
```

Edit `.env`:

```env
SLACK_BOT_TOKEN=xoxb-your-actual-bot-token
SLACK_APP_TOKEN=xapp-your-actual-app-token
LOG_LEVEL=INFO
```

### Option B: Export environment variables

```bash
export SLACK_BOT_TOKEN="xoxb-your-actual-bot-token"
export SLACK_APP_TOKEN="xapp-your-actual-app-token"
```
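
Whichever option you use, it helps to fail fast with a clear message when a token is missing or malformed. A sketch, not the actual `main.py` startup code; the stripping of stray quotes and whitespace matches the troubleshooting advice below:

```python
import os

def load_slack_tokens() -> tuple[str, str]:
    """Read and sanity-check the two Slack tokens from the environment."""
    bot = os.environ.get("SLACK_BOT_TOKEN", "").strip().strip('"')
    app = os.environ.get("SLACK_APP_TOKEN", "").strip().strip('"')
    if not bot.startswith("xoxb-"):
        raise SystemExit("SLACK_BOT_TOKEN is missing or does not start with xoxb-")
    if not app.startswith("xapp-"):
        raise SystemExit("SLACK_APP_TOKEN is missing or does not start with xapp-")
    return bot, app
```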

---

## Step 8: Run and Test

### Install dependencies

```bash
pip install -r requirements.txt
```

### Run the bot

```bash
# Start with the smart handler (default)
python main.py

# Start in test/echo mode
python main.py --test

# Start with debug logging
python main.py --log DEBUG
```

### Test sending and receiving

```bash
# Run the test suite — sends test messages to a channel
python test_slack.py --channel C0123456789

# Or send a DM test
python test_slack.py --dm U0123456789

# Send-only (no listening)
python test_slack.py --channel C0123456789 --send-only
```

### Verify it's working

1. **In Slack**, go to a channel where the bot is invited
2. Type `@Aetheel help` — you should see the help response
3. Type `@Aetheel status` — you should see the bot's status
4. Send the bot a DM — it should echo back with details

---

## Troubleshooting

### ❌ "Slack bot token is required"

**Problem:** `SLACK_BOT_TOKEN` is not set or empty.

**Fix:**

1. Check that your `.env` file exists and contains the token
2. Make sure there are no extra spaces or quotes around the token
3. Verify the token starts with `xoxb-`

### ❌ "Slack app-level token is required for Socket Mode"

**Problem:** `SLACK_APP_TOKEN` is not set.

**Fix:**

1. Go to your Slack app → **Basic Information** → **App-Level Tokens**
2. If no token exists, generate one with the `connections:write` scope
3. Add it to your `.env` file

### ❌ "not_authed" or "invalid_auth"

**Problem:** The bot token is invalid or revoked.

**Fix:**

1. Go to **OAuth & Permissions** → check the Bot User OAuth Token
2. If it says "Not installed", reinstall the app
3. If you recently changed scopes, you need to reinstall

### ❌ Bot doesn't respond in channels

**Problem:** The bot is not invited to the channel, or you're not @mentioning it.

**Fix:**

1. In the Slack channel, type `/invite @Aetheel`
2. Make sure you @mention the bot: `@Aetheel hello`
3. For DMs, just message the bot directly — no @mention needed

### ❌ "channel_not_found" when sending

**Problem:** Using a channel name instead of its ID, or the bot isn't in the channel.

**Fix:**

1. Use the channel **ID**, not its name. Find it in Slack:
   - Right-click the channel name → "View channel details"
   - The ID is at the bottom (it starts with `C`)
2. Invite the bot to the channel first

### ❌ Socket Mode connection drops

**Problem:** The WebSocket connection is unstable.

**Fix:**

1. Check your internet connection
2. The SDK reconnects automatically — this is usually transient
3. If it persists, check Slack's [status page](https://status.slack.com/)

### ❌ "missing_scope"

**Problem:** The bot token doesn't have the required OAuth scopes.

**Fix:**

1. Go to **OAuth & Permissions** → **Bot Token Scopes**
2. Add the missing scope mentioned in the error
3. **Reinstall the app** (scope changes require reinstallation)

---

## Architecture Reference

### How It Works

```
┌──────────────────────┐
│     Your Slack       │
│     Workspace        │
│                      │
│  #general            │
│  #random             │
│  DMs                 │
└──────┬───────────────┘
       │ WebSocket (Socket Mode)
       │
┌──────▼───────────────┐
│   Aetheel Slack      │
│   Adapter            │
│                      │
│ • Token resolution   │
│ • Event handling     │
│ • Thread isolation   │
│ • Message chunking   │
│ • User/channel       │
│   name resolution    │
└──────┬───────────────┘
       │ Callback
       │
┌──────▼───────────────┐
│   Message Handler    │
│                      │
│ • Echo (test)        │
│ • Smart (commands)   │
│ • AI (future)        │
└──────────────────────┘
```

### Key Files

| File | Purpose |
|------|---------|
| `adapters/slack_adapter.py` | Core Slack adapter (Socket Mode, send/receive) |
| `main.py` | Entry point with echo and smart handlers |
| `test_slack.py` | Integration test suite |
| `.env` | Your Slack tokens (not committed to git) |
| `.env.example` | Token template |
| `requirements.txt` | Python dependencies |

### Comparison with OpenClaw

| Feature | OpenClaw (TypeScript) | Aetheel (Python) |
|---------|----------------------|------------------|
| **Library** | `@slack/bolt` | `slack_bolt` (official Python SDK) |
| **Mode** | Socket Mode (`socketMode: true`) | Socket Mode (`SocketModeHandler`) |
| **Auth** | `auth.test()` for identity | `auth_test()` for identity |
| **Sending** | `chat.postMessage` with chunking | `chat_postMessage` with chunking |
| **Threading** | `thread_ts` for conversation isolation | `thread_ts` for conversation isolation |
| **DM Handling** | `conversations.open` for user DMs | `conversations_open` for user DMs |
| **Text Limit** | 4000 chars (chunked) | 4000 chars (chunked) |
| **Config** | JSON5 config file | `.env` file |
| **Accounts** | Multi-account support | Single account (MVP) |
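
The 4000-character chunking in the table can be sketched as follows. A simplification: the real adapter may also avoid splitting inside code fences:

```python
def chunk_message(text: str, limit: int = 4000) -> list[str]:
    """Split text into Slack-sized chunks, preferring newline boundaries."""
    chunks = []
    while len(text) > limit:
        # Break at the last newline inside the window when there is one.
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit
        chunks.append(text[:cut])
        # The leading newline of the remainder is dropped, not re-sent.
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```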

---

## Next Steps

Once the Slack adapter is working, you can:

1. **Connect AI** — Replace the echo handler with an AI-powered handler (Claude API)
2. **Add Memory** — Integrate the memory system for conversation context
3. **Add Heartbeat** — Set up proactive notifications via Slack
4. **Add Skills** — Load skills from the `.claude/skills/` directory

See the main [OpenClaw Analysis](../openclaw-analysis.md) for the full architecture plan.