move dev/analysis files to archive, clean up repo

This commit is contained in:
Tanmay Karande
2026-02-14 00:04:16 -05:00
parent e708dd2a8e
commit 438bb80416
5 changed files with 1 addition and 1003 deletions

.gitignore vendored
View File

@@ -10,6 +10,7 @@ dist/
wheels/
*.egg-info
inspiration/
archive/
# Virtual environments
.venv

View File

@@ -1,232 +0,0 @@
# OpenCode Runtime Integration — Summary
> Integration of OpenCode CLI as the agent runtime for Aetheel.
> Completed: 2026-02-13
---
## Overview
OpenCode CLI has been integrated as the AI "brain" for Aetheel, replacing the placeholder `smart_handler` with a full agent runtime. The architecture is directly inspired by OpenClaw's `cli-runner.ts` and `cli-backends.ts`, adapted for OpenCode's API and Python.
---
## Files Created & Modified
### New Files
| File | Purpose |
|------|---------|
| `agent/__init__.py` | Package init for the agent module |
| `agent/opencode_runtime.py` | Core runtime — ~750 lines covering both CLI and SDK modes |
| `docs/opencode-setup.md` | Comprehensive setup guide |
| `docs/opencode-integration-summary.md` | This summary document |
### Modified Files
| File | Change |
|------|--------|
| `main.py` | Rewired to use `ai_handler` backed by `OpenCodeRuntime` instead of placeholder `smart_handler` |
| `.env.example` | Added all OpenCode config variables |
| `requirements.txt` | Added optional `opencode-ai` SDK dependency note |
---
## Architecture
```
Slack Message → ai_handler() → OpenCodeRuntime.chat() → OpenCode → LLM → Response
```
### Two Runtime Modes
1. **CLI Mode** (default) — Spawns `opencode run` as a subprocess per request.
Direct port of OpenClaw's `runCliAgent()` → `runCommandWithTimeout()` pattern
from `cli-runner.ts`.
2. **SDK Mode** — Connects to `opencode serve` via the official Python SDK
(`opencode-ai`). Uses `client.session.create()` → `client.session.chat()`
for lower latency and better session management.
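At its core, CLI mode is a timed subprocess call. A minimal sketch, assuming only the `opencode run "prompt"` invocation documented below (the function name and error handling are illustrative, not the runtime's actual API):

```python
import subprocess

def run_cli_prompt(prompt: str, command: str = "opencode", timeout: int = 120) -> str:
    """Spawn one `opencode run` subprocess per request, with a hard timeout.

    Illustrative sketch of CLI mode; real invocations may pass extra
    flags (model, output format) built by _build_cli_args().
    """
    result = subprocess.run(
        [command, "run", prompt],   # e.g. `opencode run "prompt"`
        capture_output=True,
        text=True,
        timeout=timeout,            # raises subprocess.TimeoutExpired on overrun
    )
    if result.returncode != 0:
        raise RuntimeError(f"opencode exited {result.returncode}: {result.stderr.strip()}")
    return result.stdout.strip()
```

The `timeout=` argument gives the same guarantee as OpenClaw's `runCommandWithTimeout()`: a hung agent process cannot block the Slack handler forever.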
### Component Diagram
```
┌─────────────────────┐
│ Slack │
│ (messages) │
└──────┬──────────────┘
│ WebSocket
┌──────▼──────────────┐
│ Slack Adapter │
│ (slack_adapter.py) │
│ │
│ • Socket Mode │
│ • Event handling │
│ • Thread isolation │
└──────┬──────────────┘
│ ai_handler()
┌──────▼──────────────┐
│ OpenCode Runtime │
│ (opencode_runtime) │
│ │
│ • Session store │
│ • System prompt │
│ • Mode routing │
└──────┬──────────────┘
┌────┴────┐
│ │
▼ ▼
CLI Mode SDK Mode
┌──────────┐ ┌──────────────┐
│ opencode │ │ opencode │
│ run │ │ serve API │
│ (subproc)│ │ (HTTP/SDK) │
└──────────┘ └──────────────┘
│ │
└──────┬───────┘
┌──────▼──────┐
│ LLM │
│ (Anthropic, │
│ OpenAI, │
│ Gemini) │
└─────────────┘
```
---
## Key Components (OpenClaw → Aetheel Mapping)
| OpenClaw (`cli-runner.ts`) | Aetheel (`opencode_runtime.py`) |
|---|---|
| `CliBackendConfig` | `OpenCodeConfig` dataclass |
| `runCliAgent()` | `OpenCodeRuntime.chat()` |
| `buildCliArgs()` | `_build_cli_args()` |
| `runCommandWithTimeout()` | `subprocess.run(timeout=...)` |
| `parseCliJson()` / `collectText()` | `_parse_cli_output()` / `_collect_text()` |
| `pickSessionId()` | `_extract_session_id()` |
| `buildSystemPrompt()` | `build_aetheel_system_prompt()` |
| Session per thread | `SessionStore` (thread_ts → session_id) |
---
## Key Design Decisions
### 1. Dual-Mode Runtime (CLI + SDK)
- **CLI mode** is the default because it requires no persistent server — just `opencode` in PATH.
- **SDK mode** is preferred for production because it avoids cold-start latency and provides better session management.
- The runtime gracefully falls back from SDK → CLI if the server is unreachable or the SDK is not installed.
### 2. Session Isolation per Thread
- Each Slack thread (`thread_ts`) maps to a unique OpenCode session via the `SessionStore`.
- New threads get new sessions; replies within a thread reuse the same session.
- Stale sessions are cleaned up after `session_ttl_hours` (default 24h).
### 3. System Prompt Injection
- `build_aetheel_system_prompt()` constructs a per-message system prompt with the bot's identity, guidelines, and context (user name, channel, DM vs. mention).
- This mirrors OpenClaw's `buildAgentSystemPrompt()` from `cli-runner/helpers.ts`.
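A minimal sketch of that per-message assembly — the signature and prompt text are placeholders, not the actual contents of `build_aetheel_system_prompt()`:

```python
def build_system_prompt(bot_name: str, user_name: str,
                        channel: str, is_dm: bool) -> str:
    """Assemble identity + guidelines + message context into one prompt."""
    context = "a direct message" if is_dm else f"a mention in #{channel}"
    return (
        f"You are {bot_name}, a helpful Slack assistant.\n"
        f"You are replying to {user_name} via {context}.\n"
        "Keep answers concise and use Slack-friendly formatting."
    )
```

Building the prompt fresh per message is what lets context (channel, DM vs. mention, user name) vary without mutating the session.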
### 4. Output Parsing (from OpenClaw)
- The `_parse_cli_output()` method tries JSON → JSONL → raw text, matching OpenClaw's `parseCliJson()` and `parseCliJsonl()`.
- The `_collect_text()` method recursively traverses JSON objects to find text content, a direct port of OpenClaw's `collectText()`.
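The fallback chain can be sketched as follows (illustrative stand-ins for the private methods, not their actual bodies):

```python
import json

def parse_cli_output(raw: str):
    """Try whole-output JSON, then JSONL, then fall back to raw text."""
    raw = raw.strip()
    try:
        return json.loads(raw)              # 1) one JSON document
    except json.JSONDecodeError:
        pass
    events = []
    for line in raw.splitlines():           # 2) JSONL: one object per line
        try:
            events.append(json.loads(line))
        except json.JSONDecodeError:
            events = []
            break
    return events if events else raw        # 3) raw text as last resort

def collect_text(node) -> str:
    """Recursively gather "text" fields from parsed JSON (cf. collectText())."""
    if isinstance(node, dict):
        parts = [node["text"]] if isinstance(node.get("text"), str) else []
        parts += [collect_text(v) for v in node.values()
                  if isinstance(v, (dict, list))]
        return "".join(parts)
    if isinstance(node, list):
        return "".join(collect_text(v) for v in node)
    return ""
```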
### 5. Built-in Commands Bypass AI
- Commands like `status`, `help`, `time`, and `sessions` are handled directly without calling the AI, for instant responses.
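The bypass is a simple dispatch-before-AI check; a sketch with an illustrative command table (the real set and wording live in the handler):

```python
from datetime import datetime

# Illustrative built-ins; answered locally, no LLM round-trip.
BUILTINS = {
    "help": lambda: "Commands: status, help, time, sessions",
    "time": lambda: datetime.now().isoformat(timespec="seconds"),
}

def dispatch(text: str, ai_handler):
    """Answer built-in commands instantly; route everything else to the AI."""
    cmd = text.strip().lower()
    if cmd in BUILTINS:
        return BUILTINS[cmd]()
    return ai_handler(text)
```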
---
## Configuration Reference
All settings go in `.env`:
```env
# Runtime mode
OPENCODE_MODE=cli # "cli" or "sdk"
# Model (optional — uses OpenCode default if not set)
OPENCODE_MODEL=anthropic/claude-sonnet-4-20250514
# CLI mode settings
OPENCODE_COMMAND=opencode # path to the opencode binary
OPENCODE_TIMEOUT=120 # seconds before timeout
# SDK mode settings (only needed when OPENCODE_MODE=sdk)
OPENCODE_SERVER_URL=http://localhost:4096
OPENCODE_SERVER_PASSWORD= # optional HTTP basic auth
OPENCODE_SERVER_USERNAME=opencode # default username
# Workspace directory for OpenCode
OPENCODE_WORKSPACE=/path/to/project
# Output format
OPENCODE_FORMAT=text # "text" or "json"
```
CLI flags can override config:
```bash
python main.py --cli # force CLI mode
python main.py --sdk # force SDK mode
python main.py --model anthropic/claude-sonnet-4-20250514
python main.py --test # echo-only (no AI)
```
---
## OpenCode Research Summary
### OpenCode CLI
- **What:** Go-based AI coding agent for the terminal
- **Install:** `curl -fsSL https://opencode.ai/install | bash` or `npm install -g opencode-ai`
- **Key commands:**
- `opencode` — TUI mode
- `opencode run "prompt"` — non-interactive, returns output
- `opencode serve` — headless HTTP server (OpenAPI 3.1 spec)
- `opencode auth login` — configure LLM providers
- `opencode models` — list available models
- `opencode init` — generate `AGENTS.md` for a project
### OpenCode Server API (via `opencode serve`)
- Default: `http://localhost:4096`
- Auth: HTTP basic auth via `OPENCODE_SERVER_PASSWORD`
- Key endpoints:
- `GET /session` — list sessions
- `POST /session` — create session
- `POST /session/:id/message` — send message (returns `AssistantMessage`)
- `POST /session/:id/abort` — abort in-progress request
- `GET /event` — SSE event stream
### OpenCode Python SDK (`opencode-ai`)
- Install: `pip install opencode-ai`
- Key methods:
  - `client.session.create()` → `Session`
  - `client.session.chat(id, parts=[...])` → `AssistantMessage`
  - `client.session.list()` → `Session[]`
- `client.session.abort(id)` → abort
- `client.app.get()` → app info
- `client.app.providers()` → available providers
---
## Quick Start
1. Install OpenCode: `curl -fsSL https://opencode.ai/install | bash`
2. Configure a provider: `opencode auth login`
3. Test standalone: `opencode run "Hello, what are you?"`
4. Configure `.env` (copy from `.env.example`)
5. Run Aetheel: `python main.py`
6. In Slack: send a message to the bot and get an AI response
---
## Next Steps
1. **Memory System** — Add conversation persistence (SQLite) so sessions survive restarts
2. **Heartbeat** — Proactive messages via cron/scheduler
3. **Skills** — Loadable skill modules (like OpenClaw's `skills/` directory)
4. **Multi-Channel** — Discord, Telegram adapters
5. **Streaming** — Use SSE events from `opencode serve` for real-time streaming responses

View File

@@ -1,414 +0,0 @@
# OpenClaw Analysis & "My Own OpenClaw" Comparison
> **Date:** 2026-02-13
> **Source Repo:** `inspiration/openclaw/` (local clone)
> **Diagram Reference:** `inspiration/MyOwnOpenClaw.png`
---
## Table of Contents
1. [What Is OpenClaw?](#what-is-openclaw)
2. [OpenClaw Architecture Deep Dive](#openclaw-architecture-deep-dive)
3. [MyOwnOpenClaw — The Simplified Blueprint](#myownopenclaw--the-simplified-blueprint)
4. [Side-by-Side Comparison](#side-by-side-comparison)
5. [Key Takeaways for Building Our Own](#key-takeaways-for-building-our-own)
6. [Recommended Build Process for Aetheel](#recommended-build-process-for-aetheel)
---
## 1. What Is OpenClaw?
OpenClaw is an **open-source personal AI assistant** (MIT licensed, 176k+ stars, 443 contributors, 175k+ lines of TypeScript). It runs locally on your own devices and acts as a **gateway-centric control plane** that connects an AI agent to every messaging channel you already use.
**Core value proposition:** A single, always-on AI assistant that talks to you through WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Microsoft Teams, Google Chat, Matrix, WebChat, and more — while keeping everything local and under your control.
---
## 2. OpenClaw Architecture Deep Dive
### 2.1 The Four Pillars
Based on both the source code analysis and the `MyOwnOpenClaw.png` diagram, OpenClaw's architecture rests on **four core subsystems**:
---
### Pillar 1: Memory System — "How It Remembers You"
**Source files:** `src/memory/` (49 files, including `manager.ts` at 2,300+ lines)
**How it works:**
| Component | Details |
|-----------|---------|
| **Identity Files** | `SOUL.md` — personality & values; `USER.md` — who you are; `AGENTS.md` — agent behavior rules; `HEARTBEAT.md` — what to proactively check |
| **Long-term Memory** | `MEMORY.md` — persisted decisions, lessons, context |
| **Session Logs** | `daily/` — session logs organized by date |
| **Search** | **Hybrid search** = vector (embeddings) + keyword (BM25) via `sqlite-vec` or `pgvector` |
| **Embedding Providers** | OpenAI, Voyage AI, Gemini, or local via `node-llama-cpp` (ONNX) |
| **Storage** | SQLite database with `sqlite-vec` extension for vector similarity |
| **Sync** | File watcher (chokidar) monitors workspace for changes, auto-re-indexes |
**Key architectural details from the code:**
- `MemoryIndexManager` class (2,300 LOC) manages the full lifecycle: sync → chunk → embed → store → search
- Hybrid search weighting: configurable vector weight + keyword weight (default 0.7 × vector + 0.3 × keyword as shown in the diagram)
- Supports batch embedding with Voyage, OpenAI, and Gemini batch APIs
- FTS5 full-text search table for keyword matching
- Vector table via `sqlite-vec` for similarity search
- Automatic chunking with configurable token sizes and overlap
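The weighted blend itself is one line; a sketch assuming both scores are already normalized to [0, 1] (the function name is illustrative):

```python
def hybrid_score(vector_sim: float, bm25_score: float,
                 vector_weight: float = 0.7) -> float:
    """Blend vector similarity with BM25 keyword score using the
    configurable weighting described above (default 0.7 / 0.3)."""
    return vector_weight * vector_sim + (1.0 - vector_weight) * bm25_score
```

In practice the harder work is the normalization: raw BM25 scores are unbounded, so they must be rescaled before this blend is meaningful.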
---
### Pillar 2: Heartbeat — "How It Acts Proactively"
**Source files:** `src/cron/` (37 files including service, scheduling, delivery)
**How it works:**
| Component | Details |
|-----------|---------|
| **Scheduling** | Cron-based scheduling using the `croner` library |
| **Service Architecture** | `src/cron/service/` — manages job lifecycle, timers, catch-up after restarts |
| **Normalization** | `normalize.ts` (13k) — normalizes cron expressions and job definitions |
| **Delivery** | `delivery.ts` — routes cron job output to the correct channel/session |
| **Run Logging** | `run-log.ts` — persists execution history |
| **Session Reaper** | `session-reaper.ts` — cleans up stale sessions |
**What happens on each heartbeat:**
1. Cron fires at scheduled intervals
2. Gateway processes the event
3. Checks all integrated services (Gmail, Calendar, Asana, Slack, etc.)
4. AI reasons over the data
5. Sends notification if needed (e.g., "Meeting in 15 min — prep doc is empty")
6. Or returns `HEARTBEAT_OK` (nothing to report)
**Key detail:** Runs **without user prompting** — this is what makes it feel "proactive."
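Stripped to its core, one heartbeat cycle is a gather → reason → notify pipeline; a sketch with caller-supplied callables (the names and the `HEARTBEAT_OK` sentinel usage are illustrative of the flow above, not OpenClaw's actual internals):

```python
def heartbeat_tick(gather, reason, notify) -> str:
    """Run one proactive cycle: pull data, let the AI judge it, notify if needed."""
    data = gather()                          # 3) check integrated services
    verdict = reason(data)                   # 4) AI reasons over the data
    if verdict and verdict != "HEARTBEAT_OK":
        notify(verdict)                      # 5) proactive notification
        return verdict
    return "HEARTBEAT_OK"                    # 6) nothing to report
```

A cron entry (or a 30-minute loop) then just calls `heartbeat_tick` on schedule — no user prompt involved.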
---
### Pillar 3: Channel Adapters — "How It Works Everywhere"
**Source files:** `src/channels/`, `src/whatsapp/`, `src/telegram/`, `src/discord/`, `src/slack/`, `src/signal/`, `src/imessage/`, `src/web/`, plus `extensions/` (35 extension directories)
**Built-in channels:**
| Channel | Library | Status |
|---------|---------|--------|
| WhatsApp | `@whiskeysockets/baileys` | Core |
| Telegram | `grammy` | Core |
| Slack | `@slack/bolt` | Core |
| Discord | `discord.js` / `@buape/carbon` | Core |
| Signal | `signal-cli` | Core |
| iMessage | BlueBubbles (recommended) or legacy `imsg` | Core |
| WebChat | Built into Gateway WS | Core |
**Extension channels** (via plugin system):
Microsoft Teams, Matrix, Zalo, Zalo Personal, Google Chat, IRC, Mattermost, Twitch, LINE, Feishu, Nextcloud Talk, Nostr, Tlon, voice calls
**Architecture:**
- **Gateway-centric** — all channels connect through a single WebSocket control plane (`ws://127.0.0.1:18789`)
- **Channel Dock** (`src/channels/dock.ts`, 17k) — unified registration and lifecycle management
- **Session isolation** — each channel/conversation gets its own session with isolated context
- **Group routing** — configurable mention gating, reply tags, per-channel chunking
- **DM security** — pairing codes for unknown senders, allowlists
---
### Pillar 4: Skills Registry — "How It Extends to Anything"
**Source files:** `skills/` (52 skill directories)
**How it works:**
| Component | Details |
|-----------|---------|
| **Structure** | Each skill is a directory with a `SKILL.md` file |
| **Installation** | Drop a file in `~/.openclaw/workspace/skills/<skill>/SKILL.md` — instantly available |
| **Registry** | ClawHub (5,700+ skills) — community-built extensions |
| **Types** | Bundled, managed, and workspace skills |
| **Scope** | Local files only — no public registry dependency, no supply chain attack surface |
**Built-in skill examples:**
`1password`, `apple-notes`, `apple-reminders`, `bear-notes`, `github`, `notion`, `obsidian`, `spotify-player`, `weather`, `canvas`, `coding-agent`, `discord`, `slack`, `openai-image-gen`, `openai-whisper`, `session-logs`, `summarize`, `video-frames`, `voice-call`, etc.
---
### 2.2 Gateway Architecture
The Gateway is the **central nervous system** of OpenClaw:
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Teams / WebChat
┌───────────────────────────────┐
│ Gateway │
│ (WS control plane) │
│ ws://127.0.0.1:18789 │
├───────────────────────────────┤
│ • Session management │
│ • Channel routing │
│ • Cron/heartbeat engine │
│ • Tool registration │
│ • Presence & typing │
│ • Auth & pairing │
│ • Plugin loading │
│ • Memory manager │
│ • Config hot-reload │
└──────────────┬────────────────┘
├─ Pi agent (RPC) — AI reasoning engine
├─ CLI (openclaw …)
├─ WebChat UI
├─ macOS app (menu bar)
├─ iOS / Android nodes
└─ Browser control (CDP)
```
**Key source files:**
- `src/gateway/server.impl.ts` (22k) — main gateway server implementation
- `src/gateway/server-http.ts` (17k) — HTTP server
- `src/gateway/ws-log.ts` (14k) — WebSocket logging
- `src/gateway/session-utils.ts` (22k) — session management
- `src/gateway/config-reload.ts` (11k) — hot config reload
### 2.3 Configuration
- **File:** `~/.openclaw/openclaw.json` (JSON5 format)
- **Schema:** Massive TypeBox schema system (`src/config/schema.ts`, `schema.hints.ts` at 46k, `schema.field-metadata.ts` at 45k)
- **Validation:** Zod schemas (`src/config/zod-schema.ts`, 20k)
- **Hot Reload:** Config changes apply without restart
### 2.4 Tech Stack
| Category | Technology |
|----------|-----------|
| **Language** | TypeScript (ESM) |
| **Runtime** | Node.js ≥22 (Bun also supported) |
| **Package Manager** | pnpm (bun optional) |
| **Build** | `tsdown` (based on Rolldown) |
| **Testing** | Vitest with V8 coverage |
| **Linting** | Oxlint + Oxfmt |
| **AI Runtime** | Pi agent (`@mariozechner/pi-agent-core`) in RPC mode |
| **Database** | SQLite with `sqlite-vec` for vector search |
| **Embedding** | OpenAI, Voyage, Gemini, or local ONNX |
| **HTTP** | Express 5 |
| **WebSocket** | `ws` library |
---
## 3. MyOwnOpenClaw — The Simplified Blueprint
The `MyOwnOpenClaw.png` diagram presents a **dramatically simplified** version of the same architecture, built with:
**Tools:** Claude Code + Claude Agent SDK + SQLite + Markdown + Obsidian
### The 4 Custom Modules
#### ① My Memory (SQLite + Markdown + Obsidian)
| Feature | Implementation |
|---------|---------------|
| `SOUL.md` | Personality & values |
| `USER.md` | Who I am, preferences |
| `MEMORY.md` | Decisions & lessons |
| `daily/` | Session logs |
| **Hybrid Search** | 0.7 × vector + 0.3 × keyword (BM25) |
| **Embeddings** | SQLite (or Postgres) + FastEmbed (384-dim, ONNX) |
| **Key principle** | Fully local — zero API calls |
| **Storage philosophy** | "Markdown IS the database" — Obsidian syncs it everywhere |
#### ② My Heartbeat (Claude Agent SDK + Python APIs)
| Feature | Implementation |
|---------|---------------|
| **Frequency** | Every 30 minutes |
| **Action** | Python gathers data from sources: Gmail, Calendar, Asana, Slack |
| **Reasoning** | Claude reasons over the data, decides what's important |
| **Notification** | Sends notification if needed |
| **Example** | "Meeting in 15 min — prep doc is empty" |
| **Fallback** | `HEARTBEAT_OK (nothing to report)` |
#### ③ My Adapters (Slack + Terminal)
| Feature | Implementation |
|---------|---------------|
| **Slack** | Socket Mode — no public URL needed; each thread = persistent conversation |
| **Terminal** | Claude Code — direct interaction; full skill + hook access either way |
| **One-shot** | With Claude Code |
| **Future** | Discord, Teams — add when needed |
#### ④ My Skills (Local `.claude/skills/`)
| Feature | Implementation |
|---------|---------------|
| **Location** | Local `.claude/skills/` directory |
| **Examples** | `content-engine/`, `direct-integrations/`, `yt-script/`, `pptx-generator/`, `excalidraw-diagram/`, `...15+ more` |
| **Installation** | Drop in `SKILL.md` — instantly available |
| **Security** | Local files only — NO public registry, no supply chain attack surface |
### The Vision: "Your Ultra-Personalized AI Agent"
> - 🔵 Remembers your decisions, preferences, and context
> - 🟣 Checks your email and calendar — before you ask
> - 🟢 Talk to it from Slack, terminal, anywhere
> - 🟡 Add any capability with a single file
>
> **"Acts on your behalf. Anticipates what you need. Knows you better every day."**
### Build Stack
```
Claude Code ──→ Claude Agent SDK ──→ SQLite + Markdown ──→ Obsidian
(skills + hooks) (heartbeat + background) (hybrid search, fully local) (your canvas, sync anywhere)
```
**~2,000 lines of Python + Markdown** — "You can build it in just a couple days."
---
## 4. Side-by-Side Comparison
| Feature | OpenClaw (Full) | MyOwnOpenClaw (Custom) |
|---------|----------------|----------------------|
| **Codebase Size** | 175k+ lines TypeScript | ~2,000 lines Python + Markdown |
| **Language** | TypeScript (ESM) | Python |
| **AI Provider** | Any (Anthropic, OpenAI, etc. via Pi) | Claude (via Claude Agent SDK) |
| **Memory System** | SQLite + sqlite-vec, multiple embedding providers | SQLite + FastEmbed (384-dim ONNX) |
| **Hybrid Search** | Vector + BM25 (configurable weights) | 0.7 vector + 0.3 keyword (BM25) |
| **Embeddings** | OpenAI, Voyage, Gemini, local ONNX | FastEmbed local ONNX only — zero API calls |
| **Prompt Files** | SOUL.md, USER.md, AGENTS.md, HEARTBEAT.md, TOOLS.md | SOUL.md, USER.md, MEMORY.md, daily/ |
| **Heartbeat** | Full cron system with croner library | Simple 30-minute Python script |
| **Data Sources** | Configurable via plugins/skills | Gmail, Calendar, Asana, Slack |
| **Channels** | 15+ (WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Teams, Matrix, etc.) | Slack (Socket Mode) + Terminal (Claude Code) |
| **Gateway** | Full WS control plane with auth, routing, sessions | None — direct connection |
| **Skills** | 52 bundled + ClawHub registry (5,700+) | Local `.claude/skills/` directory (15+ custom) |
| **Skill Format** | `SKILL.md` file in directory | `SKILL.md` file in directory (same pattern!) |
| **Apps** | macOS, iOS, Android, WebChat | None — Slack + CLI |
| **Voice** | Voice Wake + Talk Mode (ElevenLabs) | Not included |
| **Browser** | Playwright-based CDP control | Not included |
| **Canvas** | Agent-driven visual workspace (A2UI) | Not included |
| **Config** | JSON5 with massive schema validation | Simple Markdown files |
| **Sync** | File watcher (chokidar) | Obsidian sync |
| **Storage Philosophy** | SQLite is the DB | "Markdown IS the database" — Obsidian syncs everywhere |
| **Installation** | `npm install -g openclaw` + wizard | Clone repo + point Claude Code at it |
| **Security** | DM pairing, allowlists, Docker sandboxing | Local only by default |
| **Multi-agent** | Session isolation, agent-to-agent messaging | Not included |
| **Complexity** | Enterprise-grade, production-ready | Personal, lightweight, hackable |
---
## 5. Key Takeaways for Building Our Own
### What OpenClaw Gets Right (and we should learn from):
1. **The Memory Architecture** — The combination of identity files (`SOUL.md`, `USER.md`) + long-term memory (`MEMORY.md`) + session logs (`daily/`) is the core pattern. Both systems use this.
2. **Hybrid Search** — Vector + keyword search is essential for good memory retrieval. The 0.7/0.3 weighting is a good starting point.
3. **Skill Drop-in Pattern** — Just put a `SKILL.md` file in a directory and it's instantly available. No compilation, no registry. OpenClaw invented this pattern and the custom version copies it directly.
4. **Proactive Heartbeat** — Running on a schedule, checking your data sources before you ask. This is what makes the agent feel like an assistant rather than a chatbot.
5. **The Separation of Concerns** — Memory, Heartbeat, Adapters, and Skills are clean, independent modules. Each can be built and tested separately.
### What MyOwnOpenClaw Simplifies:
1. **No Gateway** — Direct connections instead of a WS control plane. Much simpler but less flexible.
2. **Python over TypeScript** — More accessible for quick prototyping and data processing.
3. **Claude-only** — No model switching, no failover. Simpler but locked to one provider.
4. **Obsidian as sync** — Uses Obsidian's existing sync infrastructure instead of building custom file watching.
5. **Two adapters max** — Slack + Terminal vs. 15+ channels. Start small, add as needed.
### The Process (from the diagram):
> 1. Clone the OpenClaw repository (MIT licensed, 100% open source)
> 2. Point your coding agent at it — "Explain how the memory system works"
> 3. "Now build that into my own system here (optional: with customization XYZ)"
> 4. Repeat for heartbeat, adapters, skills. That's it.
**Use OpenClaw as your blueprint, not your dependency.**
---
## 6. Recommended Build Process for Aetheel
Based on this analysis, here's the recommended order for building a custom AI assistant inspired by OpenClaw:
### Phase 1: Memory System
- Create `SOUL.md`, `USER.md`, `MEMORY.md` files
- Implement SQLite database with `sqlite-vec` or FastEmbed for vector search
- Build hybrid search (vector + BM25 keyword)
- Set up file watching for automatic re-indexing
- Use Obsidian for cross-device sync
### Phase 2: Heartbeat
- Build a Python script using Claude Agent SDK
- Connect to Gmail, Calendar, Asana (start with most-used services)
- Set up 30-minute cron schedule
- Implement notification delivery (start with terminal notifications)
### Phase 3: Adapters
- Start with Terminal (Claude Code) for direct interaction
- Add Slack (Socket Mode) for messaging
- Build conversation threading support
### Phase 4: Skills
- Create `.claude/skills/` directory structure
- Port most-used skills from OpenClaw as inspiration
- Build custom skills specific to your workflow
---
## Appendix: OpenClaw File Structure Reference
```
openclaw/
├── src/ # Core source code (175k+ LOC)
│ ├── memory/ # Memory system (49 files)
│ │ ├── manager.ts # Main memory manager (2,300 LOC)
│ │ ├── hybrid.ts # Hybrid search (vector + keyword)
│ │ ├── embeddings.ts # Embedding provider abstraction
│ │ ├── qmd-manager.ts # Query+doc management (33k)
│ │ └── ...
│ ├── cron/ # Heartbeat/cron system (37 files)
│ │ ├── service/ # Cron service lifecycle
│ │ ├── schedule.ts # Scheduling logic
│ │ ├── delivery.ts # Output delivery
│ │ └── ...
│ ├── channels/ # Channel adapter framework (28 files)
│ │ ├── dock.ts # Unified channel dock (17k)
│ │ ├── registry.ts # Channel registration
│ │ └── ...
│ ├── gateway/ # Gateway WS control plane (129+ files)
│ │ ├── server.impl.ts # Main server (22k)
│ │ ├── server-http.ts # HTTP layer (17k)
│ │ ├── session-utils.ts # Session management (22k)
│ │ └── ...
│ ├── config/ # Configuration system (130+ files)
│ ├── agents/ # Agent runtime
│ ├── browser/ # Browser control (Playwright)
│ └── ...
├── skills/ # Built-in skills (52 directories)
│ ├── obsidian/
│ ├── github/
│ ├── notion/
│ ├── spotify-player/
│ └── ...
├── extensions/ # Extension channels (35 directories)
│ ├── msteams/
│ ├── matrix/
│ ├── voice-call/
│ └── ...
├── apps/ # Companion apps
│ ├── macos/
│ ├── ios/
│ └── android/
├── AGENTS.md # Agent behavior guidelines
├── openclaw.json # Configuration
└── package.json # Dependencies & scripts
```

View File

@@ -1,113 +0,0 @@
"""Quick smoke test for the memory system."""
import asyncio
import os
import shutil
from memory import MemoryManager
from memory.types import MemoryConfig
from memory.internal import chunk_markdown, hash_text, list_memory_files
def test_internals():
print("── Internal utilities ──")
# Hashing
h = hash_text("hello world")
assert len(h) == 64
print(f"✅ hash_text: {h[:16]}...")
# Chunking
text = "# Title\n\nLine1\nLine2\nLine3\n\n## Section\n\nMore text here"
chunks = chunk_markdown(text, chunk_tokens=50, chunk_overlap=10)
assert len(chunks) >= 1
print(f"✅ chunk_markdown: {len(chunks)} chunks")
for c in chunks:
print(f" lines {c.start_line}-{c.end_line}: {c.text[:50]!r}")
print()
async def test_manager():
print("── MemoryManager ──")
# Clean slate
test_dir = "/tmp/aetheel_test_workspace"
test_db = "/tmp/aetheel_test_memory.db"
for p in [test_dir, test_db]:
if os.path.exists(p):
if os.path.isdir(p):
shutil.rmtree(p)
else:
os.remove(p)
config = MemoryConfig(
workspace_dir=test_dir,
db_path=test_db,
)
mgr = MemoryManager(config)
print(f"✅ Created: workspace={mgr._workspace_dir}")
# Identity files
soul = mgr.read_soul()
assert soul and len(soul) > 0
print(f"✅ SOUL.md: {len(soul)} chars")
user = mgr.read_user()
assert user and len(user) > 0
print(f"✅ USER.md: {len(user)} chars")
memory = mgr.read_long_term_memory()
assert memory and len(memory) > 0
print(f"✅ MEMORY.md: {len(memory)} chars")
# Append to memory
mgr.append_to_memory("Test entry: Python 3.14 works great!")
memory2 = mgr.read_long_term_memory()
assert len(memory2) > len(memory)
print(f"✅ Appended to MEMORY.md: {len(memory2)} chars")
# Log a session
log_path = mgr.log_session(
"User: Hello!\nAssistant: Hi, how can I help?",
channel="terminal",
)
assert os.path.exists(log_path)
print(f"✅ Session logged: {log_path}")
# Sync
print("\n⏳ Syncing (loading embedding model on first run)...")
stats = await mgr.sync()
print(f"✅ Sync complete:")
for k, v in stats.items():
print(f" {k}: {v}")
# Search
print("\n🔍 Searching for 'personality values'...")
results = await mgr.search("personality values")
print(f"✅ Found {len(results)} results:")
for i, r in enumerate(results[:3]):
print(f" [{i+1}] score={r.score:.3f} path={r.path} lines={r.start_line}-{r.end_line}")
print(f" {r.snippet[:80]}...")
print("\n🔍 Searching for 'preferences'...")
results2 = await mgr.search("preferences")
print(f"✅ Found {len(results2)} results:")
for i, r in enumerate(results2[:3]):
print(f" [{i+1}] score={r.score:.3f} path={r.path} lines={r.start_line}-{r.end_line}")
print(f" {r.snippet[:80]}...")
# Status
print("\n📊 Status:")
status = mgr.status()
for k, v in status.items():
print(f" {k}: {v}")
mgr.close()
print("\n✅ All memory system tests passed!")
if __name__ == "__main__":
test_internals()
asyncio.run(test_manager())

View File

@@ -1,244 +0,0 @@
#!/usr/bin/env python3
"""
Aetheel Slack Adapter — Integration Test
==========================================
Tests the Slack adapter by:
1. Connecting to Slack via Socket Mode
2. Sending a test message to a specified channel
3. Verifying the bot can send and receive
Usage:
python test_slack.py # Interactive — prompts for channel
python test_slack.py --channel C0123456789 # Send to a specific channel
python test_slack.py --dm U0123456789 # Send a DM to a user
python test_slack.py --send-only # Just send, don't listen
Requirements:
- SLACK_BOT_TOKEN and SLACK_APP_TOKEN set in .env
- Bot must be invited to the target channel
"""
import argparse
import logging
import os
import sys
import time
import threading
from dotenv import load_dotenv
load_dotenv()
from adapters.slack_adapter import SlackAdapter, SlackMessage
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s [%(name)s] %(levelname)s: %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
logger = logging.getLogger("aetheel.test")
# ---------------------------------------------------------------------------
# Test 1: Send a message
# ---------------------------------------------------------------------------
def test_send_message(adapter: SlackAdapter, target: str) -> bool:
"""Test sending a message to a channel or user."""
print("\n" + "=" * 60)
print(" TEST 1: Send Message")
print("=" * 60)
try:
result = adapter.send_message(
channel=target,
text=(
"🧪 *Aetheel Slack Test*\n\n"
"If you can see this message, the Slack adapter is working!\n\n"
f"• Bot ID: `{adapter._bot_user_id}`\n"
f"• Bot Name: `@{adapter._bot_user_name}`\n"
f"• Timestamp: `{time.strftime('%Y-%m-%d %H:%M:%S')}`\n"
f"• Mode: Socket Mode\n\n"
"_Reply to this message to test receiving._"
),
)
print(f" ✅ Message sent successfully!")
print(f" Channel: {result.channel_id}")
print(f" Message ID: {result.message_id}")
return True
except Exception as e:
print(f" ❌ Failed to send: {e}")
return False
# ---------------------------------------------------------------------------
# Test 2: Send a threaded reply
# ---------------------------------------------------------------------------
def test_threaded_reply(adapter: SlackAdapter, target: str) -> bool:
"""Test sending a message and then replying in a thread."""
print("\n" + "=" * 60)
print(" TEST 2: Threaded Reply")
print("=" * 60)
try:
# Send parent message
parent = adapter.send_message(
channel=target,
text="🧵 *Thread Test* — This is the parent message.",
)
print(f" ✅ Parent message sent (ts={parent.message_id})")
time.sleep(1)
# Send threaded reply
reply = adapter.send_message(
channel=target,
text="↳ This is a threaded reply! Thread isolation is working.",
thread_ts=parent.message_id,
)
print(f" ✅ Thread reply sent (ts={reply.message_id})")
return True
except Exception as e:
print(f" ❌ Failed: {e}")
return False
# ---------------------------------------------------------------------------
# Test 3: Long message chunking
# ---------------------------------------------------------------------------
def test_long_message(adapter: SlackAdapter, target: str) -> bool:
"""Test that long messages are properly chunked."""
print("\n" + "=" * 60)
print(" TEST 3: Long Message Chunking")
print("=" * 60)
try:
# Create a message that exceeds 4000 chars
long_text = "📜 *Long Message Test*\n\n"
for i in range(1, 101):
long_text += f"{i}. This is line number {i} of the long message test. " \
f"It contains enough text to test the chunking behavior.\n"
result = adapter.send_message(channel=target, text=long_text)
print(f" ✅ Long message sent (length={len(long_text)}, id={result.message_id})")
return True
except Exception as e:
print(f" ❌ Failed: {e}")
return False
# ---------------------------------------------------------------------------
# Test 4: Receive messages (interactive)
# ---------------------------------------------------------------------------
def test_receive_messages(adapter: SlackAdapter, duration: int = 30) -> bool:
"""
Test receiving messages by listening for a specified duration.
The bot will echo back any messages it receives.
"""
print("\n" + "=" * 60)
print(" TEST 4: Receive Messages (Interactive)")
print("=" * 60)
print(f" Listening for {duration} seconds...")
print(f" Send a message to @{adapter._bot_user_name} to test receiving.")
print(f" Press Ctrl+C to stop early.\n")
received = []
def test_handler(msg: SlackMessage) -> str:
received.append(msg)
print(f" 📨 Received: '{msg.text}' from @{msg.user_name}")
return f"✅ Got it! You said: _{msg.text}_"
adapter.on_message(test_handler)
try:
adapter.start_async()
time.sleep(duration)
except KeyboardInterrupt:
print("\n Stopped by user.")
finally:
adapter.stop()
print(f"\n Messages received: {len(received)}")
if received:
print(" ✅ Receive test PASSED")
return True
else:
print(" ⚠️ No messages received (send a message to the bot to test)")
return True # Not a failure — just no one sent a message
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def main():
parser = argparse.ArgumentParser(description="Test the Aetheel Slack Adapter")
group = parser.add_mutually_exclusive_group()
group.add_argument("--channel", help="Channel ID to send test messages to (C...)")
group.add_argument("--dm", help="User ID to DM for testing (U...)")
parser.add_argument(
"--send-only",
action="store_true",
help="Only run send tests (don't listen for messages)",
)
parser.add_argument(
"--duration",
type=int,
default=30,
help="How long to listen for messages in seconds (default: 30)",
)
args = parser.parse_args()
# Validate tokens
if not os.environ.get("SLACK_BOT_TOKEN") or not os.environ.get("SLACK_APP_TOKEN"):
print("❌ Missing SLACK_BOT_TOKEN or SLACK_APP_TOKEN in environment.")
print(" Copy .env.example to .env and fill in your tokens.")
sys.exit(1)
# Get target
target = args.channel or args.dm
if not target:
print("You need to specify a target for send tests.")
print(" --channel C0123456789 (channel ID)")
print(" --dm U0123456789 (user ID for DM)")
target = input("\nEnter a channel or user ID (or press Enter to skip send tests): ").strip()
# Create adapter
adapter = SlackAdapter(log_level="INFO")
# Resolve identity first
adapter._resolve_identity()
# Run tests
results = {}
if target:
results["send"] = test_send_message(adapter, target)
results["thread"] = test_threaded_reply(adapter, target)
results["chunking"] = test_long_message(adapter, target)
else:
print("\n⏭️ Skipping send tests (no target specified)")
if not args.send_only:
results["receive"] = test_receive_messages(adapter, duration=args.duration)
# Summary
print("\n" + "=" * 60)
print(" TEST RESULTS")
print("=" * 60)
for test_name, passed in results.items():
icon = "" if passed else ""
print(f" {icon} {test_name}")
total = len(results)
passed = sum(1 for v in results.values() if v)
print(f"\n {passed}/{total} tests passed")
print("=" * 60)
return 0 if all(results.values()) else 1
if __name__ == "__main__":
sys.exit(main())