feat: openclaw-style secrets (env.vars + ${VAR}) and per-task model routing

- Replace python-dotenv with config.json env.vars block + ${VAR} substitution
- Add models section for per-task model routing (heartbeat, subagent, default)
- Heartbeat/subagent tasks can use different models/providers than main chat
- Remove python-dotenv from dependencies
- Update all docs to reflect new config approach
- Reorganize docs into project/ and research/ subdirectories
2026-02-20 23:49:05 -05:00
parent 55c6767e69
commit 82c2640481
35 changed files with 2904 additions and 422 deletions


@@ -1,15 +0,0 @@
completed
config instead of env
edit its own files and config as well as add skills
start command for all instead of flags use config
customize opencode/claudecode setup like llms and providers during setup, agent creation/modify for claudecode and opencode
install script starts server and adds the aetheel command
llm usage stats
logo
Not complete
agent to agent and agent orchestration
better UI
human in the loop
security
browse plugins and skills from claude marketplace or opencode plugins


@@ -0,0 +1,21 @@
# completed
config instead of env
openclaw-style secrets in config.json (env.vars + ${VAR} substitution, removed python-dotenv dependency)
per-task model routing (models.heartbeat, models.subagent, models.default — different models for different task types)
edit its own files and config as well as add skills
start command for all instead of flags use config
customize opencode/claudecode setup like llms and providers during setup, agent creation/modify for claudecode and opencode
install script starts server and adds the aetheel command
llm usage stats
logo
discord advanced features (reply threading, history context, ack reactions, typing indicators, reaction handling, slash commands, interactive components, exec approvals)
opencode advanced features (agent selection, attach mode, file attachments, session fork/title, models listing, stats, agents listing)
# Not complete
agent to agent and agent orchestration
better UI
human in the loop
security
browse plugins and skills from claude marketplace or opencode
self modification docs


@@ -12,7 +12,7 @@ Type these as regular messages in any channel or DM. No `/` prefix needed — ju
| Command | Description |
|---|---|
| `status` | Show bot status, engine, model, model routes, sessions |
| `help` | Show all available commands |
| `time` | Current server time |
| `sessions` | Active session count + cleanup stale |
@@ -31,6 +31,11 @@ Type these as regular messages in any channel or DM. No `/` prefix needed — ju
| `provider` | Show current provider (OpenCode only) |
| `provider <name>` | Switch provider (e.g. `provider anthropic`, `provider openai`) |
| `usage` | Show LLM usage stats, costs, and rate limit history |
| `models` | List all available models from configured providers (OpenCode only) |
| `models <provider>` | List models for a specific provider (e.g. `models anthropic`) |
| `stats` | Show OpenCode token usage and cost stats (all time) |
| `stats <days>` | Show stats for the last N days (e.g. `stats 7`) |
| `agents` | List available OpenCode agents |
Engine, model, and provider changes take effect immediately and are persisted to `config.json` so they survive restarts.
@@ -177,7 +182,9 @@ All features are controlled by `~/.aetheel/config.json`. No flags required.
"mode": "cli", // "cli" or "sdk"
"model": null, // e.g. "anthropic/claude-sonnet-4-20250514"
"provider": null, // e.g. "anthropic", "openai", "google"
"timeout_seconds": 120,
"agent": null, // OpenCode agent name (from `agents` command)
"attach": null // Attach to running server URL for faster CLI mode
},
"claude": {
"model": null, // e.g. "claude-sonnet-4-20250514"
@@ -190,11 +197,12 @@ All features are controlled by `~/.aetheel/config.json`. No flags required.
"webchat": { "enabled": false, "port": 8080 },
"webhooks": { "enabled": false, "port": 8090, "token": "" },
"heartbeat": { "enabled": true },
"models": { "heartbeat": null, "subagent": null, "default": null },
"hooks": { "enabled": true }
}
```
Adapters auto-enable when their token is set in `config.json` `env.vars`, even if `enabled` is `false` in config.
---


@@ -8,7 +8,7 @@
1. [Overview](#overview)
2. [Config File](#config-file)
3. [Secrets (env.vars)](#secrets)
4. [CLI Overrides](#cli-overrides)
5. [Priority Order](#priority-order)
6. [Reference](#reference)
@@ -18,14 +18,15 @@
## Overview
Aetheel uses a single JSON config file for everything — settings and secrets:
| File | Location | Purpose |
|------|----------|---------|
| `config.json` | `~/.aetheel/config.json` | All settings, secrets (via `env.vars` block), and `${VAR}` references |
Secrets (tokens, API keys) go in the `env.vars` block inside config.json. They can be referenced elsewhere in the config using `${VAR}` syntax. Process environment variables still override everything.
On first run, Aetheel auto-creates `~/.aetheel/config.json` with sensible defaults.
---
@@ -37,13 +38,25 @@ Located at `~/.aetheel/config.json`. Created automatically on first run.
```json
{
"env": {
"vars": {
"SLACK_BOT_TOKEN": "",
"SLACK_APP_TOKEN": "",
"TELEGRAM_BOT_TOKEN": "",
"DISCORD_BOT_TOKEN": "",
"ANTHROPIC_API_KEY": "",
"OPENCODE_SERVER_PASSWORD": ""
}
},
"log_level": "INFO",
"runtime": {
"mode": "cli",
"model": null,
"timeout_seconds": 120,
"server_url": "http://localhost:4096",
"format": "json",
"agent": null,
"attach": null
},
"claude": {
"model": null,
@@ -51,7 +64,18 @@ Located at `~/.aetheel/config.json`. Created automatically on first run.
"max_turns": 3,
"no_tools": true
},
"slack": {
"enabled": true,
"bot_token": "${SLACK_BOT_TOKEN}",
"app_token": "${SLACK_APP_TOKEN}"
},
"telegram": {
"enabled": false,
"bot_token": "${TELEGRAM_BOT_TOKEN}"
},
"discord": {
"enabled": false,
"bot_token": "${DISCORD_BOT_TOKEN}",
"listen_channels": []
},
"memory": {
@@ -77,6 +101,8 @@ Controls the OpenCode AI runtime (default).
| `format` | string | `"json"` | CLI output format: `"json"` (structured) or `"default"` (plain text) |
| `workspace` | string\|null | `null` | Working directory for OpenCode. Null uses current directory. |
| `provider` | string\|null | `null` | Provider override, e.g. `"anthropic"`, `"openai"` |
| `agent` | string\|null | `null` | OpenCode agent name. Use `agents` command to list available agents. |
| `attach` | string\|null | `null` | URL of a running `opencode serve` instance to attach to in CLI mode, avoiding MCP cold boot per request. |
### Section: `claude`
@@ -114,6 +140,42 @@ Scheduler storage.
|-----|------|---------|-------------|
| `db_path` | string | `"~/.aetheel/scheduler.db"` | SQLite database for persisted scheduled jobs |
### Section: `models`
Per-task model routing. Each task type can use a different model, provider, and engine. Omit or set to `null` to use the global `runtime` config.
```json
{
"models": {
"heartbeat": {
"engine": "opencode",
"model": "ollama/llama3.2",
"provider": "ollama"
},
"subagent": {
"model": "minimax/minimax-m1",
"provider": "minimax"
},
"default": null
}
}
```
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `heartbeat` | object\|null | `null` | Model override for heartbeat periodic tasks |
| `subagent` | object\|null | `null` | Model override for background subagent tasks |
| `default` | object\|null | `null` | Fallback model override for all other tasks |
Each route object supports:
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `engine` | string\|null | `null` | `"opencode"` or `"claude"` — null inherits global |
| `model` | string\|null | `null` | Model ID (e.g. `"ollama/llama3.2"`) — null inherits global |
| `provider` | string\|null | `null` | Provider name (e.g. `"ollama"`, `"minimax"`) — null inherits global |
| `timeout_seconds` | int\|null | `null` | Request timeout — null inherits global |
### Top-level
| Key | Type | Default | Description |
@@ -124,14 +186,38 @@ Scheduler storage.
## Secrets
Secrets live in the `env.vars` block inside `config.json`. Values defined here are injected into the process environment (if not already set), and can be referenced anywhere in the config using `${VAR}` syntax.
### Example config.json with secrets
```json
{
"env": {
"vars": {
"SLACK_BOT_TOKEN": "xoxb-your-bot-token",
"SLACK_APP_TOKEN": "xapp-your-app-token",
"DISCORD_BOT_TOKEN": "your-discord-token",
"ANTHROPIC_API_KEY": "sk-ant-..."
}
},
"slack": {
"enabled": true,
"bot_token": "${SLACK_BOT_TOKEN}",
"app_token": "${SLACK_APP_TOKEN}"
},
"discord": {
"enabled": true,
"bot_token": "${DISCORD_BOT_TOKEN}"
}
}
```
### How `${VAR}` substitution works
- `${VAR}` → resolved from process env (including `env.vars`)
- `$${VAR}` → literal `${VAR}` (escape sequence)
- Missing vars log a warning and keep the literal `${VAR}` string
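The three substitution rules can be expressed in a few lines. A minimal sketch (the regex and function name are illustrative, not the actual implementation):

```python
import os
import re

_PATTERN = re.compile(r"\$(\$?)\{(\w+)\}")

def substitute(value: str) -> str:
    """Expand ${VAR} from the process environment.

    $${VAR} is an escape that yields the literal ${VAR}; unknown
    variables are kept as-is (a real implementation would also warn).
    """
    def repl(m: re.Match) -> str:
        escaped, name = m.group(1), m.group(2)
        if escaped:                               # $${VAR} -> literal ${VAR}
            return "${" + name + "}"
        return os.environ.get(name, m.group(0))   # keep literal if missing
    return _PATTERN.sub(repl, value)

os.environ["SLACK_BOT_TOKEN"] = "xoxb-demo"
print(substitute("${SLACK_BOT_TOKEN}"))   # xoxb-demo
print(substitute("$${SLACK_BOT_TOKEN}"))  # ${SLACK_BOT_TOKEN}
print(substitute("${MISSING_VAR}"))       # ${MISSING_VAR}
```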
### Required (at least one adapter)
| Variable | Format | Description |
@@ -148,6 +234,8 @@ cp .env.example .env
| `OPENCODE_SERVER_PASSWORD` | string | Password for `opencode serve` (SDK mode) |
| `ANTHROPIC_API_KEY` | `sk-ant-...` | Anthropic API key (Claude Code runtime) |
All of these can be set in `env.vars` in config.json, as process environment variables, or both (process env wins).
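The "process env wins" behavior suggests injection via `setdefault`. A hedged sketch of how `env.vars` could be loaded (names are assumptions, not the actual code):

```python
import os

def inject_env_vars(config_env_vars: dict[str, str]) -> None:
    """Copy env.vars entries into the process environment.

    setdefault means a variable already set in the process
    environment wins over the config.json value.
    """
    for name, value in config_env_vars.items():
        if value:  # skip empty placeholders
            os.environ.setdefault(name, value)

os.environ["ANTHROPIC_API_KEY"] = "sk-ant-from-shell"
inject_env_vars({
    "ANTHROPIC_API_KEY": "sk-ant-from-config",  # loses: already in process env
    "DISCORD_BOT_TOKEN": "from-config",         # wins: not set anywhere else
})
print(os.environ["ANTHROPIC_API_KEY"])  # sk-ant-from-shell
print(os.environ["DISCORD_BOT_TOKEN"])  # from-config
```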
### Environment Variable Overrides
Any config.json setting can also be overridden via environment variables. These take priority over the config file:
@@ -161,6 +249,8 @@ Any config.json setting can also be overridden via environment variables. These
| `OPENCODE_SERVER_URL` | `runtime.server_url` |
| `OPENCODE_PROVIDER` | `runtime.provider` |
| `OPENCODE_WORKSPACE` | `runtime.workspace` |
| `OPENCODE_AGENT` | `runtime.agent` |
| `OPENCODE_ATTACH` | `runtime.attach` |
| `CLAUDE_MODEL` | `claude.model` |
| `CLAUDE_TIMEOUT` | `claude.timeout_seconds` |
| `CLAUDE_MAX_TURNS` | `claude.max_turns` |
@@ -197,7 +287,7 @@ python main.py [options]
When the same setting is defined in multiple places, the highest priority wins:
```
CLI arguments > Process env vars > env.vars block > ${VAR} substitution > config.json values > Defaults
```
For example, if `config.json` sets `runtime.model` to `"anthropic/claude-sonnet-4-20250514"` but you run `python main.py --model openai/gpt-5.1`, the CLI argument wins.
@@ -210,8 +300,7 @@ For example, if `config.json` sets `runtime.model` to `"anthropic/claude-sonnet-
| File | Path | Git-tracked |
|------|------|-------------|
| Config + Secrets | `~/.aetheel/config.json` | No |
| Memory DB | `~/.aetheel/memory.db` | No |
| Session DB | `~/.aetheel/sessions.db` | No |
| Scheduler DB | `~/.aetheel/scheduler.db` | No |
@@ -243,13 +332,23 @@ For example, if `config.json` sets `runtime.model` to `"anthropic/claude-sonnet-
### Minimal Setup (Slack + OpenCode CLI)
`~/.aetheel/config.json`:
```json
{
"env": {
"vars": {
"SLACK_BOT_TOKEN": "xoxb-your-token",
"SLACK_APP_TOKEN": "xapp-your-token"
}
},
"slack": {
"bot_token": "${SLACK_BOT_TOKEN}",
"app_token": "${SLACK_APP_TOKEN}"
}
}
```
No other changes needed — defaults work.
```bash
python main.py
@@ -279,29 +378,49 @@ python main.py
`~/.aetheel/config.json`:
```json
{
"env": {
"vars": {
"DISCORD_BOT_TOKEN": "your-discord-token"
}
},
"discord": {
"enabled": true,
"bot_token": "${DISCORD_BOT_TOKEN}",
"listen_channels": ["1234567890123456"]
}
}
```
```bash
python main.py --discord
```
### Multi-Channel (Slack + Discord + Telegram)
`~/.aetheel/config.json`:
```json
{
"env": {
"vars": {
"SLACK_BOT_TOKEN": "xoxb-your-token",
"SLACK_APP_TOKEN": "xapp-your-token",
"DISCORD_BOT_TOKEN": "your-discord-token",
"TELEGRAM_BOT_TOKEN": "your-telegram-token"
}
},
"slack": {
"bot_token": "${SLACK_BOT_TOKEN}",
"app_token": "${SLACK_APP_TOKEN}"
},
"discord": {
"enabled": true,
"bot_token": "${DISCORD_BOT_TOKEN}"
},
"telegram": {
"enabled": true,
"bot_token": "${TELEGRAM_BOT_TOKEN}"
}
}
```
```bash


@@ -0,0 +1,326 @@
# Discord Advanced Features
All Discord features are config-driven via `~/.aetheel/config.json` under the `discord` key. No code changes needed.
---
## Default Config
```json
{
"discord": {
"enabled": false,
"listen_channels": [],
"reply_to_mode": "first",
"history_enabled": true,
"history_limit": 20,
"channel_overrides": {},
"ack_reaction": "👀",
"typing_indicator": true,
"reaction_mode": "own",
"exec_approvals": false,
"exec_approval_tools": ["Bash", "Write", "Edit"],
"slash_commands": true,
"components_enabled": true
}
}
```
---
## Reply Threading
Controls whether the bot replies to the user's message using Discord's native reply feature (the quoted message above the response).
```json
"reply_to_mode": "first"
```
| Value | Behavior |
|-------|----------|
| `"off"` | Plain messages, no reply reference |
| `"first"` | First chunk of the response replies to the user's message |
| `"all"` | Every chunk replies to the user's message |
If the original message gets deleted before the bot responds, it falls back to a plain message automatically.
---
## Channel History Context
Injects recent channel messages into the AI's system prompt so it has conversational context beyond the current message.
```json
"history_enabled": true,
"history_limit": 20
```
- `history_enabled` — global toggle, default `true`
- `history_limit` — number of recent messages to fetch, default `20`
- History is only fetched for guild channels, not DMs (DMs already have session continuity)
- Messages are formatted as `[username]: content` and injected under a "Recent Channel History" section in the system prompt
- The bot's own messages appear as `[assistant]`
### Per-Channel Overrides
You can enable, disable, or change the limit per channel:
```json
"channel_overrides": {
"1234567890": {
"history_enabled": true,
"history_limit": 50
},
"9876543210": {
"history_enabled": false
}
}
```
Keys are Discord channel IDs (strings). Any field you omit falls back to the global default.
Use cases:
- Disable history in a high-traffic channel to save context window
- Increase the limit in a project channel where long context matters
- Disable entirely for a channel where privacy is a concern
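The fallback behavior ("any field you omit falls back to the global default") can be sketched as a two-level lookup. This is an illustrative sketch, not the adapter's actual code:

```python
def effective_history(channel_id: str, cfg: dict) -> tuple[bool, int]:
    """Resolve history settings for a channel.

    Per-channel override fields win; anything omitted falls back to
    the global discord config, then to the documented defaults.
    """
    override = cfg.get("channel_overrides", {}).get(channel_id, {})
    enabled = override.get("history_enabled", cfg.get("history_enabled", True))
    limit = override.get("history_limit", cfg.get("history_limit", 20))
    return enabled, limit

cfg = {
    "history_enabled": True,
    "history_limit": 20,
    "channel_overrides": {
        "1234567890": {"history_limit": 50},
        "9876543210": {"history_enabled": False},
    },
}
print(effective_history("1234567890", cfg))  # (True, 50)  limit overridden
print(effective_history("9876543210", cfg))  # (False, 20) disabled, limit inherited
print(effective_history("5555555555", cfg))  # (True, 20)  all global defaults
```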
---
## Ack Reactions
Adds a reaction emoji to the user's message while the bot is processing, then removes it when the response is sent. Gives immediate visual feedback that the bot received the message.
```json
"ack_reaction": "👀"
```
- Set to any valid emoji: `"👀"`, `"⏳"`, `"🤔"`, etc.
- Set to `""` (empty string) to disable
- The reaction is removed automatically after the response is sent
- If the bot lacks Add Reactions permission in a channel, it silently skips
---
## Typing Indicator
Shows the "Aetheel is typing..." indicator in the channel while the AI processes the message.
```json
"typing_indicator": true
```
- `true` — typing indicator shown during processing (default)
- `false` — no typing indicator
The indicator stays active for the entire duration of the AI call. Combined with ack reactions, users get two layers of feedback: the reaction appears instantly, and the typing indicator persists until the response arrives.
---
## Reaction Handling
Controls whether the bot processes emoji reactions as messages to the AI.
```json
"reaction_mode": "own"
```
| Value | Behavior |
|-------|----------|
| `"off"` | Reactions are ignored entirely |
| `"own"` | Only reactions on the bot's own messages are processed |
| `"all"` | Reactions on any message in the channel are processed |
When a reaction is processed, it's sent to the AI as:
```
[Reaction: 👍 on message: <original message text>]
```
The AI can then respond contextually — for example, a 👎 on a suggestion could prompt the bot to offer alternatives.
Bot reactions and reactions from other bots are always ignored.
---
## Slash Commands
Registers native Discord slash commands that appear in the `/` menu.
```json
"slash_commands": true
```
### Available Commands
| Command | Description |
|---------|-------------|
| `/ask <message>` | Ask Aetheel a question. Shows "thinking..." while processing. |
| `/status` | Check bot status (same as typing `status` in chat) |
| `/help` | Show help (same as typing `help` in chat) |
Commands are synced with Discord on bot startup. First sync can take up to an hour to propagate globally — guild-level commands appear faster.
Set to `false` to disable slash command registration entirely.
### Bot Permissions
For slash commands to work, the bot must be invited with the `applications.commands` OAuth2 scope in addition to `bot`. If you originally invited without it, re-invite using:
OAuth2 → URL Generator → Scopes: `bot`, `applications.commands`
---
## Interactive Components
Enables the bot to send messages with buttons and select menus.
```json
"components_enabled": true
```
Components are used internally by:
- Exec approval prompts (approve/deny buttons)
- Any future interactive features
The adapter exposes `send_components_message()` for programmatic use:
```python
adapter.send_components_message(
channel_id="123456789",
text="Choose an option:",
buttons=[
{"label": "Option A", "style": "primary", "custom_id": "opt_a"},
{"label": "Option B", "style": "secondary", "custom_id": "opt_b"},
{"label": "Delete", "style": "danger", "custom_id": "delete"},
],
select_options=[
{"label": "Python", "value": "python", "description": "Snake language"},
{"label": "TypeScript", "value": "ts", "description": "JS but typed"},
],
callback=my_callback_fn,
)
```
Button styles: `primary` (blurple), `secondary` (gray), `success` (green), `danger` (red).
Set to `false` to disable — approval prompts and interactive messages fall back to plain text.
---
## Exec Approvals
Adds a human-in-the-loop confirmation step for dangerous AI tool use. When the AI tries to use a gated tool, a button prompt appears in the channel asking the user to approve or deny.
```json
"exec_approvals": false,
"exec_approval_tools": ["Bash", "Write", "Edit"]
```
- `exec_approvals` — master toggle, default `false`
- `exec_approval_tools` — list of tool names that require approval
### How It Works
1. AI decides to use a gated tool (e.g. `Bash`)
2. Bot sends an embed with approve/deny buttons:
```
⚠️ Exec Approval Required
Tool: Bash
Action: <description of what the AI wants to do>
[✅ Approve] [❌ Deny]
```
3. Only the user who sent the original message can click the buttons
4. If approved, the tool executes normally
5. If denied or timed out (2 minutes), the action is blocked
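The gate-then-wait flow above can be sketched with an `asyncio.Future` and a timeout. This is a simplified sketch under stated assumptions — the real adapter wires the future to Discord button callbacks, and the function names here are illustrative:

```python
import asyncio

async def request_approval(tool: str, gated: set[str], timeout: float = 120.0) -> bool:
    """Gate a tool call on human approval; deny on timeout."""
    if tool not in gated:
        return True  # tools not in the list are auto-approved
    loop = asyncio.get_running_loop()
    decision: asyncio.Future[bool] = loop.create_future()
    # In the real adapter, the Approve/Deny button callback would call
    # decision.set_result(...). Here we simulate an approval arriving.
    loop.call_later(0.01, decision.set_result, True)
    try:
        return await asyncio.wait_for(decision, timeout)
    except asyncio.TimeoutError:
        return False  # step 5: timed out -> action blocked

async def main() -> None:
    gated = {"Bash", "Write", "Edit"}
    print(await request_approval("Read", gated))  # True (not gated)
    print(await request_approval("Bash", gated))  # True (simulated approve)

asyncio.run(main())
```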
### Customizing Gated Tools
Add or remove tools from the approval list:
```json
"exec_approval_tools": ["Bash", "Write", "Edit", "WebFetch"]
```
Tools not in this list are auto-approved. Set the list to `[]` to approve everything (while keeping the feature enabled for future use).
---
## Listen Channels
Channels where the bot responds to all messages without requiring an @mention.
```json
"listen_channels": ["1234567890", "9876543210"]
```
In all other guild channels, the bot only responds when @mentioned. DMs always respond to all messages regardless of this setting.
You can also set this via environment variable:
```bash
DISCORD_LISTEN_CHANNELS=1234567890,9876543210
```
---
## Required Bot Permissions
For all features to work, invite the bot with these permissions:
| Permission | Required For |
|------------|-------------|
| Send Messages | Responding to users |
| Read Message History | History context injection |
| View Channels | Seeing channels |
| Add Reactions | Ack reactions |
| Use External Emojis | Custom ack reaction emojis |
| Embed Links | Exec approval prompts |
OAuth2 scopes: `bot`, `applications.commands`
Privileged intents (in Developer Portal → Bot):
- Message Content Intent (required)
- Server Members Intent (recommended)
---
## Example Configs
### Minimal (just the basics)
```json
{
"discord": {
"enabled": true
}
}
```
Uses all defaults: reply threading on first message, history on, ack 👀, typing on, reactions on own messages, slash commands on.
### Privacy-focused
```json
{
"discord": {
"enabled": true,
"history_enabled": false,
"ack_reaction": "",
"reaction_mode": "off",
"slash_commands": false
}
}
```
### Full control with approvals
```json
{
"discord": {
"enabled": true,
"reply_to_mode": "all",
"history_limit": 50,
"ack_reaction": "⏳",
"exec_approvals": true,
"exec_approval_tools": ["Bash", "Write", "Edit", "WebFetch"],
"channel_overrides": {
"123456789": { "history_limit": 100 },
"987654321": { "history_enabled": false }
}
}
}
```


@@ -108,12 +108,22 @@ The bot should now appear in your server's member list (offline until you start
## Step 5: Configure Aetheel
### Option A: Using config.json (recommended)
Edit `~/.aetheel/config.json` and add your token to the `env.vars` block:
```json
{
"env": {
"vars": {
"DISCORD_BOT_TOKEN": "your-discord-bot-token-here"
}
},
"discord": {
"enabled": true,
"bot_token": "${DISCORD_BOT_TOKEN}"
}
}
```
### Option B: Export environment variable
@@ -183,7 +193,7 @@ uv run python main.py --discord --log DEBUG
**Problem:** `DISCORD_BOT_TOKEN` is not set or empty.
**Fix:**
1. Check your `config.json` has the token in `env.vars`
2. Make sure there are no extra spaces or quotes
3. Verify the token is from the Bot page, not the application client secret
@@ -231,7 +241,7 @@ uv run python main.py --discord --log DEBUG
**Fix:**
1. Start Aetheel with `--discord` flag
2. Check the console for connection errors
3. Verify the token in `config.json` matches the one in the Developer Portal
### ❌ "Missing Permissions" when sending messages
@@ -291,7 +301,7 @@ uv run python main.py --discord --log DEBUG
| `adapters/discord_adapter.py` | Core Discord adapter (Gateway, send/receive) |
| `adapters/base.py` | Abstract base class all adapters implement |
| `main.py` | Entry point — `--discord` flag enables this adapter |
| `~/.aetheel/config.json` | Your Discord token (in `env.vars` block) |
### Comparison with Other Adapters


@@ -44,6 +44,31 @@ python main.py --sdk
python main.py --model anthropic/claude-sonnet-4-20250514
```
### OpenCode Advanced Features
Aetheel exposes several OpenCode CLI features via chat commands and config:
| Feature | Chat Command | Config Key |
|---------|-------------|------------|
| Agent selection | `agents` (list), config to set | `runtime.agent` |
| Attach to server | — | `runtime.attach` |
| Model discovery | `models`, `models <provider>` | — |
| Usage stats | `stats`, `stats <days>` | — |
| File attachments | Passed from chat adapters | — |
| Session forking | Internal (subagent branching) | — |
| Session titles | Auto-set from first message | — |
Setting `runtime.attach` to a running `opencode serve` URL (e.g. `"http://localhost:4096"`) makes CLI mode attach to that server instead of spawning a fresh process per request. This avoids MCP server cold boot times and is significantly faster.
```json
{
"runtime": {
"agent": "researcher",
"attach": "http://localhost:4096"
}
}
```
### Claude Code
Uses the [Claude Code](https://docs.anthropic.com/en/docs/claude-code) CLI with native `--system-prompt` support.
@@ -140,7 +165,7 @@ Aetheel connects to messaging platforms via adapters. Each adapter converts plat
### Slack (default)
Requires `SLACK_BOT_TOKEN` and `SLACK_APP_TOKEN` in `config.json` `env.vars`. Starts automatically when tokens are present.
```bash
python main.py
@@ -149,14 +174,14 @@ python main.py
### Telegram
```bash
# Set TELEGRAM_BOT_TOKEN in config.json env.vars first
python main.py --telegram
```
### Discord
```bash
# Set DISCORD_BOT_TOKEN in config.json env.vars first
python main.py --discord
```
@@ -397,6 +422,24 @@ The heartbeat system runs periodic tasks automatically by parsing a user-editabl
Set `enabled` to `false` to disable heartbeat entirely. If `HEARTBEAT.md` doesn't exist, a default one is created automatically.
### Model routing for heartbeat
Heartbeat tasks can use a cheaper/local model to save costs. Configure in the `models` section:
```json
{
"models": {
"heartbeat": {
"engine": "opencode",
"model": "ollama/llama3.2",
"provider": "ollama"
}
}
}
```
When set, heartbeat jobs use a dedicated runtime instance with the specified model instead of the global default. Regular chat messages are unaffected.
### How to test
```bash
@@ -737,26 +780,39 @@ python cli.py doctor
## 14. Configuration
All configuration lives in `~/.aetheel/config.json`, including secrets (in the `env.vars` block).
### Config hierarchy (highest priority wins)
1. CLI arguments (`--model`, `--claude`, etc.)
2. Process environment variables
3. `env.vars` block in config.json
4. `${VAR}` substitution in config values
5. `~/.aetheel/config.json` static values
6. Dataclass defaults
### Full config.json example
```json
{
"env": {
"vars": {
"SLACK_BOT_TOKEN": "xoxb-...",
"SLACK_APP_TOKEN": "xapp-...",
"TELEGRAM_BOT_TOKEN": "",
"DISCORD_BOT_TOKEN": "",
"ANTHROPIC_API_KEY": ""
}
},
"log_level": "INFO",
"runtime": {
"mode": "cli",
"model": null,
"timeout_seconds": 120,
"server_url": "http://localhost:4096",
"format": "json",
"agent": null,
"attach": null
},
"claude": {
"model": null,
@@ -770,7 +826,18 @@ All configuration lives in `~/.aetheel/config.json`. Secrets (tokens) stay in `.
"TeamCreate", "TeamDelete", "SendMessage"
]
},
"slack": {
"enabled": true,
"bot_token": "${SLACK_BOT_TOKEN}",
"app_token": "${SLACK_APP_TOKEN}"
},
"telegram": {
"enabled": false,
"bot_token": "${TELEGRAM_BOT_TOKEN}"
},
"discord": {
"enabled": false,
"bot_token": "${DISCORD_BOT_TOKEN}",
"listen_channels": []
},
"memory": {
@@ -794,6 +861,11 @@ All configuration lives in `~/.aetheel/config.json`. Secrets (tokens) stay in `.
"mcp": {
"servers": {}
},
"models": {
"heartbeat": null,
"subagent": null,
"default": null
},
"hooks": {
"enabled": true
},
@@ -806,20 +878,11 @@ All configuration lives in `~/.aetheel/config.json`. Secrets (tokens) stay in `.
}
```
### Process environment variable overrides
Process env vars still override everything. Useful for CI, Docker, or systemd:
```bash
# Runtime overrides
OPENCODE_MODEL=anthropic/claude-sonnet-4-20250514
CLAUDE_MODEL=claude-sonnet-4-20250514
LOG_LEVEL=DEBUG


@@ -298,12 +298,23 @@ Each imported skill gets its own folder under `~/.aetheel/workspace/skills/<name
Currently usage stats reset on restart. Persist to SQLite so `usage` command shows lifetime stats, daily/weekly/monthly breakdowns, and cost trends.
### Multi-Model Routing — ✅ Done
Per-task model routing is implemented via the `models` config section. Different task types (heartbeat, subagent, default chat) can each use a different model, provider, and engine:
```json
{
"models": {
"heartbeat": { "model": "ollama/llama3.2", "provider": "ollama" },
"subagent": { "model": "minimax/minimax-m1", "provider": "minimax" }
}
}
```
Future extensions:
- Auto-routing based on message complexity (short → cheap model, complex → powerful model)
- Per-channel model overrides
- Cost-aware routing (switch to cheaper model when budget threshold is hit)
### Conversation Branching


@@ -95,17 +95,24 @@ This will guide you through connecting to a provider. Options include:
### Using Environment Variables
Alternatively, set provider API keys in the `env.vars` block of your `config.json`:
```json
{
"env": {
"vars": {
"ANTHROPIC_API_KEY": "sk-ant-...",
"OPENAI_API_KEY": "sk-...",
"GEMINI_API_KEY": "AI..."
}
}
}
```
Or as process environment variables:
```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```
### Verify models are available
@@ -172,41 +179,36 @@ pip install opencode-ai
## Step 4: Configure Aetheel
Edit your `~/.aetheel/config.json`:
```json
{
"env": {
"vars": {
"SLACK_BOT_TOKEN": "xoxb-...",
"SLACK_APP_TOKEN": "xapp-..."
}
},
"slack": {
"bot_token": "${SLACK_BOT_TOKEN}",
"app_token": "${SLACK_APP_TOKEN}"
},
"runtime": {
"mode": "cli"
}
}
```
### Model Selection
You can specify a model in config.json or via process env:
```json
{
"runtime": {
"model": "anthropic/claude-sonnet-4-20250514"
}
}
```
Or override at launch:
@@ -367,9 +369,11 @@ opencode --version
### ❌ "Request timed out"
**Fix:** Increase the timeout in config.json:
```json
{
"runtime": { "timeout_seconds": 300 }
}
```
Or simplify your prompt — complex prompts take longer.
@@ -378,8 +382,8 @@ Or simplify your prompt — complex prompts take longer.
**Fix:**
1. Make sure `opencode serve` is running: `opencode serve --port 4096`
2. Check the URL in config.json: `runtime.server_url`
3. If using auth, set `OPENCODE_SERVER_PASSWORD` in `env.vars` and when starting the server
### ❌ "opencode-ai SDK not installed"
@@ -388,9 +392,11 @@ Or simplify your prompt — complex prompts take longer.
pip install opencode-ai
```
If you don't want to install the SDK, switch to CLI mode in config.json:
```json
{
"runtime": { "mode": "cli" }
}
```
### ❌ Responses are cut off or garbled

---
### 12. Webhook Token Stored in `config.json`
The `webhooks.token` field in `config.py` is read from and written to `config.json`, which is a plaintext file. Consider using the `env.vars` block with a `${VAR}` reference instead of storing the token directly:
```json
{
"env": { "vars": { "WEBHOOK_TOKEN": "your-secret" } },
"webhooks": { "token": "${WEBHOOK_TOKEN}" }
}
```
### 13. No HTTPS on Any HTTP Endpoint
The most impactful changes to make first:
3. **Add input schema validation** on webhook POST bodies
4. **Validate cron expressions** more strictly before passing to APScheduler
5. **Add rate limiting** to webhook and WebSocket endpoints (e.g., aiohttp middleware)
6. **Move `webhooks.token` to `.env` only**, remove from `config.json`
6. **Use `${VAR}` references for `webhooks.token`** in config.json instead of storing the raw value
7. **Add WebSocket origin checking or token auth** to WebChat
8. **Set explicit `client_max_size`** on aiohttp apps
9. **Pin dependency upper bounds** in `pyproject.toml`

---
If you prefer to set things up manually, follow the steps below.
4. [Install an AI Runtime](#4-install-an-ai-runtime)
5. [Clone the Repository](#5-clone-the-repository)
6. [Install Python Dependencies](#6-install-python-dependencies)
7. [Configure Secrets (config.json env.vars)](#7-configure-secrets)
8. [Configure Settings (config.json)](#8-configure-settings)
9. [Set Up Messaging Channels](#9-set-up-messaging-channels)
10. [Run the Test Suite](#10-run-the-test-suite)
## 7. Configure Secrets
Secrets (tokens, API keys) go in the `env.vars` block inside `~/.aetheel/config.json`. Generate the default config first, then edit it:
```bash
# Create the config directory and default config
mkdir -p ~/.aetheel/workspace
uv run python -c "from config import save_default_config; save_default_config()"

# Edit config
nano ~/.aetheel/config.json
```
Fill in the `env.vars` block with your tokens:
```json
{
"env": {
"vars": {
"SLACK_BOT_TOKEN": "xoxb-your-bot-token",
"SLACK_APP_TOKEN": "xapp-your-app-token",
"TELEGRAM_BOT_TOKEN": "your-telegram-token",
"DISCORD_BOT_TOKEN": "your-discord-token",
"ANTHROPIC_API_KEY": "sk-ant-your-key"
}
}
}
```
The default config already has `${VAR}` references in the adapter sections (e.g. `"bot_token": "${SLACK_BOT_TOKEN}"`), so tokens defined in `env.vars` are automatically resolved.
Alternatively, you can set tokens as process environment variables — they override everything:
```bash
export SLACK_BOT_TOKEN="xoxb-your-bot-token"
export ANTHROPIC_API_KEY="sk-ant-your-key"
```
See [docs/slack-setup.md](slack-setup.md) and [docs/discord-setup.md](discord-setup.md) for how to get these tokens.
## 8. Configure Settings
Non-secret settings also live in `~/.aetheel/config.json` (same file as secrets). If you already created it in step 7, just edit the relevant sections:
```bash
nano ~/.aetheel/config.json
```
1. Message @BotFather on Telegram
2. `/newbot` → follow prompts → copy token
3. Set `TELEGRAM_BOT_TOKEN` in `config.json` under `env.vars`
---
Restart=on-failure
RestartSec=10
Environment=PATH=/home/your-username/.local/bin:/usr/local/bin:/usr/bin:/bin
# Secrets are loaded from ~/.aetheel/config.json env.vars block.
# If you need process-level env overrides, add them here:
# Environment=ANTHROPIC_API_KEY=sk-ant-...
# Optional: add more adapters
# ExecStart=/home/your-username/.local/bin/uv run python main.py --discord --webchat
### "No channel adapters initialized!"
No messaging tokens are set. Check that your `~/.aetheel/config.json` has tokens in the `env.vars` block:
- `SLACK_BOT_TOKEN` + `SLACK_APP_TOKEN`
- `TELEGRAM_BOT_TOKEN` (with `--telegram` flag)
- `DISCORD_BOT_TOKEN` (with `--discord` flag)
Common issues:
- Wrong `WorkingDirectory` path
- Wrong `User`
- uv not in PATH (check `Environment=PATH=...`)
### Run diagnostics

---
## Step 7: Configure Aetheel
### Option A: Using config.json (recommended)
Edit `~/.aetheel/config.json` and add your tokens to the `env.vars` block:
```json
{
"env": {
"vars": {
"SLACK_BOT_TOKEN": "xoxb-your-actual-bot-token",
"SLACK_APP_TOKEN": "xapp-your-actual-app-token"
}
},
"slack": {
"enabled": true,
"bot_token": "${SLACK_BOT_TOKEN}",
"app_token": "${SLACK_APP_TOKEN}"
}
}
```
### Option B: Export environment variables
**Problem:** `SLACK_BOT_TOKEN` is not set or empty.
**Fix:**
1. Check your `config.json` has the token in `env.vars`
2. Make sure there are no extra spaces or quotes around the token
3. Verify the token starts with `xoxb-`
**Fix:**
1. Go to your Slack app → **Basic Information** → **App-Level Tokens**
2. If no token exists, generate one with `connections:write` scope
3. Add it to `config.json` under `env.vars`
### ❌ "not_authed" or "invalid_auth"
| `adapters/slack_adapter.py` | Core Slack adapter (Socket Mode, send/receive) |
| `main.py` | Entry point with echo and smart handlers |
| `test_slack.py` | Integration test suite |
| `~/.aetheel/config.json` | Your Slack tokens (in `env.vars` block) |
| `requirements.txt` | Python dependencies |
### Comparison with OpenClaw
| **Threading** | `thread_ts` for conversation isolation | `thread_ts` for conversation isolation |
| **DM Handling** | `conversations.open` for user DMs | `conversations_open` for user DMs |
| **Text Limit** | 4000 chars (chunked) | 4000 chars (chunked) |
| **Config** | JSON5 config file | `config.json` with `env.vars` + `${VAR}` |
| **Accounts** | Multi-account support | Single account (MVP) |
---
| **Channels** | Slack only | 9 channels | WhatsApp only | 15+ channels | 5 channels |
| **LLM Runtime** | OpenCode / Claude Code (subprocess) | LiteLLM (multi-provider) | Claude Agent SDK | Pi Agent (custom RPC) | Go-native agent |
| **Memory** | Hybrid (vector + BM25) | Simple file-based | Per-group CLAUDE.md | Workspace files | MEMORY.md + sessions |
| **Config** | `config.json` with `env.vars` + `${VAR}` | `config.json` | Code changes (no config) | JSON5 config | `config.json` |
| **Skills** | ❌ None | ✅ Bundled + custom | ✅ Code skills (transform) | ✅ Bundled + managed + workspace | ✅ Custom skills |
| **Scheduled Tasks** | ⚠️ Action tags (remind only) | ✅ Full cron system | ✅ Task scheduler | ✅ Cron + webhooks + Gmail | ✅ Cron + heartbeat |
| **Security** | ❌ No sandbox | ⚠️ Workspace restriction | ✅ Container isolation | ✅ Docker sandbox + pairing | ✅ Workspace sandbox |
### 🟡 Important Gaps (Medium Priority)
#### 5. Config File System (JSON with env.vars — ✅ Done)
**Current:** `config.json` with `env.vars` block and `${VAR}` substitution for secrets
**Others:** JSON/JSON5 config files
Aetheel now uses a single config.json with an `env.vars` block for secrets and `${VAR}` references, matching openclaw's approach.
> **Status:** ✅ Implemented — no separate `.env` file needed.
#### 6. Web Search Tool
**Current:** No web search
Based on the analysis, here's a suggested implementation order:
### Phase 1: Foundation (Essentials)
1. **Config system**Switch from `.env` to JSON config
1. **Config system**✅ Done: `config.json` with `env.vars` + `${VAR}` substitution
2. **Skills system**`skills/` directory with `SKILL.md` loading
3. **Tool system** — Core tools (shell, file, web search) with sandbox
4. **Security sandbox** — Workspace-restricted tool execution

---
# Aetheel vs OpenClaw — Discord Gap Analysis
OpenClaw's Discord integration is a massive, enterprise-grade system compared to Aetheel's. Here are the key differences:
**What Aetheel has and does fine:**

- Basic gateway connection via discord.py
- DM + @mention handling
- Message chunking (2000 char limit)
- Listen channels (respond without @mention in specific channels)
- Background thread support
- Token from env var
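The chunking item above can be sketched as a pure helper (illustrative, not Aetheel's actual code):

```python
DISCORD_LIMIT = 2000  # Discord's per-message character cap

def chunk_message(text: str, limit: int = DISCORD_LIMIT) -> list[str]:
    """Split text into Discord-sized chunks, preferring newline boundaries."""
    chunks = []
    while len(text) > limit:
        cut = text.rfind("\n", 0, limit)  # break at the last newline that fits
        if cut <= 0:
            cut = limit                   # no newline: hard cut at the limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```

Preferring newline boundaries keeps code blocks and paragraphs intact more often than a hard cut at exactly 2000 characters.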
**What OpenClaw has that Aetheel is missing** *(items marked `[-]` have since been implemented in Aetheel)*:

- **Multi-account support** — OpenClaw can run multiple Discord bot accounts simultaneously, each with its own token, config, and identity. Aetheel supports exactly one bot token.
- **DM access policies** — OpenClaw has pairing, allowlist, open, and disabled DM policies. Pairing mode requires users to get a code approved before they can DM the bot. Aetheel lets anyone DM the bot with zero access control.
- **Guild access policies** — OpenClaw has open, allowlist, and disabled guild policies with per-guild and per-channel allowlists. You can restrict which servers, which channels within a server, and which users/roles can trigger the bot. Aetheel has no guild-level access control at all.
- **Role-based routing** — OpenClaw can route Discord users to different AI agents based on their Discord roles. Aetheel has no concept of this.
- [-] **Interactive components (v2)** — OpenClaw supports Discord buttons, select menus, modal forms, and media galleries. The AI can send rich interactive messages. Aetheel sends plain text only.
- [-] **Native slash commands** — OpenClaw registers and handles Discord slash commands natively. Aetheel has no slash command support.
- [-] **Reply threading** — OpenClaw supports `replyToMode` (off, first, all) and explicit `[[reply_to:<id>]]` tags so the bot can reply to specific messages. Aetheel doesn't use Discord's reply feature at all.
- [-] **History context** — OpenClaw injects configurable message history (`historyLimit`, default 20) from the Discord channel into the AI context. Aetheel doesn't read channel history.
- [-] **Reaction handling** — OpenClaw can receive and send reactions, with configurable notification modes (off, own, all, allowlist). Aetheel ignores reactions entirely.
- [-] **Ack reactions** — OpenClaw sends an acknowledgement emoji (e.g. 👀) while processing a message, so users know the bot is working. Aetheel gives no processing feedback.
- [-] **Typing indicators** — OpenClaw shows typing indicators while the agent processes. Aetheel doesn't.
- **Media/file handling** — OpenClaw can send and receive files, images, and voice messages (with ffmpeg conversion). Aetheel ignores attachments.
- **Voice messages** — OpenClaw can send voice messages with auto-generated waveforms. Aetheel has no voice support.
- [-] **Exec approvals** — OpenClaw can post button-based approval prompts in Discord for dangerous operations (like shell commands). Aetheel has no human-in-the-loop approval flow.
- **Polls** — OpenClaw can create Discord polls. Aetheel can't.
- **Moderation tools** — OpenClaw exposes timeout, kick, ban, role management as AI-accessible actions with configurable gates. Aetheel has none.
- **Channel management** — OpenClaw can create, edit, delete, and move channels. Aetheel can't.
- **PluralKit support** — OpenClaw resolves proxied messages from PluralKit systems. Niche but shows the depth.
- **Presence/status** — OpenClaw can set the bot's online status, activity, and streaming status. Aetheel's bot just shows as "online" with no custom status.
- **Gateway proxy** — OpenClaw supports routing Discord traffic through an HTTP proxy. Aetheel doesn't.
- **Retry/resilience** — OpenClaw has configurable retry policies for Discord API calls. Aetheel has no retry logic.
- **Config writes from chat** — OpenClaw lets users modify bot config via Discord commands. Aetheel's `/config set` works but isn't Discord-specific.
- **Session isolation model** — OpenClaw has sophisticated session keys: DMs share a main session by default, guild channels get isolated sessions (`agent:<agentId>:discord:channel:<channelId>`), slash commands get their own sessions. Aetheel uses `channel_id` as the conversation ID for everything, which is simpler but less flexible.
**Bottom line:** Aetheel's Discord adapter is a functional but minimal "receive messages, send text back" integration. OpenClaw's is a full Discord platform with interactive UI, access control, moderation, media, threading, multi-account, and agent routing. The biggest practical gaps for Aetheel are probably: access control (DM/guild policies), typing/ack indicators, reply threading, history context injection, and interactive components.

---
# Aetheel vs NanoClaw — Feature Gap Analysis
Deep comparison of Aetheel (Python, multi-channel AI assistant) and NanoClaw (TypeScript, container-isolated personal AI assistant). Focus: what NanoClaw has that Aetheel is missing.
---
## Architecture Differences
| Aspect | Aetheel | NanoClaw |
|--------|---------|----------|
| Language | Python | TypeScript |
| Agent execution | In-process (shared memory) | Container-isolated (Apple Container / Docker) |
| Identity model | Shared across all channels (SOUL.md, USER.md, MEMORY.md) | Per-group (each group has its own CLAUDE.md) |
| Security model | Application-level checks | OS-level container isolation |
| Config approach | Config-driven (`config.json` with `env.vars` + `${VAR}`) | Code-first (Claude modifies your fork) |
| Philosophy | Feature-rich framework | Minimal, understandable in 8 minutes |
---
## Features Aetheel Is Missing
### 1. Container Isolation (Critical)
NanoClaw runs every agent invocation inside a Linux container (Apple Container on macOS, Docker on Linux). Each container:
- Gets only explicitly mounted directories
- Runs as non-root (uid 1000)
- Is ephemeral (`--rm` flag, fresh per invocation)
- Cannot access other groups' files or sessions
- Cannot access host filesystem beyond mounts
Aetheel runs everything in-process with no sandboxing. The security audit already flagged path traversal, arbitrary code execution via hooks, and unvalidated action tags as critical issues.
**What to build:**
- Docker-based agent execution (spawn a container per AI request)
- Mount only the relevant group's workspace directory
- Pass secrets via stdin, not mounted files
- Add a `/convert-to-docker` skill or built-in Docker mode
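The first two items above can be sketched as follows, assuming Docker is installed and a hypothetical `aetheel-agent` image (the command shape is illustrative, not NanoClaw's exact invocation):

```python
import subprocess

def build_container_cmd(group: str, workspace: str) -> list[str]:
    """Build a docker invocation for one ephemeral, non-root, mount-limited agent run."""
    return [
        "docker", "run", "--rm",          # ephemeral: fresh container per invocation
        "--user", "1000:1000",            # run as non-root (uid 1000)
        "-v", f"{workspace}:/workspace",  # mount only this group's workspace
        "-i",                             # prompt and secrets arrive via stdin, not mounted files
        "aetheel-agent",                  # hypothetical image name
        "--group", group,
    ]

def run_agent_in_container(group: str, prompt: str, workspace: str) -> str:
    """Spawn the container, feed the prompt on stdin, return its stdout."""
    proc = subprocess.run(
        build_container_cmd(group, workspace),
        input=prompt.encode(), capture_output=True, timeout=300,
    )
    return proc.stdout.decode()
```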
---
### 2. Per-Group Isolation
NanoClaw gives each chat group its own:
- Filesystem folder (`groups/{name}/`)
- Memory file (`CLAUDE.md` per group)
- Session history (isolated `.claude/` directory)
- IPC namespace (prevents cross-group privilege escalation)
- Container mounts (only own folder + read-only global)
Aetheel shares SOUL.md, USER.md, and MEMORY.md across all channels and conversations. A Slack channel, Discord server, and Telegram group all see the same memory and identity.
**What to build:**
- Per-channel or per-group workspace directories
- Isolated session storage per group
- A `global/` shared memory that all groups can read but only the main channel can write
- Group registration system (like NanoClaw's `registerGroup()`)
---
### 3. Working Agent Teams / Swarms
NanoClaw has working agent teams today via Claude Code's experimental `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1`:
- Lead agent creates teammates using Claude's native `TeamCreate` / `SendMessage` tools
- Each teammate runs in its own container
- On Telegram, each agent gets a dedicated bot identity (pool of pre-created bots renamed dynamically via `setMyName`)
- The lead agent coordinates but doesn't relay every message — users see teammate messages directly
- `<internal>` tags let agents communicate without spamming the user
Aetheel has the tools in the allowed list (`TeamCreate`, `TeamDelete`, `SendMessage`) but no actual orchestration, no per-agent identity, and no way for teammates to appear as separate entities in chat.
**What to build:**
- Enable `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` when using Claude runtime
- Bot pool for Telegram/Discord (multiple bot tokens, one per agent role)
- IPC routing that respects `sender` field to route messages through the right bot
- Per-agent CLAUDE.md / SOUL.md files
- `<internal>` tag stripping in outbound messages
---
### 4. Mount Security / Allowlist
NanoClaw has a tamper-proof mount allowlist at `~/.config/nanoclaw/mount-allowlist.json` (outside the project root, never mounted into containers):
- Defines which host directories can be mounted
- Default blocked patterns: `.ssh`, `.gnupg`, `.aws`, `.env`, `private_key`, etc.
- Symlink resolution before validation (prevents traversal)
- `nonMainReadOnly` forces read-only for non-main groups
- Per-root `allowReadWrite` control
Aetheel has no filesystem access control. The AI can read/write anywhere the process has permissions.
**What to build:**
- External allowlist config (outside workspace, not modifiable by the AI)
- Blocked path patterns for sensitive directories
- Symlink resolution and path validation
- Read-only enforcement for non-primary channels
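A minimal validation sketch under these assumptions (exact-segment pattern matching only; a real allowlist would also need glob/substring rules and per-root read-only flags):

```python
from pathlib import Path

BLOCKED_PATTERNS = {".ssh", ".gnupg", ".aws", ".env", "private_key"}

def validate_mount(requested: str, allowed_roots: list[str]) -> Path:
    """Resolve symlinks first, then check the real path against the allowlist."""
    real = Path(requested).resolve()  # symlink resolution before validation
    if any(part in BLOCKED_PATTERNS for part in real.parts):
        raise PermissionError(f"blocked pattern in {real}")
    if not any(real.is_relative_to(Path(root).resolve()) for root in allowed_roots):
        raise PermissionError(f"{real} is outside every allowed root")
    return real
```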
---
### 5. IPC-Based Communication
NanoClaw uses file-based IPC for all agent-to-host communication:
- Agents write JSON files to `data/ipc/{group}/messages/` and `data/ipc/{group}/tasks/`
- Host polls IPC directories and processes files
- Per-group IPC namespaces prevent cross-group message injection
- Authorization checks: non-main groups can only send to their own chat, schedule tasks for themselves
- Error files moved to `data/ipc/errors/` for debugging
Aetheel uses in-memory action tags parsed from AI response text (`[ACTION:remind|...]`, `[ACTION:cron|...]`). No authorization, no isolation, no audit trail.
**What to build:**
- File-based or queue-based IPC for agent communication
- Per-group namespaces with authorization
- Audit trail for all IPC operations
- Error handling with failed message preservation
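The write side of such an IPC scheme might look like this (hypothetical layout mirroring NanoClaw's `data/ipc/{group}/messages/` convention):

```python
import json
import uuid
from pathlib import Path

IPC_ROOT = Path("data/ipc")

def send_ipc_message(group: str, target_chat: str, text: str, is_main: bool = False) -> Path:
    """Drop one message file into the group's own IPC namespace."""
    # Authorization: non-main groups may only send to their own chat.
    if not is_main and target_chat != group:
        raise PermissionError(f"group {group!r} may not send to {target_chat!r}")
    outbox = IPC_ROOT / group / "messages"
    outbox.mkdir(parents=True, exist_ok=True)
    path = outbox / f"{uuid.uuid4().hex}.json"  # one file per message, for an audit trail
    path.write_text(json.dumps({"chat": target_chat, "text": text}))
    return path
```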
---
### 6. Group Queue with Concurrency Control
NanoClaw has a `GroupQueue` class that manages container execution:
- Max concurrent containers limit (`MAX_CONCURRENT_CONTAINERS`, default 5)
- Per-group queuing (messages and tasks queue while container is active)
- Follow-up messages sent to active containers via IPC input files
- Idle timeout with `_close` sentinel to wind down containers
- Exponential backoff retry (5s base, max 5 retries)
- Graceful shutdown (detaches containers, doesn't kill them)
- Task priority over messages in drain order
Aetheel has a simple concurrent limit of 3 subagents but no queuing, no retry logic, no follow-up message support, and no graceful shutdown.
**What to build:**
- Proper execution queue with configurable concurrency
- Per-channel message queuing when agent is busy
- Follow-up message injection into active sessions
- Exponential backoff retry on failures
- Graceful shutdown that lets active agents finish
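The retry piece can be sketched with the same parameters NanoClaw uses (5s base, max 5 retries); the queueing and graceful-shutdown logic would sit around it:

```python
import asyncio
import random

async def run_with_backoff(task, base: float = 5.0, max_retries: int = 5):
    """Retry a failing coroutine factory with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return await task()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: surface the failure
            # 5s, 10s, 20s, ... plus a little jitter
            await asyncio.sleep(base * (2 ** attempt) + random.uniform(0, base))
```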
---
### 7. Task Context Modes
NanoClaw scheduled tasks support two context modes:
- `group` — uses the group's existing session (shared conversation history)
- `isolated` — fresh session per task run (no prior context)
Aetheel scheduled tasks always run in a fresh context with no option to share the group's conversation history.
**What to build:**
- `context_mode` field on scheduled jobs (`group` vs `isolated`)
- Session ID passthrough for `group` mode tasks
---
### 8. Task Run Logging
NanoClaw logs every task execution:
- `task_run_logs` table with: task_id, run_at, duration_ms, status, result, error
- `last_result` summary stored on the task itself
- Tasks auto-complete after `once` schedule runs
Aetheel's scheduler persists jobs but doesn't log execution history or results.
**What to build:**
- Task run log table (when it ran, how long, success/error, result summary)
- Queryable task history (`task history <id>`)
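A minimal schema sketch for such a log (column names taken from the NanoClaw description above):

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS task_run_logs (
    id          INTEGER PRIMARY KEY,
    task_id     TEXT NOT NULL,
    run_at      TEXT NOT NULL,      -- ISO timestamp of the run
    duration_ms INTEGER,
    status      TEXT CHECK (status IN ('success', 'error')),
    result      TEXT,               -- short result summary
    error       TEXT
);
"""

def log_task_run(db, task_id, run_at, duration_ms, status, result="", error=""):
    """Record one task execution so a `task history <id>` command can query it later."""
    db.execute(
        "INSERT INTO task_run_logs (task_id, run_at, duration_ms, status, result, error) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (task_id, run_at, duration_ms, status, result, error),
    )
    db.commit()
```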
---
### 9. Streaming Output with Idle Timeout
NanoClaw streams agent output in real-time:
- Container output is parsed as it arrives (sentinel markers for robust parsing)
- Results are forwarded to the user immediately via `sendMessage`
- Idle timeout (default 30 min) closes the container if no output for too long
- Prevents hanging containers from blocking the queue
Aetheel waits for the full AI response before sending anything back.
**What to build:**
- Streaming response support (send partial results as they arrive)
- Idle timeout for long-running agent sessions
- Typing indicators while agent is processing
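The streaming-with-idle-timeout loop can be sketched with `asyncio.wait_for` (illustrative; the real version would also parse sentinel markers):

```python
import asyncio

async def stream_with_idle_timeout(chunks, send, idle_timeout: float = 1800.0) -> bool:
    """Forward agent output as it arrives; give up if nothing arrives for idle_timeout seconds.

    Returns True if the stream finished normally, False on idle timeout
    (the caller should then close the container).
    """
    it = chunks.__aiter__()
    while True:
        try:
            chunk = await asyncio.wait_for(it.__anext__(), timeout=idle_timeout)
        except StopAsyncIteration:
            return True
        except asyncio.TimeoutError:
            return False
        await send(chunk)  # forward partial output to the user immediately
```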
---
### 10. Skills as Code Transformations
NanoClaw's skills are fundamentally different from Aetheel's:
- Skills are SKILL.md files that teach Claude Code how to modify the codebase
- A deterministic skills engine applies code changes (three-way merge, file additions)
- Skills have state tracking (`.nanoclaw/state.yaml`), backups, and rollback
- Examples: `/add-telegram`, `/add-discord`, `/add-gmail`, `/add-voice-transcription`, `/convert-to-docker`, `/add-parallel`
- Each skill is a complete guide: pre-flight checks, code changes, setup, verification, troubleshooting
Aetheel's skills are runtime context injections (markdown instructions added to the system prompt when trigger words match). They don't modify code.
**What to build:**
- Skills engine that can apply code transformations
- State tracking for applied skills
- Rollback support
- Template skills for common integrations
---
### 11. Voice Message Transcription
NanoClaw has a skill (`/add-voice-transcription`) that:
- Detects WhatsApp voice notes (`audioMessage.ptt === true`)
- Downloads audio via Baileys
- Transcribes using OpenAI Whisper API
- Stores transcribed content as `[Voice: <text>]` in the database
- Configurable provider, fallback message, enable/disable
Aetheel has no voice message handling.
**What to build:**
- Voice message detection per adapter (Telegram, Discord, Slack all support voice)
- Whisper API integration for transcription
- Transcribed content injection into the conversation
---
### 12. Gmail / Email Integration
NanoClaw has a skill (`/add-gmail`) with two modes:
- Tool mode: agent can read/send emails when triggered from chat
- Channel mode: emails trigger the agent, agent replies via email
- GCP OAuth setup guide
- Email polling with deduplication
- Per-thread or per-sender context isolation
Aetheel has no email integration.
**What to build:**
- Gmail MCP integration (or direct API)
- Email as a channel adapter
- OAuth credential management
---
### 13. WhatsApp Support
NanoClaw's primary channel is WhatsApp via the Baileys library:
- QR code and pairing code authentication
- Group metadata sync
- Message history storage per registered group
- Bot message filtering (prevents echo loops)
Aetheel supports Slack, Discord, Telegram, and WebChat but not WhatsApp.
**What to build:**
- WhatsApp adapter using a library like Baileys or the WhatsApp Business API
- QR code authentication flow
- Group registration and metadata sync
---
### 14. Structured Message Routing
NanoClaw has a clean channel abstraction:
- `Channel` interface: `connect()`, `sendMessage()`, `isConnected()`, `ownsJid()`, `disconnect()`, `setTyping?()`
- `findChannel()` routes outbound messages to the right channel by JID prefix (`tg:`, `dc:`, WhatsApp JIDs)
- `formatOutbound()` strips `<internal>` tags before sending
- XML-escaped message formatting for agent input
Aetheel's adapters work but lack JID-based routing, `<internal>` tag support, and typing indicators across all adapters.
**What to build:**
- JID-based message routing (prefix per channel)
- `<internal>` tag stripping for agent-to-agent communication
- Typing indicators for all adapters
- Unified channel interface with `ownsJid()` pattern
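The `findChannel()` / `formatOutbound()` pair translates to Python roughly as follows (hypothetical stubs, not Aetheel's adapter interface):

```python
import re

class PrefixChannel:
    """Minimal adapter stub: owns JIDs carrying its prefix (e.g. 'tg:', 'dc:')."""
    def __init__(self, prefix: str):
        self.prefix = prefix
    def owns_jid(self, jid: str) -> bool:
        return jid.startswith(self.prefix)

def find_channel(jid: str, channels):
    """Route an outbound message to the adapter that owns this JID."""
    for channel in channels:
        if channel.owns_jid(jid):
            return channel
    raise LookupError(f"no channel owns {jid!r}")

def format_outbound(text: str) -> str:
    """Strip <internal> agent-to-agent chatter before anything reaches the user."""
    return re.sub(r"<internal>.*?</internal>", "", text, flags=re.S).strip()
```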
---
## Priority Recommendations
### High Priority (Security + Core Gaps)
1. Container isolation for agent execution
2. Fix the 10 critical/high security issues from the security audit
3. Per-group isolation (memory, sessions, filesystem)
4. Mount security allowlist
### Medium Priority (Feature Parity)
5. Working agent teams with per-agent identity
6. Group queue with concurrency control and retry
7. Task context modes and run logging
8. Streaming output with idle timeout
9. IPC-based communication with authorization
### Lower Priority (Nice to Have)
10. Voice message transcription
11. WhatsApp adapter
12. Gmail/email integration
13. Skills as code transformations
14. Structured message routing with JID prefixes
---
## What Aetheel Has That NanoClaw Doesn't
For reference, these are Aetheel strengths to preserve:
- Dual runtime support (OpenCode + Claude Code) with live switching
- Auto-failover on rate limits
- Per-request cost tracking and usage stats
- Local vector search (hybrid: 0.7 vector + 0.3 BM25) with fastembed
- Built-in multi-channel (Slack, Discord, Telegram, WebChat, Webhooks)
- WebChat browser UI
- Heartbeat / proactive task system
- Lifecycle hooks (gateway:startup, command:reload, agent:response, etc.)
- Comprehensive CLI (`aetheel start/stop/restart/logs/doctor/config/cron/memory`)
- Config-driven setup (no code changes needed for basic customization)
- Self-modification (AI can edit its own config, skills, identity files)
- Hot reload (`/reload` command)

---
# Aetheel vs OpenCode CLI — Gap Analysis
Looking at the OpenCode CLI doc against Aetheel's opencode_runtime.py, here are the gaps:
**What Aetheel uses today:**

- `opencode run` with `--model`, `--continue`, `--session`, `--format`
- SDK mode via the `opencode serve` API (session create + chat)
- Session persistence in SQLite
- System prompt injection via XML tags (CLI) or system param (SDK)
- Rate limit detection from error text
- Live session tracking with idle timeout
**What Aetheel is missing from the OpenCode CLI** *(items marked `[-]` have since been implemented in Aetheel)*:

- [-] **`--agent` flag** — OpenCode supports custom agents (`opencode agent create/list`). Aetheel has no concept of selecting different OpenCode agents per request. This would be useful for the planned agent teams feature — you could have a "programmer" agent and a "researcher" agent defined in OpenCode.
- [-] **`--file` / `-f` flag** — OpenCode can attach files to a prompt (`opencode run -f image.png "describe this"`). Aetheel doesn't pass file attachments from chat adapters through to the runtime. Discord/Telegram/Slack all support file uploads.
- [-] **`--attach` flag** — You can run `opencode run --attach http://localhost:4096` to connect to a running server, avoiding MCP cold boot on every request. Aetheel's SDK mode connects to the server, but CLI mode spawns a fresh process each time. Using `--attach` in CLI mode would give you the speed of SDK mode without needing the Python SDK.
- [-] **`--fork` flag** — Fork a session when continuing, creating a branch. Aetheel always continues sessions linearly. Forking would be useful for "what if" scenarios or spawning subagent tasks from a shared context.
- [-] **`--title` flag** — Name sessions for easier identification. Aetheel's sessions are tracked by conversation ID but have no human-readable title.
- **`--share` flag** — Share sessions via URL. Aetheel has no session sharing.
- **`opencode session list/export/import`** — Full session management. Aetheel can list sessions internally but doesn't expose export/import or the full session lifecycle.
- [-] **`opencode stats`** — Token usage and cost statistics with `--days`, `--tools`, `--models` filters. Aetheel tracks basic usage stats in memory but doesn't query OpenCode's built-in stats.
- [-] **`opencode models`** — List available models from configured providers. Aetheel has no way to discover available models — you have to know the model name.
- **`opencode auth` management** — Login/logout/list for providers. Aetheel relies on env vars for auth and has no way to manage OpenCode's credential store.
- **`opencode mcp auth/logout/debug`** — OAuth-based MCP server auth and debugging. Aetheel can add/remove MCP servers but can't handle OAuth flows or debug MCP connections.
- **`opencode github agent`** — GitHub Actions integration for repo automation. Aetheel has no CI/CD agent support.
- **`opencode web`** — Built-in web UI. Aetheel has its own WebChat but doesn't leverage OpenCode's web interface.
- **`opencode acp`** — Agent Client Protocol server. Aetheel doesn't use ACP.
- **`OPENCODE_AUTO_SHARE`** — Auto-share sessions.
- **`OPENCODE_DISABLE_AUTOCOMPACT`** — Control context compaction. Aetheel doesn't expose this, which could matter for long conversations.
- **`OPENCODE_EXPERIMENTAL_PLAN_MODE`** — Plan mode for structured task execution. Aetheel doesn't use this.
- **`OPENCODE_EXPERIMENTAL_BASH_DEFAULT_TIMEOUT_MS`** — Control bash command timeouts. Aetheel doesn't pass this through.
- **`OPENCODE_ENABLE_EXA`** — Exa web search tools. Aetheel doesn't expose this toggle.
- **`opencode upgrade`** — Self-update. Aetheel has `aetheel update` which does git pull but doesn't update the OpenCode binary itself.
The most impactful gaps are `--agent` (for agent teams), `--file` (for media from chat), `--attach` (for faster CLI mode), `--fork` (for branching conversations), and `opencode stats` (for usage visibility).
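Wiring those flags into a CLI invocation could be sketched as follows (flag names come from the OpenCode docs; the builder itself is hypothetical):

```python
def build_opencode_cmd(prompt, model=None, session=None, agent=None,
                       files=(), attach_url=None, fork=False):
    """Assemble an `opencode run` invocation using the flags discussed above."""
    cmd = ["opencode", "run"]
    if attach_url:
        cmd += ["--attach", attach_url]  # reuse a running server, skip MCP cold boot
    if model:
        cmd += ["--model", model]
    if agent:
        cmd += ["--agent", agent]        # select a named OpenCode agent
    if session:
        cmd += ["--session", session]
        if fork:
            cmd.append("--fork")         # branch instead of continuing linearly
    for f in files:
        cmd += ["--file", f]             # attach files from chat uploads
    cmd.append(prompt)
    return cmd
```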
# CLI

OpenCode CLI options and commands.

The OpenCode CLI by default starts the TUI when run without any arguments.

```bash
opencode
```
But it also accepts commands as documented on this page. This allows you to interact with OpenCode programmatically.
```bash
opencode run "Explain how closures work in JavaScript"
```
## tui

Start the OpenCode terminal user interface.

```bash
opencode [project]
```
### Flags

| Flag | Short | Description |
|------|-------|-------------|
| `--continue` | `-c` | Continue the last session |
| `--session` | `-s` | Session ID to continue |
| `--fork` | | Fork the session when continuing (use with `--continue` or `--session`) |
| `--prompt` | | Prompt to use |
| `--model` | `-m` | Model to use in the form of provider/model |
| `--agent` | | Agent to use |
| `--port` | | Port to listen on |
| `--hostname` | | Hostname to listen on |
## Commands

The OpenCode CLI also has the following commands.

### agent

Manage agents for OpenCode.

```shell
opencode agent [command]
```

#### create

Create a new agent with custom configuration.

```shell
opencode agent create
```

This command guides you through creating a new agent with a custom system prompt and tool configuration.

#### list

List all available agents.

```shell
opencode agent list
```

### attach

Attach a terminal to an already running OpenCode backend server started via the serve or web commands.

```shell
opencode attach [url]
```

This allows using the TUI with a remote OpenCode backend. For example:

```shell
# Start the backend server for web/mobile access
opencode web --port 4096 --hostname 0.0.0.0

# In another terminal, attach the TUI to the running backend
opencode attach http://10.20.30.40:4096
```

#### Flags

| Flag | Short | Description |
| --- | --- | --- |
| `--dir` | | Working directory to start the TUI in |
| `--session` | `-s` | Session ID to continue |
### auth

Command to manage credentials and login for providers.

```shell
opencode auth [command]
```

#### login

OpenCode is powered by the provider list at Models.dev, so you can use `opencode auth login` to configure API keys for any provider you'd like to use. These are stored in `~/.local/share/opencode/auth.json`.

```shell
opencode auth login
```

When OpenCode starts up, it loads providers from the credentials file, along with any keys defined in your environment or in a `.env` file in your project.
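For example, a minimal project `.env` (assumed plain KEY=value format; the key values shown are placeholders):

```shell
# Picked up at startup alongside ~/.local/share/opencode/auth.json
ANTHROPIC_API_KEY=sk-ant-xxxx
OPENAI_API_KEY=sk-xxxx
```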
#### list

Lists all the authenticated providers as stored in the credentials file.

```shell
opencode auth list
```

Or the short version:

```shell
opencode auth ls
```

#### logout

Logs you out of a provider by clearing it from the credentials file.

```shell
opencode auth logout
```
### github

Manage the GitHub agent for repository automation.

```shell
opencode github [command]
```

#### install

Install the GitHub agent in your repository.

```shell
opencode github install
```

This sets up the necessary GitHub Actions workflow and guides you through the configuration process.

#### run

Run the GitHub agent. This is typically used in GitHub Actions.

```shell
opencode github run
```

#### Flags

| Flag | Description |
| --- | --- |
| `--event` | GitHub mock event to run the agent for |
| `--token` | GitHub personal access token |
### mcp

Manage Model Context Protocol servers.

```shell
opencode mcp [command]
```

#### add

Add an MCP server to your configuration.

```shell
opencode mcp add
```

This command will guide you through adding either a local or remote MCP server.

#### list

List all configured MCP servers and their connection status.

```shell
opencode mcp list
```

Or use the short version:

```shell
opencode mcp ls
```

#### auth

Authenticate with an OAuth-enabled MCP server.

```shell
opencode mcp auth [name]
```

If you don't provide a server name, you'll be prompted to select from available OAuth-capable servers.

You can also list OAuth-capable servers and their authentication status:

```shell
opencode mcp auth list
```

Or use the short version:

```shell
opencode mcp auth ls
```

#### logout

Remove OAuth credentials for an MCP server.

```shell
opencode mcp logout [name]
```

#### debug

Debug OAuth connection issues for an MCP server.

```shell
opencode mcp debug <name>
```
### models

List all available models from configured providers.

```shell
opencode models [provider]
```

This command displays all models available across your configured providers in the format `provider/model`, which is useful for figuring out the exact model name to use in your config. You can optionally pass a provider ID to filter models by that provider.

```shell
opencode models anthropic
```
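Since the `provider/model` format is what goes into config, a consumer has to split it carefully: some model IDs themselves contain slashes (e.g. OpenRouter-style paths), so only the first `/` separates provider from model. A small sketch (the model names used are illustrative):

```python
def parse_model_ref(ref: str) -> tuple[str, str]:
    """Split a provider/model reference as printed by `opencode models`.

    Split only on the first slash, since the model id itself may contain
    slashes (e.g. openrouter-style paths).
    """
    provider, _, model = ref.partition("/")
    if not provider or not model:
        raise ValueError(f"expected provider/model, got {ref!r}")
    return provider, model

print(parse_model_ref("anthropic/claude-sonnet-4"))  # hypothetical model id
```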
#### Flags

| Flag | Description |
| --- | --- |
| `--refresh` | Refresh the models cache from models.dev |
| `--verbose` | Use more verbose model output (includes metadata like costs) |

Use the `--refresh` flag to update the cached model list. This is useful when new models have been added to a provider and you want to see them in OpenCode.

```shell
opencode models --refresh
```
### run

Run OpenCode in non-interactive mode by passing a prompt directly.

```shell
opencode run [message..]
```

This is useful for scripting, automation, or when you want a quick answer without launching the full TUI. For example:

```shell
opencode run Explain the use of context in Go
```

You can also attach to a running `opencode serve` instance to avoid MCP server cold-boot times on every run:

```shell
# Start a headless server in one terminal
opencode serve

# In another terminal, run commands that attach to it
opencode run --attach http://localhost:4096 "Explain async/await in JavaScript"
```

#### Flags

| Flag | Short | Description |
| --- | --- | --- |
| `--command` | | The command to run; use the message for args |
| `--continue` | `-c` | Continue the last session |
| `--session` | `-s` | Session ID to continue |
| `--fork` | | Fork the session when continuing (use with `--continue` or `--session`) |
| `--share` | | Share the session |
| `--model` | `-m` | Model to use in the form of `provider/model` |
| `--agent` | | Agent to use |
| `--file` | `-f` | File(s) to attach to the message |
| `--format` | | Format: `default` (formatted) or `json` (raw JSON events) |
| `--title` | | Title for the session (uses a truncated prompt if no value is provided) |
| `--attach` | | Attach to a running opencode server (e.g., http://localhost:4096) |
| `--port` | | Port for the local server (defaults to a random port) |
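The `--format json` flag is what makes `run` scriptable. As a sketch, assuming events arrive as one JSON object per line (the exact event schema is not documented here, so the `type`/`text` fields below are illustrative, not OpenCode's real event shape):

```python
import json

def collect_text(stream):
    """Accumulate assistant text from a newline-delimited JSON event stream."""
    parts = []
    for line in stream:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "text":      # illustrative event shape
            parts.append(event.get("text", ""))
    return "".join(parts)

# In practice the stream would be the stdout of:
#   opencode run --format json --attach http://localhost:4096 "prompt"
demo = ['{"type": "text", "text": "Hello "}', '{"type": "text", "text": "world"}']
print(collect_text(demo))  # → Hello world
```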
### serve

Start a headless OpenCode server for API access. Check out the server docs for the full HTTP interface.

```shell
opencode serve
```

This starts an HTTP server that provides API access to OpenCode functionality without the TUI. Set `OPENCODE_SERVER_PASSWORD` to enable HTTP basic auth (the username defaults to `opencode`).

#### Flags

| Flag | Description |
| --- | --- |
| `--port` | Port to listen on |
| `--hostname` | Hostname to listen on |
| `--mdns` | Enable mDNS discovery |
| `--cors` | Additional browser origin(s) to allow for CORS |
### session

Manage OpenCode sessions.

```shell
opencode session [command]
```

#### list

List all OpenCode sessions.

```shell
opencode session list
```

##### Flags

| Flag | Short | Description |
| --- | --- | --- |
| `--max-count` | `-n` | Limit to the N most recent sessions |
| `--format` | | Output format: `table` or `json` (default: `table`) |

### stats

Show token usage and cost statistics for your OpenCode sessions.

```shell
opencode stats
```

#### Flags

| Flag | Description |
| --- | --- |
| `--days` | Show stats for the last N days (default: all time) |
| `--tools` | Number of tools to show (default: all) |
| `--models` | Show model usage breakdown (hidden by default); pass a number to show the top N |
| `--project` | Filter by project (default: all projects; empty string: current project) |
### export

Export session data as JSON.

```shell
opencode export [sessionID]
```

If you don't provide a session ID, you'll be prompted to select from available sessions.

### import

Import session data from a JSON file or an OpenCode share URL.

```shell
opencode import <file>
```

You can import from a local file or an OpenCode share URL.

```shell
opencode import session.json
opencode import https://opncd.ai/s/abc123
```
### web

Start a headless OpenCode server with a web interface.

```shell
opencode web
```

This starts an HTTP server and opens a web browser to access OpenCode through a web interface. Set `OPENCODE_SERVER_PASSWORD` to enable HTTP basic auth (the username defaults to `opencode`).

#### Flags

| Flag | Description |
| --- | --- |
| `--port` | Port to listen on |
| `--hostname` | Hostname to listen on |
| `--mdns` | Enable mDNS discovery |
| `--cors` | Additional browser origin(s) to allow for CORS |
### acp

Start an ACP (Agent Client Protocol) server.

```shell
opencode acp
```

This command starts an ACP server that communicates via stdin/stdout using newline-delimited JSON.

#### Flags

| Flag | Description |
| --- | --- |
| `--cwd` | Working directory |
| `--port` | Port to listen on |
| `--hostname` | Hostname to listen on |
### uninstall

Uninstall OpenCode and remove all related files.

```shell
opencode uninstall
```

#### Flags

| Flag | Short | Description |
| --- | --- | --- |
| `--keep-config` | `-c` | Keep configuration files |
| `--keep-data` | `-d` | Keep session data and snapshots |
| `--dry-run` | | Show what would be removed without removing it |
| `--force` | `-f` | Skip confirmation prompts |

### upgrade

Updates OpenCode to the latest version or a specific version.

```shell
opencode upgrade [target]
```

To upgrade to the latest version:

```shell
opencode upgrade
```

To upgrade to a specific version:

```shell
opencode upgrade v0.1.48
```

#### Flags

| Flag | Short | Description |
| --- | --- | --- |
| `--method` | `-m` | The installation method that was used: curl, npm, pnpm, bun, or brew |
## Global Flags

The opencode CLI takes the following global flags.

| Flag | Short | Description |
| --- | --- | --- |
| `--help` | `-h` | Display help |
| `--version` | `-v` | Print version number |
| `--print-logs` | | Print logs to stderr |
| `--log-level` | | Log level (DEBUG, INFO, WARN, ERROR) |
## Environment variables

OpenCode can be configured using environment variables.

| Variable | Type | Description |
| --- | --- | --- |
| `OPENCODE_AUTO_SHARE` | boolean | Automatically share sessions |
| `OPENCODE_GIT_BASH_PATH` | string | Path to the Git Bash executable on Windows |
| `OPENCODE_CONFIG` | string | Path to config file |
| `OPENCODE_CONFIG_DIR` | string | Path to config directory |
| `OPENCODE_CONFIG_CONTENT` | string | Inline JSON config content |
| `OPENCODE_DISABLE_AUTOUPDATE` | boolean | Disable automatic update checks |
| `OPENCODE_DISABLE_PRUNE` | boolean | Disable pruning of old data |
| `OPENCODE_DISABLE_TERMINAL_TITLE` | boolean | Disable automatic terminal title updates |
| `OPENCODE_PERMISSION` | string | Inline JSON permissions config |
| `OPENCODE_DISABLE_DEFAULT_PLUGINS` | boolean | Disable default plugins |
| `OPENCODE_DISABLE_LSP_DOWNLOAD` | boolean | Disable automatic LSP server downloads |
| `OPENCODE_ENABLE_EXPERIMENTAL_MODELS` | boolean | Enable experimental models |
| `OPENCODE_DISABLE_AUTOCOMPACT` | boolean | Disable automatic context compaction |
| `OPENCODE_DISABLE_CLAUDE_CODE` | boolean | Disable reading from .claude (prompt + skills) |
| `OPENCODE_DISABLE_CLAUDE_CODE_PROMPT` | boolean | Disable reading ~/.claude/CLAUDE.md |
| `OPENCODE_DISABLE_CLAUDE_CODE_SKILLS` | boolean | Disable loading .claude/skills |
| `OPENCODE_DISABLE_MODELS_FETCH` | boolean | Disable fetching models from remote sources |
| `OPENCODE_FAKE_VCS` | string | Fake VCS provider for testing purposes |
| `OPENCODE_DISABLE_FILETIME_CHECK` | boolean | Disable file time checking for optimization |
| `OPENCODE_CLIENT` | string | Client identifier (defaults to `cli`) |
| `OPENCODE_ENABLE_EXA` | boolean | Enable Exa web search tools |
| `OPENCODE_SERVER_PASSWORD` | string | Enable basic auth for serve/web |
| `OPENCODE_SERVER_USERNAME` | string | Override the basic auth username (default `opencode`) |
| `OPENCODE_MODELS_URL` | string | Custom URL for fetching models configuration |
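As an example, a locked-down headless deployment might pin several of these variables (the values shown are illustrative, not recommendations from the OpenCode docs):

```shell
# Disable background mutations on a server host
export OPENCODE_DISABLE_AUTOUPDATE=true
export OPENCODE_DISABLE_PRUNE=true
# Keep full context for long-running conversations
export OPENCODE_DISABLE_AUTOCOMPACT=true
# Require basic auth for `opencode serve` / `opencode web`
export OPENCODE_SERVER_PASSWORD=change-me
export OPENCODE_SERVER_USERNAME=aetheel
```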
## Experimental

These environment variables enable experimental features that may change or be removed.

| Variable | Type | Description |
| --- | --- | --- |
| `OPENCODE_EXPERIMENTAL` | boolean | Enable all experimental features |
| `OPENCODE_EXPERIMENTAL_ICON_DISCOVERY` | boolean | Enable icon discovery |
| `OPENCODE_EXPERIMENTAL_DISABLE_COPY_ON_SELECT` | boolean | Disable copy-on-select in the TUI |
| `OPENCODE_EXPERIMENTAL_BASH_DEFAULT_TIMEOUT_MS` | number | Default timeout for bash commands in ms |
| `OPENCODE_EXPERIMENTAL_OUTPUT_TOKEN_MAX` | number | Max output tokens for LLM responses |
| `OPENCODE_EXPERIMENTAL_FILEWATCHER` | boolean | Enable the file watcher for the entire directory |
| `OPENCODE_EXPERIMENTAL_OXFMT` | boolean | Enable the oxfmt formatter |
| `OPENCODE_EXPERIMENTAL_LSP_TOOL` | boolean | Enable the experimental LSP tool |
| `OPENCODE_EXPERIMENTAL_DISABLE_FILEWATCHER` | boolean | Disable the file watcher |
| `OPENCODE_EXPERIMENTAL_EXA` | boolean | Enable experimental Exa features |
| `OPENCODE_EXPERIMENTAL_LSP_TY` | boolean | Enable experimental LSP type checking |
| `OPENCODE_EXPERIMENTAL_MARKDOWN` | boolean | Enable experimental markdown features |
| `OPENCODE_EXPERIMENTAL_PLAN_MODE` | boolean | Enable plan mode |