Initial commit: Discord-Claude Gateway with event-driven agent runtime

2026-02-22 13:59:57 -05:00
parent b4f340b610
commit f2247ea3ac
28 changed files with 2056 additions and 205 deletions

.env.example (new file)
# Required
DISCORD_BOT_TOKEN=your-discord-bot-token-here
# Recommended
OUTPUT_CHANNEL_ID=your-discord-channel-id-for-heartbeat-cron-output
# Optional
# CLAUDE_CLI_PATH=claude
# CONFIG_DIR=./config
# ALLOWED_TOOLS=Read,Write,Edit,Glob,Grep,WebSearch,WebFetch
# PERMISSION_MODE=bypassPermissions
# QUERY_TIMEOUT_MS=120000
# MAX_CONCURRENT_QUERIES=5
# MAX_QUEUE_DEPTH=100
# IDLE_SESSION_TIMEOUT_MS=1800000
# LOG_LEVEL=info

README.md
# Aetheel — Discord-Claude Gateway
An event-driven AI agent runtime that connects Discord to the Claude Code CLI. Inspired by [OpenClaw](https://github.com/nichochar/open-claw)'s architecture — a gateway in front of an agent runtime with markdown-based personality, memory, scheduled behaviors, and proactive messaging.
## How It Works
```
Discord Users ──► Discord Bot ──► Event Queue ──► Agent Runtime ──► Claude Code CLI
                      ▲                               │
Heartbeats ───────────┤                               │
Cron Jobs ────────────┤         ┌─────────────────────┘
Hooks ────────────────┘         ▼
IPC Watcher ───────────── Markdown Config Files
                          (CLAUDE.md, memory.md, etc.)
```
All inputs — Discord messages, heartbeat timers, cron jobs, lifecycle hooks — enter a unified event queue. The agent runtime reads your markdown config files fresh on each event, assembles a dynamic system prompt, and calls the Claude Code CLI. The agent can write back to `memory.md` to persist facts across sessions, and send proactive messages via the IPC system.
Uses your existing Claude Code subscription — no API key needed.
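The unified-queue idea can be sketched in TypeScript. This is a minimal illustration only; the event shapes and class name are assumptions, not the actual `src/event-queue.ts`:

```typescript
// Illustrative unified FIFO event queue. Every input type becomes one
// event shape so the runtime can process them through a single loop.
type GatewayEvent =
  | { kind: "message"; channelId: string; text: string }
  | { kind: "heartbeat"; name: string; instruction: string }
  | { kind: "cron"; name: string; instruction: string }
  | { kind: "hook"; name: "startup" | "shutdown"; instruction: string };

class EventQueue {
  private items: GatewayEvent[] = [];
  constructor(private maxDepth = 100) {} // mirrors MAX_QUEUE_DEPTH

  enqueue(ev: GatewayEvent): boolean {
    if (this.items.length >= this.maxDepth) return false; // drop when full
    this.items.push(ev);
    return true;
  }

  dequeue(): GatewayEvent | undefined {
    return this.items.shift(); // strict FIFO order
  }

  get depth(): number {
    return this.items.length;
  }
}
```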
## Prerequisites
- Node.js 18+
- Claude Code CLI installed and signed in (`npm install -g @anthropic-ai/claude-code && claude`)
- A Discord bot token ([create one here](https://discord.com/developers/applications)) with Message Content Intent enabled
## Quick Start
```bash
# Clone and install dependencies
git clone <your-repo-url>
cd aetheel-2
npm install

# Make sure Claude Code CLI is installed and you're signed in
claude --version

# Configure
cp .env.example .env   # Edit in your Discord bot token
mkdir -p config
# Create config/CLAUDE.md with your persona (see Setup section)

# Start the gateway
npm run dev
```
Or run the interactive setup:
```bash
bash scripts/setup.sh
```
## Configuration
### Environment Variables
Create a `.env` file in the project root (auto-loaded via dotenv):
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `DISCORD_BOT_TOKEN` | Yes | — | Discord bot token |
| `CLAUDE_CLI_PATH` | No | `claude` | Path to the Claude Code CLI binary |
| `CONFIG_DIR` | No | `./config` | Path to markdown config directory |
| `ALLOWED_TOOLS` | No | `Read,Write,Edit,Glob,Grep,WebSearch,WebFetch` | Comma-separated tools the agent can use |
| `PERMISSION_MODE` | No | `bypassPermissions` | Claude Code permission mode |
| `QUERY_TIMEOUT_MS` | No | `120000` | Max time per query (ms) |
| `MAX_CONCURRENT_QUERIES` | No | `5` | Max simultaneous queries |
| `MAX_QUEUE_DEPTH` | No | `100` | Max events in the queue |
| `OUTPUT_CHANNEL_ID` | No | — | Discord channel for heartbeat/cron output |
| `IDLE_SESSION_TIMEOUT_MS` | No | `1800000` | Session idle timeout (30 min) |
| `LOG_LEVEL` | No | `info` | Log level: `debug`, `info`, `warn`, `error` |
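A sketch of how such an env-driven loader might look. The interface shape and function name are hypothetical; only the variable names and defaults come from the table above:

```typescript
// Hypothetical config loader: required vars throw, optional vars fall
// back to the documented defaults.
interface GatewayConfig {
  discordBotToken: string;
  claudeCliPath: string;
  configDir: string;
  allowedTools: string[];
  queryTimeoutMs: number;
  maxConcurrentQueries: number;
  maxQueueDepth: number;
}

function loadConfig(env: Record<string, string | undefined>): GatewayConfig {
  const token = env.DISCORD_BOT_TOKEN;
  if (!token) throw new Error("DISCORD_BOT_TOKEN is required");
  return {
    discordBotToken: token,
    claudeCliPath: env.CLAUDE_CLI_PATH ?? "claude",
    configDir: env.CONFIG_DIR ?? "./config",
    allowedTools: (env.ALLOWED_TOOLS ??
      "Read,Write,Edit,Glob,Grep,WebSearch,WebFetch").split(","),
    queryTimeoutMs: Number(env.QUERY_TIMEOUT_MS ?? 120000),
    maxConcurrentQueries: Number(env.MAX_CONCURRENT_QUERIES ?? 5),
    maxQueueDepth: Number(env.MAX_QUEUE_DEPTH ?? 100),
  };
}
```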
### Markdown Config Files
Place these in `CONFIG_DIR` (default: `./config/`):
| File | Purpose | Required |
|------|---------|----------|
| `CLAUDE.md` | Persona: identity, personality, user context, tools | Yes |
| `agents.md` | Operating rules, cron jobs, hooks (parsed at startup) | No |
| `memory.md` | Long-term memory (agent-writable) | No (auto-created) |
| `heartbeat.md` | Proactive check definitions (parsed at startup) | No |
Missing optional files are created with default headers on first run.
The gateway reads `CLAUDE.md` and `memory.md` fresh on every event — edit them anytime. `agents.md` and `heartbeat.md` are parsed at startup for timers, so restart after editing those.
### Skills
Drop skill files into `config/skills/{name}/SKILL.md` and they're automatically loaded into the system prompt:
```
config/skills/
├── web-research/
│ └── SKILL.md → "When asked to research, use WebSearch..."
└── code-review/
└── SKILL.md → "When reviewing code, focus on..."
```
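The loading step could look roughly like this. The function is illustrative (the real `skills-loader.ts` reads from disk); here file contents are passed in directly so the shaping logic is visible:

```typescript
// Build a system-prompt section from SKILL.md contents keyed by path.
// Only files matching config/skills/{name}/SKILL.md are included.
function assembleSkillsSection(skillFiles: Record<string, string>): string {
  const parts: string[] = [];
  for (const [path, content] of Object.entries(skillFiles)) {
    const m = path.match(/skills\/([^/]+)\/SKILL\.md$/);
    if (!m) continue; // ignore anything that isn't a SKILL.md
    parts.push(`## Skill: ${m[1]}\n${content.trim()}`);
  }
  return parts.join("\n\n");
}
```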
## Features
### Discord Integration
- Mention the bot (`@Aetheel hi`) or use `/claude` slash command
- `/claude-reset` to start a fresh conversation
- Responses auto-split at 2000 chars with code block preservation
- Typing indicators while processing
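The splitting behavior can be sketched as follows. This is an assumption-laden illustration, not the actual `response-formatter.ts`: it splits at newline boundaries and, when a chunk would end inside an open code fence, closes the fence and reopens it in the next chunk:

```typescript
// Split text into Discord-sized chunks while keeping ``` fences balanced.
function splitForDiscord(text: string, limit = 2000): string[] {
  const chunks: string[] = [];
  let current = "";
  let inFence = false;
  for (const line of text.split("\n")) {
    const candidate = current ? current + "\n" + line : line;
    // Reserve 4 chars so a closing "\n```" never pushes us past the limit.
    if (current && candidate.length > limit - 4) {
      chunks.push(inFence ? current + "\n```" : current); // close open fence
      current = inFence ? "```\n" + line : line;          // reopen it
    } else {
      current = candidate;
    }
    if (line.trimStart().startsWith("```")) inFence = !inFence;
  }
  if (current) chunks.push(current);
  return chunks;
}
```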
### Session Management
- Per-channel conversation sessions with Claude Code CLI `--resume`
- Sessions persist to `config/sessions.json` (survive restarts)
- Auto-cleanup of idle sessions after 30 minutes (configurable)
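A minimal model of that session tracking, with an injectable clock for clarity. The class shape is hypothetical; the real `session-manager.ts` also persists bindings to `config/sessions.json`:

```typescript
// Track channel → Claude session ID with idle expiry.
class SessionManager {
  private sessions = new Map<string, { id: string; lastUsed: number }>();
  constructor(private idleTimeoutMs = 30 * 60 * 1000) {}

  bind(channelId: string, sessionId: string, now = Date.now()): void {
    this.sessions.set(channelId, { id: sessionId, lastUsed: now });
  }

  // Returns the session ID to pass to `claude --resume`, or undefined
  // if there is no live session for this channel.
  resolve(channelId: string, now = Date.now()): string | undefined {
    const s = this.sessions.get(channelId);
    if (!s) return undefined;
    if (now - s.lastUsed > this.idleTimeoutMs) {
      this.sessions.delete(channelId); // idle: start fresh next time
      return undefined;
    }
    s.lastUsed = now;
    return s.id;
  }

  reset(channelId: string): void { // backs the /claude-reset command
    this.sessions.delete(channelId);
  }
}
```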
### Heartbeats (Timer Events)
Define in `config/heartbeat.md`:
```markdown
## check-email
Interval: 1800
Instruction: Check my inbox for anything urgent.
```
Interval is in seconds (minimum 60).
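A parser for that format might look like the sketch below. It is hypothetical (the gateway's actual parsing may differ), but it follows the rules stated here, including the 60-second floor:

```typescript
// Parse "## name / Interval: N / Instruction: ..." sections from heartbeat.md.
interface HeartbeatCheck { name: string; intervalSec: number; instruction: string }

function parseHeartbeats(md: string): HeartbeatCheck[] {
  const checks: HeartbeatCheck[] = [];
  let current: Partial<HeartbeatCheck> | null = null;
  for (const line of md.split("\n")) {
    const h = line.match(/^##\s+(.+)/);
    if (h) {
      current = { name: h[1].trim() };
      checks.push(current as HeartbeatCheck);
      continue;
    }
    if (!current) continue;
    const iv = line.match(/^Interval:\s*(\d+)/);
    if (iv) current.intervalSec = Math.max(60, Number(iv[1])); // enforce floor
    const ins = line.match(/^Instruction:\s*(.+)/);
    if (ins) current.instruction = ins[1].trim();
  }
  // Only keep fully specified checks.
  return checks.filter(c => c.intervalSec && c.instruction);
}
```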
### Cron Jobs (Scheduled Events)
Define in `config/agents.md`:
```markdown
## Cron Jobs
### morning-briefing
Cron: 0 8 * * *
Instruction: Good morning! Search for the latest AI news and post a summary.
```
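Extracting those jobs from `agents.md` could be sketched like this. The function and section-matching rules are assumptions based on the example above, not the gateway's actual parser:

```typescript
// Pull "### name / Cron: ... / Instruction: ..." entries out of the
// "## Cron Jobs" section of agents.md, ignoring other sections.
interface CronJob { name: string; cron: string; instruction: string }

function parseCronJobs(md: string): CronJob[] {
  const jobs: CronJob[] = [];
  let inSection = false;
  let job: Partial<CronJob> | null = null;
  for (const line of md.split("\n")) {
    if (/^##\s/.test(line)) inSection = /^##\s+Cron Jobs/i.test(line);
    if (!inSection) continue;
    const h = line.match(/^###\s+(.+)/);
    if (h) {
      job = { name: h[1].trim() };
      jobs.push(job as CronJob);
      continue;
    }
    if (!job) continue;
    const c = line.match(/^Cron:\s*(.+)/);
    if (c) job.cron = c[1].trim();
    const i = line.match(/^Instruction:\s*(.+)/);
    if (i) job.instruction = i[1].trim();
  }
  return jobs.filter(j => j.cron && j.instruction);
}
```

Each parsed entry would then be registered with the cron scheduler (the README names node-cron) so the instruction is enqueued on schedule.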
### Lifecycle Hooks
Define in `config/agents.md`:
```markdown
## Hooks
### startup
Instruction: Say hello, you just came online.
### shutdown
Instruction: Save important context to memory.md.
```
### Proactive Messaging (IPC)
The agent can send messages to any Discord channel by writing JSON files to `config/ipc/outbound/`:
```json
{"channelId": "123456789", "text": "Hey, found something interesting!"}
```
The gateway polls every 2 seconds and delivers the message.
### Message History
All inbound and outbound messages are stored per channel in `config/messages/{channelId}.json`. Max 100 messages per channel, auto-trimmed.
### Conversation Archiving
Every exchange is saved as readable markdown in `config/conversations/{channelId}/{YYYY-MM-DD}.md`.
### Retry with Backoff
Claude CLI calls retry 3 times with exponential backoff (5 s, 10 s, 20 s) on transient errors. Session corruption errors fail immediately.
### Structured Logging
Pino-based structured JSON logging. Set `LOG_LEVEL=debug` for verbose output. Pretty-printed in dev, JSON in production (`NODE_ENV=production`).
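The retry policy described in this README can be sketched as a small helper. The signature is hypothetical; the delays and the fail-fast rule for fatal errors match the documented behavior:

```typescript
// Retry a CLI call up to 3 times (5s / 10s / 20s backoff) on transient
// errors; fatal errors (e.g. session corruption) are rethrown immediately.
async function withRetry<T>(
  fn: () => Promise<T>,
  isFatal: (e: unknown) => boolean,
  sleep: (ms: number) => Promise<void> = ms => new Promise(r => setTimeout(r, ms)),
): Promise<T> {
  const delays = [5_000, 10_000, 20_000]; // 3 retries after the first attempt
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (e) {
      if (isFatal(e) || attempt >= delays.length) throw e;
      await sleep(delays[attempt]);
    }
  }
}
```

Injecting `sleep` keeps the backoff testable without real waiting.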
## Deployment
### systemd (recommended for Linux servers)
```bash
sudo bash scripts/setup.sh # Creates .env, config/, and systemd service
```
Or manually:
```bash
sudo cp scripts/aetheel.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable aetheel
sudo systemctl start aetheel
# View logs
sudo journalctl -u aetheel -f
```
### PM2
```bash
npm run build
pm2 start dist/index.js --name aetheel
pm2 save
```
### Dev mode
```bash
npm run dev
```
## Project Structure
```
src/
├── index.ts                   # Entry point (loads .env)
├── gateway-core.ts            # Main orchestrator
├── config.ts                  # Environment variable loader
├── logger.ts                  # Pino structured logger
├── discord-bot.ts             # Discord.js wrapper
├── event-queue.ts             # Unified FIFO event queue
├── agent-runtime.ts           # Core engine: reads configs, spawns CLI, streams output
├── markdown-config-loader.ts  # Reads CLAUDE.md, agents.md, memory.md
├── system-prompt-assembler.ts # Assembles system prompt with sections
├── skills-loader.ts           # Loads skills from config/skills/*/SKILL.md
├── session-manager.ts         # Channel → session ID (persisted, idle cleanup)
├── message-history.ts         # Per-channel message storage
├── conversation-archiver.ts   # Markdown conversation logs
├── ipc-watcher.ts             # Polls ipc/outbound/ for proactive messages
├── response-formatter.ts      # Splits long text for Discord's 2000 char limit
├── error-formatter.ts         # Sanitizes errors (strips keys, paths, stacks)
├── heartbeat-scheduler.ts     # Recurring timer events
├── cron-scheduler.ts          # node-cron scheduled events
├── hook-manager.ts            # Lifecycle hooks
├── bootstrap-manager.ts       # First-run file validation/creation
├── channel-queue.ts           # Per-channel sequential processing
└── shutdown-handler.ts        # Graceful SIGTERM/SIGINT handling
config/ # Agent workspace (gitignored)
├── CLAUDE.md # Persona
├── agents.md # Rules, cron, hooks
├── memory.md # Long-term memory (agent-writable)
├── heartbeat.md # Heartbeat checks
├── sessions.json # Session persistence (auto)
├── messages/ # Message history (auto)
├── conversations/ # Conversation archives (auto)
├── ipc/outbound/ # Proactive message queue (auto)
├── skills/ # Skill definitions
└── news/ # Example: agent-created content
```
## Development
```bash
npm test # Run tests (85 passing)
npm run dev # Dev mode with tsx
npm run build # Compile TypeScript
npm start # Run compiled JS
```
## Claude Code CLI vs API Key
This gateway uses the **Claude Code CLI** (`claude -p`) instead of the Anthropic API directly. This means:
- You use your existing **Claude Code subscription** — no separate API key needed
- Just sign in with `claude` in your terminal and you're good to go
- The gateway shells out to `claude -p "prompt" --output-format json` for each query
- Set `CLAUDE_CLI_PATH` if `claude` isn't in your PATH
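The shell-out step could look roughly like the sketch below. The `-p`, `--output-format json`, and `--resume` flags come from this README; the argument-building and wrapper functions are assumptions, not the actual `agent-runtime.ts`:

```typescript
import { execFile } from "node:child_process";

// Build the argv for one query; --resume continues a channel's session.
function buildClaudeArgs(prompt: string, sessionId?: string): string[] {
  const args = ["-p", prompt, "--output-format", "json"];
  if (sessionId) args.push("--resume", sessionId);
  return args;
}

// Run the CLI and resolve with its stdout (JSON result envelope).
function runClaude(cliPath: string, prompt: string, sessionId?: string): Promise<string> {
  return new Promise((resolve, reject) => {
    execFile(
      cliPath,
      buildClaudeArgs(prompt, sessionId),
      { timeout: 120_000 }, // mirrors QUERY_TIMEOUT_MS
      (err, stdout) => (err ? reject(err) : resolve(stdout)),
    );
  });
}
```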
## License
MIT

# Custom instructions with AGENTS.md
Codex reads `AGENTS.md` files before doing any work. By layering global guidance with project-specific overrides, you can start each task with consistent expectations, no matter which repository you open.
## How Codex discovers guidance
Codex builds an instruction chain when it starts (once per run; in the TUI this usually means once per launched session). Discovery follows this precedence order:
1. **Global scope:** In your Codex home directory (defaults to `~/.codex`, unless you set `CODEX_HOME`), Codex reads `AGENTS.override.md` if it exists. Otherwise, Codex reads `AGENTS.md`. Codex uses only the first non-empty file at this level.
2. **Project scope:** Starting at the project root (typically the Git root), Codex walks down to your current working directory. If Codex cannot find a project root, it only checks the current directory. In each directory along the path, it checks for `AGENTS.override.md`, then `AGENTS.md`, then any fallback names in `project_doc_fallback_filenames`. Codex includes at most one file per directory.
3. **Merge order:** Codex concatenates files from the root down, joining them with blank lines. Files closer to your current directory override earlier guidance because they appear later in the combined prompt.
Codex skips empty files and stops adding files once the combined size reaches the limit defined by `project_doc_max_bytes` (32 KiB by default). For details on these knobs, see [Project instructions discovery](https://developers.openai.com/codex/config-advanced#project-instructions-discovery). Raise the limit or split instructions across nested directories when you hit the cap.
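The precedence rules above can be modeled in a few lines. This is an illustrative model, not Codex's implementation: one file per directory (override first, then `AGENTS.md`, then fallbacks), concatenated root-down, truncated at the byte budget:

```typescript
// Model of instruction-chain assembly: dirs are ordered root … cwd,
// files maps path → contents, fallbacks mirrors project_doc_fallback_filenames.
function buildInstructionChain(
  dirs: string[],
  files: Record<string, string>,
  fallbacks: string[] = [],
  maxBytes = 32 * 1024, // project_doc_max_bytes default
): string {
  const picked: string[] = [];
  for (const dir of dirs) {
    for (const name of ["AGENTS.override.md", "AGENTS.md", ...fallbacks]) {
      const content = files[`${dir}/${name}`];
      if (content && content.trim() !== "") { // empty files are skipped
        picked.push(content);
        break; // at most one file per directory
      }
    }
  }
  let out = "";
  for (const doc of picked) {
    const joined = out ? out + "\n\n" + doc : doc;
    if (Buffer.byteLength(joined, "utf8") > maxBytes) break; // stop at the cap
    out = joined;
  }
  return out;
}
```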
## Create global guidance
Create persistent defaults in your Codex home directory so every repository inherits your working agreements.
1. Ensure the directory exists:
```bash
mkdir -p ~/.codex
```
2. Create `~/.codex/AGENTS.md` with reusable preferences:
```md
# ~/.codex/AGENTS.md
## Working agreements
- Always run `npm test` after modifying JavaScript files.
- Prefer `pnpm` when installing dependencies.
- Ask for confirmation before adding new production dependencies.
```
3. Run Codex anywhere to confirm it loads the file:
```bash
codex --ask-for-approval never "Summarize the current instructions."
```
Expected: Codex quotes the items from `~/.codex/AGENTS.md` before proposing work.
Use `~/.codex/AGENTS.override.md` when you need a temporary global override without deleting the base file. Remove the override to restore the shared guidance.
## Layer project instructions
Repository-level files keep Codex aware of project norms while still inheriting your global defaults.
1. In your repository root, add an `AGENTS.md` that covers basic setup:
```md
# AGENTS.md
## Repository expectations
- Run `npm run lint` before opening a pull request.
- Document public utilities in `docs/` when you change behavior.
```
2. Add overrides in nested directories when specific teams need different rules. For example, inside `services/payments/` create `AGENTS.override.md`:
```md
# services/payments/AGENTS.override.md
## Payments service rules
- Use `make test-payments` instead of `npm test`.
- Never rotate API keys without notifying the security channel.
```
3. Start Codex from the payments directory:
```bash
codex --cd services/payments --ask-for-approval never "List the instruction sources you loaded."
```
Expected: Codex reports the global file first, the repository root `AGENTS.md` second, and the payments override last.
Codex stops searching once it reaches your current directory, so place overrides as close to specialized work as possible.
Here is a sample repository after you add the root-level file and a payments-specific override:
<FileTree
class="mt-4"
tree={[
{
name: "AGENTS.md",
comment: "Repository expectations",
highlight: true,
},
{
name: "services/",
open: true,
children: [
{
name: "payments/",
open: true,
children: [
{
name: "AGENTS.md",
comment: "Ignored because an override exists",
},
{
name: "AGENTS.override.md",
comment: "Payments service rules",
highlight: true,
},
{ name: "README.md" },
],
},
{
name: "search/",
children: [{ name: "AGENTS.md" }, { name: "…", placeholder: true }],
},
],
},
]}
/>
## Customize fallback filenames
If your repository already uses a different filename (for example `TEAM_GUIDE.md`), add it to the fallback list so Codex treats it like an instructions file.
1. Edit your Codex configuration:
```toml
# ~/.codex/config.toml
project_doc_fallback_filenames = ["TEAM_GUIDE.md", ".agents.md"]
project_doc_max_bytes = 65536
```
2. Restart Codex or run a new command so the updated configuration loads.
Now Codex checks each directory in this order: `AGENTS.override.md`, `AGENTS.md`, `TEAM_GUIDE.md`, `.agents.md`. Filenames not on this list are ignored for instruction discovery. The larger byte limit allows more combined guidance before truncation.
With the fallback list in place, Codex treats the alternate files as instructions:
<FileTree
class="mt-4"
tree={[
{
name: "TEAM_GUIDE.md",
comment: "Detected via fallback list",
highlight: true,
},
{
name: ".agents.md",
comment: "Fallback file in root",
},
{
name: "support/",
open: true,
children: [
{
name: "AGENTS.override.md",
comment: "Overrides fallback guidance",
highlight: true,
},
{
name: "playbooks/",
children: [{ name: "…", placeholder: true }],
},
],
},
]}
/>
Set the `CODEX_HOME` environment variable when you want a different profile, such as a project-specific automation user:
```bash
CODEX_HOME=$(pwd)/.codex codex exec "List active instruction sources"
```
Expected: The output lists files relative to the custom `.codex` directory.
## Verify your setup
- Run `codex --ask-for-approval never "Summarize the current instructions."` from a repository root. Codex should echo guidance from global and project files in precedence order.
- Use `codex --cd subdir --ask-for-approval never "Show which instruction files are active."` to confirm nested overrides replace broader rules.
- Check `~/.codex/log/codex-tui.log` (or the most recent `session-*.jsonl` file if you enabled session logging) after a session if you need to audit which instruction files Codex loaded.
- If instructions look stale, restart Codex in the target directory. Codex rebuilds the instruction chain on every run (and at the start of each TUI session), so there is no cache to clear manually.
## Troubleshoot discovery issues
- **Nothing loads:** Verify you are in the intended repository and that `codex status` reports the workspace root you expect. Ensure instruction files contain content; Codex ignores empty files.
- **Wrong guidance appears:** Look for an `AGENTS.override.md` higher in the directory tree or under your Codex home. Rename or remove the override to fall back to the regular file.
- **Codex ignores fallback names:** Confirm you listed the names in `project_doc_fallback_filenames` without typos, then restart Codex so the updated configuration takes effect.
- **Instructions truncated:** Raise `project_doc_max_bytes` or split large files across nested directories to keep critical guidance intact.
- **Profile confusion:** Run `echo $CODEX_HOME` before launching Codex. A non-default value points Codex at a different home directory than the one you edited.
## Next steps
- Visit the official [AGENTS.md](https://agents.md) website for more information.
- Review [Prompting Codex](https://developers.openai.com/codex/prompting) for conversational patterns that pair well with persistent guidance.

references/codex cli/cli.md (new file)
# Command line options
export const globalFlagOptions = [
{
key: "PROMPT",
type: "string",
description:
"Optional text instruction to start the session. Omit to launch the TUI without a pre-filled message.",
},
{
key: "--image, -i",
type: "path[,path...]",
description:
"Attach one or more image files to the initial prompt. Separate multiple paths with commas or repeat the flag.",
},
{
key: "--model, -m",
type: "string",
description:
"Override the model set in configuration (for example `gpt-5-codex`).",
},
{
key: "--oss",
type: "boolean",
defaultValue: "false",
description:
'Use the local open source model provider (equivalent to `-c model_provider="oss"`). Validates that Ollama is running.',
},
{
key: "--profile, -p",
type: "string",
description:
"Configuration profile name to load from `~/.codex/config.toml`.",
},
{
key: "--sandbox, -s",
type: "read-only | workspace-write | danger-full-access",
description:
"Select the sandbox policy for model-generated shell commands.",
},
{
key: "--ask-for-approval, -a",
type: "untrusted | on-request | never",
description:
"Control when Codex pauses for human approval before running a command. `on-failure` is deprecated; prefer `on-request` for interactive runs or `never` for non-interactive runs.",
},
{
key: "--full-auto",
type: "boolean",
defaultValue: "false",
description:
"Shortcut for low-friction local work: sets `--ask-for-approval on-request` and `--sandbox workspace-write`.",
},
{
key: "--dangerously-bypass-approvals-and-sandbox, --yolo",
type: "boolean",
defaultValue: "false",
description:
"Run every command without approvals or sandboxing. Only use inside an externally hardened environment.",
},
{
key: "--cd, -C",
type: "path",
description:
"Set the working directory for the agent before it starts processing your request.",
},
{
key: "--search",
type: "boolean",
defaultValue: "false",
description:
'Enable live web search (sets `web_search = "live"` instead of the default `"cached"`).',
},
{
key: "--add-dir",
type: "path",
description:
"Grant additional directories write access alongside the main workspace. Repeat for multiple paths.",
},
{
key: "--no-alt-screen",
type: "boolean",
defaultValue: "false",
description:
"Disable alternate screen mode for the TUI (overrides `tui.alternate_screen` for this run).",
},
{
key: "--enable",
type: "feature",
description:
"Force-enable a feature flag (translates to `-c features.<name>=true`). Repeatable.",
},
{
key: "--disable",
type: "feature",
description:
"Force-disable a feature flag (translates to `-c features.<name>=false`). Repeatable.",
},
{
key: "--config, -c",
type: "key=value",
description:
"Override configuration values. Values parse as JSON if possible; otherwise the literal string is used.",
},
];
export const commandOverview = [
{
key: "codex",
href: "/codex/cli/reference#codex-interactive",
type: "stable",
description:
"Launch the terminal UI. Accepts the global flags above plus an optional prompt or image attachments.",
},
{
key: "codex app-server",
href: "/codex/cli/reference#codex-app-server",
type: "experimental",
description:
"Launch the Codex app server for local development or debugging.",
},
{
key: "codex app",
href: "/codex/cli/reference#codex-app",
type: "stable",
description:
"Launch the Codex desktop app on macOS, optionally opening a specific workspace path.",
},
{
key: "codex debug app-server send-message-v2",
href: "/codex/cli/reference#codex-debug-app-server-send-message-v2",
type: "experimental",
description:
"Debug app-server by sending a single V2 message through the built-in test client.",
},
{
key: "codex apply",
href: "/codex/cli/reference#codex-apply",
type: "stable",
description:
"Apply the latest diff generated by a Codex Cloud task to your local working tree. Alias: `codex a`.",
},
{
key: "codex cloud",
href: "/codex/cli/reference#codex-cloud",
type: "experimental",
description:
"Browse or execute Codex Cloud tasks from the terminal without opening the TUI. Alias: `codex cloud-tasks`.",
},
{
key: "codex completion",
href: "/codex/cli/reference#codex-completion",
type: "stable",
description:
"Generate shell completion scripts for Bash, Zsh, Fish, or PowerShell.",
},
{
key: "codex features",
href: "/codex/cli/reference#codex-features",
type: "stable",
description:
"List feature flags and persistently enable or disable them in `config.toml`.",
},
{
key: "codex exec",
href: "/codex/cli/reference#codex-exec",
type: "stable",
description:
"Run Codex non-interactively. Alias: `codex e`. Stream results to stdout or JSONL and optionally resume previous sessions.",
},
{
key: "codex execpolicy",
href: "/codex/cli/reference#codex-execpolicy",
type: "experimental",
description:
"Evaluate execpolicy rule files and see whether a command would be allowed, prompted, or blocked.",
},
{
key: "codex login",
href: "/codex/cli/reference#codex-login",
type: "stable",
description:
"Authenticate Codex using ChatGPT OAuth, device auth, or an API key piped over stdin.",
},
{
key: "codex logout",
href: "/codex/cli/reference#codex-logout",
type: "stable",
description: "Remove stored authentication credentials.",
},
{
key: "codex mcp",
href: "/codex/cli/reference#codex-mcp",
type: "experimental",
description:
"Manage Model Context Protocol servers (list, add, remove, authenticate).",
},
{
key: "codex mcp-server",
href: "/codex/cli/reference#codex-mcp-server",
type: "experimental",
description:
"Run Codex itself as an MCP server over stdio. Useful when another agent consumes Codex.",
},
{
key: "codex resume",
href: "/codex/cli/reference#codex-resume",
type: "stable",
description:
"Continue a previous interactive session by ID or resume the most recent conversation.",
},
{
key: "codex fork",
href: "/codex/cli/reference#codex-fork",
type: "stable",
description:
"Fork a previous interactive session into a new thread, preserving the original transcript.",
},
{
key: "codex sandbox",
href: "/codex/cli/reference#codex-sandbox",
type: "experimental",
description:
"Run arbitrary commands inside Codex-provided macOS seatbelt or Linux sandboxes (Landlock by default, optional bubblewrap pipeline).",
},
];
export const execOptions = [
{
key: "PROMPT",
type: "string | - (read stdin)",
description:
"Initial instruction for the task. Use `-` to pipe the prompt from stdin.",
},
{
key: "--image, -i",
type: "path[,path...]",
description:
"Attach images to the first message. Repeatable; supports comma-separated lists.",
},
{
key: "--model, -m",
type: "string",
description: "Override the configured model for this run.",
},
{
key: "--oss",
type: "boolean",
defaultValue: "false",
description:
"Use the local open source provider (requires a running Ollama instance).",
},
{
key: "--sandbox, -s",
type: "read-only | workspace-write | danger-full-access",
description:
"Sandbox policy for model-generated commands. Defaults to configuration.",
},
{
key: "--profile, -p",
type: "string",
description: "Select a configuration profile defined in config.toml.",
},
{
key: "--full-auto",
type: "boolean",
defaultValue: "false",
description:
"Apply the low-friction automation preset (`workspace-write` sandbox and `on-request` approvals).",
},
{
key: "--dangerously-bypass-approvals-and-sandbox, --yolo",
type: "boolean",
defaultValue: "false",
description:
"Bypass approval prompts and sandboxing. Dangerous—only use inside an isolated runner.",
},
{
key: "--cd, -C",
type: "path",
description: "Set the workspace root before executing the task.",
},
{
key: "--skip-git-repo-check",
type: "boolean",
defaultValue: "false",
description:
"Allow running outside a Git repository (useful for one-off directories).",
},
{
key: "--ephemeral",
type: "boolean",
defaultValue: "false",
description: "Run without persisting session rollout files to disk.",
},
{
key: "--output-schema",
type: "path",
description:
"JSON Schema file describing the expected final response shape. Codex validates tool output against it.",
},
{
key: "--color",
type: "always | never | auto",
defaultValue: "auto",
description: "Control ANSI color in stdout.",
},
{
key: "--json, --experimental-json",
type: "boolean",
defaultValue: "false",
description:
"Print newline-delimited JSON events instead of formatted text.",
},
{
key: "--output-last-message, -o",
type: "path",
description:
"Write the assistant's final message to a file. Useful for downstream scripting.",
},
{
key: "Resume subcommand",
type: "codex exec resume [SESSION_ID]",
description:
"Resume an exec session by ID or add `--last` to continue the most recent session from the current working directory. Add `--all` to consider sessions from any directory. Accepts an optional follow-up prompt.",
},
{
key: "-c, --config",
type: "key=value",
description:
"Inline configuration override for the non-interactive run (repeatable).",
},
];
export const appServerOptions = [
{
key: "--listen",
type: "stdio:// | ws://IP:PORT",
defaultValue: "stdio://",
description:
"Transport listener URL. `ws://` is experimental and intended for development/testing.",
},
];
export const appOptions = [
{
key: "PATH",
type: "path",
defaultValue: ".",
description:
"Workspace path to open in Codex Desktop (`codex app` is available on macOS only).",
},
{
key: "--download-url",
type: "url",
description:
"Advanced override for the Codex desktop DMG download URL used during install.",
},
];
export const debugAppServerSendMessageV2Options = [
{
key: "USER_MESSAGE",
type: "string",
description:
"Message text sent to app-server through the built-in V2 test-client flow.",
},
];
export const resumeOptions = [
{
key: "SESSION_ID",
type: "uuid",
description:
"Resume the specified session. Omit and use `--last` to continue the most recent session.",
},
{
key: "--last",
type: "boolean",
defaultValue: "false",
description:
"Skip the picker and resume the most recent conversation from the current working directory.",
},
{
key: "--all",
type: "boolean",
defaultValue: "false",
description:
"Include sessions outside the current working directory when selecting the most recent session.",
},
];
export const featuresOptions = [
{
key: "List subcommand",
type: "codex features list",
description:
"Show known feature flags, their maturity stage, and their effective state.",
},
{
key: "Enable subcommand",
type: "codex features enable <feature>",
description:
"Persistently enable a feature flag in `config.toml`. Respects the active `--profile` when provided.",
},
{
key: "Disable subcommand",
type: "codex features disable <feature>",
description:
"Persistently disable a feature flag in `config.toml`. Respects the active `--profile` when provided.",
},
];
export const execResumeOptions = [
{
key: "SESSION_ID",
type: "uuid",
description:
"Resume the specified session. Omit and use `--last` to continue the most recent session.",
},
{
key: "--last",
type: "boolean",
defaultValue: "false",
description:
"Resume the most recent conversation from the current working directory.",
},
{
key: "--all",
type: "boolean",
defaultValue: "false",
description:
"Include sessions outside the current working directory when selecting the most recent session.",
},
{
key: "--image, -i",
type: "path[,path...]",
description:
"Attach one or more images to the follow-up prompt. Separate multiple paths with commas or repeat the flag.",
},
{
key: "PROMPT",
type: "string | - (read stdin)",
description:
"Optional follow-up instruction sent immediately after resuming.",
},
];
export const forkOptions = [
{
key: "SESSION_ID",
type: "uuid",
description:
"Fork the specified session. Omit and use `--last` to fork the most recent session.",
},
{
key: "--last",
type: "boolean",
defaultValue: "false",
description:
"Skip the picker and fork the most recent conversation automatically.",
},
{
key: "--all",
type: "boolean",
defaultValue: "false",
description:
"Show sessions beyond the current working directory in the picker.",
},
];
export const execpolicyOptions = [
{
key: "--rules, -r",
type: "path (repeatable)",
description:
"Path to an execpolicy rule file to evaluate. Provide multiple flags to combine rules across files.",
},
{
key: "--pretty",
type: "boolean",
defaultValue: "false",
description: "Pretty-print the JSON result.",
},
{
key: "COMMAND...",
type: "var-args",
description: "Command to be checked against the specified policies.",
},
];
export const loginOptions = [
{
key: "--with-api-key",
type: "boolean",
description:
"Read an API key from stdin (for example `printenv OPENAI_API_KEY | codex login --with-api-key`).",
},
{
key: "--device-auth",
type: "boolean",
description:
"Use OAuth device code flow instead of launching a browser window.",
},
{
key: "status subcommand",
type: "codex login status",
description:
"Print the active authentication mode and exit with 0 when logged in.",
},
];
export const applyOptions = [
{
key: "TASK_ID",
type: "string",
description:
"Identifier of the Codex Cloud task whose diff should be applied.",
},
];
export const sandboxMacOptions = [
{
key: "--full-auto",
type: "boolean",
defaultValue: "false",
description:
"Grant write access to the current workspace and `/tmp` without approvals.",
},
{
key: "--config, -c",
type: "key=value",
description:
"Pass configuration overrides into the sandboxed run (repeatable).",
},
{
key: "COMMAND...",
type: "var-args",
description:
"Shell command to execute under macOS Seatbelt. Everything after `--` is forwarded.",
},
];
export const sandboxLinuxOptions = [
{
key: "--full-auto",
type: "boolean",
defaultValue: "false",
description:
"Grant write access to the current workspace and `/tmp` inside the Landlock sandbox.",
},
{
key: "--config, -c",
type: "key=value",
description:
"Configuration overrides applied before launching the sandbox (repeatable).",
},
{
key: "COMMAND...",
type: "var-args",
description:
"Command to execute under Landlock + seccomp. Provide the executable after `--`.",
},
];
export const completionOptions = [
{
key: "SHELL",
type: "bash | zsh | fish | power-shell | elvish",
defaultValue: "bash",
description: "Shell to generate completions for. Output prints to stdout.",
},
];
export const cloudExecOptions = [
{
key: "QUERY",
type: "string",
description:
"Task prompt. If omitted, Codex prompts interactively for details.",
},
{
key: "--env",
type: "ENV_ID",
description:
"Target Codex Cloud environment identifier (required). Use `codex cloud` to list options.",
},
{
key: "--attempts",
type: "1-4",
defaultValue: "1",
description:
"Number of assistant attempts (best-of-N) Codex Cloud should run.",
},
];
export const cloudListOptions = [
{
key: "--env",
type: "ENV_ID",
description: "Filter tasks by environment identifier.",
},
{
key: "--limit",
type: "1-20",
defaultValue: "20",
description: "Maximum number of tasks to return.",
},
{
key: "--cursor",
type: "string",
description: "Pagination cursor returned by a previous request.",
},
{
key: "--json",
type: "boolean",
defaultValue: "false",
description: "Emit machine-readable JSON instead of plain text.",
},
];
export const mcpCommands = [
{
key: "list",
type: "--json",
description:
"List configured MCP servers. Add `--json` for machine-readable output.",
},
{
key: "get <name>",
type: "--json",
description:
"Show a specific server configuration. `--json` prints the raw config entry.",
},
{
key: "add <name>",
type: "-- <command...> | --url <value>",
description:
"Register a server using a stdio launcher command or a streamable HTTP URL. Supports `--env KEY=VALUE` for stdio transports.",
},
{
key: "remove <name>",
description: "Delete a stored MCP server definition.",
},
{
key: "login <name>",
type: "--scopes scope1,scope2",
description:
"Start an OAuth login for a streamable HTTP server (servers that support OAuth only).",
},
{
key: "logout <name>",
description:
"Remove stored OAuth credentials for a streamable HTTP server.",
},
];
export const mcpAddOptions = [
{
key: "COMMAND...",
type: "stdio transport",
description:
"Executable plus arguments to launch the MCP server. Provide after `--`.",
},
{
key: "--env KEY=VALUE",
type: "repeatable",
description:
"Environment variable assignments applied when launching a stdio server.",
},
{
key: "--url",
type: "https://…",
description:
"Register a streamable HTTP server instead of stdio. Mutually exclusive with `COMMAND...`.",
},
{
key: "--bearer-token-env-var",
type: "ENV_VAR",
description:
"Environment variable whose value is sent as a bearer token when connecting to a streamable HTTP server.",
},
];
## How to read this reference
This page catalogs every documented Codex CLI command and flag. Use the interactive tables to search by key or description. Each section indicates whether the option is stable or experimental and calls out risky combinations.
The CLI inherits most defaults from `~/.codex/config.toml`. Any `-c key=value` overrides you pass at the command line take precedence for that invocation. See [Config basics](https://developers.openai.com/codex/config-basic#configuration-precedence) for more information.
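For example, a persisted default in `config.toml` can be overridden for a single run with `-c`. The key names below follow the Codex config schema, but the values are illustrative placeholders, not a recommended configuration:

```toml
# ~/.codex/config.toml — persisted defaults (values are placeholders)
model = "gpt-5.1-codex"
approval_policy = "on-request"
```

A one-off override such as `codex -c approval_policy="never" exec "run the test suite"` then takes precedence over the file for that invocation only; the file itself is left unchanged.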
## Global flags
<ConfigTable client:load options={globalFlagOptions} />
These options apply to the base `codex` command and propagate to each subcommand unless a section below specifies otherwise.
When you run a subcommand, place global flags after it (for example, `codex exec --oss ...`) so Codex applies them as intended.
## Command overview
The Maturity column uses feature maturity labels such as Experimental, Beta,
and Stable. See [Feature Maturity](https://developers.openai.com/codex/feature-maturity) for how to
interpret these labels.
<ConfigTable
client:load
options={commandOverview}
secondColumnTitle="Maturity"
secondColumnVariant="maturity"
/>
## Command details
### `codex` (interactive)
Running `codex` with no subcommand launches the interactive terminal UI (TUI). The agent accepts the global flags above plus image attachments. Web search defaults to cached mode; use `--search` to switch to live browsing and `--full-auto` to let Codex run most commands without prompts.
### `codex app-server`
Launch the Codex app server locally. This is primarily for development and debugging and may change without notice.
<ConfigTable client:load options={appServerOptions} />
`codex app-server --listen stdio://` keeps the default JSONL-over-stdio behavior. `--listen ws://IP:PORT` enables WebSocket transport (experimental). If you generate schemas for client bindings, add `--experimental` to include gated fields and methods.
### `codex app`
Launch Codex Desktop from the terminal on macOS and optionally open a specific workspace path.
<ConfigTable client:load options={appOptions} />
`codex app` installs/opens the desktop app on macOS, then opens the provided workspace path. This subcommand is macOS-only.
### `codex debug app-server send-message-v2`
Send one message through app-server's V2 thread/turn flow using the built-in app-server test client.
<ConfigTable client:load options={debugAppServerSendMessageV2Options} />
This debug flow initializes with `experimentalApi: true`, starts a thread, sends a turn, and streams server notifications. Use it to reproduce and inspect app-server protocol behavior locally.
### `codex apply`
Apply the most recent diff from a Codex cloud task to your local repository. You must authenticate and have access to the task.
<ConfigTable client:load options={applyOptions} />
Codex prints the patched files and exits non-zero if `git apply` fails (for example, due to conflicts).
### `codex cloud`
Interact with Codex cloud tasks from the terminal. The default command opens an interactive picker; `codex cloud exec` submits a task directly, and `codex cloud list` returns recent tasks for scripting or quick inspection.
<ConfigTable client:load options={cloudExecOptions} />
Authentication follows the same credentials as the main CLI. Codex exits non-zero if the task submission fails.
#### `codex cloud list`
List recent cloud tasks with optional filtering and pagination.
<ConfigTable client:load options={cloudListOptions} />
Plain-text output prints a task URL followed by status details. Use `--json` for automation. The JSON payload contains a `tasks` array plus an optional `cursor` value. Each task includes `id`, `url`, `title`, `status`, `updated_at`, `environment_id`, `environment_label`, `summary`, `is_review`, and `attempt_total`.
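Put together, a `--json` response might look like the following sketch. The field names come from the list above; every value is an invented placeholder:

```json
{
  "tasks": [
    {
      "id": "task_123",
      "url": "https://chatgpt.com/codex/tasks/task_123",
      "title": "Fix flaky CI test",
      "status": "completed",
      "updated_at": "2026-02-22T18:00:00Z",
      "environment_id": "env_abc",
      "environment_label": "backend",
      "summary": "Patched the retry logic",
      "is_review": false,
      "attempt_total": 1
    }
  ],
  "cursor": "opaque-cursor-value"
}
```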
### `codex completion`
Generate shell completion scripts and redirect the output to the appropriate location, for example `codex completion zsh > "${fpath[1]}/_codex"`.
<ConfigTable client:load options={completionOptions} />
### `codex features`
Manage feature flags stored in `~/.codex/config.toml`. The `enable` and `disable` commands persist changes so they apply to future sessions. When you launch with `--profile`, Codex writes to that profile instead of the root configuration.
<ConfigTable client:load options={featuresOptions} />
### `codex exec`
Use `codex exec` (or the short form `codex e`) for scripted or CI-style runs that should finish without human interaction.
<ConfigTable client:load options={execOptions} />
Codex writes formatted output by default. Add `--json` to receive newline-delimited JSON events (one per state change). The optional `resume` subcommand lets you continue non-interactive tasks. Use `--last` to pick the most recent session from the current working directory, or add `--all` to search across all sessions:
<ConfigTable client:load options={execResumeOptions} />
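In automation, the newline-delimited `--json` stream from `codex exec` can be consumed line by line. The sketch below assumes only that each line is a self-contained JSON object with a `type` field; the event names shown are hypothetical, not part of a documented schema:

```typescript
// Parse a newline-delimited JSON stream (one event object per line) into
// records, skipping blank lines. Event names here are illustrative only.
interface ExecEvent {
  type: string;
  [key: string]: unknown;
}

function parseJsonlEvents(chunk: string): ExecEvent[] {
  return chunk
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line) as ExecEvent);
}

// Example: keep only the events marking the end of a turn.
const sample = '{"type":"item.completed"}\n{"type":"turn.completed"}\n';
const terminal = parseJsonlEvents(sample).filter(
  (e) => e.type === "turn.completed",
);
```

The same pattern works for a live child-process stdout stream if you buffer partial lines between chunks.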
### `codex execpolicy`
Check `execpolicy` rule files before you save them. `codex execpolicy check` accepts one or more `--rules` flags (for example, files under `~/.codex/rules`) and emits JSON showing the strictest decision and any matching rules. Add `--pretty` to format the output. The `execpolicy` command is currently in preview.
<ConfigTable client:load options={execpolicyOptions} />
### `codex login`
Authenticate the CLI with a ChatGPT account or API key. With no flags, Codex opens a browser for the ChatGPT OAuth flow.
<ConfigTable client:load options={loginOptions} />
`codex login status` exits with `0` when credentials are present, which is helpful in automation scripts.
### `codex logout`
Remove saved credentials for both API key and ChatGPT authentication. This command has no flags.
### `codex mcp`
Manage Model Context Protocol server entries stored in `~/.codex/config.toml`.
<ConfigTable client:load options={mcpCommands} />
The `add` subcommand supports both stdio and streamable HTTP transports:
<ConfigTable client:load options={mcpAddOptions} />
OAuth actions (`login`, `logout`) only work with streamable HTTP servers (and only when the server supports OAuth).
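For orientation, a stdio entry and a streamable HTTP entry stored by `codex mcp add` look roughly like this in `config.toml`. The exact key names are an assumption based on the Codex config schema, and the server names and URL are placeholders:

```toml
# Illustrative result of:
#   codex mcp add docs -- npx -y some-mcp-server
#   codex mcp add search --url https://example.com/mcp
[mcp_servers.docs]
command = "npx"
args = ["-y", "some-mcp-server"]

[mcp_servers.search]
url = "https://example.com/mcp"
```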
### `codex mcp-server`
Run Codex as an MCP server over stdio so that other tools can connect. This command inherits global configuration overrides and exits when the downstream client closes the connection.
### `codex resume`
Continue an interactive session by ID or resume the most recent conversation. `codex resume` scopes `--last` to the current working directory unless you pass `--all`. It accepts the same global flags as `codex`, including model and sandbox overrides.
<ConfigTable client:load options={resumeOptions} />
### `codex fork`
Fork a previous interactive session into a new thread. By default, `codex fork` opens the session picker; add `--last` to fork your most recent session instead.
<ConfigTable client:load options={forkOptions} />
### `codex sandbox`
Use the sandbox helper to run a command under the same policies Codex uses internally.
#### macOS seatbelt
<ConfigTable client:load options={sandboxMacOptions} />
#### Linux Landlock
<ConfigTable client:load options={sandboxLinuxOptions} />
## Flag combinations and safety tips
- Set `--full-auto` for unattended local work, but avoid combining it with `--dangerously-bypass-approvals-and-sandbox` unless you are inside a dedicated sandbox VM.
- When you need to grant Codex write access to more directories, prefer `--add-dir` rather than forcing `--sandbox danger-full-access`.
- Pair `--json` with `--output-last-message` in CI to capture machine-readable progress and a final natural-language summary.
## Related resources
- [Codex CLI overview](https://developers.openai.com/codex/cli): installation, upgrades, and quick tips.
- [Config basics](https://developers.openai.com/codex/config-basic): persist defaults like the model and provider.
- [Advanced Config](https://developers.openai.com/codex/config-advanced): profiles, providers, sandbox tuning, and integrations.
- [AGENTS.md](https://developers.openai.com/codex/guides/agents-md): conceptual overview of Codex agent capabilities and best practices.


@@ -0,0 +1,59 @@
# Codex SDK
If you use Codex through the Codex CLI, the IDE extension, or Codex Web, you can also control it programmatically.
Use the SDK when you need to:
- Control Codex as part of your CI/CD pipeline
- Create your own agent that can engage with Codex to perform complex engineering tasks
- Build Codex into your own internal tools and workflows
- Integrate Codex within your own application
## TypeScript library
The TypeScript library provides a more comprehensive and flexible way to control Codex from within your application than non-interactive mode.
Use the library server-side; it requires Node.js 18 or later.
### Installation
To get started, install the Codex SDK using `npm`:
```bash
npm install @openai/codex-sdk
```
### Usage
Start a thread with Codex and run it with your prompt.
```ts
import { Codex } from "@openai/codex-sdk";

const codex = new Codex();
const thread = codex.startThread();
const result = await thread.run(
"Make a plan to diagnose and fix the CI failures"
);
console.log(result);
```
Call `run()` again to continue on the same thread, or resume a past thread by providing a thread ID.
```ts
// running the same thread
const result = await thread.run("Implement the plan");
console.log(result);
// resuming past thread
const threadId = "<thread-id>";
const thread2 = codex.resumeThread(threadId);
const result2 = await thread2.run("Pick up where you left off");
console.log(result2);
```
For more details, check out the [TypeScript repo](https://github.com/openai/codex/tree/main/sdk/typescript).

references/gemini/cli.md Normal file

@@ -0,0 +1,115 @@
# Gemini CLI cheatsheet
This page provides a reference for commonly used Gemini CLI commands, options,
and parameters.
## CLI commands
| Command | Description | Example |
| ---------------------------------- | ---------------------------------- | --------------------------------------------------- |
| `gemini` | Start interactive REPL | `gemini` |
| `gemini "query"` | Query non-interactively, then exit | `gemini "explain this project"` |
| `cat file \| gemini` | Process piped content | `cat logs.txt \| gemini` |
| `gemini -i "query"` | Execute and continue interactively | `gemini -i "What is the purpose of this project?"` |
| `gemini -r "latest"` | Continue most recent session | `gemini -r "latest"` |
| `gemini -r "latest" "query"` | Continue session with a new prompt | `gemini -r "latest" "Check for type errors"` |
| `gemini -r "<session-id>" "query"` | Resume session by ID | `gemini -r "abc123" "Finish this PR"` |
| `gemini update` | Update to latest version | `gemini update` |
| `gemini extensions` | Manage extensions | See [Extensions Management](#extensions-management) |
| `gemini mcp` | Configure MCP servers | See [MCP Server Management](#mcp-server-management) |
### Positional arguments
| Argument | Type | Description |
| -------- | ----------------- | ------------------------------------------------------------------------------------------------------------------ |
| `query` | string (variadic) | Positional prompt. Defaults to one-shot mode. Use `-i/--prompt-interactive` to execute and continue interactively. |
## CLI Options
| Option | Alias | Type | Default | Description |
| -------------------------------- | ----- | ------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--debug` | `-d` | boolean | `false` | Run in debug mode with verbose logging |
| `--version` | `-v` | - | - | Show CLI version number and exit |
| `--help` | `-h` | - | - | Show help information |
| `--model` | `-m` | string | `auto` | Model to use. See [Model Selection](#model-selection) for available values. |
| `--prompt` | `-p` | string | - | Prompt text. Appended to stdin input if provided. **Deprecated:** Use positional arguments instead. |
| `--prompt-interactive` | `-i` | string | - | Execute prompt and continue in interactive mode |
| `--sandbox` | `-s` | boolean | `false` | Run in a sandboxed environment for safer execution |
| `--approval-mode` | - | string | `default` | Approval mode for tool execution. Choices: `default`, `auto_edit`, `yolo` |
| `--yolo` | `-y` | boolean | `false` | **Deprecated.** Auto-approve all actions. Use `--approval-mode=yolo` instead. |
| `--experimental-acp` | - | boolean | - | Start in ACP (Agent Client Protocol) mode. **Experimental feature.** |
| `--experimental-zed-integration` | - | boolean | - | Run in Zed editor integration mode. **Experimental feature.** |
| `--allowed-mcp-server-names` | - | array | - | Allowed MCP server names (comma-separated or multiple flags) |
| `--allowed-tools` | - | array | - | **Deprecated.** Use the [Policy Engine](/docs/reference/policy-engine) instead. Tools that are allowed to run without confirmation (comma-separated or multiple flags) |
| `--extensions` | `-e` | array | - | List of extensions to use. If not provided, all extensions are enabled (comma-separated or multiple flags) |
| `--list-extensions` | `-l` | boolean | - | List all available extensions and exit |
| `--resume` | `-r` | string | - | Resume a previous session. Use `"latest"` for most recent or index number (e.g. `--resume 5`) |
| `--list-sessions` | - | boolean | - | List available sessions for the current project and exit |
| `--delete-session` | - | string | - | Delete a session by index number (use `--list-sessions` to see available sessions) |
| `--include-directories` | - | array | - | Additional directories to include in the workspace (comma-separated or multiple flags) |
| `--screen-reader` | - | boolean | - | Enable screen reader mode for accessibility |
| `--output-format` | `-o` | string | `text` | The format of the CLI output. Choices: `text`, `json`, `stream-json` |
## Model selection
The `--model` (or `-m`) flag lets you specify which Gemini model to use. You can
use either model aliases (user-friendly names) or concrete model names.
### Model aliases
These are convenient shortcuts that map to specific models:
| Alias | Resolves To | Description |
| ------------ | ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------- |
| `auto` | `gemini-2.5-pro` or `gemini-3-pro-preview` | **Default.** Resolves to the preview model if preview features are enabled, otherwise resolves to the standard pro model. |
| `pro` | `gemini-2.5-pro` or `gemini-3-pro-preview` | For complex reasoning tasks. Uses preview model if enabled. |
| `flash` | `gemini-2.5-flash` | Fast, balanced model for most tasks. |
| `flash-lite` | `gemini-2.5-flash-lite` | Fastest model for simple tasks. |
## Extensions management
| Command | Description | Example |
| -------------------------------------------------- | -------------------------------------------- | ------------------------------------------------------------------------------ |
| `gemini extensions install <source>` | Install extension from Git URL or local path | `gemini extensions install https://github.com/user/my-extension` |
| `gemini extensions install <source> --ref <ref>` | Install from specific branch/tag/commit | `gemini extensions install https://github.com/user/my-extension --ref develop` |
| `gemini extensions install <source> --auto-update` | Install with auto-update enabled | `gemini extensions install https://github.com/user/my-extension --auto-update` |
| `gemini extensions uninstall <name>` | Uninstall one or more extensions | `gemini extensions uninstall my-extension` |
| `gemini extensions list` | List all installed extensions | `gemini extensions list` |
| `gemini extensions update <name>` | Update a specific extension | `gemini extensions update my-extension` |
| `gemini extensions update --all` | Update all extensions | `gemini extensions update --all` |
| `gemini extensions enable <name>` | Enable an extension | `gemini extensions enable my-extension` |
| `gemini extensions disable <name>` | Disable an extension | `gemini extensions disable my-extension` |
| `gemini extensions link <path>` | Link local extension for development | `gemini extensions link /path/to/extension` |
| `gemini extensions new <path>` | Create new extension from template | `gemini extensions new ./my-extension` |
| `gemini extensions validate <path>` | Validate extension structure | `gemini extensions validate ./my-extension` |
See [Extensions Documentation](/docs/extensions) for more details.
## MCP server management
| Command | Description | Example |
| ------------------------------------------------------------- | ------------------------------- | ---------------------------------------------------------------------------------------------------- |
| `gemini mcp add <name> <command>` | Add stdio-based MCP server | `gemini mcp add github npx -y @modelcontextprotocol/server-github` |
| `gemini mcp add <name> <url> --transport http` | Add HTTP-based MCP server | `gemini mcp add api-server http://localhost:3000 --transport http` |
| `gemini mcp add <name> <command> --env KEY=value` | Add with environment variables | `gemini mcp add slack node server.js --env SLACK_TOKEN=xoxb-xxx` |
| `gemini mcp add <name> <command> --scope user` | Add with user scope | `gemini mcp add db node db-server.js --scope user` |
| `gemini mcp add <name> <command> --include-tools tool1,tool2` | Add with specific tools | `gemini mcp add github npx -y @modelcontextprotocol/server-github --include-tools list_repos,get_pr` |
| `gemini mcp remove <name>` | Remove an MCP server | `gemini mcp remove github` |
| `gemini mcp list` | List all configured MCP servers | `gemini mcp list` |
See [MCP Server Integration](/docs/tools/mcp-server) for more details.
## Skills management
| Command | Description | Example |
| -------------------------------- | ------------------------------------- | ------------------------------------------------- |
| `gemini skills list` | List all discovered agent skills | `gemini skills list` |
| `gemini skills install <source>` | Install skill from Git, path, or file | `gemini skills install https://github.com/u/repo` |
| `gemini skills link <path>` | Link local agent skills via symlink | `gemini skills link /path/to/my-skills` |
| `gemini skills uninstall <name>` | Uninstall an agent skill | `gemini skills uninstall my-skill` |
| `gemini skills enable <name>` | Enable an agent skill | `gemini skills enable my-skill` |
| `gemini skills disable <name>` | Disable an agent skill | `gemini skills disable my-skill` |
| `gemini skills enable --all` | Enable all skills | `gemini skills enable --all` |
| `gemini skills disable --all` | Disable all skills | `gemini skills disable --all` |
See [Agent Skills Documentation](/docs/cli/skills) for more details.

scripts/aetheel.service Normal file

@@ -0,0 +1,21 @@
# Aetheel systemd service file
# Copy to /etc/systemd/system/aetheel.service and edit paths
# Or run: bash scripts/setup.sh (does this automatically)
[Unit]
Description=Aetheel Discord-Claude Gateway
After=network.target
[Service]
Type=simple
User=YOUR_USERNAME
WorkingDirectory=/path/to/aetheel-2
EnvironmentFile=/path/to/aetheel-2/.env
ExecStart=/usr/bin/npx tsx src/index.ts
Restart=on-failure
RestartSec=10
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target

scripts/setup.sh Normal file

@@ -0,0 +1,199 @@
#!/bin/bash
set -e
echo "========================================="
echo " Aetheel — Discord-Claude Gateway Setup"
echo "========================================="
echo ""
PROJECT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
CONFIG_DIR="$PROJECT_DIR/config"
# --- Step 1: Check prerequisites ---
echo "Checking prerequisites..."
if ! command -v node &> /dev/null; then
echo "ERROR: Node.js is not installed. Install Node.js 18+ first."
exit 1
fi
NODE_VERSION=$(node -v | cut -d'v' -f2 | cut -d'.' -f1)
if [ "$NODE_VERSION" -lt 18 ]; then
echo "ERROR: Node.js 18+ required. Found: $(node -v)"
exit 1
fi
echo " ✓ Node.js $(node -v)"
if ! command -v claude &> /dev/null; then
echo "WARNING: Claude Code CLI not found in PATH."
echo " Install: npm install -g @anthropic-ai/claude-code"
echo " Then run: claude (to sign in)"
echo ""
read -p "Continue anyway? (y/n) " -n 1 -r
echo ""
if [[ ! $REPLY =~ ^[Yy]$ ]]; then exit 1; fi
else
echo " ✓ Claude Code CLI $(claude --version 2>/dev/null || echo 'installed')"
fi
# --- Step 2: Install dependencies ---
echo ""
echo "Installing dependencies..."
cd "$PROJECT_DIR"
npm install
echo " ✓ Dependencies installed"
# --- Step 3: Create .env ---
echo ""
if [ -f "$PROJECT_DIR/.env" ]; then
echo ".env already exists. Skipping."
else
echo "Setting up environment variables..."
read -p "Discord Bot Token: " DISCORD_TOKEN
read -p "Output Channel ID (for heartbeat/cron, press Enter to skip): " OUTPUT_CHANNEL
cat > "$PROJECT_DIR/.env" << EOF
DISCORD_BOT_TOKEN=$DISCORD_TOKEN
EOF
if [ -n "$OUTPUT_CHANNEL" ]; then
echo "OUTPUT_CHANNEL_ID=$OUTPUT_CHANNEL" >> "$PROJECT_DIR/.env"
fi
echo " ✓ .env created"
fi
# --- Step 4: Create config directory ---
echo ""
echo "Setting up config directory..."
mkdir -p "$CONFIG_DIR"
mkdir -p "$CONFIG_DIR/skills"
mkdir -p "$CONFIG_DIR/ipc/outbound"
mkdir -p "$CONFIG_DIR/messages"
mkdir -p "$CONFIG_DIR/conversations"
# --- Step 5: Create CLAUDE.md if missing ---
if [ ! -f "$CONFIG_DIR/CLAUDE.md" ]; then
echo ""
read -p "Agent name (default: Aetheel): " AGENT_NAME
AGENT_NAME=${AGENT_NAME:-Aetheel}
cat > "$CONFIG_DIR/CLAUDE.md" << EOF
# $AGENT_NAME
## Identity
- **Name:** $AGENT_NAME
- **Vibe:** Helpful, sharp, slightly witty
- **Emoji:** ⚡
## Personality
Be genuinely helpful, not performatively helpful. Skip the filler words — just help.
Have opinions. Be resourceful before asking. Earn trust through competence.
Keep Discord messages concise. Use markdown formatting.
## User Context
(Add info about yourself here — name, timezone, preferences, projects)
## Tools
(Add notes about specific tools, APIs, or services the agent should know about)
EOF
echo " ✓ CLAUDE.md created for $AGENT_NAME"
else
echo " ✓ CLAUDE.md already exists"
fi
# --- Step 6: Create agents.md if missing ---
if [ ! -f "$CONFIG_DIR/agents.md" ]; then
cat > "$CONFIG_DIR/agents.md" << 'EOF'
# Operating Rules
Be helpful and concise. Keep Discord messages short.
## Cron Jobs
## Hooks
### startup
Instruction: Say hello briefly, you just came online.
EOF
echo " ✓ agents.md created"
else
echo " ✓ agents.md already exists"
fi
# --- Step 7: Create heartbeat.md if missing ---
if [ ! -f "$CONFIG_DIR/heartbeat.md" ]; then
cat > "$CONFIG_DIR/heartbeat.md" << 'EOF'
# Heartbeat
# Add checks below. Example:
# ## check-name
# Interval: 3600
# Instruction: Check something and report.
EOF
echo " ✓ heartbeat.md created"
else
echo " ✓ heartbeat.md already exists"
fi
# --- Step 8: Create memory.md if missing ---
if [ ! -f "$CONFIG_DIR/memory.md" ]; then
echo "# Memory" > "$CONFIG_DIR/memory.md"
echo " ✓ memory.md created"
else
echo " ✓ memory.md already exists"
fi
# --- Step 9: Set up systemd service (optional) ---
echo ""
read -p "Set up systemd service for auto-start? (y/n) " -n 1 -r
echo ""
if [[ $REPLY =~ ^[Yy]$ ]]; then
NPX_PATH=$(which npx)
CURRENT_USER=$(whoami)
sudo tee /etc/systemd/system/aetheel.service > /dev/null << EOF
[Unit]
Description=Aetheel Discord-Claude Gateway
After=network.target
[Service]
Type=simple
User=$CURRENT_USER
WorkingDirectory=$PROJECT_DIR
EnvironmentFile=$PROJECT_DIR/.env
ExecStart=$NPX_PATH tsx src/index.ts
Restart=on-failure
RestartSec=10
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable aetheel
echo " ✓ systemd service created and enabled"
echo ""
echo " Start: sudo systemctl start aetheel"
echo " Stop: sudo systemctl stop aetheel"
echo " Logs: sudo journalctl -u aetheel -f"
echo " Restart: sudo systemctl restart aetheel"
else
echo " Skipped systemd setup. Run manually with: npm run dev"
fi
echo ""
echo "========================================="
echo " Setup complete!"
echo "========================================="
echo ""
echo "Config directory: $CONFIG_DIR"
echo ""
echo "Next steps:"
echo " 1. Edit config/CLAUDE.md with your persona"
echo " 2. Add cron jobs to config/agents.md"
echo " 3. Add heartbeat checks to config/heartbeat.md"
echo " 4. Start: npm run dev (or sudo systemctl start aetheel)"
echo ""


@@ -11,6 +11,8 @@ import type { SessionManager } from "./session-manager.js";
import { formatErrorForUser } from "./error-formatter.js";
import type { HookManager } from "./hook-manager.js";
import type { GatewayConfig } from "./config.js";
+import { loadSkills } from "./skills-loader.js";
+import { logger } from "./logger.js";
export interface EventResult {
responseText?: string;
@@ -21,6 +23,40 @@ export interface EventResult {
export type OnStreamResult = (text: string, channelId: string) => Promise<void>;
+function isTransientError(error: unknown): boolean {
+  if (error instanceof Error) {
+    const msg = error.message.toLowerCase();
+    if (msg.includes("session") && (msg.includes("invalid") || msg.includes("corrupt") || msg.includes("not found") || msg.includes("expired"))) {
+      return false;
+    }
+    return msg.includes("timed out") || msg.includes("timeout") || msg.includes("exit") || msg.includes("spawn") || msg.includes("crash");
+  }
+  return true;
+}
+
+export async function withRetry<T>(
+  fn: () => Promise<T>,
+  maxRetries: number,
+  baseDelayMs: number,
+  shouldRetry: (error: unknown) => boolean,
+): Promise<T> {
+  let lastError: unknown;
+  for (let attempt = 0; attempt <= maxRetries; attempt++) {
+    try {
+      return await fn();
+    } catch (error) {
+      lastError = error;
+      if (attempt >= maxRetries || !shouldRetry(error)) {
+        throw error;
+      }
+      const delay = baseDelayMs * Math.pow(2, attempt);
+      logger.info({ attempt: attempt + 1, maxRetries, delayMs: delay }, "Retrying after transient error");
+      await new Promise((resolve) => setTimeout(resolve, delay));
+    }
+  }
+  throw lastError;
+}
interface ClaudeJsonResponse {
type: string;
subtype?: string;
@@ -77,7 +113,8 @@ export class AgentRuntime {
private async processEventCore(event: Event, onStreamResult?: OnStreamResult): Promise<EventResult> {
const configs = await this.markdownConfigLoader.loadAll(this.config.configDir);
const systemPrompt = this.systemPromptAssembler.assemble(configs);
const skills = await loadSkills(this.config.configDir);
const systemPrompt = this.systemPromptAssembler.assemble(configs, skills);
switch (event.type) {
case "message":
@@ -104,7 +141,12 @@ export class AgentRuntime {
: undefined;
try {
const response = await this.executeClaude(promptText, systemPrompt, existingSessionId, streamCallback);
const response = await withRetry(
() => this.executeClaude(promptText, systemPrompt, existingSessionId, streamCallback),
3,
5000,
isTransientError,
);
if (response.session_id && channelId) {
this.sessionManager.setSessionId(channelId, response.session_id);
@@ -203,7 +245,7 @@ export class AgentRuntime {
args.push("--max-turns", "25");
const configDir = path.resolve(this.config.configDir);
console.log(`[DEBUG] Spawning: ${this.config.claudeCliPath} cwd=${configDir} args=${JSON.stringify(args.slice(0, 8))}... (${args.length} total)`);
logger.debug({ cliPath: this.config.claudeCliPath, cwd: configDir, argCount: args.length }, "Spawning Claude CLI");
const child = spawn(this.config.claudeCliPath, args, {
stdio: ["ignore", "pipe", "pipe"],
@@ -242,7 +284,7 @@ export class AgentRuntime {
if (onResult) {
streamedResults = true;
onResult(obj.result).catch((err) =>
console.error("[DEBUG] Stream callback error:", err)
logger.error({ err }, "Stream callback error")
);
}
}
@@ -257,14 +299,14 @@ export class AgentRuntime {
});
const timer = setTimeout(() => {
console.log(`[DEBUG] Timeout reached, killing Claude CLI process`);
logger.debug("Timeout reached, killing Claude CLI process");
child.kill("SIGTERM");
reject(new Error("Query timed out"));
}, this.config.queryTimeoutMs);
child.on("close", (code) => {
clearTimeout(timer);
console.log(`[DEBUG] Claude CLI exited: code=${code}, stdout=${stdout.length} chars, streamed=${streamedResults}`);
logger.debug({ code, stdoutLength: stdout.length, streamed: streamedResults }, "Claude CLI exited");
if (code !== 0 && code !== null) {
reject(new Error(`Claude CLI error (exit ${code}): ${stderr.slice(0, 500) || "unknown error"}`));
@@ -302,7 +344,7 @@ export class AgentRuntime {
} catch { /* ignore */ }
}
console.log(`[DEBUG] Parsed: result=${lastResultText.length} chars, session=${parsedSessionId ?? "none"}`);
logger.debug({ resultLength: lastResultText.length, session: parsedSessionId ?? "none" }, "Parsed Claude response");
resolve({
type: "result",
@@ -314,7 +356,7 @@ export class AgentRuntime {
child.on("error", (err) => {
clearTimeout(timer);
console.error(`[DEBUG] Failed to spawn Claude CLI: ${err.message}`);
logger.error({ err }, "Failed to spawn Claude CLI");
reject(new Error(`Failed to spawn Claude CLI: ${err.message}`));
});
});
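The retry wrapper added above can be exercised in isolation. This sketch reproduces `withRetry` as shown in the hunk and drives it with a flaky function; the 1 ms base delay is chosen only so the example completes instantly.

```typescript
// withRetry, reproduced from agent-runtime.ts so the sketch is self-contained.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries: number,
  baseDelayMs: number,
  shouldRetry: (error: unknown) => boolean,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt >= maxRetries || !shouldRetry(error)) throw error;
      // Exponential backoff: base * 2^attempt
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// A function that fails twice with a "transient" error, then succeeds.
let calls = 0;
const result = await withRetry(
  async () => {
    calls++;
    if (calls < 3) throw new Error("spawn failed");
    return "ok";
  },
  3, // maxRetries — same value the runtime passes to executeClaude
  1, // 1 ms base delay so the example runs quickly (runtime uses 5000)
  () => true,
);
```

After the third call `result` is `"ok"`; a non-transient error (per `isTransientError`) would have been rethrown on the first attempt instead.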


@@ -1,5 +1,6 @@
import { readFile, writeFile, access } from "node:fs/promises";
import { join } from "node:path";
import { logger } from "./logger.js";
export interface BootConfig {
requiredFiles: string[];
@@ -59,10 +60,10 @@ export class BootstrapManager {
// Log results
if (loadedFiles.length > 0) {
console.log(`Bootstrap: loaded files — ${loadedFiles.join(", ")}`);
logger.info({ files: loadedFiles }, "Bootstrap: loaded files");
}
if (createdFiles.length > 0) {
console.log(`Bootstrap: created files with defaults — ${createdFiles.join(", ")}`);
logger.info({ files: createdFiles }, "Bootstrap: created files with defaults");
}
return { loadedFiles, createdFiles };


@@ -8,6 +8,7 @@ export interface GatewayConfig {
configDir: string;
maxQueueDepth: number;
outputChannelId?: string;
idleSessionTimeoutMs: number;
}
const DEFAULT_ALLOWED_TOOLS = ["Read", "Write", "Edit", "Glob", "Grep", "WebSearch", "WebFetch"];
@@ -17,6 +18,7 @@ const DEFAULT_MAX_CONCURRENT_QUERIES = 5;
const DEFAULT_CONFIG_DIR = "./config";
const DEFAULT_MAX_QUEUE_DEPTH = 100;
const DEFAULT_CLAUDE_CLI_PATH = "claude";
const DEFAULT_IDLE_SESSION_TIMEOUT_MS = 1_800_000; // 30 minutes
export function loadConfig(): GatewayConfig {
const missing: string[] = [];
@@ -56,6 +58,10 @@ export function loadConfig(): GatewayConfig {
const outputChannelId = process.env.OUTPUT_CHANNEL_ID || undefined;
const idleSessionTimeoutMs = process.env.IDLE_SESSION_TIMEOUT_MS
? parseInt(process.env.IDLE_SESSION_TIMEOUT_MS, 10)
: DEFAULT_IDLE_SESSION_TIMEOUT_MS;
return {
discordBotToken: discordBotToken!,
claudeCliPath,
@@ -66,5 +72,6 @@ export function loadConfig(): GatewayConfig {
configDir,
maxQueueDepth,
outputChannelId,
idleSessionTimeoutMs,
};
}


@@ -0,0 +1,20 @@
import { appendFile, mkdir } from "node:fs/promises";
import { join } from "node:path";
import { logger } from "./logger.js";
export class ConversationArchiver {
async archive(configDir: string, channelId: string, prompt: string, response: string): Promise<void> {
const now = new Date();
const dateStr = now.toISOString().slice(0, 10); // YYYY-MM-DD
const timeStr = now.toTimeString().slice(0, 8); // HH:MM:SS
const dir = join(configDir, "conversations", channelId);
await mkdir(dir, { recursive: true });
const filePath = join(dir, `${dateStr}.md`);
const entry = `### ${timeStr} — User\n${prompt}\n\n### ${timeStr} — Aetheel\n${response}\n\n---\n\n`;
await appendFile(filePath, entry, "utf-8");
logger.debug({ channelId, date: dateStr }, "Conversation archived");
}
}
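The archive entry format above can be seen in miniature. This sketch builds one entry with illustrative time and message values (the real values come from `new Date()` and the Discord exchange):

```typescript
// Illustrative values; the archiver derives these from the current time
// and the actual prompt/response pair.
const timeStr = "14:03:22";
const prompt = "What's on the calendar?";
const response = "Nothing today.";

// Entry layout copied from ConversationArchiver.archive
const entry = `### ${timeStr} — User\n${prompt}\n\n### ${timeStr} — Aetheel\n${response}\n\n---\n\n`;
```

Each call appends one such block to `conversations/<channelId>/<YYYY-MM-DD>.md`, so a day's file reads as an alternating User/Aetheel transcript separated by horizontal rules.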


@@ -1,5 +1,6 @@
import cron from "node-cron";
import type { Event } from "./event-queue.js";
import { logger } from "./logger.js";
export interface CronJob {
name: string;
@@ -82,9 +83,7 @@ export class CronScheduler {
start(jobs: CronJob[], enqueue: EnqueueFn): void {
for (const job of jobs) {
if (!cron.validate(job.expression)) {
console.warn(
`Cron job "${job.name}" has invalid cron expression "${job.expression}". Skipping.`
);
logger.warn({ name: job.name, expression: job.expression }, "Cron job has invalid cron expression, skipping");
continue;
}


@@ -8,6 +8,7 @@ import {
type Message,
type TextChannel,
} from "discord.js";
import { logger } from "./logger.js";
export interface Prompt {
text: string;
@@ -46,13 +47,12 @@ export class DiscordBot {
return new Promise<void>((resolve, reject) => {
this.client.once("ready", () => {
const user = this.client.user;
console.log(`Bot logged in as ${user?.tag ?? "unknown"}`);
console.log(`Connected to ${this.client.guilds.cache.size} guild(s)`);
logger.info({ tag: user?.tag ?? "unknown" }, "Bot logged in");
logger.info({ guildCount: this.client.guilds.cache.size }, "Connected to guilds");
this.setupMessageHandler();
this.setupInteractionHandler();
// Debug: confirm we're listening for messages
console.log(`[DEBUG] Message intent enabled, listening for messageCreate events`);
logger.debug("Message intent enabled, listening for messageCreate events");
resolve();
});
@@ -85,7 +85,7 @@ export class DiscordBot {
const rest = new REST({ version: "10" }).setToken(this.client.token!);
await rest.put(Routes.applicationCommands(clientId), { body: commands });
console.log("Registered /claude and /claude-reset slash commands");
logger.info("Registered /claude and /claude-reset slash commands");
}
async sendMessage(channelId: string, content: string): Promise<void> {
@@ -95,9 +95,9 @@ export class DiscordBot {
await (channel as TextChannel).send(content);
}
} catch (error) {
console.error(
`Failed to send message to channel ${channelId} (content length: ${content.length}):`,
error
logger.error(
{ channelId, contentLength: content.length, err: error },
"Failed to send message to channel",
);
}
}
@@ -127,32 +127,32 @@ export class DiscordBot {
private setupMessageHandler(): void {
this.client.on("messageCreate", (message: Message) => {
console.log(`[DEBUG] Message received: "${message.content}" from ${message.author.tag} (bot: ${message.author.bot})`);
logger.debug({ content: message.content, author: message.author.tag, bot: message.author.bot }, "Message received");
if (shouldIgnoreMessage(message)) {
console.log("[DEBUG] Ignoring bot message");
logger.debug("Ignoring bot message");
return;
}
const botUser = this.client.user;
if (!botUser) {
console.log("[DEBUG] No bot user available");
logger.debug("No bot user available");
return;
}
if (!message.mentions.has(botUser)) {
console.log("[DEBUG] Message does not mention the bot");
logger.debug("Message does not mention the bot");
return;
}
const text = extractPromptFromMention(message.content, botUser.id);
console.log(`[DEBUG] Extracted prompt: "${text}"`);
logger.debug({ text }, "Extracted prompt");
if (!text) {
console.log("[DEBUG] Empty prompt after extraction, ignoring");
logger.debug("Empty prompt after extraction, ignoring");
return;
}
console.log(`[DEBUG] Forwarding prompt to handler: "${text}" from channel ${message.channelId}`);
logger.debug({ text, channelId: message.channelId }, "Forwarding prompt to handler");
this.promptHandler?.({
text,
channelId: message.channelId,


@@ -12,6 +12,10 @@ import { HookManager } from "./hook-manager.js";
import { BootstrapManager } from "./bootstrap-manager.js";
import { splitMessage } from "./response-formatter.js";
import { formatErrorForUser } from "./error-formatter.js";
import { appendMessage } from "./message-history.js";
import { IpcWatcher } from "./ipc-watcher.js";
import { ConversationArchiver } from "./conversation-archiver.js";
import { logger } from "./logger.js";
export class GatewayCore {
private config!: GatewayConfig;
@@ -23,6 +27,9 @@ export class GatewayCore {
private cronScheduler!: CronScheduler;
private hookManager!: HookManager;
private markdownConfigLoader!: MarkdownConfigLoader;
private ipcWatcher!: IpcWatcher;
private conversationArchiver = new ConversationArchiver();
private idleCleanupTimer: ReturnType<typeof setInterval> | null = null;
private activeQueryCount = 0;
private isShuttingDown = false;
@@ -30,7 +37,7 @@ export class GatewayCore {
async start(): Promise<void> {
// 1. Load config
this.config = loadConfig();
console.log("Configuration loaded");
logger.info("Configuration loaded");
// 2. Run bootstrap
const bootstrapManager = new BootstrapManager();
@@ -69,9 +76,9 @@ export class GatewayCore {
this.heartbeatScheduler.start(checks, (event) =>
this.eventQueue.enqueue(event),
);
console.log(`HeartbeatScheduler started with ${checks.length} check(s)`);
logger.info({ count: checks.length }, "HeartbeatScheduler started");
} else {
console.log("No heartbeat.md found, operating without heartbeat events");
logger.info("No heartbeat.md found, operating without heartbeat events");
}
// 7. Parse agents.md → start CronScheduler, load HookConfig
@@ -85,15 +92,15 @@ export class GatewayCore {
this.cronScheduler.start(cronJobs, (event) =>
this.eventQueue.enqueue(event),
);
console.log(`CronScheduler started with ${cronJobs.length} job(s)`);
logger.info({ count: cronJobs.length }, "CronScheduler started");
this.hookManager.parseConfig(agentsContent);
console.log("HookConfig loaded from agents.md");
logger.info("HookConfig loaded from agents.md");
}
// 8. Register EventQueue processing handler
this.eventQueue.onEvent(async (event: Event) => {
console.log(`[DEBUG] Processing event: type=${event.type}, id=${event.id}`);
logger.debug({ type: event.type, id: event.id }, "Processing event");
try {
// Streaming callback — sends results to Discord as they arrive
const onStreamResult = async (text: string, channelId: string) => {
@@ -104,7 +111,7 @@ export class GatewayCore {
};
const result = await this.agentRuntime.processEvent(event, onStreamResult);
console.log(`[DEBUG] Event result: responseText=${result.responseText?.length ?? 0} chars, error=${result.error ?? "none"}`);
logger.debug({ responseLength: result.responseText?.length ?? 0, error: result.error ?? "none" }, "Event result");
// Only send if not already streamed
if (result.responseText && result.targetChannelId) {
@@ -112,13 +119,28 @@ export class GatewayCore {
for (const chunk of chunks) {
await this.discordBot.sendMessage(result.targetChannelId, chunk);
}
// Store outbound message history
await appendMessage(this.config.configDir, result.targetChannelId, {
sender: "Aetheel",
content: result.responseText,
timestamp: new Date().toISOString(),
direction: "outbound",
}).catch(() => {});
}
if (result.error && result.targetChannelId) {
await this.discordBot.sendMessage(result.targetChannelId, result.error);
}
// Archive conversation for message events
if (event.type === "message" && result.responseText) {
const payload = event.payload as MessagePayload;
await this.conversationArchiver
.archive(this.config.configDir, payload.prompt.channelId, payload.prompt.text, result.responseText)
.catch(() => {});
}
} catch (error) {
console.error("Error processing event:", error);
logger.error({ err: error }, "Error processing event");
if (event.type === "message") {
const payload = event.payload as MessagePayload;
const errorMsg = formatErrorForUser(error);
@@ -135,7 +157,7 @@ export class GatewayCore {
// 9. Wire DiscordBot.onPrompt() to create message events and enqueue them
this.discordBot.onPrompt((prompt: Prompt) => {
console.log(`[DEBUG] onPrompt called: "${prompt.text}" from channel ${prompt.channelId}`);
logger.debug({ text: prompt.text, channelId: prompt.channelId }, "onPrompt called");
if (this.isShuttingDown) {
this.discordBot
@@ -153,6 +175,17 @@ export class GatewayCore {
this.activeQueryCount++;
// Store inbound message history
appendMessage(this.config.configDir, prompt.channelId, {
sender: prompt.userId,
content: prompt.text,
timestamp: new Date().toISOString(),
direction: "inbound",
}).catch(() => {});
// Touch session activity
this.sessionManager.touchActivity(prompt.channelId);
// Send typing indicator
this.discordBot.sendTyping(prompt.channelId).catch(() => {});
@@ -177,18 +210,36 @@ export class GatewayCore {
// 11. Fire startup hook
this.hookManager.fire("startup", (event) => this.eventQueue.enqueue(event));
console.log("Gateway started successfully");
// 12. Start IPC watcher
this.ipcWatcher = new IpcWatcher(
this.config.configDir,
(channelId, text) => this.discordBot.sendMessage(channelId, text),
);
this.ipcWatcher.start();
// 13. Start idle session cleanup timer (every 5 minutes)
this.idleCleanupTimer = setInterval(() => {
this.sessionManager.cleanupIdleSessions(this.config.idleSessionTimeoutMs);
}, 5 * 60 * 1000);
logger.info("Gateway started successfully");
}
async shutdown(): Promise<void> {
console.log("Initiating graceful shutdown...");
logger.info("Initiating graceful shutdown...");
// 1. Set isShuttingDown flag, stop accepting new events from Discord
this.isShuttingDown = true;
// 2. Stop HeartbeatScheduler and CronScheduler
// 2. Stop HeartbeatScheduler, CronScheduler, IPC watcher, idle cleanup
this.heartbeatScheduler?.stop();
this.cronScheduler?.stop();
this.ipcWatcher?.stop();
if (this.idleCleanupTimer) {
clearInterval(this.idleCleanupTimer);
this.idleCleanupTimer = null;
}
// 3. Fire shutdown hook (enqueue and wait for processing)
this.hookManager?.fire("shutdown", (event) => this.eventQueue.enqueue(event));
@@ -199,7 +250,7 @@ export class GatewayCore {
// 5. Disconnect DiscordBot
await this.discordBot?.destroy();
console.log("Gateway shut down cleanly");
logger.info("Gateway shut down cleanly");
// 6. Exit with code 0
process.exit(0);


@@ -1,4 +1,5 @@
import type { Event } from "./event-queue.js";
import { logger } from "./logger.js";
export interface HeartbeatCheck {
name: string;
@@ -56,9 +57,7 @@ export class HeartbeatScheduler {
start(checks: HeartbeatCheck[], enqueue: EnqueueFn): void {
for (const check of checks) {
if (check.intervalSeconds < MIN_INTERVAL_SECONDS) {
console.warn(
`Heartbeat check "${check.name}" has interval ${check.intervalSeconds}s which is below the minimum of ${MIN_INTERVAL_SECONDS}s. Skipping.`
);
logger.warn({ name: check.name, interval: check.intervalSeconds, minimum: MIN_INTERVAL_SECONDS }, "Heartbeat check interval below minimum, skipping");
continue;
}

View File

@@ -1,11 +1,12 @@
import "dotenv/config";
import { GatewayCore } from "./gateway-core.js";
import { registerShutdownHandler } from "./shutdown-handler.js";
import { logger } from "./logger.js";
const gateway = new GatewayCore();
registerShutdownHandler(gateway);
gateway.start().catch((error) => {
console.error("Failed to start gateway:", error);
logger.error({ err: error }, "Failed to start gateway");
process.exit(1);
});

src/ipc-watcher.ts (new file, 74 lines)

@@ -0,0 +1,74 @@
import { readdir, readFile, unlink, mkdir } from "node:fs/promises";
import { join } from "node:path";
import { logger } from "./logger.js";
interface IpcMessage {
channelId: string;
text: string;
}
const POLL_INTERVAL_MS = 2000;
export class IpcWatcher {
private configDir: string;
private sendMessage: (channelId: string, text: string) => Promise<void>;
private timer: ReturnType<typeof setInterval> | null = null;
constructor(configDir: string, sendMessage: (channelId: string, text: string) => Promise<void>) {
this.configDir = configDir;
this.sendMessage = sendMessage;
}
private get outboundDir(): string {
return join(this.configDir, "ipc", "outbound");
}
start(): void {
mkdir(this.outboundDir, { recursive: true }).catch((err) => {
logger.error({ err }, "Failed to create IPC outbound directory");
});
this.timer = setInterval(() => {
this.poll().catch((err) => {
logger.error({ err }, "IPC poll error");
});
}, POLL_INTERVAL_MS);
logger.info({ dir: this.outboundDir }, "IPC watcher started");
}
stop(): void {
if (this.timer) {
clearInterval(this.timer);
this.timer = null;
}
logger.info("IPC watcher stopped");
}
private async poll(): Promise<void> {
let files: string[];
try {
files = await readdir(this.outboundDir);
} catch {
return; // Directory may not exist yet
}
const jsonFiles = files.filter((f) => f.endsWith(".json"));
for (const file of jsonFiles) {
const filePath = join(this.outboundDir, file);
try {
const raw = await readFile(filePath, "utf-8");
const msg = JSON.parse(raw) as IpcMessage;
if (msg.channelId && msg.text) {
await this.sendMessage(msg.channelId, msg.text);
logger.info({ channelId: msg.channelId, file }, "IPC message sent");
} else {
logger.warn({ file }, "IPC message missing channelId or text");
}
await unlink(filePath);
} catch (err) {
logger.error({ err, file }, "Failed to process IPC message");
}
}
}
}
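Producing a message for the watcher is just a file write. This sketch mirrors the `ipc/outbound` layout from `IpcWatcher`; the channel ID is a placeholder, and a temp directory stands in for the real config directory:

```typescript
import { mkdir, writeFile, readFile } from "node:fs/promises";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Temp dir stands in for the gateway's CONFIG_DIR in this sketch.
const configDir = join(tmpdir(), `aetheel-ipc-demo-${Date.now()}`);
const outboundDir = join(configDir, "ipc", "outbound");
await mkdir(outboundDir, { recursive: true });

// Shape expected by IpcWatcher: { channelId, text }. Placeholder ID.
const msg = { channelId: "123456789012345678", text: "Nightly report is ready." };
const filePath = join(outboundDir, `${Date.now()}.json`);
await writeFile(filePath, JSON.stringify(msg), "utf-8");

// The watcher polls every 2 s, parses each *.json, sends it, then unlinks it.
const parsed = JSON.parse(await readFile(filePath, "utf-8"));
```

This is the same mechanism the system prompt advertises to the agent, so Claude can trigger proactive Discord messages with nothing more than the Write tool.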

src/logger.ts (new file, 10 lines)

@@ -0,0 +1,10 @@
import pino from "pino";
const isProduction = process.env.NODE_ENV === "production";
export const logger = pino({
level: process.env.LOG_LEVEL || "info",
transport: isProduction
? undefined
: { target: "pino-pretty", options: { colorize: true, translateTime: "SYS:HH:MM:ss" } },
});
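All the `console.log` → `logger.*` conversions in this commit follow pino's call shape: a fields object first, then the message string. This dependency-free stub sketches that convention (the real logger is the pino instance above):

```typescript
// Stub logger with pino's (fields, msg) call shape; real code uses pino.
type LogFn = (fields: Record<string, unknown>, msg: string) => void;

const lines: string[] = [];
const logger: { info: LogFn } = {
  // Serialize fields plus the message, as pino does for JSON output.
  info: (fields, msg) => lines.push(JSON.stringify({ ...fields, msg })),
};

logger.info({ count: 3 }, "Skills loaded");
```

Keeping dynamic values in the fields object (rather than interpolated into the string) is what makes the output greppable and machine-parseable in production, where the pretty transport is disabled.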


@@ -1,5 +1,6 @@
import { readFile, writeFile } from "node:fs/promises";
import { join } from "node:path";
import { logger } from "./logger.js";
export interface MarkdownConfigs {
/** CLAUDE.md — persona: identity, soul, user context, tools (all in one) */
@@ -21,7 +22,7 @@ export class MarkdownConfigLoader {
// CLAUDE.md — main persona file
configs.persona = await this.loadFile(configDir, "CLAUDE.md");
if (!configs.persona) {
console.warn("Warning: CLAUDE.md not found in " + configDir);
logger.warn({ configDir }, "CLAUDE.md not found");
}
// agents.md — parsed by gateway for cron/hooks, also included in prompt

src/message-history.ts (new file, 61 lines)

@@ -0,0 +1,61 @@
import { readFile, writeFile, mkdir } from "node:fs/promises";
import { join } from "node:path";
import { logger } from "./logger.js";
export interface HistoryEntry {
sender: string;
content: string;
timestamp: string;
direction: "inbound" | "outbound";
}
const MAX_MESSAGES_PER_FILE = 100;
function messagesDir(configDir: string): string {
return join(configDir, "messages");
}
function channelFile(configDir: string, channelId: string): string {
return join(messagesDir(configDir), `${channelId}.json`);
}
async function ensureDir(dir: string): Promise<void> {
await mkdir(dir, { recursive: true });
}
async function readMessages(filePath: string): Promise<HistoryEntry[]> {
try {
const data = await readFile(filePath, "utf-8");
return JSON.parse(data) as HistoryEntry[];
} catch {
return [];
}
}
export async function appendMessage(
configDir: string,
channelId: string,
message: HistoryEntry,
): Promise<void> {
const dir = messagesDir(configDir);
await ensureDir(dir);
const filePath = channelFile(configDir, channelId);
const messages = await readMessages(filePath);
messages.push(message);
// Trim to max
const trimmed = messages.length > MAX_MESSAGES_PER_FILE
? messages.slice(messages.length - MAX_MESSAGES_PER_FILE)
: messages;
await writeFile(filePath, JSON.stringify(trimmed, null, 2), "utf-8");
logger.debug({ channelId, direction: message.direction }, "Message appended to history");
}
export async function getRecentMessages(
configDir: string,
channelId: string,
count: number,
): Promise<HistoryEntry[]> {
const filePath = channelFile(configDir, channelId);
const messages = await readMessages(filePath);
return messages.slice(-count);
}
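The cap in `appendMessage` keeps each channel file bounded. A minimal sketch of that trim-to-last-N step, with a smaller illustrative history:

```typescript
// Same constant as message-history.ts
const MAX_MESSAGES_PER_FILE = 100;

// 105 fake entries — 5 over the cap.
const messages = Array.from({ length: 105 }, (_, i) => ({ content: `m${i}` }));

// Keep only the newest MAX_MESSAGES_PER_FILE entries, as appendMessage does.
const trimmed =
  messages.length > MAX_MESSAGES_PER_FILE
    ? messages.slice(messages.length - MAX_MESSAGES_PER_FILE)
    : messages;
```

The oldest five entries (`m0`..`m4`) fall off; `getRecentMessages` then reads from the already-bounded file, so no call path ever loads an unbounded history.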


@@ -1,8 +1,10 @@
import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { join, dirname } from "node:path";
import { logger } from "./logger.js";
export class SessionManager {
private bindings = new Map<string, string>();
private lastActivity = new Map<string, number>();
private persistPath: string | null = null;
constructor(persistPath?: string) {
@@ -18,19 +20,46 @@ export class SessionManager {
setSessionId(channelId: string, sessionId: string): void {
this.bindings.set(channelId, sessionId);
this.lastActivity.set(channelId, Date.now());
this.saveToDisk();
}
removeSession(channelId: string): void {
this.bindings.delete(channelId);
this.lastActivity.delete(channelId);
this.saveToDisk();
}
clear(): void {
this.bindings.clear();
this.lastActivity.clear();
this.saveToDisk();
}
touchActivity(channelId: string): void {
if (this.bindings.has(channelId)) {
this.lastActivity.set(channelId, Date.now());
}
}
cleanupIdleSessions(timeoutMs: number): void {
const now = Date.now();
const toRemove: string[] = [];
for (const [channelId, lastTime] of this.lastActivity) {
if (now - lastTime > timeoutMs) {
toRemove.push(channelId);
}
}
for (const channelId of toRemove) {
logger.info({ channelId, idleMs: now - (this.lastActivity.get(channelId) ?? 0) }, "Cleaning up idle session");
this.bindings.delete(channelId);
this.lastActivity.delete(channelId);
}
if (toRemove.length > 0) {
this.saveToDisk();
}
}
private loadFromDisk(): void {
if (!this.persistPath) return;
try {
@@ -39,7 +68,7 @@ export class SessionManager {
for (const [k, v] of Object.entries(parsed)) {
this.bindings.set(k, v);
}
console.log(`Sessions loaded: ${this.bindings.size} channel(s)`);
logger.info({ count: this.bindings.size }, "Sessions loaded");
} catch {
// File doesn't exist yet — that's fine
}
@@ -55,7 +84,7 @@ export class SessionManager {
}
writeFileSync(this.persistPath, JSON.stringify(obj, null, 2), "utf-8");
} catch (err) {
console.error("Failed to persist sessions:", err);
logger.error({ err }, "Failed to persist sessions");
}
}
}
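The idle sweep added to `SessionManager` reduces to a timestamp comparison per channel. This sketch mirrors `cleanupIdleSessions` with the default 30-minute timeout and two hypothetical channels:

```typescript
// Last-activity timestamps, as tracked per channel by SessionManager.
const lastActivity = new Map<string, number>([
  ["chan-idle", Date.now() - 2_000_000], // ~33 min ago — past the timeout
  ["chan-live", Date.now()],             // just active
]);

const timeoutMs = 1_800_000; // DEFAULT_IDLE_SESSION_TIMEOUT_MS (30 min)
const now = Date.now();

// Drop sessions idle longer than the timeout (Map allows delete-while-iterating).
for (const [channelId, lastTime] of lastActivity) {
  if (now - lastTime > timeoutMs) lastActivity.delete(channelId);
}
```

In the gateway this runs on a 5-minute interval, so a stale Claude session ID is released at most ~35 minutes after its last message and the next prompt on that channel starts a fresh session.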


@@ -1,4 +1,5 @@
import type { GatewayCore } from "./gateway-core.js";
import { logger } from "./logger.js";
export function registerShutdownHandler(gateway: GatewayCore): void {
let shuttingDown = false;
@@ -8,7 +9,7 @@ export function registerShutdownHandler(gateway: GatewayCore): void {
return;
}
shuttingDown = true;
console.log(`Received ${signal}, shutting down...`);
logger.info({ signal }, "Received signal, shutting down...");
gateway.shutdown();
};

src/skills-loader.ts (new file, 33 lines)

@@ -0,0 +1,33 @@
import { readdir, readFile } from "node:fs/promises";
import { join } from "node:path";
import { logger } from "./logger.js";
export interface Skill {
name: string;
content: string;
}
export async function loadSkills(configDir: string): Promise<Skill[]> {
const skillsDir = join(configDir, "skills");
let entries: string[];
try {
entries = await readdir(skillsDir);
} catch {
return [];
}
const skills: Skill[] = [];
for (const entry of entries) {
const skillFile = join(skillsDir, entry, "SKILL.md");
try {
const content = await readFile(skillFile, "utf-8");
skills.push({ name: entry, content });
logger.debug({ name: entry }, "Skill loaded");
} catch {
// No SKILL.md in this directory, skip
}
}
logger.info({ count: skills.length }, "Skills loaded");
return skills;
}
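The layout `loadSkills` expects is one subdirectory per skill, each containing a `SKILL.md`. This sketch creates that layout in a temp directory (directory and skill names are placeholders):

```typescript
import { mkdir, writeFile, readdir } from "node:fs/promises";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Temp dir stands in for CONFIG_DIR; "weather" is a made-up skill name.
const configDir = join(tmpdir(), `aetheel-skills-demo-${Date.now()}`);
const skillDir = join(configDir, "skills", "weather");
await mkdir(skillDir, { recursive: true });

// loadSkills reads skills/<name>/SKILL.md and skips dirs without one.
await writeFile(
  join(skillDir, "SKILL.md"),
  "# Weather\nUse WebFetch against a forecast API when asked about the weather.\n",
  "utf-8",
);

const entries = await readdir(join(configDir, "skills"));
```

Each discovered skill is then appended to the system prompt as its own `## Skill: <name>` section by `SystemPromptAssembler`, so dropping a new directory into `config/skills/` is enough to teach the agent a behavior without code changes.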


@@ -1,11 +1,15 @@
import type { MarkdownConfigs } from "./markdown-config-loader.js";
import type { Skill } from "./skills-loader.js";
const PREAMBLE =
"You may update your long-term memory by writing to memory.md using the Write tool. Use this to persist important facts, lessons learned, and context across sessions.";
const IPC_PREAMBLE =
'You can send messages to any Discord channel proactively by writing a JSON file to the ipc/outbound/ directory. Format: {"channelId": "CHANNEL_ID", "text": "Your message"}. The gateway will pick it up and send it within 2 seconds.';
export class SystemPromptAssembler {
assemble(configs: MarkdownConfigs): string {
const parts: string[] = [PREAMBLE, ""];
assemble(configs: MarkdownConfigs, skills?: Skill[]): string {
const parts: string[] = [PREAMBLE, "", IPC_PREAMBLE, ""];
if (configs.persona) {
parts.push(`## Persona\n\n${configs.persona}\n`);
@@ -19,6 +23,12 @@ export class SystemPromptAssembler {
parts.push(`## Long-Term Memory\n\n${configs.memory}\n`);
}
if (skills && skills.length > 0) {
for (const skill of skills) {
parts.push(`## Skill: ${skill.name}\n\n${skill.content}\n`);
}
}
return parts.join("\n");
}
}


@@ -2,6 +2,17 @@ import { describe, it, expect, beforeEach, afterEach, vi } from "vitest";
import { CronScheduler, type CronJob } from "../../src/cron-scheduler.js";
import type { Event } from "../../src/event-queue.js";
vi.mock("../../src/logger.js", () => ({
logger: {
info: vi.fn(),
debug: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
},
}));
import { logger } from "../../src/logger.js";
// Mock node-cron
vi.mock("node-cron", () => {
const tasks: Array<{ expression: string; callback: () => void; stopped: boolean }> = [];
@@ -57,6 +68,7 @@ describe("CronScheduler", () => {
beforeEach(() => {
mockCron._clearTasks();
vi.mocked(logger.warn).mockClear();
scheduler = new CronScheduler();
});
@@ -185,7 +197,6 @@ Instruction: This should not be parsed either`;
});
it("skips jobs with invalid cron expressions and logs a warning", () => {
const warnSpy = vi.spyOn(console, "warn").mockImplementation(() => {});
const enqueue: EnqueueFn = (event) =>
({ ...event, id: 1, timestamp: new Date() }) as Event;
@@ -195,14 +206,10 @@ Instruction: This should not be parsed either`;
scheduler.start(jobs, enqueue);
expect(warnSpy).toHaveBeenCalledWith(
expect.stringContaining("bad-job")
);
expect(warnSpy).toHaveBeenCalledWith(
expect(logger.warn).toHaveBeenCalledWith(
expect.objectContaining({ name: "bad-job" }),
expect.stringContaining("invalid cron expression")
);
warnSpy.mockRestore();
});
it("schedules valid jobs and skips invalid ones in the same batch", () => {
@@ -211,7 +218,6 @@ Instruction: This should not be parsed either`;
enqueued.push(event);
return { ...event, id: enqueued.length, timestamp: new Date() } as Event;
};
const warnSpy = vi.spyOn(console, "warn").mockImplementation(() => {});
const jobs: CronJob[] = [
{ name: "bad-job", expression: "invalid", instruction: "Bad" },
@@ -220,14 +226,12 @@ Instruction: This should not be parsed either`;
scheduler.start(jobs, enqueue);
expect(warnSpy).toHaveBeenCalledTimes(1);
expect(logger.warn).toHaveBeenCalledTimes(1);
// Only the valid job should have been scheduled — fire all
mockCron._fireAll();
expect(enqueued).toHaveLength(1);
expect(enqueued[0].payload).toEqual({ instruction: "Good", jobName: "good-job" });
warnSpy.mockRestore();
});
});


@@ -2,6 +2,17 @@ import { describe, it, expect, beforeEach, afterEach, vi } from "vitest";
import { HeartbeatScheduler, type HeartbeatCheck } from "../../src/heartbeat-scheduler.js";
import type { Event } from "../../src/event-queue.js";
vi.mock("../../src/logger.js", () => ({
logger: {
info: vi.fn(),
debug: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
},
}));
import { logger } from "../../src/logger.js";
type EnqueueFn = (event: Omit<Event, "id" | "timestamp">) => Event | null;
describe("HeartbeatScheduler", () => {
@@ -9,6 +20,7 @@ describe("HeartbeatScheduler", () => {
beforeEach(() => {
vi.useFakeTimers();
vi.mocked(logger.warn).mockClear();
scheduler = new HeartbeatScheduler();
});
@@ -95,7 +107,6 @@ Instruction: Do something`;
});
it("rejects checks with interval < 60 seconds with a warning", () => {
const warnSpy = vi.spyOn(console, "warn").mockImplementation(() => {});
const enqueue: EnqueueFn = () => null;
const checks: HeartbeatCheck[] = [
@@ -104,18 +115,14 @@ Instruction: Do something`;
scheduler.start(checks, enqueue);
expect(warnSpy).toHaveBeenCalledWith(
expect.stringContaining("too-fast")
);
expect(warnSpy).toHaveBeenCalledWith(
expect.stringContaining("below the minimum")
expect(logger.warn).toHaveBeenCalledWith(
expect.objectContaining({ name: "too-fast" }),
expect.stringContaining("below minimum")
);
// Advance time — no events should be enqueued
vi.advanceTimersByTime(60_000);
// No way to check enqueue wasn't called since it returns null, but the warn confirms rejection
warnSpy.mockRestore();
});
it("starts valid checks and skips invalid ones in the same batch", () => {
@@ -124,7 +131,6 @@ Instruction: Do something`;
enqueued.push(event);
return { ...event, id: enqueued.length, timestamp: new Date() } as Event;
};
const warnSpy = vi.spyOn(console, "warn").mockImplementation(() => {});
const checks: HeartbeatCheck[] = [
{ name: "too-fast", instruction: "Bad check", intervalSeconds: 10 },
@@ -133,13 +139,11 @@ Instruction: Do something`;
scheduler.start(checks, enqueue);
expect(warnSpy).toHaveBeenCalledTimes(1);
expect(logger.warn).toHaveBeenCalledTimes(1);
vi.advanceTimersByTime(60_000);
expect(enqueued).toHaveLength(1);
expect(enqueued[0].payload).toEqual({ instruction: "Good check", checkName: "valid-check" });
warnSpy.mockRestore();
});
});


@@ -1,6 +1,17 @@
import { describe, it, expect, vi, beforeEach, afterEach } from "vitest";
import { registerShutdownHandler } from "../../src/shutdown-handler.js";
vi.mock("../../src/logger.js", () => ({
logger: {
info: vi.fn(),
debug: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
},
}));
import { logger } from "../../src/logger.js";
describe("registerShutdownHandler", () => {
let mockGateway: { shutdown: ReturnType<typeof vi.fn> };
let sigintListeners: Array<() => void>;
@@ -16,7 +27,7 @@ describe("registerShutdownHandler", () => {
if (event === "SIGTERM") sigtermListeners.push(listener as () => void);
return process;
});
vi.spyOn(console, "log").mockImplementation(() => {});
vi.mocked(logger.info).mockClear();
});
afterEach(() => {
@@ -52,6 +63,9 @@ describe("registerShutdownHandler", () => {
it("logs the signal name", () => {
registerShutdownHandler(mockGateway as never);
sigtermListeners[0]();
expect(console.log).toHaveBeenCalledWith("Received SIGTERM, shutting down...");
expect(logger.info).toHaveBeenCalledWith(
expect.objectContaining({ signal: "SIGTERM" }),
expect.stringContaining("shutting down")
);
});
});