
1. What Is OpenClaw?

OpenClaw is a self-hosted AI agent framework that turns large language models into persistent personal assistants. Unlike cloud-hosted AI services where you send a message and get a response, OpenClaw runs a long-lived Gateway daemon on your machine that maintains connections to your messaging platforms, schedules background work, and gives the AI agent a persistent workspace with real tools.[1]

Think of it this way: ChatGPT is a stateless web page. OpenClaw is a daemon process that runs 24/7, remembers what happened yesterday, can check your email on a schedule, and responds to your Telegram messages while you sleep.

What makes OpenClaw distinct from other agent frameworks is this single-daemon architecture:

GATEWAY (daemon)

┌─────────┐ ┌──────────┐ ┌────────┐ ┌─────────┐
│Telegram │ │WhatsApp  │ │Discord │ │ Slack   │
│(grammY) │ │(Baileys) │ │(Bot)   │ │ (Bolt)  │
└────┬────┘ └────┬─────┘ └───┬────┘ └────┬────┘
     └──────┬────┴───────────┴─────┬─────┘
            ▼                      ▼
     ┌─────────────┐        ┌──────────────┐
     │   Session   │        │  Cron/Heart  │
     │   Manager   │        │  Scheduler   │
     └──────┬──────┘        └──────┬───────┘
            └──────────┬───────────┘
                       ▼
               ┌────────────────┐
               │ Agent Runtime  │
               │   (pi-mono)    │
               └───────┬────────┘
                       ▼
              ┌──────────────────┐
              │    Workspace     │
              │   ~/.openclaw/   │
              │    workspace/    │
              └──────────────────┘

  WS API ◄──── CLI / macOS App / WebChat / Nodes

2. The Gateway

The Gateway is the heart of OpenClaw. It's a single long-lived Node.js process that owns everything: messaging connections, session state, cron scheduling, and agent execution.[2]

What the Gateway does:

  - Maintains the channel connections (Telegram, WhatsApp, Discord, Slack)
  - Routes incoming messages to the correct agent session
  - Runs the cron scheduler and heartbeat timer
  - Executes agent turns and serves the WebSocket API to clients

Gateway lifecycle:

The Gateway is typically managed as a system service via launchd (macOS) or systemd (Linux). Key operations:

openclaw gateway start     # Start the daemon
openclaw gateway stop      # Stop it
openclaw gateway restart   # Restart (preserves config)
openclaw gateway status    # Check if running

Configuration lives in ~/.openclaw/openclaw.json (JSON5 format — comments and trailing commas allowed). The Gateway validates this config strictly on boot — unknown keys or malformed values prevent startup. Run openclaw doctor to diagnose config issues.[3]

💡 Key insight: Everything runs inside the Gateway process. Cron doesn't shell out to a separate scheduler. Heartbeats don't spawn separate processes. If the Gateway is down, nothing runs. This simplicity is a feature — one process to monitor, one process to restart.

Wire protocol:

Clients connect via WebSocket and exchange JSON frames. The first frame must be a connect handshake. After that, it's request/response pairs plus server-push events. If OPENCLAW_GATEWAY_TOKEN is set, all connections must authenticate.[2]
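For illustration, the first frames a client sends might be built like this. The field names ("type", "id", "method") are assumptions for the sketch; the source only specifies that the first frame is a connect handshake and that a token is required when OPENCLAW_GATEWAY_TOKEN is set:

```typescript
// Illustrative frame builders for the Gateway's WebSocket protocol.
// Field names are assumptions, not the documented schema.
type ConnectFrame = { type: "connect"; token?: string };
type RequestFrame = { type: "request"; id: number; method: string; params?: unknown };

// The mandatory first frame; include the token only when auth is enabled.
function makeConnectFrame(token?: string): ConnectFrame {
  return token === undefined ? { type: "connect" } : { type: "connect", token };
}

// Give each request a client-side id so responses can be paired with it.
let nextId = 0;
function makeRequest(method: string, params?: unknown): RequestFrame {
  return { type: "request", id: ++nextId, method, params };
}
```

After the handshake, a client would serialize these objects as JSON and match server responses to requests by id.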

3. Sessions & Agents

An agent in OpenClaw is a fully scoped AI "brain" with its own workspace, session store, auth profiles, and persona files (SOUL.md, AGENTS.md, etc.). A single Gateway can host one agent (default) or many isolated agents side by side.[4]

Session Keys

Every conversation in OpenClaw maps to a session key. The key determines which conversation history the agent sees:

| Source | Session Key Pattern |
| --- | --- |
| Direct messages (default) | agent:main:main |
| DMs (per-channel-peer) | agent:main:telegram:dm:12345 |
| Group chats | agent:main:telegram:group:-100123 |
| Telegram topics | agent:main:telegram:group:-100123:topic:456 |
| Cron jobs | cron:&lt;jobId&gt; |
| Sub-agents | agent:main:subagent:&lt;uuid&gt; |
| Webhooks | hook:&lt;uuid&gt; |

By default, all direct messages share a single "main" session for continuity. This means your WhatsApp DMs and Telegram DMs feed into the same conversation context. You can change this with session.dmScope.[5]
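The key patterns above compose predictably. Here is a hypothetical helper that reproduces the table's patterns for a single "main" agent, including the dmScope switch (the function itself is illustrative, not an OpenClaw API):

```typescript
// Reproduce the session-key patterns from the table above for agent "main".
type Dm = { kind: "dm"; channel: string; peerId: string };
type Group = { kind: "group"; channel: string; groupId: string; topicId?: string };

function sessionKey(src: Dm | Group, dmScope: "main" | "per-channel-peer" = "main"): string {
  if (src.kind === "dm") {
    // All DMs collapse into one shared session unless scoped per channel+peer.
    return dmScope === "main"
      ? "agent:main:main"
      : `agent:main:${src.channel}:dm:${src.peerId}`;
  }
  const base = `agent:main:${src.channel}:group:${src.groupId}`;
  return src.topicId ? `${base}:topic:${src.topicId}` : base;
}
```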

Session Lifecycle

Sessions reset based on configurable policies, for example a daily reset at a fixed hour (see session.reset in the config).

Session transcripts are stored as JSONL files at ~/.openclaw/agents/<agentId>/sessions/<SessionId>.jsonl. The session metadata (token usage, routing info) lives in sessions.json.

4. Sub-Agents (sessions_spawn)

Sub-agents are OpenClaw's mechanism for parallel, non-blocking background work. When the main agent needs to do something that takes time — research a topic, analyze files, generate content — it can spawn a sub-agent that runs independently.[6]

Isolation & Context

Each sub-agent runs in its own isolated session (agent:main:subagent:&lt;uuid&gt;) with its own transcript and no access to the main conversation's history.

⚠️ Important: Sub-agents share the workspace filesystem but NOT the conversation context. They can read files the main agent wrote, but they don't know what you discussed in chat. Everything the sub-agent needs must be in the spawn prompt or in files.

The Announce Flow

When a sub-agent completes, it goes through an announce step:

  1. Sub-agent's final reply is captured
  2. A summary (including runtime, token usage, and estimated cost) is posted to the main agent's session
  3. The main agent synthesizes a natural-language summary for the user

The sessions_spawn call is non-blocking — it returns immediately with { status: "accepted", runId, childSessionKey }. The main agent can continue answering questions while the sub-agent works in the background.

// What the agent calls internally:
sessions_spawn({
  task: "Research OpenClaw cron job architecture and write a summary",
  label: "cron-research",
  model: "anthropic/claude-sonnet-4",
  runTimeoutSeconds: 300,
  cleanup: "keep"
})

Sub-agents use a dedicated queue lane (subagent) so they don't block the main agent's incoming message processing. You can run up to 8 concurrent sub-agents (configurable via subagents.maxConcurrent).
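The lane behavior can be modeled as a small bounded queue: spawns are accepted immediately, start if a slot is free, and otherwise wait for a completion. This is a toy illustration of the concept, not OpenClaw's scheduler:

```typescript
// Toy model of a bounded queue lane; names are illustrative.
class SubagentLane {
  private running = 0;
  private queue: string[] = [];
  constructor(private maxConcurrent = 8) {}

  // Non-blocking accept: start immediately if a slot is free, else enqueue.
  spawn(runId: string): "running" | "queued" {
    if (this.running < this.maxConcurrent) {
      this.running++;
      return "running";
    }
    this.queue.push(runId);
    return "queued";
  }

  // On completion, hand the freed slot to the oldest queued run (if any).
  complete(): string | undefined {
    const next = this.queue.shift();
    if (next === undefined) this.running--;
    return next;
  }
}
```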

5. Cron Jobs — The Architecture Deep Dive

Cron is the Gateway's built-in scheduler. Jobs persist to ~/.openclaw/cron/jobs.json, survive Gateway restarts, and can run on precise schedules or at fixed intervals.[7]

Schedule Types

| Type | Use Case | Example |
| --- | --- | --- |
| at | One-shot reminder | "at": "2026-02-15T09:00:00Z" or "at": "20m" |
| every | Fixed interval | "everyMs": 3600000 (every hour) |
| cron | Recurring schedule | "expr": "0 6 * * *", "tz": "America/New_York" |

One-shot (at) jobs auto-delete after success by default. Set deleteAfterRun: false to keep them (they disable instead of deleting).
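A sketch of how an at value might be resolved into an absolute fire time. The set of accepted duration units ("s", "m", "h", "d") is an assumption extrapolated from the "20m" example above:

```typescript
// Resolve an "at" value (ISO-8601 timestamp or relative duration like "20m")
// into an epoch-milliseconds fire time. Accepted units are an assumption.
function resolveAt(at: string, nowMs: number): number {
  const rel = /^(\d+)([smhd])$/.exec(at);
  if (rel) {
    const units: Record<string, number> = {
      s: 1_000,
      m: 60_000,
      h: 3_600_000,
      d: 86_400_000,
    };
    return nowMs + Number(rel[1]) * units[rel[2]];
  }
  return Date.parse(at); // absolute timestamp
}
```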

Main vs Isolated Sessions

This is the most important architectural distinction in cron jobs. It determines everything about how the job executes:

Main session jobs (sessionTarget: "main") inject an event into the shared main session, so the agent handles it with full conversational context:

// Main session cron job (system event)
{
  name: "Weekly retro reminder",
  schedule: { kind: "cron", expr: "0 16 * * 5", tz: "America/New_York" },
  sessionTarget: "main",
  wakeMode: "now",
  payload: { kind: "systemEvent", text: "🔄 Weekly Retrospective Time!" }
}

Isolated jobs (sessionTarget: "isolated") run as a dedicated agent turn in a fresh session:

// Isolated cron job (dedicated agent turn)
{
  name: "Daily News Briefing",
  schedule: { kind: "cron", expr: "0 6 * * *", tz: "America/New_York" },
  sessionTarget: "isolated",
  wakeMode: "next-heartbeat",
  payload: {
    kind: "agentTurn",
    message: "Search for top news headlines from the past 24 hours..."
  },
  delivery: { mode: "none" }
}

🔑 The Key Insight: Fresh Context Every Run

⚠️ Critical understanding for cron job reliability: Isolated cron jobs start with a completely fresh session every time. The sub-agent has NO memory of previous runs. It doesn't know what it did yesterday. It doesn't remember the filename conventions it used last time. It only sees:

1. The system prompt (AGENTS.md, TOOLS.md, workspace context)
2. The cron job's message prompt (the text you wrote)
3. The files in the workspace it can read

This is why text instructions to sub-agents are fragile. If your prompt says "use the appropriate cover art filename," the agent interprets that fresh each time and may choose differently. The agent isn't drifting from a standard — it never knew the standard to begin with.

This is the root cause of the cover art filename issue that led to our AR-1 architectural requirement. Each news cron job was independently choosing cover art filenames because the prompt said something vague like "include cover art." Some runs used -rss-cover, some used -cover, some fell back to the generic podcast-cover.jpg.

The fix wasn't better prompts — it was a canonical cover-art mapping file that all cron jobs reference. The prompt says "read the mapping" instead of "figure out the filename."
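To make "read the mapping" concrete, here is a sketch of parsing such a mapping file into a lookup table. The "Name: filename" line format is an assumption for illustration; the real cover-art-mapping.md may be laid out differently:

```typescript
// Hypothetical parser for a cover-art mapping file. Assumes one
// "Name: filename" entry per line (optionally bulleted); headers are skipped.
function parseCoverMap(md: string): Map<string, string> {
  const map = new Map<string, string>();
  for (const raw of md.split("\n")) {
    const m = /^[-*]?\s*([^:#][^:]*):\s*(\S+\.(?:jpg|png))\s*$/.exec(raw.trim());
    if (m) map.set(m[1].trim(), m[2]);
  }
  return map;
}
```

A cron prompt can then be reduced to "look up 'Daily News' in the map and use exactly that filename," eliminating per-run interpretation.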

Delivery Modes

| Mode | Behavior |
| --- | --- |
| announce (default for isolated) | Delivers the agent's output to the target channel and posts a brief summary to the main session |
| none | Internal only — no delivery, no main-session summary. The agent's output stays in the cron session transcript. |

When delivery.mode = "announce", you can target a specific channel and recipient:

delivery: {
  mode: "announce",
  channel: "telegram",
  to: "-1001234567890:topic:123",  // Telegram forum topic
  bestEffort: true  // Don't fail the job if delivery fails
}
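The to string above can be split mechanically. A sketch, assuming the &lt;chatId&gt;[:topic:&lt;topicId&gt;] shape shown in the example (parseTarget is a hypothetical helper, not an OpenClaw API):

```typescript
// Split a Telegram delivery target of the form "<chatId>[:topic:<topicId>]".
function parseTarget(to: string): { chatId: string; topicId?: number } {
  const m = /^(-?\d+)(?::topic:(\d+))?$/.exec(to);
  if (!m) throw new Error(`unrecognized target: ${to}`);
  return m[2] ? { chatId: m[1], topicId: Number(m[2]) } : { chatId: m[1] };
}
```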

Wake Modes

wakeMode controls when the main-session summary posts after an announce delivery:

  - now: post the summary to the main session immediately
  - next-heartbeat: hold the summary until the next heartbeat turn

Model & Thinking Overrides

Isolated jobs can use a different model and thinking level than the main session:

openclaw cron add \
  --name "Deep analysis" \
  --cron "0 6 * * 1" \
  --session isolated \
  --message "Weekly project analysis..." \
  --model opus \
  --thinking high

6. Heartbeats

Heartbeats are periodic agent turns in the main session — like a timer that wakes the agent every N minutes so it can check if anything needs attention.[8]

Default behavior: every 30 minutes, the Gateway sends a prompt asking the agent to read HEARTBEAT.md and handle any pending tasks. If nothing needs attention, the agent replies HEARTBEAT_OK and no message is delivered to the user.

How heartbeats differ from cron:

| Aspect | Heartbeat | Cron (isolated) |
| --- | --- | --- |
| Session | Main (shared context) | Fresh each run |
| Timing | Approximate (can drift) | Exact cron expressions |
| Best for | Batched checks (email + calendar + notifications) | Standalone tasks, precise timing |
| Cost | One turn covers multiple checks | Full turn per job |
| Context | Full conversation history | None (starts clean) |

💡 Pro tip: Use heartbeats for monitoring and awareness (batched periodic checks). Use cron for precise scheduling and isolated task execution. Most efficient setups use both together — heartbeat for the routine inbox/calendar sweeps, cron for the 6 AM daily news generation.

The HEARTBEAT.md file is your "heartbeat checklist." Keep it tiny — it gets included in every heartbeat prompt, so every line costs tokens every 30 minutes. If the file is effectively empty (only blank lines and headers), OpenClaw skips the heartbeat entirely to save API calls.
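The "effectively empty" check can be approximated in a few lines. A sketch, assuming "headers" means Markdown # lines:

```typescript
// A heartbeat file is "effectively empty" when it contains only
// blank lines and header lines, in which case the heartbeat is skipped.
function isEffectivelyEmpty(heartbeatMd: string): boolean {
  return heartbeatMd
    .split("\n")
    .every((line) => line.trim() === "" || line.trim().startsWith("#"));
}
```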

7. Memory System

OpenClaw's memory is plain Markdown on disk. There's no hidden database — the agent's memory is exactly what you see in the workspace files.[9]

Two memory layers:

  - MEMORY.md: curated long-term memory
  - memory/YYYY-MM-DD.md: daily files for day-to-day context

Vector memory search:

OpenClaw indexes memory files into a vector database (SQLite with embeddings) for semantic search. The memory_search tool lets the agent find relevant notes even when the wording differs. Supports OpenAI, Gemini, Voyage, or local GGUF embeddings.

The search combines vector similarity (semantic meaning) with BM25 keyword matching (exact tokens like IDs, code symbols) for hybrid retrieval.
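A minimal sketch of such score fusion, assuming both scores are normalized to [0, 1]. The weighting and function names are illustrative, not OpenClaw's actual retrieval code:

```typescript
// Blend a vector-similarity score with a BM25 keyword score.
// The 0.7/0.3 split is a made-up default for illustration.
function hybridScore(vectorSim: number, bm25: number, alpha = 0.7): number {
  return alpha * vectorSim + (1 - alpha) * bm25;
}

// Rank documents by descending fused score.
function rank<T>(docs: T[], score: (d: T) => number): T[] {
  return [...docs].sort((a, b) => score(b) - score(a));
}
```

The practical effect: a note that is semantically on-topic but shares no keywords can still outrank an exact-token match, and vice versa, depending on alpha.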

Pre-compaction memory flush:

When a session nears its context window limit, OpenClaw triggers a silent agent turn that says "write your durable memories to disk before we compact." This prevents knowledge loss when older messages get summarized.

💡 Key principle: "Mental notes" don't survive sessions. If the agent wants to remember something, it must write it to a file. MEMORY.md for important things, memory/YYYY-MM-DD.md for daily context. Text beats brain.

8. Tool System

OpenClaw exposes tools to the agent through two parallel channels: structured function definitions sent to the model API (so the model can call them), and human-readable descriptions in the system prompt (so the model knows how to use them).[10]

Core tools:

| Tool | Purpose |
| --- | --- |
| read, write, edit | File system operations |
| exec, process | Shell commands and background process management |
| web_search, web_fetch | Brave Search API and URL content extraction |
| browser | Full browser automation (Playwright-based) |
| message | Send messages across all connected channels |
| cron | Manage scheduled jobs |
| sessions_spawn | Create sub-agents |
| memory_search, memory_get | Semantic memory retrieval |
| nodes | Control paired devices (camera, screen, location) |
| canvas | Drive the node Canvas (HTML rendering, A2UI) |
| image | Analyze images with a vision model |
| tts | Text-to-speech generation |
| gateway | Restart, config.get, config.patch, config.apply |

Tool policy (allow/deny):

Tools are filtered by policy before being sent to the model. You can globally allow or deny tools, use profiles (minimal, coding, messaging, full), and set per-agent overrides. The deny list always wins over allow.
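The deny-wins rule is simple to model. A sketch (filterTools is a hypothetical name, not the real internal function):

```typescript
// Filter the tool list before it is sent to the model.
// An undefined allow list means "everything allowed"; deny always wins.
function filterTools(tools: string[], allow?: string[], deny: string[] = []): string[] {
  return tools.filter(
    (t) => !deny.includes(t) && (allow === undefined || allow.includes(t))
  );
}
```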

Skills:

Skills are tool usage guidance — Markdown files that teach the agent how to use external binaries or workflows. They're loaded from three locations (workspace wins on name conflicts): bundled skills, managed skills (~/.openclaw/skills), and workspace skills (<workspace>/skills).
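Because later layers win on name conflicts, the three-location precedence can be expressed as a simple ordered merge. A sketch with hypothetical types (skill name mapped to its Markdown body):

```typescript
// Merge the three skill layers; later spreads override earlier ones,
// so workspace beats managed, which beats bundled.
function mergeSkills(
  bundled: Record<string, string>,
  managed: Record<string, string>,
  workspace: Record<string, string>
): Record<string, string> {
  return { ...bundled, ...managed, ...workspace };
}
```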

9. Channel Integration

OpenClaw connects to messaging platforms through channel adapters built into the Gateway. Each channel has its own authentication mechanism, message format, and feature set.[11]

Supported channels: Telegram (grammY), WhatsApp (Baileys), Discord, and Slack (Bolt).

Channels run simultaneously. Multiple channels feed into the same agent's session (by default, all DMs share the main session). Messages are routed based on bindings — rules that match channel + accountId + peer to an agent.

💡 Security: DM access is controlled per-channel via allowlists (channels.whatsapp.allowFrom, channels.telegram.allowFrom). Never run open-to-the-world on a personal machine. Groups require explicit allowlisting or requireMention: true.

10. Workspace & Files

The workspace (~/.openclaw/workspace by default) is the agent's home directory. It's where the agent reads instructions, writes memory, and does all its work.

Bootstrap files (injected at session start):

| File | Purpose | Loaded in sub-agents? |
| --- | --- | --- |
| AGENTS.md | Operating instructions, conventions, safety rules | ✅ Yes |
| SOUL.md | Persona, personality, boundaries, tone | ❌ No |
| USER.md | User profile, preferred name, context | ❌ No |
| TOOLS.md | User-maintained tool notes (camera names, SSH hosts) | ✅ Yes |
| IDENTITY.md | Agent name, vibe, emoji | ❌ No |
| MEMORY.md | Curated long-term memory | ❌ No |
| HEARTBEAT.md | Heartbeat checklist | ❌ No |
| BOOTSTRAP.md | One-time first-run ritual (delete after) | ❌ No |

The workspace is typically a git repository. Brand-new workspaces are auto-initialized with git. This gives you version history, backup, and the ability to push workspace changes to a remote.


11. Configuration

Configuration is a single JSON5 file at ~/.openclaw/openclaw.json. It controls everything: agent defaults, channel connections, session behavior, tool policies, heartbeat settings, and cron configuration.[3]

Key config sections:

{
  agents: {
    defaults: {
      workspace: "~/.openclaw/workspace",
      model: { primary: "anthropic/claude-opus-4-6" },
      heartbeat: { every: "30m", target: "last" },
      subagents: { model: "anthropic/claude-sonnet-4" },
    },
    list: [
      { id: "main", default: true, name: "Yaneth" }
    ]
  },
  session: {
    dmScope: "main",          // or "per-channel-peer"
    reset: { mode: "daily", atHour: 4 }
  },
  channels: {
    telegram: { botToken: "...", allowFrom: ["12345"] },
    whatsapp: { allowFrom: ["+15551234567"] }
  },
  cron: { enabled: true },
  tools: { deny: [] }
}

Config management tools: the agent's gateway tool exposes config.get, config.patch, and config.apply for reading and changing the config at runtime.

Both patch and apply validate the config, write it, and restart the Gateway automatically.
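As an illustration of what a patch-style update does, here is a minimal deep merge in the spirit of config.patch: patch values win, nested objects merge recursively, and untouched sections survive. This is a sketch of the general technique, not OpenClaw's actual implementation:

```typescript
// Recursively merge a partial patch into a base config object.
// Non-object values (and arrays) in the patch replace the base value outright.
function deepMerge(base: any, patch: any): any {
  const isPlainObject = (v: any) =>
    typeof v === "object" && v !== null && !Array.isArray(v);
  if (!isPlainObject(base) || !isPlainObject(patch)) return patch;
  const out: any = { ...base };
  for (const key of Object.keys(patch)) out[key] = deepMerge(base[key], patch[key]);
  return out;
}
```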

12. Practical Patterns & Anti-Patterns for Cron Jobs

This section is the practical payoff from understanding the architecture. Now that you know isolated cron jobs start fresh every time, here's how to design them so they can't go wrong.

❌ Anti-Patterns (Things That Break)

1. Vague instructions that rely on agent interpretation

// ❌ BAD: Agent interprets "appropriate" differently each run
message: "Generate the daily news page with appropriate cover art
          and save it in the correct location."

Why it breaks: "Appropriate" and "correct" are subjective. The agent interprets these fresh each time with no memory of what it did yesterday. On Monday it might use news-cover.jpg, on Tuesday daily-news-rss-cover.jpg.

2. Assuming the agent remembers previous runs

// ❌ BAD: References "the same format" but the agent has no history
message: "Generate today's news briefing in the same format
          as yesterday."

Why it breaks: Isolated cron sessions have NO memory of previous runs. "The same format as yesterday" is meaningless because the agent can't see yesterday's session.

3. Relying on the agent to "figure out" file paths

// ❌ BAD: Multiple valid interpretations
message: "Save the audio file in the news directory."

Why it breaks: Is it /news/, /news/audio/, /sites/news/, or /frontend/sites/news/? The agent guesses, and different models or temperatures produce different guesses.

✅ Better Patterns (Things That Work)

1. Reference canonical mapping files instead of hardcoding

// ✅ GOOD: Points to a single source of truth
message: "Read the cover art mapping at
          frontend/sites/news-feeds/cover-art-mapping.md
          Use EXACTLY the filename specified for 'Daily News'.
          Do not invent filenames."

Why it works: The mapping file is the source of truth. If you need to change a filename, you change it in one place and all cron jobs get the update.

2. Use scripts that enforce correctness

// ✅ GOOD: Script handles the deterministic parts
message: "Run python3 scripts/generate-daily-news.py
          The script handles file naming, directory structure,
          and RSS feed updates. Review the output and commit."

Why it works: Scripts are deterministic. They use the same filenames every time. The agent's job is reduced to running the script and reviewing the output — minimizing the surface area for drift.

3. Explicit, complete instructions with exact paths

// ✅ GOOD: Nothing left to interpretation
message: "DAILY NEWS UPDATE:
1. Search for top news from the past 24 hours
2. Write the HTML page to:
   frontend/sites/news/index.html
3. Generate audio and save to:
   frontend/sites/news/daily-news-audio.opus
4. Cover art filename MUST be:
   daily-news-rss-cover.jpg
5. Update RSS entry in:
   frontend/sites/news-feeds/feed.xml
6. Git commit with message:
   'Daily news update YYYY-MM-DD'
7. Git push to main"

Why it works: Every file path is explicit. Every naming convention is stated. There's nothing for the agent to "interpret." The prompt is a checklist, not a suggestion.

4. Validation and post-processing steps

// ✅ GOOD: Self-checking prompt
message: "After generating the news page:
1. Verify the cover art filename matches
   cover-art-mapping.md
2. Verify the audio file exists at the expected path
3. Verify the RSS feed XML is valid
4. If ANY verification fails, log the error
   and DO NOT commit."

5. Template-based approaches

// ✅ GOOD: Template enforces structure
message: "Read the template at
          frontend/sites/news/template.html
          Replace ONLY the content placeholders
          ({{HEADLINES}}, {{DATE}}, {{AUDIO_URL}}).
          Do not modify the template structure."

Summary: The Cron Job Reliability Checklist

Before creating or editing a cron job, verify:
☐ Every file path is absolute or workspace-relative, never vague
☐ Every naming convention is stated explicitly, never assumed
☐ Any shared config (cover art, templates) is in a canonical file the prompt references
☐ The prompt includes validation steps to catch drift
☐ Deterministic operations are in scripts, not free-form agent output
☐ The prompt says "do not invent" where convention matters
☐ You've tested the prompt by reading it cold — could you follow it with zero context?

Real Examples from Our Cron Jobs

Here are actual cron job patterns running in production at ThinkSmart.Life:

Daily News Generation (6 AM EST, isolated)

Each news feed (General, Crypto, AI, Happy News, F1, Olympics) runs as an isolated cron job at 6 AM. The prompt specifies exact output paths, cover art filenames from the canonical mapping, and explicit RSS feed update instructions. Delivery mode is none — these jobs do their work silently and commit to git.

News Anchor Generation (6:30 AM EST, isolated)

A single cron job auto-discovers all active user anchors from the API, generates episodes for each, and commits in a single push. This follows AR-3: one cron job handles N anchors, not N cron jobs for N anchors.

Weekly Retrospective (Friday 4 PM, main session)

This uses sessionTarget: "main" with a systemEvent because the retro needs conversational context — it reviews the week's memory files, commits, and progress, then presents to Michel in Telegram.

References

  1. OpenClaw Documentation, "Personal Assistant Setup," docs.openclaw.ai/start/openclaw, 2026.
  2. OpenClaw Documentation, "Gateway Architecture," docs.openclaw.ai/concepts/architecture, January 2026.
  3. OpenClaw Documentation, "Configuration," docs.openclaw.ai/gateway/configuration, 2026.
  4. OpenClaw Documentation, "Multi-Agent Routing," docs.openclaw.ai/concepts/multi-agent, 2026.
  5. OpenClaw Documentation, "Session Management," docs.openclaw.ai/concepts/session, 2026.
  6. OpenClaw Documentation, "Sub-Agents," docs.openclaw.ai/tools/subagents, 2026.
  7. OpenClaw Documentation, "Cron Jobs," docs.openclaw.ai/automation/cron-jobs, 2026.
  8. OpenClaw Documentation, "Heartbeat," docs.openclaw.ai/gateway/heartbeat, 2026.
  9. OpenClaw Documentation, "Memory," docs.openclaw.ai/concepts/memory, 2026.
  10. OpenClaw Documentation, "Tools," docs.openclaw.ai/tools, 2026.
  11. OpenClaw Documentation, "Chat Channels," docs.openclaw.ai/channels, 2026.
  12. OpenClaw Documentation, "Cron vs Heartbeat," docs.openclaw.ai/automation/cron-vs-heartbeat, 2026.