COMPLETE COURSE PREVIEW - All 10 modules included below

OpenClaw Course for CEOs

Run AI Agents That Actually Work. Own your AI stack. £97 launch price.

10 modules · 95,000+ words · 11 hours of content · OpenClaw-specific (not generic AI)

Module 4: Cron Jobs - Make Your Agent Work While You Sleep

From Manual Tasks to Autonomous Execution

Introduction

You've built your first 3 OpenClaw projects. They work. But there's a problem: you still have to REMEMBER to run them.

Daily brief at 8am? You type openclaw run daily-brief.

Weekly report on Friday? You type openclaw run weekly-report.

Monthly invoice reminders? You... forget half the time.

This is not automation. This is just delegating tasks to yourself.

This module teaches you cron jobs - the system that turns your OpenClaw agents from on-demand assistants into autonomous employees that work 24/7 without you lifting a finger. What you'll build:
  • Daily brief delivered to your phone at 8am (emails, calendar, tasks)
  • Weekly business report every Friday at 5pm
  • Monthly invoice reminders on the 25th of each month
  • Custom schedules for any recurring task
Time to implement: 60 minutes for initial setup, then 5 minutes to add new jobs

---

Part 1: What Are Cron Jobs?

The Concept

A cron job is a scheduled task that runs automatically at specific times. Think of it as setting an alarm for your computer.

Instead of you typing openclaw run daily-brief every morning, you tell your computer:

> "Every day at 8am, run this command for me."

Your computer does it. Rain or shine. Weekends included (unless you tell it otherwise).

Cron vs Heartbeat (When to Use Which)

OpenClaw offers TWO ways to schedule autonomous work:

Cron jobs (this module):
  • Fixed schedules ("every day at 8am")
  • Time-based triggers ("first Monday of every month")
  • Simple, predictable, reliable
  • Best for: daily briefs, weekly reports, monthly invoices
Heartbeat monitoring (Module 8):
  • Proactive checks ("did my website go down?")
  • Event-based triggers ("new email arrived, process it")
  • Intelligent scheduling (skip runs if nothing changed)
  • Best for: monitoring, alerts, conditional workflows
Rule of thumb: If you can describe it with "every [time]", use cron. If it's "when [event]", use heartbeat.

---

Part 2: Setting Up Your First Cron Job

Step 1: Create the Agent Script

Let's build a daily brief agent that summarizes:

  • New emails (Gmail API)
  • Today's calendar (Google Calendar API)
  • Outstanding tasks (Notion API)

Create the agent file:

cd ~/.openclaw/agents

nano daily-brief.md

Paste this agent definition:

Daily Brief Agent

Role

You are my morning briefing assistant. Your job is to scan my email, calendar, and task list, then send me a concise daily brief via Telegram.

Data Sources

  1. Gmail: Unread emails from last 24 hours (exclude newsletters)
  2. Google Calendar: Today's events (next 12 hours)
  3. Notion: Tasks with status "In Progress" or due today

Output Format

Send a Telegram message (use /send-telegram tool):

---

Daily Brief - [Date]

📧 Emails (X unread):

  • [Sender]: [Subject] - [1-line summary]

(Max 5 emails. If more, say "...and N more")

📅 Today's Calendar:

  • [Time] - [Event title] - [Location if applicable]

(All events for today)

Tasks Due Today:

  • [Task title] - [Status]

🔥 Priority Action:

[The ONE thing that absolutely must get done today]

---

Rules

  • Keep it under 500 words
  • No fluff or motivational quotes
  • If calendar empty, say "Clear calendar today"
  • If no urgent emails, say "No urgent emails"
  • Priority action must be SPECIFIC (not "check emails")

Save and exit (Ctrl+X, Y, Enter).

Step 2: Test the Agent Manually

Before scheduling it, verify it works:

openclaw run daily-brief

Check your Telegram. You should receive a brief within 30 seconds.

Common issues:
  • "No data sources configured": Run openclaw config add-gmail and openclaw config add-calendar
  • "Telegram send failed": Check your Telegram bot token in ~/.openclaw/.env
  • "Agent exceeded timeout": Reduce data scope (e.g., last 12 hours instead of 24)

Step 3: Convert to Cron Job

Edit your cron schedule:

crontab -e

Add this line (replace YOUR_USERNAME with your actual username):

0 8 * * * /usr/local/bin/openclaw run daily-brief >> /Users/YOUR_USERNAME/.openclaw/logs/daily-brief.log 2>&1

What this means:
  • 0 8 * * * = "At minute 0, hour 8, every day of month, every month, every day of week"
  • /usr/local/bin/openclaw = Full path to openclaw command
  • run daily-brief = Run the agent we just created
  • >> /Users/YOUR_USERNAME/.openclaw/logs/daily-brief.log = Save output to log file
  • 2>&1 = Capture errors too

Save and exit (:wq in vim, or Ctrl+X in nano).

Step 4: Verify Cron Job Is Scheduled

List your active cron jobs:

crontab -l

You should see your daily-brief line.

To test immediately (don't wait until 8am):

# Manually trigger the cron job command
/usr/local/bin/openclaw run daily-brief >> ~/.openclaw/logs/daily-brief.log 2>&1

# Check the log
tail ~/.openclaw/logs/daily-brief.log

If you see output and your Telegram message arrived, it's working.

---

Part 3: Cron Schedule Syntax (The Cheat Sheet)

Cron uses 5 fields to define schedules:

```
* * * * *
│ │ │ │ │
│ │ │ │ └─── Day of week (0-7, where 0 and 7 = Sunday)
│ │ │ └───── Month (1-12)
│ │ └─────── Day of month (1-31)
│ └───────── Hour (0-23)
└─────────── Minute (0-59)
```

Common Patterns

| Schedule | Cron Syntax | Description |
|----------|-------------|-------------|
| Every hour | 0 * * * * | At minute 0 of every hour |
| Every 15 minutes | */15 * * * * | At :00, :15, :30, :45 |
| Daily at 8am | 0 8 * * * | 8:00am every day |
| Weekdays at 9am | 0 9 * * 1-5 | 9am Mon-Fri only |
| First of month | 0 9 1 * * | 9am on the 1st |
| Last day of month | 0 17 28-31 * * | 5pm on 28th-31st (covers all month-end days) |
| Every Monday at 10am | 0 10 * * 1 | Monday weekly report |
| Twice daily | 0 8,17 * * * | 8am and 5pm |

Step-by-Step Builder

  1. Decide the time: "Every Friday at 5pm"
  2. Translate to numbers:
     - Minute: 0 (at the top of the hour)
     - Hour: 17 (5pm in 24-hour format)
     - Day of month: * (any day)
     - Month: * (every month)
     - Day of week: 5 (Friday)
  3. Result: 0 17 * * 5
Tip: Use [crontab.guru](https://crontab.guru) to validate your syntax.
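If you want to sanity-check an expression yourself, the matching logic is simple enough to sketch in a few lines of Python. This is illustrative only; real cron implementations handle more edge cases (names like MON, step-over-range, the day-of-month/day-of-week OR rule):

```python
# Minimal sketch of how a cron daemon matches the five fields
# against the current time (illustrative, not a full cron parser).
def field_matches(field: str, value: int) -> bool:
    """Return True if a single cron field matches a numeric value."""
    if field == "*":
        return True
    if field.startswith("*/"):          # step values, e.g. */15
        return value % int(field[2:]) == 0
    for part in field.split(","):       # lists, e.g. 8,17
        if "-" in part:                 # ranges, e.g. 1-5
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr: str, minute: int, hour: int,
                 dom: int, month: int, dow: int) -> bool:
    """Check a 5-field cron expression against concrete time components."""
    fields = expr.split()
    return all(field_matches(f, v) for f, v in
               zip(fields, (minute, hour, dom, month, dow)))

# "Every Friday at 5pm" -> 0 17 * * 5
print(cron_matches("0 17 * * 5", 0, 17, 14, 6, 5))   # a Friday: True
print(cron_matches("0 17 * * 5", 0, 17, 14, 6, 3))   # a Wednesday: False
```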

---

Part 4: Real-World Cron Job Templates

Template 1: Weekly Business Report (Every Friday at 5pm)

Agent file: ~/.openclaw/agents/weekly-report.md

Weekly Business Report Agent

Role

Compile a weekly summary of business metrics and send to my Telegram.

Data Sources

  1. Gmail: Count of emails sent/received this week
  2. CRM (Notion): Deals closed, pipeline value, new leads
  3. Calendar: Total meeting hours this week
  4. Finance (Stripe API): Revenue this week vs last week

Output Format

Weekly Report - Week of [Date]

📊 Metrics:

  • Revenue: £X,XXX (↑/↓ Y% vs last week)
  • Deals closed: N
  • Pipeline value: £X,XXX
  • New leads: N

Time Spent:

  • Meetings: X hours
  • Emails: X sent, Y received

🎯 Next Week Focus:

[Top 3 priorities based on pipeline and deadlines]

Cron job:

0 17 * * 5 /usr/local/bin/openclaw run weekly-report >> ~/.openclaw/logs/weekly-report.log 2>&1

---

Template 2: Monthly Invoice Reminders (25th of Every Month)

Agent file: ~/.openclaw/agents/invoice-reminders.md

Invoice Reminder Agent

Role

Check for unpaid invoices older than 30 days and draft reminder emails.

Data Sources

  1. Accounting system (Xero/QuickBooks API): Invoices with status "Sent" or "Overdue"
  2. Gmail: Check if reminder already sent in last 14 days (avoid duplicate reminders)

Process

  1. Query invoices older than 30 days with status != "Paid"
  2. For each invoice:
     - Check if reminder sent in last 14 days (search Gmail Sent folder)
     - If NOT sent, draft reminder email
  3. Send drafts to my Telegram for approval (DO NOT auto-send)

Email Template

Subject: Reminder: Invoice #[NUMBER] - £[AMOUNT] outstanding

Hi [Client Name],

I hope you're well. I'm following up on Invoice #[NUMBER] for £[AMOUNT], issued on [DATE].

This invoice is now [DAYS] days overdue. Could you please confirm when payment will be made?

If you've already paid, please let me know so I can update my records.

Thanks,

Dan

Cron job:

0 9 25 * * /usr/local/bin/openclaw run invoice-reminders >> ~/.openclaw/logs/invoice-reminders.log 2>&1

---

Template 3: Silent Hours (Suppress Non-Urgent Notifications)

Use case: You don't want daily briefs on weekends or after 7pm. Solution: Add time-based logic to your agent:

Edit ~/.openclaw/agents/daily-brief.md and add:

Silent Hours

  • Do NOT run between 7pm (19:00) and 8am (08:00)
  • Do NOT run on Saturday or Sunday
  • If triggered during silent hours, exit immediately with log: "Skipped - silent hours"

Implementation

Before processing, check:

```python
import datetime

now = datetime.datetime.now()
hour = now.hour
day = now.strftime("%A")

if hour < 8 or hour >= 19:
    print("Skipped - outside working hours")
    raise SystemExit()

if day in ["Saturday", "Sunday"]:
    print("Skipped - weekend")
    raise SystemExit()
```

Cron job (runs every day at 8am, but agent self-filters weekends):
```bash
0 8 * * * /usr/local/bin/openclaw run daily-brief >> ~/.openclaw/logs/daily-brief.log 2>&1
```


---

Part 5: Troubleshooting Cron Jobs

Problem 1: Cron Job Doesn't Run

Symptoms: It's 8am, no daily brief arrived. Diagnosis:
```bash
# Check if cron is running
sudo launchctl list | grep cron   # macOS
systemctl status cron             # Linux

# Check your crontab
crontab -l

# Check system logs
tail -f /var/log/syslog | grep CRON       # Linux
tail -f /var/log/system.log | grep cron   # macOS
```


Common causes:
  • Cron daemon not running (restart: sudo systemctl restart cron on Linux)
  • Wrong time zone (cron uses system time, check with date)
  • Syntax error in crontab (validate at crontab.guru)

---

Problem 2: Cron Runs But Agent Fails

Symptoms: Cron triggers, but no Telegram message. Log shows errors. Diagnosis:
```bash
# Check the agent's log file
tail -50 ~/.openclaw/logs/daily-brief.log

# Manually run the exact cron command
/usr/local/bin/openclaw run daily-brief
```


Common causes:
  • Missing environment variables (cron doesn't load .bashrc or .zshrc)
  • API tokens not accessible (cron runs with limited PATH and ENV)
  • Wrong file permissions (agent file not readable)
Fix: Add environment variables to crontab:

```bash
crontab -e
```

Add these lines at the TOP of your crontab (before any cron jobs):

```cron
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
HOME=/Users/YOUR_USERNAME
ANTHROPIC_API_KEY=sk-ant-your-key-here
```


Now your cron jobs have access to the same environment as your terminal sessions.
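To see exactly what environment your cron jobs actually get, a tiny diagnostic script helps. This is a hypothetical helper (the filename and schedule are yours to choose); run it once from cron, e.g. `* * * * * /usr/bin/env python3 /path/to/dump_env.py >> /tmp/cron-env.txt`, and compare the output with the same script run in your terminal:

```python
import os

def cron_env_report(keys=("PATH", "HOME", "SHELL", "ANTHROPIC_API_KEY")):
    """Return 'KEY=value' lines for the variables cron jobs most often miss."""
    return [f"{k}={os.environ.get(k, '<not set>')}" for k in keys]

if __name__ == "__main__":
    print("\n".join(cron_env_report()))
```

Any `<not set>` line in the cron output but not the terminal output is a variable you need to declare at the top of your crontab.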

---

Problem 3: Cron Job Runs Multiple Times

Symptoms: You get 3 daily briefs at 8am instead of 1. Diagnosis:
```bash
crontab -l | grep daily-brief
```

Cause: You accidentally added the same cron job multiple times.

Fix:

```bash
crontab -e
# Delete duplicate lines, save
```


---

Part 6: Advanced Patterns

Pattern 1: Conditional Execution (Only Run If Data Changed)

Use case: Weekly report only runs if there's new data (avoid empty reports).

Add this to your weekly-report.md agent:

Conditional Logic

Before generating report:

  1. Query data sources for this week's activity
  2. If total activity < 5 (e.g., < 5 emails, < 5 tasks):
     - Log: "No significant activity this week - report skipped"
     - Exit without sending Telegram message
  3. Else: Generate and send report as normal

Your cron job runs every Friday, but the agent decides whether to actually send a report.
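The guard can be sketched in Python. The counter names here are hypothetical placeholders for whatever your data sources return earlier in the agent run:

```python
# Sketch of the "skip if quiet week" guard. The emails/tasks/deals
# counters are assumed inputs, not real OpenClaw variables.
def should_send_report(emails: int, tasks: int, deals: int,
                       threshold: int = 5) -> bool:
    """Send the weekly report only if total activity clears the threshold."""
    total = emails + tasks + deals
    if total < threshold:
        print("No significant activity this week - report skipped")
        return False
    return True
```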

---

Pattern 2: Retry on Failure

Use case: If Gmail API is down at 8am, retry at 8:30am and 9am.

Create a wrapper script: ~/.openclaw/scripts/daily-brief-retry.sh

```bash
#!/bin/bash

LOG_FILE="$HOME/.openclaw/logs/daily-brief.log"

# Try running the agent
/usr/local/bin/openclaw run daily-brief >> "$LOG_FILE" 2>&1

# Check if it succeeded
if [ $? -ne 0 ]; then
    echo "First attempt failed. Retrying in 30 minutes..." >> "$LOG_FILE"
    sleep 1800  # 30 minutes
    /usr/local/bin/openclaw run daily-brief >> "$LOG_FILE" 2>&1
fi
```


Make it executable:

```bash
chmod +x ~/.openclaw/scripts/daily-brief-retry.sh
```


Update crontab to use the wrapper:

```bash
0 8 * * * /Users/YOUR_USERNAME/.openclaw/scripts/daily-brief-retry.sh
```


---

Pattern 3: Staggered Schedules (Avoid API Rate Limits)

Use case: You have 5 agents that all query Gmail API. Running them simultaneously hits rate limits. Solution: Stagger by 10 minutes.
```cron
0 8 * * *  /usr/local/bin/openclaw run daily-brief
10 8 * * * /usr/local/bin/openclaw run email-triage
20 8 * * * /usr/local/bin/openclaw run task-review
30 8 * * * /usr/local/bin/openclaw run calendar-prep
40 8 * * * /usr/local/bin/openclaw run meeting-notes-summary
```


Each agent runs 10 minutes apart. No rate limit collisions.
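If you manage many agents, you can generate the staggered lines instead of typing them by hand. A small Python sketch (the path mirrors the examples above):

```python
# Generate staggered crontab lines so agents don't all hit the same
# API at once. Gap and start hour match the example schedule above.
def staggered_crontab(agents, start_hour=8, gap_minutes=10):
    """Return one crontab line per agent, spaced gap_minutes apart."""
    lines = []
    for i, agent in enumerate(agents):
        minute = (i * gap_minutes) % 60
        hour = start_hour + (i * gap_minutes) // 60  # roll into the next hour
        lines.append(f"{minute} {hour} * * * /usr/local/bin/openclaw run {agent}")
    return lines

for line in staggered_crontab(["daily-brief", "email-triage", "task-review"]):
    print(line)
```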

---

Part 7: Maintenance & Monitoring

Log Rotation (Prevent Disk Space Issues)

Cron job logs grow over time. Set up automatic log rotation:

Create ~/.openclaw/scripts/rotate-logs.sh:

```bash
#!/bin/bash

LOG_DIR="$HOME/.openclaw/logs"

# Compress logs older than 7 days
find "$LOG_DIR" -name "*.log" -mtime +7 -exec gzip {} \;

# Delete compressed logs older than 30 days
find "$LOG_DIR" -name "*.log.gz" -mtime +30 -delete
```


Make executable:

```bash
chmod +x ~/.openclaw/scripts/rotate-logs.sh
```


Add to crontab (runs daily at 2am):

```bash
0 2 * * * /Users/YOUR_USERNAME/.openclaw/scripts/rotate-logs.sh
```


---

Weekly Cron Health Check

Create an agent that verifies your cron jobs are running correctly:

~/.openclaw/agents/cron-health-check.md:

Cron Health Check Agent

Role

Verify all cron jobs executed in the last 7 days. Alert if any failed.

Process

  1. Read all log files in ~/.openclaw/logs/
  2. For each log:
     - Check last modified date (should be within 7 days)
     - Check for error patterns ("failed", "timeout", "exception")
  3. If any errors found, send alert to Telegram:
Cron Health Alert

Job: [name]

Last run: [date]

Status: FAILED

Error: [first line of error]

See full log: ~/.openclaw/logs/[name].log

Schedule

Run every Sunday at 6pm


Crontab:

```bash
0 18 * * 0 /usr/local/bin/openclaw run cron-health-check >> ~/.openclaw/logs/cron-health-check.log 2>&1
```
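The core of the health check can be sketched in Python. The error patterns match the agent definition above; the logic is illustrative rather than OpenClaw's actual implementation:

```python
# Sketch of the health-check logic: flag logs that are stale
# (no run within max_age_days) or that contain error patterns.
import os
import time

ERROR_PATTERNS = ("failed", "timeout", "exception")

def check_log(path: str, max_age_days: int = 7) -> list:
    """Return a list of problems found for one log file."""
    problems = []
    age_days = (time.time() - os.path.getmtime(path)) / 86400
    if age_days > max_age_days:
        problems.append(f"stale: last run {age_days:.0f} days ago")
    with open(path) as f:
        text = f.read().lower()
    for pattern in ERROR_PATTERNS:
        if pattern in text:
            problems.append(f"error pattern: {pattern!r}")
    return problems
```

Run it over every file in `~/.openclaw/logs/` and forward any non-empty result to your Telegram alert.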

---

Part 8: Practical Exercise

Your Challenge: Build a "Weekly Wins" Agent

Create an agent that runs every Friday at 4pm and asks you via Telegram:

> "What were your 3 biggest wins this week?"

You reply via Telegram. The agent:

  1. Stores your wins in a Notion database
  2. Sends you a motivational summary
  3. Includes your wins-to-date count
Bonus: On the first Friday of every month, send a "Monthly Wins Roundup" with all wins from that month.

Hints:
  • Use /ask-user tool to prompt for wins via Telegram
  • Store wins in Notion with fields: Date, Win 1, Win 2, Win 3
  • Use cron syntax 0 16 * * 5 for "every Friday at 4pm"
  • Careful: 0 16 1-7 * 5 does NOT mean "first Friday only" - in standard cron, when both day-of-month and day-of-week are restricted, the job runs when EITHER matches. Schedule every Friday and have the agent check the date itself
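A minimal Python check for "first Friday of the month" that the agent could run itself (standard library only):

```python
import datetime

def is_first_friday(today=None):
    """True if `today` is the first Friday of its month (day 1-7 and a Friday)."""
    today = today or datetime.date.today()
    return today.day <= 7 and today.weekday() == 4  # Monday=0 ... Friday=4

print(is_first_friday(datetime.date(2024, 6, 7)))  # 7 June 2024 was a Friday: True
```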

---

Summary: What You've Learned

Cron job fundamentals: Schedule syntax, testing, troubleshooting

Real-world templates: Daily briefs, weekly reports, invoice reminders

Advanced patterns: Conditional execution, retries, staggered schedules

Maintenance: Log rotation, health checks, silent hours

Next steps:
  • Add 2-3 cron jobs to your OpenClaw setup this week
  • Test them manually before scheduling
  • Set up log rotation to avoid disk space issues
  • Move on to Module 5: Gateway Setup (control your agents from your phone)

---

Resources

  • Official docs: [docs.openclaw.ai/cron-jobs](https://docs.openclaw.ai/cron-jobs)
  • Cron syntax validator: [crontab.guru](https://crontab.guru)
  • Community examples: [github.com/openclaw/examples](https://github.com/openclaw/examples)
  • Zen van Riel's blog: [zenvanriel.nl/openclaw-cron-jobs](https://zenvanriel.nl/openclaw-cron-jobs)
Next module: Module 5 - Gateway Setup (Telegram/Slack control, Tailscale networking)

Module 5: Gateway Setup - Control Your AI Team From Anywhere

From Desktop-Only to Phone-in-Pocket Control

Introduction

Your OpenClaw agents run on your computer. They work 24/7 via cron jobs. But there's a limitation: you can only interact with them when you're sitting at your desk.

Client calls while you're out? You can't ask your agent to pull their file.

Idea hits you on a walk? You can't tell your content agent to draft it.

Emergency at 11pm? You can't trigger your monitoring agent from bed.

This is the missing piece: gateways. Gateways turn your OpenClaw agents into teammates you can message from your phone, just like texting a colleague. Telegram, Slack, iMessage, WhatsApp - you pick the app you already use. What you'll build:
  • Telegram bot that lets you run any agent via mobile
  • Slack integration for team-shared agents
  • Secure remote access via Tailscale (no exposed ports, no VPN config)
  • Command shortcuts for your most-used agents
Time to implement: 75 minutes (Telegram: 30 min, Tailscale: 25 min, Slack: 20 min)

---

Part 1: Why Gateways Matter

The Problem With Desktop-Only Agents

Riley Brown (creator of OpenClaw) runs 11 cloud agents + 1 Mac Mini agent. They handle:

  • Meeting notes during calls
  • Proposal generation between appointments
  • Lead qualification from anywhere
  • Content drafts during travel

Without gateways, he'd need to remote desktop into his computer every time. With gateways, he just texts his agent.

Gateway Options

OpenClaw supports 4 gateway platforms:

| Platform | Best For | Setup Time |
|----------|----------|------------|
| Telegram | Personal use, fastest setup | 15 min |
| Slack | Team collaboration, company workspace | 25 min |
| iMessage | Apple ecosystem, minimal friction | 40 min (Mac only) |
| WhatsApp | International teams, non-technical users | 35 min |

This module covers Telegram (easiest) and Slack (most common for teams). iMessage and WhatsApp follow similar patterns - see docs.openclaw.ai/gateways for full guides.

---

Part 2: Setting Up Telegram Gateway

Why Telegram First?

  • Free API (no rate limits for personal use)
  • Works on all platforms (iOS, Android, desktop, web)
  • 5-second response times (webhook support)
  • No phone number required for bot
  • Built-in file sharing (PDFs, images, voice notes)

Step 1: Create Your Telegram Bot

  1. Open Telegram, search for @BotFather
  2. Send /newbot
  3. Follow prompts:
     - Bot name: "My OpenClaw Assistant" (display name)
     - Bot username: "myopenclawbot" (must end in bot, must be unique)
  4. BotFather will reply with your bot token: 1234567890:ABCdefGHIjklMNOpqrsTUVwxyz

CRITICAL: This token is like a password. Treat it like your bank login. Never commit it to GitHub or share it publicly.

Step 2: Configure OpenClaw Gateway

Tell OpenClaw about your Telegram bot:

openclaw gateway add telegram

When prompted:

  • Bot token: Paste the token from BotFather
  • Chat ID (optional): Leave blank for now (we'll get it in Step 4)
  • Allowed commands: all (you can restrict later)

OpenClaw will create ~/.openclaw/gateways/telegram.yml:

platform: telegram

bot_token: "1234567890:ABCdefGHIjklMNOpqrsTUVwxyz"

allowed_users: []

allowed_commands: all

webhook_url: null

polling_interval: 2

Step 3: Start the Gateway

openclaw gateway start telegram

You should see:

✓ Telegram gateway started

✓ Polling for messages every 2 seconds

✓ Send /start to your bot to begin

Step 4: Connect Your Telegram Account

  1. In Telegram, search for your bot (the username you created)
  2. Click "Start" or send /start
  3. Your bot should reply:
Welcome! Your chat ID is 123456789.

Available commands:

/run [agent-name] - Run an agent

/list - List available agents

/status - Check agent status

/logs [agent-name] - View recent logs

/help - Show all commands

Copy your chat ID (the number in the welcome message).

Step 5: Whitelist Your Chat ID (Security)

Stop the gateway (Ctrl+C in the terminal running it).

Edit the gateway config:

nano ~/.openclaw/gateways/telegram.yml

Add your chat ID to allowed_users:

allowed_users:
  - 123456789

Save (Ctrl+X, Y, Enter).

Why this matters: Without whitelisting, anyone who finds your bot can run your agents. With whitelisting, only YOUR Telegram account can control them.

Step 6: Test From Your Phone

Restart the gateway:

openclaw gateway start telegram

From Telegram on your phone, send:

/run daily-brief

Within 5 seconds, your bot should reply with your daily brief (the agent from Module 4).

If it works, you're done. Your agents are now phone-accessible.

---

Part 3: Command Shortcuts (Power User Feature)

The Problem

Typing /run send-proposal-to John Smith on mobile is tedious. Shortcuts fix this.

Create Custom Commands

Edit your gateway config:

nano ~/.openclaw/gateways/telegram.yml

Add a shortcuts section:

shortcuts:
  brief: "run daily-brief"
  proposals: "run proposal-generator"
  leads: "run lead-qualifier"
  report: "run weekly-report"

Restart the gateway. Now you can type:

/brief

Instead of:

/run daily-brief

Dynamic Arguments

You can pass parameters to agents:

shortcuts:
  pitch: "run proposal-generator --client=$1 --budget=$2"

Usage:

/pitch "Acme Corp" "15000"

This runs:

openclaw run proposal-generator --client="Acme Corp" --budget="15000"
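The $1/$2 substitution can be pictured with a short sketch (illustrative only, not OpenClaw's actual implementation; `shlex` handles the quoted arguments):

```python
# Sketch of positional-argument expansion for gateway shortcuts.
import shlex

def expand_shortcut(template: str, message: str) -> str:
    """Substitute quoted arguments from the message into $1, $2, ... slots."""
    args = shlex.split(message)  # '"Acme Corp" "15000"' -> ['Acme Corp', '15000']
    for i, arg in enumerate(args, start=1):
        template = template.replace(f"${i}", arg)
    return template

cmd = expand_shortcut('run proposal-generator --client=$1 --budget=$2',
                      '"Acme Corp" "15000"')
print(cmd)
```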

---

Part 4: Running as a Background Service (launchd on Mac)

The Problem

When you close your terminal, the gateway stops. For 24/7 access, run it as a system service.

Create launchd Plist (macOS)

Create the service file:

nano ~/Library/LaunchAgents/com.openclaw.telegram-gateway.plist

Paste (replace YOUR_USERNAME with your actual username):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.openclaw.telegram-gateway</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/openclaw</string>
        <string>gateway</string>
        <string>start</string>
        <string>telegram</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/Users/YOUR_USERNAME/.openclaw/logs/telegram-gateway.log</string>
    <key>StandardErrorPath</key>
    <string>/Users/YOUR_USERNAME/.openclaw/logs/telegram-gateway-error.log</string>
</dict>
</plist>
```

Load the service:

launchctl load ~/Library/LaunchAgents/com.openclaw.telegram-gateway.plist

The gateway now runs 24/7, even after reboots.

To stop it:
launchctl unload ~/Library/LaunchAgents/com.openclaw.telegram-gateway.plist

Linux/systemd Alternative

For Linux, create /etc/systemd/system/openclaw-telegram.service:

```ini
[Unit]
Description=OpenClaw Telegram Gateway
After=network.target

[Service]
Type=simple
User=YOUR_USERNAME
ExecStart=/usr/local/bin/openclaw gateway start telegram
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable and start:

sudo systemctl enable openclaw-telegram

sudo systemctl start openclaw-telegram

---

Part 5: Slack Gateway (For Team Collaboration)

When to Use Slack vs Telegram

Use Slack if:
  • You already have a company Slack workspace
  • Multiple people need agent access (team-shared agents)
  • You want agent outputs visible to your team
  • You need per-channel agent routing (e.g., #sales runs sales-agent)
Stick with Telegram if:
  • Personal use only
  • Don't want to manage workspace permissions
  • Need fastest setup

Step 1: Create Slack App

  1. Go to https://api.slack.com/apps
  2. Click "Create New App" → "From scratch"
  3. App name: "OpenClaw Assistant"
  4. Workspace: Select your workspace
  5. Click "Create App"

Step 2: Add Bot Permissions

In your app settings:

  1. Go to OAuth & Permissions (left sidebar)
  2. Scroll to Scopes → Bot Token Scopes
  3. Add these permissions:
     - chat:write (send messages)
     - commands (handle slash commands)
     - files:write (send files)
     - channels:read (list channels)
     - groups:read (list private channels)

Step 3: Install to Workspace

  1. Scroll to top of OAuth & Permissions page
  2. Click Install to Workspace
  3. Click Allow
  4. Copy the Bot User OAuth Token (starts with xoxb-)

Step 4: Configure OpenClaw

openclaw gateway add slack

When prompted:

  • Bot token: Paste the xoxb- token
  • Signing secret: Get from Basic Information → App Credentials
  • Allowed channels: Leave blank (we'll configure per-channel later)

Step 5: Set Up Slash Commands

In Slack app settings:

  1. Go to Slash Commands (left sidebar)
  2. Click Create New Command
  3. Fill in:
     - Command: /openclaw
     - Request URL: https://your-tailscale-url/webhook/slack (we'll set this up in Part 6)
     - Short description: "Run OpenClaw agents"
     - Usage hint: [agent-name] [args]
  4. Click Save

Step 6: Test in Slack

Start the Slack gateway:

openclaw gateway start slack

In any Slack channel, type:

/openclaw run daily-brief

The bot should reply with your brief in a thread (visible only to you).

Step 7: Channel-Specific Agents

You can route channels to specific agents:

Edit ~/.openclaw/gateways/slack.yml:

channel_agents:
  C01234ABC: sales-agent          # #sales channel
  C56789XYZ: support-agent        # #support channel
  G98765DEF: proposal-generator   # #proposals private channel

Get channel IDs: right-click channel → View channel details → scroll to bottom.

Now when anyone in #sales types /openclaw run, it automatically uses the sales-agent.

---

Part 6: Secure Remote Access with Tailscale

The Problem

Your OpenClaw agents run on your laptop/Mac Mini at home. When you're on the road, you can't reach them directly (they're behind your home router).

Bad solutions:
  • Port forwarding (exposes your computer to the internet)
  • VPN (complex setup, costs money, slow)
  • Cloud hosting (defeats the purpose of local agents)
Good solution: Tailscale.

What Is Tailscale?

Tailscale creates a secure private network between your devices. It's like your devices are on the same Wi-Fi, even when they're not.

  • Free for personal use (up to 100 devices)
  • Zero-config (no port forwarding, no firewall rules)
  • Works behind corporate firewalls
  • End-to-end encrypted (WireGuard protocol)

Tailscale Serve vs Funnel

Tailscale Serve (private):
  • Exposes OpenClaw ONLY to your Tailscale network
  • Only your devices can reach it
  • Best for personal use
Tailscale Funnel (public):
  • Exposes OpenClaw to the ENTIRE internet
  • Anyone with the URL can reach it (use authentication!)
  • Best for team access without Tailscale accounts
For this module, we'll use Serve (private, secure).

Step 1: Install Tailscale

Mac:
brew install tailscale
Linux:
curl -fsSL https://tailscale.com/install.sh | sh
Windows: Download from https://tailscale.com/download

Step 2: Authenticate

sudo tailscale up

This opens a browser to log in. Use Google/Microsoft/GitHub account (or create Tailscale account).

Your device is now on your Tailscale network.

Step 3: Check Your Tailscale IP

tailscale ip

You'll see something like: 100.64.0.5

This is your device's private IP on the Tailscale network.

Step 4: Expose OpenClaw via Tailscale Serve

OpenClaw includes a built-in webhook server for receiving gateway commands.

Start the webhook server:

openclaw serve --port 8080

Expose it on Tailscale:

tailscale serve https / http://localhost:8080

You'll see:

Available within your Tailscale network at:

https://your-machine-name.tailnet-name.ts.net

This URL is now accessible from any device logged into your Tailscale network (your phone, laptop, tablet).

Step 5: Update Gateway Webhooks (Optional)

If you want Telegram/Slack to push messages to your server (instead of polling), update your gateway configs:

Telegram:
webhook_url: "https://your-machine-name.tailnet-name.ts.net/webhook/telegram"

polling_interval: null

Slack:

Use the same URL in your Slack app's Slash Commands → Request URL.

Step 6: Install Tailscale on Your Phone

Install Tailscale app on your phone (iOS/Android).

Log in with the same account.

Now your phone can reach your OpenClaw instance via https://your-machine-name.tailnet-name.ts.net.

---

Part 7: Security Best Practices

Rule 1: Use Allowlists, Not Denylists

Your gateway configs should ALWAYS have allowed_users or allowed_channels defined.

Bad (anyone can use your agents):

allowed_users: []

Good (only you can use them):

allowed_users:
  - 123456789

Rule 2: Limit Agent Permissions

Some agents should be read-only (daily-brief, status-check). Others can write (proposal-generator, email-sender).

In your agent definition (~/.openclaw/agents/agent-name.md), specify:

permissions:
  read: [gmail, calendar, notion]
  write: []

OpenClaw will enforce this. If the agent tries to send an email but gmail isn't in its write list, the call fails.
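The enforcement model can be pictured with a few lines of Python (assumed semantics for illustration - the real check lives inside OpenClaw):

```python
# Sketch of read/write permission enforcement per service.
def check_permission(permissions: dict, action: str, service: str) -> bool:
    """Allow an action only if the service appears in that permission list."""
    return service in permissions.get(action, [])

perms = {"read": ["gmail", "calendar", "notion"], "write": []}
print(check_permission(perms, "read", "gmail"))   # reading mail: allowed
print(check_permission(perms, "write", "gmail"))  # sending mail: denied
```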

Rule 3: Separate Personal and Work Gateways

If you run agents for work AND personal projects, use separate gateways:

  • Telegram (personal): Only runs personal agents (daily-brief, content-ideas)
  • Slack (work): Only runs work agents (sales-agent, proposal-generator)

This prevents accidentally running your work proposal-generator for a personal project (or vice versa).

Rule 4: Monitor Gateway Logs

Check who's using your agents:

tail -f ~/.openclaw/logs/telegram-gateway.log

Look for:

  • Unauthorized access attempts (chat IDs not in allowlist)
  • Failed commands (typos, missing agents)
  • Rate limit warnings (someone spamming your bot)

Set up alerts (Module 8: Heartbeat Monitoring) to notify you of suspicious activity.
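A quick scan for unauthorized chat IDs can be scripted. The log line format here (`chat_id=...`) is an assumption for illustration; adjust the regex to whatever your gateway log actually emits:

```python
# Sketch: find chat IDs in a gateway log that are not on the allowlist.
import re

ALLOWED_USERS = {123456789}  # mirror your allowed_users config

def unauthorized_ids(log_text: str) -> set:
    """Return chat IDs seen in the log that are not in the allowlist."""
    seen = {int(m) for m in re.findall(r"chat_id=(\d+)", log_text)}
    return seen - ALLOWED_USERS

log = "msg chat_id=123456789 ok\nmsg chat_id=987654321 denied\n"
print(unauthorized_ids(log))
```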

Rule 5: Rotate Tokens Every 90 Days

Telegram and Slack tokens don't expire, but you should rotate them periodically:

  1. Generate new token (BotFather → /token or Slack app settings)
  2. Update ~/.openclaw/gateways/*.yml
  3. Restart gateway
  4. Revoke old token

---

Part 8: Real-World Gateway Workflows

Workflow 1: On-the-Go Proposal Generation

Scenario: Client calls, asks for proposal. You're at a coffee shop. Without gateway: "Let me get back to you tomorrow when I'm at my desk." With gateway:
  1. Open Telegram
  2. Send: /pitch "Acme Corp" "Website redesign" "15000"
  3. Agent reads client history from CRM (Notion)
  4. Agent drafts proposal in 90 seconds
  5. Agent sends PDF to your Telegram
  6. You forward it to client before call ends
Time saved: 2 hours (proposal writing) → 90 seconds

Workflow 2: Meeting Notes While On Call

Scenario: Zoom call with client. Need to capture action items. Without gateway: Frantically typing notes while talking. With gateway:
  1. Start call
  2. Open Telegram, send: /run meeting-notes
  3. Agent listens to your call (via Zoom API or voice recording)
  4. Agent extracts action items, decisions, next steps
  5. Agent sends summary to Telegram when call ends
  6. You paste summary into follow-up email
Time saved: 15 minutes of note cleanup

Workflow 3: Content Ideas During a Walk

Scenario: Idea hits you while walking the dog. Don't want to lose it. Without gateway: Open Notes app, type rough idea, hope you remember to expand later. With gateway:
  1. Open Telegram, send voice note: "Content idea: how to use OpenClaw for proposal generation"
  2. Agent transcribes voice note
  3. Agent expands into 3 headline options + outline
  4. Agent saves to Notion content calendar
  5. When you get home, full content brief is waiting
Time saved: 20 minutes (context switching + recall)

---

Part 9: Troubleshooting Common Issues

Issue 1: Gateway Not Responding

Symptoms: You send /run daily-brief in Telegram, no response. Diagnosis:

# Check if gateway is running
ps aux | grep "openclaw gateway"

# Check logs
tail -50 ~/.openclaw/logs/telegram-gateway.log

Common causes:
  • Gateway not started (run openclaw gateway start telegram)
  • Chat ID not whitelisted (add to allowed_users)
  • Bot token revoked (regenerate in BotFather)

Issue 2: "Agent Not Found" Error

Symptoms: Gateway responds, but says "Agent 'xyz' not found." Diagnosis:

# List available agents
openclaw list agents

# Check agent file exists
ls ~/.openclaw/agents/

Fix: Verify agent name matches filename (e.g., daily-brief.md = agent name daily-brief).

Issue 3: Tailscale URL Not Reachable

Symptoms: Can't reach https://your-machine-name.tailnet-name.ts.net from phone. Diagnosis:

# Check Tailscale status
tailscale status

# Verify serve is running
curl http://localhost:8080

Common causes:
  • Phone not logged into Tailscale (install app, log in)
  • openclaw serve not running (start it)
  • Firewall blocking port 8080 (allow it: sudo ufw allow 8080 on Linux)

Issue 4: Slack Command Returns "Dispatch Failed"

Symptoms: /openclaw run daily-brief in Slack shows "dispatch failed" error. Diagnosis:

Check Slack app event logs:

  1. Go to https://api.slack.com/apps
  2. Select your app → Event Subscriptions → View Logs
Common causes:
  • Request URL incorrect (should be https://your-tailscale-url/webhook/slack)
  • Signing secret mismatch (regenerate in Basic Information → App Credentials)
  • Gateway not running (openclaw gateway start slack)

---

Part 10: Homework

Task 1: Set Up Telegram Gateway (Required)

  • [ ] Create Telegram bot via BotFather
  • [ ] Configure OpenClaw gateway
  • [ ] Whitelist your chat ID
  • [ ] Run /run daily-brief from your phone
  • [ ] Add 3 custom shortcuts
Time estimate: 30 minutes

Task 2: Set Up Tailscale (Recommended)

  • [ ] Install Tailscale on your OpenClaw machine
  • [ ] Install Tailscale on your phone
  • [ ] Expose OpenClaw via tailscale serve
  • [ ] Access https://your-machine-name.tailnet-name.ts.net from phone
Time estimate: 25 minutes

Task 3: Set Up Slack Gateway (Optional)

  • [ ] Create Slack app
  • [ ] Add bot permissions
  • [ ] Install to workspace
  • [ ] Configure OpenClaw Slack gateway
  • [ ] Test /openclaw run daily-brief in Slack
Time estimate: 25 minutes

---

Next Steps

In Module 6: Mission Controls, you'll build a Notion dashboard that shows:

  • Which agents are running
  • Token costs per agent
  • Success/failure rates
  • Performance metrics

This turns your phone-accessible agents into a monitored AI team with real-time visibility.

But first, make sure your Telegram gateway is working. Everything from here builds on mobile access.

---

Quick Reference

Telegram Commands

| Command | Description |
|---------|-------------|
| /run [agent] | Run an agent |
| /list | List all available agents |
| /status | Check gateway status |
| /logs [agent] | View recent logs for an agent |
| /help | Show all commands |

Useful Commands

Start gateway

openclaw gateway start telegram

Stop gateway

(Ctrl+C if running in terminal, or:)

launchctl unload ~/Library/LaunchAgents/com.openclaw.telegram-gateway.plist

View gateway logs

tail -f ~/.openclaw/logs/telegram-gateway.log

List active gateways

openclaw gateway list

Test gateway manually

openclaw gateway test telegram

Check Tailscale status

tailscale status

Restart Tailscale serve

tailscale serve https / http://localhost:8080

Key Files

  • Telegram config: ~/.openclaw/gateways/telegram.yml
  • Slack config: ~/.openclaw/gateways/slack.yml
  • Gateway logs: ~/.openclaw/logs/telegram-gateway.log
  • launchd plist: ~/Library/LaunchAgents/com.openclaw.telegram-gateway.plist

---

You now have phone-in-pocket AI agents. Next: visibility into what they're doing.

Module 6: Mission Controls (Notion Dashboard)

Duration: 60 minutes Prerequisites: Modules 1-5 (especially Module 3 for OAuth setup)

---

What You'll Build

By the end of this module, you'll have a centralized Notion dashboard that gives you complete visibility into your OpenClaw operations:

  • Agent Activity Monitor - See what every agent is working on in real-time
  • Cost Tracker - Track API spend per agent, per day, per project
  • Performance Metrics - Token usage, response times, success rates
  • Task Completion Log - Automatic updates when agents finish work
  • Health Status - Know immediately if an agent fails or goes offline

Think of this as your "mission control" - one place to see everything happening in your AI team.

---

Why Mission Controls Matter

Without visibility, your AI agents are a black box. You don't know:

  • What they're doing right now
  • How much they're costing you
  • Which agents are performing well vs. struggling
  • When something breaks
Real cost of no visibility:

Andrew Chen (consultant, 5 agents running) discovered he was spending £340/month on an email agent that was stuck in a loop, retrying the same failed API call 400+ times per day. He only noticed when his Anthropic bill arrived.

With a dashboard: He would have seen the spike in tokens within 2 hours and killed the loop before it cost £300.

---

Architecture Overview

Your mission control setup has 3 components:

  1. Notion Database (your dashboard UI)
  2. Status Reporter Agent (updates the database automatically)
  3. Cost Tracker Script (pulls API usage from Anthropic/OpenAI)
```
┌─────────────────────────────────────┐
│  Notion Dashboard (read)            │
│  ┌─────────────────────────────┐    │
│  │  Agent Activity             │    │
│  │  Cost Tracker               │    │
│  │  Performance Metrics        │    │
│  │  Health Status              │    │
│  └─────────────────────────────┘    │
└────────────▲────────────────────────┘
             │ (write updates)
┌────────────┴────────────────────────┐
│  Status Reporter Agent              │
│  - Runs every 5 minutes (cron)      │
│  - Queries .openclaw/state/         │
│  - Posts to Notion API              │
└────────────▲────────────────────────┘
             │ (read state)
┌────────────┴────────────────────────┐
│  OpenClaw Agents                    │
│  .openclaw/state/*.json             │
└─────────────────────────────────────┘
```

---

Part 1: Create Your Notion Dashboard

Step 1: Create a Notion Integration

  1. Go to https://www.notion.so/my-integrations
  2. Click "+ New integration"
  3. Name: OpenClaw Mission Control
  4. Associated workspace: Your workspace
  5. Capabilities:
     - ✅ Read content
     - ✅ Update content
     - ✅ Insert content

  6. Click "Submit"
  7. Copy the Internal Integration Token - you'll need this
Security note: This token gives full read/write access to any Notion page you share with the integration. Store it in .openclaw/credentials/notion-token.txt (NOT in your git repo).
mkdir -p ~/.openclaw/credentials

echo "your_integration_token_here" > ~/.openclaw/credentials/notion-token.txt

chmod 600 ~/.openclaw/credentials/notion-token.txt

Step 2: Create Your Dashboard Database

  1. Open Notion
  2. Create a new page: "OpenClaw Mission Control"
  3. Add a Database - Table view
  4. Name the database: "Agent Activity"
  5. Add these properties:

| Property Name | Type | Description |
|--------------|------|-------------|
| Agent Name | Title | Name of the agent (e.g. "Email Triage") |
| Status | Select | Running / Idle / Failed / Paused |
| Last Run | Date | When the agent last executed |
| Duration | Number | How long the last run took (seconds) |
| Tokens Used | Number | Total tokens consumed in last run |
| Cost (£) | Formula | prop("Tokens Used") * 0.000015 |
| Success Rate | Number | % of successful runs (last 24h) |
| Current Task | Text | What the agent is working on |
| Last Error | Text | Most recent error message (if any) |

  6. Share the database with your integration:
     - Click "Share" in the top right
     - Search for "OpenClaw Mission Control"
     - Click "Invite"
  7. Get the database ID:
     - Open the database as a full page
     - Copy the URL: https://www.notion.so/yourworkspace/abc123?v=xyz
     - The database ID is the abc123 part (32 characters)
     - Save it: echo "your_database_id" > ~/.openclaw/credentials/notion-db-id.txt
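If you'd rather not copy the ID out of the URL by hand, a small sed one-liner can extract it. This helper is illustrative (not part of OpenClaw), and the URL below is a made-up example; real Notion database IDs are 32 hex characters.

```shell
# Extract the 32-character database ID from a Notion URL (URL is fictional)
URL="https://www.notion.so/yourworkspace/a1b2c3d4e5f607189a0b1c2d3e4f5061?v=xyz"
DB_ID=$(echo "$URL" | sed -E 's|.*/([0-9a-f]{32}).*|\1|')
echo "$DB_ID"
```

Redirect the output into `~/.openclaw/credentials/notion-db-id.txt` to save it.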

---

Part 2: Build the Status Reporter Agent

This agent runs every 5 minutes, checks what your other agents are doing, and updates the Notion dashboard.
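The reporter reads one JSON state file per agent from `~/.openclaw/state/`. The exact schema isn't pinned down here beyond the fields the reporter queries, so treat this sample as illustrative (a temp directory stands in for the real path):

```shell
# Hypothetical example of an agent state file, using the field names
# the reporter script reads with jq. mktemp keeps the example self-contained.
STATE_DIR=$(mktemp -d)
cat > "$STATE_DIR/email-triage.json" <<'EOF'
{
  "name": "email-triage",
  "status": "Idle",
  "last_run": "2026-02-28T09:15:00Z",
  "duration_seconds": 12,
  "tokens_used": 4200,
  "success_rate_24h": 0.97,
  "current_task": "Waiting for next cron run",
  "last_error": null
}
EOF
jq -r '.status' "$STATE_DIR/email-triage.json"
```

Note `last_error` is null here; the reporter's `// "None"` fallback turns that into a readable "None" in Notion.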

Create the Agent Config

Create .openclaw/agents/status-reporter.json:

```json
{
  "name": "status-reporter",
  "description": "Updates Notion dashboard with agent activity and performance metrics",
  "model": "claude-haiku-4",
  "schedule": "*/5 * * * *",
  "tools": ["bash", "read"],
  "memory_scope": "isolated",
  "max_tokens": 1000,
  "system_prompt": "You are a status reporter. Read agent state files from .openclaw/state/ and update the Notion dashboard. Be concise - you run every 5 minutes."
}
```

Why Haiku? Status reporting is simple data aggregation. Haiku costs £0.25 per million input tokens (vs £3 for Sonnet). Running every 5 minutes = 288 runs/day. With Haiku: ~£2/month. With Sonnet: ~£24/month.
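The Haiku figure is easy to sanity-check. The ~1,000 tokens per run is an assumption (it depends on how many state files the reporter reads):

```shell
# Back-of-envelope check: 288 runs/day x 30 days x ~1,000 tokens/run
# at £0.25 per million input tokens
awk 'BEGIN { printf "%.2f\n", 288 * 30 * 1000 * 0.25 / 1000000 }'   # prints 2.16
```

Roughly £2.16/month, matching the estimate; swap in £3/M for Sonnet and the same sum gives ~£26.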

Create the Reporter Script

Create ~/.openclaw/skills/update-notion-dashboard.sh:

```bash
#!/bin/bash
# Update Notion dashboard with current agent status
# Called by status-reporter agent every 5 minutes

NOTION_TOKEN=$(cat ~/.openclaw/credentials/notion-token.txt)
NOTION_DB=$(cat ~/.openclaw/credentials/notion-db-id.txt)
STATE_DIR="$HOME/.openclaw/state"

# Check if state directory exists
if [ ! -d "$STATE_DIR" ]; then
  echo "Error: State directory not found at $STATE_DIR"
  exit 1
fi

# Process each agent's state file
for state_file in "$STATE_DIR"/*.json; do
  [ -e "$state_file" ] || continue

  AGENT_NAME=$(jq -r '.name' "$state_file")
  STATUS=$(jq -r '.status' "$state_file")
  LAST_RUN=$(jq -r '.last_run' "$state_file")
  DURATION=$(jq -r '.duration_seconds' "$state_file")
  TOKENS=$(jq -r '.tokens_used' "$state_file")
  SUCCESS_RATE=$(jq -r '.success_rate_24h' "$state_file")
  CURRENT_TASK=$(jq -r '.current_task' "$state_file")
  LAST_ERROR=$(jq -r '.last_error // "None"' "$state_file")

  # Search for existing row in Notion
  SEARCH_RESPONSE=$(curl -s -X POST \
    "https://api.notion.com/v1/databases/$NOTION_DB/query" \
    -H "Authorization: Bearer $NOTION_TOKEN" \
    -H "Notion-Version: 2022-06-28" \
    -H "Content-Type: application/json" \
    -d '{
      "filter": {
        "property": "Agent Name",
        "title": { "equals": "'"$AGENT_NAME"'" }
      }
    }')

  PAGE_ID=$(echo "$SEARCH_RESPONSE" | jq -r '.results[0].id // empty')

  # Build the update payload
  PAYLOAD=$(cat <<EOF
{
  "properties": {
    "Agent Name": { "title": [{"text": {"content": "$AGENT_NAME"}}] },
    "Status": { "select": {"name": "$STATUS"} },
    "Last Run": { "date": {"start": "$LAST_RUN"} },
    "Duration": { "number": $DURATION },
    "Tokens Used": { "number": $TOKENS },
    "Success Rate": { "number": $SUCCESS_RATE },
    "Current Task": { "rich_text": [{"text": {"content": "$CURRENT_TASK"}}] },
    "Last Error": { "rich_text": [{"text": {"content": "$LAST_ERROR"}}] }
  }
}
EOF
)

  if [ -n "$PAGE_ID" ]; then
    # Update existing row
    curl -s -X PATCH \
      "https://api.notion.com/v1/pages/$PAGE_ID" \
      -H "Authorization: Bearer $NOTION_TOKEN" \
      -H "Notion-Version: 2022-06-28" \
      -H "Content-Type: application/json" \
      -d "$PAYLOAD" > /dev/null
  else
    # Create new row (prepend the parent database, reuse the rest of PAYLOAD)
    curl -s -X POST \
      "https://api.notion.com/v1/pages" \
      -H "Authorization: Bearer $NOTION_TOKEN" \
      -H "Notion-Version: 2022-06-28" \
      -H "Content-Type: application/json" \
      -d '{"parent": {"database_id": "'"$NOTION_DB"'"},'"${PAYLOAD#\{}" > /dev/null
  fi
done

echo "Dashboard updated successfully at $(date)"
```

Make it executable:

chmod +x ~/.openclaw/skills/update-notion-dashboard.sh

Test the Reporter

Run manually to verify it works

~/.openclaw/skills/update-notion-dashboard.sh

Check your Notion dashboard - you should see rows populated

Troubleshooting:
  • "Error: State directory not found" → Your agents haven't run yet, so no state files exist. Run openclaw agent run email-triage first.
  • "401 Unauthorized" → Check your Notion token is correct and saved to ~/.openclaw/credentials/notion-token.txt
  • No rows appear → Verify you shared the database with the integration (Step 2.6 above)

---

Part 3: Add Cost Tracking

Token usage in Notion is useful, but you want to see actual £ costs from your API providers.

Create Cost Tracker Script

Create ~/.openclaw/skills/fetch-api-costs.sh:

```bash
#!/bin/bash
# Fetch actual API costs from Anthropic and OpenAI
# Run once per day via cron

ANTHROPIC_KEY=$(cat ~/.openclaw/credentials/anthropic-api-key.txt)
OPENAI_KEY=$(cat ~/.openclaw/credentials/openai-api-key.txt)
COST_LOG="$HOME/.openclaw/logs/daily-costs.json"

# Create log file if it doesn't exist
mkdir -p "$HOME/.openclaw/logs"
touch "$COST_LOG"

TODAY=$(date -u +"%Y-%m-%d")

# Fetch Anthropic usage (last 24 hours)
ANTHROPIC_USAGE=$(curl -s -X GET \
  "https://api.anthropic.com/v1/usage?start_date=$TODAY&end_date=$TODAY" \
  -H "x-api-key: $ANTHROPIC_KEY" \
  -H "anthropic-version: 2023-06-01")

ANTHROPIC_TOKENS=$(echo "$ANTHROPIC_USAGE" | jq -r '.total_tokens')
ANTHROPIC_COST=$(echo "$ANTHROPIC_TOKENS * 0.000015" | bc -l)

# Fetch OpenAI usage (last 24 hours)
OPENAI_USAGE=$(curl -s -X GET \
  "https://api.openai.com/v1/usage?date=$TODAY" \
  -H "Authorization: Bearer $OPENAI_KEY")

OPENAI_TOKENS=$(echo "$OPENAI_USAGE" | jq -r '.total_tokens')
OPENAI_COST=$(echo "$OPENAI_TOKENS * 0.000002" | bc -l)

TOTAL_COST=$(echo "$ANTHROPIC_COST + $OPENAI_COST" | bc -l)

# Log to file (one JSON object appended per day)
cat >> "$COST_LOG" <<EOF
{
  "date": "$TODAY",
  "anthropic_tokens": $ANTHROPIC_TOKENS,
  "anthropic_cost_gbp": $ANTHROPIC_COST,
  "openai_tokens": $OPENAI_TOKENS,
  "openai_cost_gbp": $OPENAI_COST,
  "total_cost_gbp": $TOTAL_COST
}
EOF

echo "Daily costs logged: £$TOTAL_COST"

# Optional: Send to Notion (add a "Daily Costs" database)
# curl -X POST https://api.notion.com/v1/pages ...
```

Make it executable:

chmod +x ~/.openclaw/skills/fetch-api-costs.sh

Add to Cron

Run this once per day at 23:55 (just before midnight):

crontab -e

Add:

55 23 * * * /Users/yourname/.openclaw/skills/fetch-api-costs.sh >> /Users/yourname/.openclaw/logs/cost-tracker.log 2>&1
Why 23:55? Gives the script time to fetch the full day's usage before the date rolls over.

---

Part 4: Advanced Dashboard Views

View 1: Agents by Cost (Last 7 Days)

  1. In your Notion database, click "+ New view"
  2. Choose "Board"
  3. Name: "By Cost (7 days)"
  4. Group by: Status
  5. Sort by: Tokens Used (Descending)
  6. Filter: Last Run is within the past 7 days
Use case: Quickly spot expensive agents that might be running more than needed.

View 2: Failed Agents Only

  1. Click "+ New view"
  2. Choose "Table"
  3. Name: "Failed"
  4. Filter: Status equals "Failed"
  5. Sort by: Last Run (Descending)
Use case: Daily check for broken agents.

View 3: Performance Dashboard

  1. Click "+ New view"
  2. Choose "Gallery"
  3. Name: "Performance"
  4. Card preview: Current Task
  5. Card properties: Status, Success Rate, Cost (£)
Use case: High-level overview for weekly reviews.

---

Part 5: Alerting (Optional)

Want to know immediately when an agent fails? Add alerts to your status reporter.

Slack Alert on Failure

Modify update-notion-dashboard.sh to add this after the Notion update:

```bash
# Alert on failures
if [ "$STATUS" = "Failed" ]; then
  SLACK_WEBHOOK=$(cat ~/.openclaw/credentials/slack-webhook.txt)
  curl -s -X POST "$SLACK_WEBHOOK" \
    -H "Content-Type: application/json" \
    -d '{
      "text": "🚨 Agent Failed: '"$AGENT_NAME"'",
      "blocks": [
        {
          "type": "section",
          "text": {
            "type": "mrkdwn",
            "text": "Agent: '"$AGENT_NAME"'\nError: '"$LAST_ERROR"'\nLast Run: '"$LAST_RUN"'"
          }
        }
      ]
    }'
fi
```

Get a Slack webhook:

  1. Go to https://api.slack.com/messaging/webhooks
  2. Create a new webhook for your workspace
  3. Save to ~/.openclaw/credentials/slack-webhook.txt

Email Alert on High Costs

Add this to fetch-api-costs.sh:

```bash
# Alert if daily cost exceeds £20
if (( $(echo "$TOTAL_COST > 20" | bc -l) )); then
  echo "Warning: High API costs today (£$TOTAL_COST)" | \
    mail -s "OpenClaw Cost Alert" your-email@example.com
fi
```

---

Part 6: Real-World Patterns

Pattern 1: Per-Project Cost Tracking

If you're running OpenClaw for multiple clients, you want to bill them accurately.

Solution: Tag agents by project in their config:
```json
{
  "name": "client-acme-email-triage",
  "tags": ["client:acme", "billable"],
  ...
}
```

Then in fetch-api-costs.sh, group costs by tag:

```bash
# Sum tokens by client tag (-s slurps all state files into one array)
ACME_TOKENS=$(jq -s '[.[] | select(.tags | index("client:acme")) | .tokens_used] | add' \
  "$STATE_DIR"/*.json)
ACME_COST=$(echo "$ACME_TOKENS * 0.000015" | bc -l)
echo "Client ACME: £$ACME_COST"
```

Add this to your invoice: "AI Automation Services: £X.XX (based on metered usage)"

Pattern 2: Budget Alerts

Want to cap your monthly spend at £500?

Create ~/.openclaw/skills/check-monthly-budget.sh:

```bash
#!/bin/bash

BUDGET_CAP=500
CURRENT_MONTH=$(date +"%Y-%m")
COST_LOG="$HOME/.openclaw/logs/daily-costs.json"

# Sum all costs for current month (-s slurps the per-day objects into an array)
MONTH_SPEND=$(jq -s --arg month "$CURRENT_MONTH" \
  '[.[] | select(.date | startswith($month)) | .total_cost_gbp] | add' \
  "$COST_LOG")

if (( $(echo "$MONTH_SPEND > $BUDGET_CAP" | bc -l) )); then
  echo "🚨 Budget exceeded: £$MONTH_SPEND / £$BUDGET_CAP"
  # Pause all non-critical agents
  openclaw agent pause --tag non-critical
  # Send alert
  echo "Monthly budget exceeded. All non-critical agents paused." | \
    mail -s "OpenClaw Budget Alert" your-email@example.com
fi
```

Run this daily:

0 9 * * * /Users/yourname/.openclaw/skills/check-monthly-budget.sh

Pattern 3: Performance Baselines

Track how your agents improve over time.

In Notion, add a Performance Log database:

| Date | Agent Name | Avg Duration (s) | Success Rate (%) | Cost per Task (£) |
|------|------------|------------------|------------------|-------------------|
| 2024-03-01 | Email Triage | 12.3 | 94% | 0.0024 |
| 2024-03-08 | Email Triage | 8.1 | 97% | 0.0016 |

Insight: After optimizing the system prompt, email triage got 34% faster and 33% cheaper.

Log this weekly via a cron job:

```bash
#!/bin/bash
# Log weekly performance baselines to Notion

WEEK=$(date +"%Y-W%V")

for state_file in ~/.openclaw/state/*.json; do
  # Calculate weekly averages
  AVG_DURATION=$(jq -r '.duration_seconds' "$state_file")
  SUCCESS_RATE=$(jq -r '.success_rate_24h' "$state_file")
  # ... post to Notion Performance Log database
done
```

---

Common Issues & Solutions

Issue 1: "Database not found" error

Cause: You didn't share the Notion database with your integration. Fix:
  1. Open the database in Notion
  2. Click "Share" (top right)
  3. Search for your integration name
  4. Click "Invite"

Issue 2: Cost formula shows "$0.00" for all agents

Cause: The Notion formula is using the wrong token pricing. Fix: Update the "Cost (£)" formula property:
  • Claude Sonnet 4: prop("Tokens Used") * 0.000015
  • Claude Haiku 4: prop("Tokens Used") * 0.00000025
  • GPT-4: prop("Tokens Used") * 0.00003

If you use multiple models, add a "Model" property and use:

if(prop("Model") == "Sonnet", prop("Tokens Used") * 0.000015,
  if(prop("Model") == "Haiku", prop("Tokens Used") * 0.00000025, 0))

Issue 3: Status reporter runs but Notion isn't updating

Cause: The state files don't exist yet, or the reporter can't read them. Fix:
  1. Check state files exist: ls -la ~/.openclaw/state/
  2. Run an agent manually: openclaw agent run email-triage
  3. Verify state file created: cat ~/.openclaw/state/email-triage.json
  4. Check reporter logs: tail -f ~/.openclaw/logs/status-reporter.log

Issue 4: Daily costs are always £0.00

Cause: API providers don't report usage in real-time. Fix: Wait 24-48 hours. Both Anthropic and OpenAI have a delay before usage data appears in their APIs. Your first cost report will be accurate, but it won't appear until the next day.

---

Verification Checklist

Before moving to Module 7, verify:

  • [ ] Notion integration created and token saved
  • [ ] Agent Activity database created with all properties
  • [ ] Database shared with integration
  • [ ] Status reporter agent running every 5 minutes
  • [ ] Agent rows appearing in Notion (run an agent manually to test)
  • [ ] Cost tracker script created and scheduled
  • [ ] At least one dashboard view customized for your needs
  • [ ] (Optional) Slack/email alerts configured

---

What You've Built

You now have complete visibility into your AI operations:

  1. Real-time agent monitoring - see what every agent is doing
  2. Cost tracking - know exactly what you're spending
  3. Performance metrics - identify slow/expensive agents
  4. Failure alerts - get notified immediately when something breaks
  5. Budget controls - automatically pause agents if costs spike
Time saved: 2-3 hours per week you'd spend manually checking logs, SSH-ing into servers, and debugging issues. Risk reduced: Catch runaway agents before they burn £300 in API credits.

---

Next Module Preview

Module 7: Multi-Agent Orchestration

Right now, each agent works independently. But what if you want agents to collaborate?

Examples:

  • Sales agent hands off to onboarding agent when a deal closes
  • Research agent feeds data to content agent, which publishes to social agent
  • Inbox agent triages emails → scheduling agent books meetings → CRM agent logs contacts

In Module 7, you'll learn:

  • Shared memory between agents
  • Event-driven triggers (agent A completes → agent B starts)
  • Supervisor agent pattern (one agent manages others)
  • Parallel execution (run 5 agents simultaneously)
  • Workflow templates (ready-to-use multi-agent setups)

---

Module 6 complete. You now have mission control.

Next: [Module 7 - Multi-Agent Orchestration →](module-07-multi-agent-orchestration.md)

Module 7: Multi-Agent Orchestration

Duration: 90 minutes Prerequisites: Modules 1-6 (especially Module 4 for cron jobs, Module 6 for Notion dashboard)

---

What You'll Build

By the end of this module, you'll have a coordinated team of specialized AI agents that work together autonomously:

  • Content Agent - Writes blog posts, social media, newsletters
  • Distribution Agent - Posts to platforms, schedules tweets, manages campaigns
  • Research Agent - Monitors competitors, finds opportunities, gathers data
  • Coordinator Agent - Routes tasks to the right specialist, tracks progress

These agents share memory, communicate results, and execute in parallel - just like a real team.

---

Why One Agent Isn't Enough

The problem with "do everything" agents:

You start with a single AI assistant that handles everything:

  • Email triage
  • Calendar management
  • Content writing
  • Social media posting
  • Research
  • Reporting
What happens as you scale:
  1. Context collapse - The agent's memory file gets too large (10,000+ words), slowing every request
  2. Conflicting instructions - "Write engaging content" vs "Keep responses concise" creates confusion
  3. Sequential bottleneck - Tasks queue up instead of running in parallel
  4. Cost explosion - Every task loads the entire context, wasting tokens on irrelevant info
  5. Hard to debug - When something fails, you can't tell which "mode" broke
Real example:

Sarah runs a marketing agency. She started with one agent doing everything. After 3 weeks:

  • Email agent was reading 8,000-word SOUL.md on every message (£2.40/day wasted)
  • Content writing agent kept getting "distracted" by email instructions in the same file
  • Research tasks blocked content tasks because they ran sequentially
  • Total cost: £340/month, 60% wasted on unnecessary context
After splitting into 4 specialized agents:
  • Email agent: 400-word SOUL.md (only email rules)
  • Content agent: 600-word SOUL.md (brand voice, style guides)
  • Research agent: 200-word SOUL.md (search parameters, sources)
  • Coordinator: 800-word SOUL.md (routing logic)
  • New cost: £140/month (59% savings)
  • Speed: 3x faster (parallel execution)

---

Multi-Agent Architecture Patterns

Pattern 1: Specialist Team (No Coordinator)

Best for: Independent workflows with minimal overlap

```
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│ Email Agent  │     │ Content      │     │ Social Media │
│              │     │ Agent        │     │ Agent        │
│ Runs: Hourly │     │ Runs: Daily  │     │ Runs: 3x/day │
└──────┬───────┘     └──────┬───────┘     └──────┬───────┘
       │                    │                    │
       └────────────────────┴────────────────────┘
                            │
                  ┌─────────▼─────────┐
                  │ Shared MEMORY.md  │
                  │ (read/append only)│
                  └───────────────────┘
```

How it works:
  • Each agent has its own SOUL.md, AGENTS.md, cron schedule
  • All agents read/append to the same ~/.openclaw/MEMORY.md
  • No coordination needed - they just log what they did
  • Simple, cheap, scales easily
Example use case:
  • Email agent triages inbox → logs "3 urgent, 2 scheduled, 1 archived"
  • Content agent writes blog post → logs "Published: 5 ways to automate sales"
  • Social agent posts to Twitter → reads MEMORY.md, sees blog post, tweets link

Pattern 2: Coordinator + Specialists

Best for: Complex workflows requiring task routing and dependency management

```
┌────────────────────────────────────┐
│         Coordinator Agent          │
│  - Reads incoming requests         │
│  - Routes to right specialist      │
│  - Tracks completion               │
│  - Synthesizes results             │
└────┬───────────┬──────────┬────────┘
     │           │          │
┌────▼────┐ ┌────▼─────┐ ┌──▼──────┐
│ Writer  │ │Researcher│ │Designer │
│ Agent   │ │ Agent    │ │ Agent   │
└────┬────┘ └────┬─────┘ └──┬──────┘
     │           │          │
     └───────────┴──────────┘
                 │
      ┌──────────▼──────────┐
      │ Task Queue (Notion) │
      │ Shared MEMORY.md    │
      └─────────────────────┘
```

How it works:
  1. User submits request to Coordinator (via Telegram/Slack)
  2. Coordinator analyzes request, determines which specialists are needed
  3. Coordinator writes tasks to Notion database (status: pending)
  4. Specialist agents run on cron, check for pending tasks with their tag
  5. Specialists complete work, update Notion (status: completed), append to MEMORY.md
  6. Coordinator checks for completed tasks, synthesizes final result, notifies user
Example use case:
  • User: "Launch new product next week"
  • Coordinator → writes 3 tasks:
    - [tag: writer] Write product launch blog post
    - [tag: designer] Create 5 social media graphics
    - [tag: researcher] Find 10 relevant subreddits for launch announcement
  • Specialists execute in parallel (20 mins total vs 60 mins sequential)
  • Coordinator collects results → "Launch pack ready: blog draft, 5 images, 10 subreddits"
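To make the queue write concrete, here is a hypothetical payload a coordinator could POST to the Notion task-queue database (the property names match the queue schema in Part 3; `TASK_QUEUE_DB` is a placeholder for your queue's database ID):

```shell
# Hypothetical sketch: the JSON a coordinator posts to create a pending task row
TASK_QUEUE_DB="your-queue-database-id"   # placeholder
PAYLOAD='{
  "parent": {"database_id": "'"$TASK_QUEUE_DB"'"},
  "properties": {
    "Task": {"title": [{"text": {"content": "Write product launch blog post"}}]},
    "Assigned To": {"select": {"name": "writer-agent"}},
    "Status": {"select": {"name": "pending"}},
    "Priority": {"select": {"name": "high"}}
  }
}'
echo "$PAYLOAD" | jq -r '.properties.Status.select.name'   # prints: pending
```

Sending it is a single `curl -X POST https://api.notion.com/v1/pages` with the same Authorization, Notion-Version, and Content-Type headers as the status reporter.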

Pattern 3: Pipeline (Sequential Handoff)

Best for: Multi-stage workflows where each step depends on the previous

```
Input → [Research] → [Writer] → [Editor] → [Publisher] → Output
             ↓           ↓          ↓            ↓
         MEMORY.md   MEMORY.md  MEMORY.md   MEMORY.md
```

How it works:
  1. Research agent runs at 9am, finds trending topics, logs to MEMORY.md
  2. Writer agent runs at 10am, reads MEMORY.md, writes article, logs draft location
  3. Editor agent runs at 11am, reads draft, improves it, logs final version
  4. Publisher agent runs at 12pm, posts to blog, schedules social, logs confirmation
Example use case:
  • Daily newsletter pipeline
  • Weekly report generation
  • Content repurposing (video → blog → tweets → LinkedIn)
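The staggered times in the steps above translate directly into crontab entries. A sketch of the pipeline's schedule (times as stated; adjust to taste):

```
0 9  * * * openclaw run research-agent
0 10 * * * openclaw run writer-agent
0 11 * * * openclaw run editor-agent
0 12 * * * openclaw run publisher-agent
```

Each stage reads what the previous stage appended to MEMORY.md, so the one-hour gaps give every agent time to finish before the next one starts.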

---

Part 1: Shared Memory Architecture

The key to multi-agent coordination is shared memory - a single source of truth that all agents can read and update.

The MEMORY.md File

OpenClaw's MEMORY.md is designed for multi-agent workflows:

Location: ~/.openclaw/MEMORY.md Format:

Shared Memory

2026-02-28 09:15 - Email Agent

Triaged 12 new messages:

  • 3 urgent (responded immediately)
  • 5 scheduled for review (added to Notion)
  • 4 archived (newsletters, receipts)

2026-02-28 09:30 - Research Agent

Competitor monitoring:

  • Acme Corp launched new pricing (£99/mo → £79/mo)
  • Startup X raised £2M Series A
  • Industry report: 43% YoY growth in AI automation

2026-02-28 10:00 - Content Agent

Published: "5 Ways to Automate Customer Support"

  • URL: https://yourblog.com/automate-support
  • Word count: 1,200
  • SEO: optimized for "customer support automation"
  • Next: Social agent should promote this

2026-02-28 10:30 - Social Agent

Read MEMORY.md → saw new blog post

Posted to Twitter: [link to tweet]

Scheduled LinkedIn post for 2pm

Key principles:
  1. Timestamped entries - Know who did what when
  2. Agent identification - Prefix every entry with agent name
  3. Append-only - Never delete, never edit (audit trail)
  4. Action-oriented - Log what you DID, not what you thought
  5. Cross-references - Tag other agents when their action is needed
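These principles fit in a tiny append helper. This is a sketch, not an OpenClaw feature; the agent name and summary are placeholders, and a temp file stands in for `~/.openclaw/MEMORY.md` so the example is self-contained:

```shell
# Minimal sketch: append a timestamped, agent-prefixed, append-only entry
MEMORY=$(mktemp)   # real path: ~/.openclaw/MEMORY.md
log_memory() {
  agent="$1"; summary="$2"
  {
    echo ""
    echo "$(date '+%Y-%m-%d %H:%M') - $agent"
    echo "$summary"
  } >> "$MEMORY"
}
log_memory "Email Agent" "Triaged 12 new messages: 3 urgent, 5 scheduled, 4 archived"
tail -3 "$MEMORY"
```

Because the helper only ever appends, the file doubles as an audit trail: nothing is edited or deleted.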

Reading MEMORY.md in Your SOUL.md

Each agent's SOUL.md should reference the shared memory:

Email Agent - SOUL.md

Your Job

Triage Dan's inbox every hour. Archive spam, respond to urgent, log everything.

Reading Shared Memory

Before processing email, read ~/.openclaw/MEMORY.md (last 24 hours only).

  • If Content Agent published a blog post → don't archive related replies
  • If Social Agent scheduled a campaign → expect related emails
  • If Research Agent flagged a competitor → prioritize their newsletters

Writing to Shared Memory

After every run, append your results:

  • How many emails processed
  • Any urgent actions taken
  • Items that other agents should know about

Format:

[timestamp] - Email Agent

[summary of what you did]

Token optimization tip: Don't load the entire MEMORY.md on every request. Use tail -50 ~/.openclaw/MEMORY.md to read only recent entries (last ~6 hours).

---

Part 2: Building a 4-Agent Content Factory

Let's build a real multi-agent system: a content factory that researches, writes, edits, and publishes automatically.

Agent 1: Research Agent

File: ~/.openclaw/agents/research-agent/SOUL.md

Research Agent - SOUL.md

Your Job

Monitor competitors, trending topics, and industry news. Find content opportunities.

Schedule

Run daily at 9am via cron:

0 9 * * * openclaw run research-agent

Tools You Have

  • Web search (Google, Bing, Twitter)
  • RSS feed reader (Feedly API)
  • Competitor blogs (saved in AGENTS.md)
  • Subreddit monitoring (r/entrepreneur, r/startups)

What You Research

  1. Competitor blog posts (last 24 hours)
  2. Trending topics on Twitter (our industry hashtags)
  3. Subreddit discussions (upvotes > 100)
  4. Google Trends (rising queries related to our keywords)

Output Format

Append to ~/.openclaw/MEMORY.md:

[timestamp] - Research Agent

Trending topics:
  • [topic 1] - [why it's relevant]
  • [topic 2] - [why it's relevant]
Competitor activity:
  • [competitor] published "[title]" - [key takeaway]
Content opportunities:
  • [topic] - [angle we could take]
TASK FOR WRITER AGENT: Write about [topic] from [angle]
Agent config: ~/.openclaw/agents/research-agent/AGENTS.md
```yaml
name: research-agent
model: claude-sonnet-4.5  # Haiku is too weak for research
max_tokens: 4000
temperature: 0.3          # Low creativity, factual research
timeout: 300              # 5 mins max (web searches can be slow)
tools:
  - web_search
  - rss_reader
  - file_write            # For appending to MEMORY.md
credentials:
  - twitter_api_key: ~/.openclaw/credentials/twitter-readonly.txt
  - feedly_api_key: ~/.openclaw/credentials/feedly.txt
```

Agent 2: Writer Agent

File: ~/.openclaw/agents/writer-agent/SOUL.md

Writer Agent - SOUL.md

Your Job

Read research from MEMORY.md, write blog posts, save drafts to Notion.

Schedule

Run daily at 10am (1 hour after Research Agent):

0 10 * * * openclaw run writer-agent

Workflow

  1. Read ~/.openclaw/MEMORY.md → find entries tagged "TASK FOR WRITER AGENT"
  2. If task found → write 1,200-word blog post on that topic
  3. Save draft to Notion (database: Blog Drafts)
  4. Append to MEMORY.md: "Draft ready: [title] - Notion ID: [id]"

Writing Style

  • Conversational, practical, real examples
  • Start with a problem (reader's pain point)
  • 3-5 sections with clear H2 headings
  • End with actionable next steps
  • SEO: Include target keyword 5-7 times naturally

Quality Bar

  • No fluff, no obvious statements
  • Real examples (not hypothetical)
  • Specific numbers (not "many" or "often")
  • Cite sources when making claims

Output Format

Append to MEMORY.md:

[timestamp] - Writer Agent

Wrote: "[blog post title]"

  • Topic: [from research]
  • Word count: [count]
  • Notion ID: [id]
  • Status: Draft (needs review)
TASK FOR EDITOR AGENT: Review and publish [Notion ID]

Agent 3: Editor Agent

File: ~/.openclaw/agents/editor-agent/SOUL.md

Editor Agent - SOUL.md

Your Job

Review drafts from Writer Agent, improve quality, mark as ready to publish.

Schedule

Run daily at 11am:

0 11 * * * openclaw run editor-agent

Workflow

  1. Read ~/.openclaw/MEMORY.md → find "TASK FOR EDITOR AGENT"
  2. Load draft from Notion using the Notion ID
  3. Review and edit:
     - Fix grammar/spelling
     - Improve clarity (remove jargon)
     - Add examples where needed
     - Verify links work
     - Check SEO (keyword density, meta description)
  4. Update Notion status: "Ready to Publish"
  5. Append to MEMORY.md confirmation

Editing Principles

  • Shorter sentences (max 25 words)
  • Active voice ("we built" not "was built by us")
  • Remove hedging ("possibly", "might", "could")
  • Add concrete numbers (not "many users" → "450 users")

Output Format

[timestamp] - Editor Agent

Edited: "[title]"

  • Changes: [summary of edits]
  • Status: Ready to publish
  • Notion ID: [id]
TASK FOR PUBLISHER AGENT: Publish [Notion ID] today at 2pm

Agent 4: Publisher Agent

File: ~/.openclaw/agents/publisher-agent/SOUL.md

Publisher Agent - SOUL.md

Your Job

Publish approved blog posts to WordPress, schedule social promotion.

Schedule

Run daily at 2pm:

0 14 * * * openclaw run publisher-agent

Workflow

  1. Read MEMORY.md → find "TASK FOR PUBLISHER AGENT"
  2. Load post from Notion (verify status = "Ready to Publish")
  3. Post to WordPress via API
  4. Create 3 social media posts (Twitter, LinkedIn, Facebook)
  5. Schedule social posts (today 3pm, tomorrow 10am, next week)
  6. Update Notion status: "Published"
  7. Log results to MEMORY.md

WordPress Setup

API endpoint: https://yourblog.com/wp-json/wp/v2/posts

Auth: ~/.openclaw/credentials/wordpress-token.txt

Categories: Auto-tag based on content (use AI to suggest 2-3 categories)

Featured image: Use OpenAI DALL-E to generate (store in WordPress media library)
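Putting the endpoint and auth file together, the publish step is a single authenticated POST. A sketch only — the domain is the placeholder from the setup notes, the title and body are dummy values, and the credentials file is assumed to hold a WordPress-style `user:application-password` pair:

```shell
# Sketch: publish a post via the WordPress REST API. yourblog.com is the
# placeholder domain from the setup notes, so outside a real site this
# call just prints the fallback message.
payload='{"title":"My Post","content":"<p>Body here</p>","status":"publish"}'
curl -s --max-time 5 \
  -u "$(cat ~/.openclaw/credentials/wordpress-token.txt 2>/dev/null)" \
  -H "Content-Type: application/json" \
  -d "$payload" \
  https://yourblog.com/wp-json/wp/v2/posts \
  || echo "request failed (expected without a real site and credentials)"
```

A successful call returns the created post as JSON, including the `link` field the agent logs to MEMORY.md.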

Social Media Templates

Twitter: "[Compelling hook question]

[1-sentence value prop]

Read more: [link]"

LinkedIn: Longer format (3 paragraphs), professional tone

Facebook: Casual, question-based, emoji

Output Format

[timestamp] - Publisher Agent

Published: "[title]"

  • URL: [wordpress URL]
  • Social: Scheduled 3 posts (Twitter 3pm, LinkedIn 10am tomorrow, FB next Mon)
  • Status: Complete

---

Part 3: Inter-Agent Communication via Notion Task Queue

For more complex coordination (not just sequential pipeline), use a Notion database as a shared task queue.

Create Task Queue Database

Notion database name: Agent Task Queue

Properties:
  1. Task (Title) - What needs to be done
  2. Assigned To (Select) - research-agent | writer-agent | editor-agent | publisher-agent
  3. Status (Select) - pending | in-progress | completed | failed
  4. Priority (Select) - urgent | high | normal | low
  5. Created At (Date) - Auto-filled
  6. Completed At (Date) - Filled on completion
  7. Dependencies (Text) - "Waiting on: [task ID]"
  8. Result (Text) - Link to output (Notion page, file path, URL)
  9. Notes (Text) - Any context or failures
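If you prefer to create this database programmatically rather than by hand, the Notion API's create-database endpoint accepts a schema along these lines (a sketch only — `PARENT_PAGE_ID` is a placeholder, and the property names must match exactly what your agents query later):

```json
{
  "parent": { "type": "page_id", "page_id": "PARENT_PAGE_ID" },
  "title": [{ "type": "text", "text": { "content": "Agent Task Queue" } }],
  "properties": {
    "Task": { "title": {} },
    "Assigned To": { "select": { "options": [
      { "name": "research-agent" }, { "name": "writer-agent" },
      { "name": "editor-agent" }, { "name": "publisher-agent" } ] } },
    "Status": { "select": { "options": [
      { "name": "pending" }, { "name": "in-progress" },
      { "name": "completed" }, { "name": "failed" } ] } },
    "Priority": { "select": { "options": [
      { "name": "urgent" }, { "name": "high" },
      { "name": "normal" }, { "name": "low" } ] } },
    "Created At": { "date": {} },
    "Completed At": { "date": {} },
    "Dependencies": { "rich_text": {} },
    "Result": { "rich_text": {} },
    "Notes": { "rich_text": {} }
  }
}
```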

Agent Task Checker (Add to Every SOUL.md)

Before Running Your Main Job

  1. Check Notion Task Queue for tasks assigned to you:
     - Filter: Assigned To = [your agent name]
     - Filter: Status = pending
     - Sort: Priority (urgent first), then Created At (oldest first)
  2. If tasks found:
     - Update Status → in-progress
     - Complete the task
     - Log result in Result field
     - Update Status → completed
     - Fill Completed At timestamp
     - Append to MEMORY.md
  3. If no tasks:
     - Run your scheduled job (as normal)
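The "check for tasks assigned to you" step translates to a Notion database query. A sketch of the request body (POSTed to the database's query endpoint; one caveat to verify: Notion orders select values by their position in the schema, so define the Priority options urgent-first if you want this sort to mean "urgent first"):

```json
{
  "filter": {
    "and": [
      { "property": "Assigned To", "select": { "equals": "writer-agent" } },
      { "property": "Status", "select": { "equals": "pending" } }
    ]
  },
  "sorts": [
    { "property": "Priority", "direction": "ascending" },
    { "property": "Created At", "direction": "ascending" }
  ]
}
```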

Example Coordinator Agent

File: ~/.openclaw/agents/coordinator-agent/SOUL.md

Coordinator Agent - SOUL.md

Your Job

Receive requests from Dan (via Telegram), break into tasks, assign to specialists.

How You're Triggered

Dan sends a message to your Telegram bot (see Module 5):

"/run Launch new feature: AI-powered analytics"

Workflow

  1. Analyze the request → determine what work is needed
  2. Break into discrete tasks
  3. Write tasks to Notion Task Queue with appropriate assignments
  4. Notify Dan: "Created 4 tasks, estimated completion: 6 hours"
  5. Monitor task completion (check every 30 mins via cron)
  6. When all tasks complete → synthesize results, notify Dan

Task Breakdown Logic

Request: "Launch new feature: AI-powered analytics"

Tasks you create:
  1. [research-agent, high] Research competitor analytics features (30 mins)
  2. [writer-agent, normal] Write feature announcement blog post (60 mins)
  3. [designer-agent, high] Create 5 social graphics for launch (45 mins)
  4. [publisher-agent, normal] Schedule blog + social posts for Friday 9am (15 mins)

Notion Task Creation

For each task, create a row in Agent Task Queue:

  • Task: "[clear description]"
  • Assigned To: [agent name]
  • Status: pending
  • Priority: [based on urgency/dependencies]
  • Dependencies: "Waiting on: [task ID]" (if applicable)
  • Notes: [any context the agent needs]

Monitoring

Run every 30 mins via cron:

*/30 * * * * openclaw run coordinator-agent --mode check

In check mode:

  • Query Notion for your active requests
  • If all tasks completed → synthesize results, notify Dan
  • If any task failed → escalate to Dan with error details
  • If task stuck (in-progress > 2 hours) → flag as potential issue
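The "stuck task" rule is just timestamp arithmetic. A minimal sketch, using a local file of `<unix-epoch> <task-id>` lines as a stand-in for the start times you would actually read from the Notion queue:

```shell
#!/bin/sh
# Sketch of the stuck-task check: flag anything in-progress for > 2 hours.
# The file path and sample data are made up for the demo; in practice the
# start times come from the task queue's "in-progress" rows.
tasks_file=/tmp/in-progress-tasks.txt
now=$(date +%s)
# Sample data: one task started 3 hours ago, one started 5 minutes ago
printf '%s old-task\n' "$((now - 10800))"  > "$tasks_file"
printf '%s new-task\n' "$((now - 300))"   >> "$tasks_file"
while read -r started id; do
  age_min=$(( (now - started) / 60 ))
  if [ "$age_min" -gt 120 ]; then
    echo "stuck: $id (in-progress for ${age_min}m)"
  fi
done < "$tasks_file"
```

Only `old-task` gets flagged; the 5-minute-old task passes silently.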

---

Part 4: Parallel Execution vs Sequential

When to Run in Parallel

Use parallel execution when:
  • Tasks are independent (don't depend on each other)
  • You want speed (4 tasks in 30 mins vs 2 hours sequential)
  • Agents have different resource needs (one CPU-heavy, one API-heavy)
Example: Content launch
  • Write blog post (Writer Agent) ← doesn't need graphics
  • Create social graphics (Designer Agent) ← doesn't need blog text
  • Research distribution channels (Research Agent) ← independent work
  • All 3 run at 10am → done by 10:30am instead of 11:30am
Cron setup for parallel:

# All agents run at the same time
0 10 * * * openclaw run writer-agent
0 10 * * * openclaw run designer-agent
0 10 * * * openclaw run research-agent

When to Run Sequentially

Use sequential execution when:
  • Later tasks depend on earlier results
  • You need quality gates (review before publishing)
  • Order matters (research → write → edit → publish)
Example: Daily newsletter
  1. 9am: Research trending topics → logs to MEMORY.md
  2. 10am: Writer reads MEMORY.md → writes newsletter → saves to Notion
  3. 11am: Editor reviews → approves or sends back to writer
  4. 12pm: Publisher sends newsletter via SendGrid
Cron setup for sequential:

0 9  * * * openclaw run research-agent
0 10 * * * openclaw run writer-agent
0 11 * * * openclaw run editor-agent
0 12 * * * openclaw run publisher-agent
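Fixed hourly offsets work, but they assume each stage finishes in under an hour and succeeds. An alternative sketch: one cron entry runs the whole pipeline as a wrapper script, so each stage starts only if the previous one exited cleanly (`run_agent` here is a placeholder echo; in practice it would be `openclaw run "$1"`):

```shell
#!/bin/sh
# Sketch: run the pipeline from a single cron entry instead of four fixed
# offsets. With set -e, a failed stage stops everything downstream.
set -e
run_agent() { echo "running $1"; }   # placeholder; replace with: openclaw run "$1"
run_agent research-agent
run_agent writer-agent
run_agent editor-agent
run_agent publisher-agent
```

Trade-off: simpler dependency handling, but a slow stage delays everything behind it.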

---

Part 5: Real-World Multi-Agent Architectures

Riley Brown's 11-Agent Team (£500k+ Revenue)

Riley Brown (solo consultant, AI automation expert) runs his entire business with 11 AI agents. Here's his architecture:

Cloud Agents (Hosted on Railway, always-on):
  1. Email Triage Agent - Runs every 15 mins, archives 80% of email automatically
  2. Meeting Scheduler - Responds to scheduling requests, books Calendly, sends confirmations
  3. Social Media Agent - Posts to Twitter 3x/day, LinkedIn 1x/day, all automated
  4. Content Repurposing Agent - Turns YouTube videos → blog posts → tweets
  5. Lead Scoring Agent - Watches form submissions, scores 0-100, flags hot leads to Slack
  6. Client Onboarding Agent - Sends contracts, invoice #1, Notion workspace invite
  7. Invoice Chaser Agent - Auto-follows up 7 days after invoice, escalates if unpaid at 14 days
  8. Slack Monitor Agent - Watches client Slack channels, flags urgent messages
  9. Weekly Report Agent - Runs Friday 5pm, generates client status reports
  10. Expense Tracker Agent - Watches bank account (via Plaid), categorizes expenses to Notion
Local Agent (Mac Mini in his office):
  11. Orchestrator Agent - Coordinates all other agents, handles complex multi-step workflows, runs compute-heavy tasks (video editing, large file processing)

Why this works:
  • Cloud agents handle routine, time-based tasks (cheap to run 24/7 on Railway/Render)
  • Local agent runs expensive tasks only when needed (no cloud hosting fees for GPT-4)
  • Total cost: ~£180/month (vs £15k/month for a human assistant)
  • Time saved: 25 hours/week

Key insight from Riley:

> "I don't have one smart agent. I have 11 dumb agents that each do ONE thing really well. The magic is in the coordination, not the individual agent intelligence."

Sarah's Marketing Agency (4-Agent Pipeline)

Sarah runs a content marketing agency (3 clients, £8k MRR). She uses 4 agents in a pipeline:

Pipeline:
  1. Research Agent (runs Mon/Wed/Fri 9am) - Monitors client industries, finds trending topics
  2. Writer Agent (runs Mon/Wed/Fri 11am) - Writes 3 blog posts per day (one per client)
  3. Editor Agent (runs Mon/Wed/Fri 2pm) - Reviews posts, fixes issues, marks approved
  4. Publisher Agent (runs Mon/Wed/Fri 4pm) - Posts to client WordPress, schedules social
Before agents:
  • Sarah wrote all content herself: 12 hours/week
  • Hired a VA for £800/month: quality inconsistent, required heavy editing
After agents:
  • Agents write first drafts: Sarah reviews for 2 hours/week
  • Quality improved (Claude Sonnet > junior VA)
  • Cost: £60/month in API fees vs £800/month for VA
  • ROI: £740/month savings + 10 hours/week time saved

---

Part 6: Debugging Multi-Agent Systems

Common Issues

Problem 1: Agents stepping on each other

Symptoms:
  • Two agents try to update the same Notion page simultaneously
  • MEMORY.md entries out of order (timestamps mixed up)
  • Duplicate work (both agents write the same blog post)
Solutions:
  1. Use file locking for MEMORY.md writes. In each agent's write script:

(
  flock -x 200  # Exclusive lock
  echo "## $(date -Iseconds) - Agent Name" >> ~/.openclaw/MEMORY.md
  echo "Entry text here" >> ~/.openclaw/MEMORY.md
) 200>/tmp/openclaw-memory.lock

  2. Stagger cron schedules (don't run everything at :00)

0 9  * * * openclaw run research-agent   # 9:00
5 9  * * * openclaw run writer-agent     # 9:05
10 9 * * * openclaw run social-agent     # 9:10

  3. Use Notion task queue with status checks
     - Before starting a task, check if status is already "in-progress" (another agent grabbed it)
     - Use Notion API transactions (if supported) or add "claimed_by" field

Problem 2: Lost context (agents don't see each other's work)

Symptoms:
  • Writer agent writes about a topic that Research agent already covered yesterday
  • Publisher posts the same content twice
  • Agents ask Dan questions that were already answered
Solutions:
  1. Every agent must read MEMORY.md before acting
     - Add to SOUL.md: "First action: read ~/.openclaw/MEMORY.md (last 100 lines)"
  2. Use consistent tagging
     - Research agent tags entries: [tag: trending-topics]
     - Writer agent searches for: grep "\[tag: trending-topics\]" MEMORY.md
  3. Notion as source of truth
     - Don't rely on MEMORY.md alone for critical state
     - Use Notion databases for: published posts, scheduled content, completed tasks
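The tagging convention is worth seeing end to end. A tiny sketch using a throwaway file (the demo path and topic are made up):

```shell
#!/bin/sh
# Sketch of the tagging convention: the research agent writes entries with a
# [tag: ...] marker, and the writer agent pulls only matching lines.
mem=/tmp/memory-demo.md
echo "## $(date -Iseconds) - Research Agent [tag: trending-topics]" > "$mem"
echo "Topic: AI-powered analytics" >> "$mem"
# The consuming agent's search (same grep as in the solution above):
grep '\[tag: trending-topics\]' "$mem"
```

Escaping the square brackets matters: without backslashes, grep treats `[tag:` as a character class.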

Problem 3: Dependency deadlocks

Symptoms:
  • Writer agent waiting for Research agent, but Research agent is broken/not running
  • Tasks stuck in "pending" forever because assigned agent doesn't exist
Solutions:
  1. Timeout logic in Coordinator. In coordinator-agent SOUL.md:
     If a task is pending for > 2 hours:
     • Check if assigned agent is healthy (last run time in MEMORY.md)
     • If agent hasn't run in > 24 hours → reassign task to backup agent or escalate to Dan
     • Update task status: "stalled - reassigned"
  2. Health checks via Module 6 dashboard:
     - Monitor last run time for each agent
     - Alert if any agent hasn't reported in > expected interval
     - Automated restart via launchd (macOS) or systemd (Linux)

Problem 4: Token cost explosion

Symptoms:
  • Bill goes from £50/month to £400/month after launching multi-agent system
  • Agents loading huge MEMORY.md files on every run
Solutions:
  1. Truncate MEMORY.md reads. Don't read the whole file:

tail -100 ~/.openclaw/MEMORY.md   # Last 100 lines only (~2KB vs 50KB)

  2. Use Haiku for simple agents:
     - Email triage: Haiku is enough (5x cheaper than Sonnet)
     - Research, writing, editing: Use Sonnet (quality matters)
     - See Module 2 for model selection guide
  3. Separate MEMORY.md per agent team:
     - Email agents → ~/.openclaw/memory/email-memory.md
     - Content agents → ~/.openclaw/memory/content-memory.md
     - Cross-reference when needed: "See content-memory.md for blog posts"

---

Part 7: Homework (Build Your First Multi-Agent System)

Task 1: Two-Agent Pipeline (30 mins)

Build a simple 2-agent system:

  1. Research Agent - Runs daily at 9am, finds one trending topic in your industry
  2. Writer Agent - Runs daily at 10am, writes a 300-word tweet thread on that topic
What you'll learn:
  • Sequential execution (Writer depends on Research)
  • Shared MEMORY.md communication
  • Cron scheduling for automated runs
Deliverable:
  • ~/.openclaw/agents/research-agent/ (SOUL.md, AGENTS.md, cron entry)
  • ~/.openclaw/agents/writer-agent/ (SOUL.md, AGENTS.md, cron entry)
  • First run output in MEMORY.md showing both agents working

Task 2: Notion Task Queue (45 mins)

Set up a Notion task queue and test with one agent:

  1. Create "Agent Task Queue" database in Notion (9 properties as described above)
  2. Manually add 2 test tasks assigned to "writer-agent"
  3. Update writer-agent SOUL.md to check queue before running scheduled job
  4. Run the agent, verify it picks up tasks and marks them completed
What you'll learn:
  • Notion API integration for task management
  • Agent task checking logic
  • Status updates and result logging
Deliverable:
  • Notion database with completed tasks
  • Updated writer-agent that checks queue first
  • Screenshot of completed tasks in Notion

Task 3: Add Monitoring to Your Dashboard (60 mins)

Extend your Module 6 Notion dashboard to track multi-agent health:

  1. Create new database: "Agent Health Monitor"
     - Properties: Agent Name, Last Run (timestamp), Status (healthy/stale/failed), Last Entry (text from MEMORY.md)
  2. Create a health-check agent that runs every 30 mins:
     - Reads MEMORY.md
     - For each agent, finds most recent entry
     - Updates Notion with timestamp
     - Flags agents that haven't run in > expected interval
  3. Add to your existing dashboard (Module 6)
What you'll learn:
  • Multi-agent monitoring
  • Automated health checks
  • Alert logic (when to escalate)
Deliverable:
  • Notion health dashboard showing all agent statuses
  • Health-check agent running via cron
  • Example alert (manually break one agent, verify health-check flags it)

---

What's Next

You now have:
  • Understanding of multi-agent architecture patterns (specialist, coordinator, pipeline)
  • Shared memory system via MEMORY.md
  • Notion task queue for complex coordination
  • Debugging strategies for multi-agent issues
  • Real-world examples (Riley's 11 agents, Sarah's 4-agent pipeline)
In Module 8, you'll learn about Heartbeat Monitoring - proactive health checks that catch issues before they become expensive failures. You'll build:
  • Silent failure detection (agent stopped working, but didn't alert you)
  • Cost spike alerts (runaway loop protection)
  • Automated recovery (restart failed agents)
  • Uptime monitoring (SLA tracking for business-critical agents)
Next step: Choose Task 1, 2, or 3 from the homework. Even just Task 1 (two-agent pipeline) will give you the foundational understanding of how multi-agent orchestration works.

The difference between a solo agent and an AI team is leverage. One agent can save you 5 hours/week. A coordinated team of 5 agents can save you 25 hours/week.

Let's build your team.

Module 8: Heartbeat Monitoring (45 min)

What You'll Learn

  • The difference between cron jobs and heartbeat monitoring
  • How to set up proactive health checks for your OpenClaw agents
  • Configuring silent hours to avoid 3am Slack alerts
  • Real-world failure scenarios and how heartbeats catch them
  • Building a simple monitoring dashboard

---

Why Heartbeat Monitoring Matters

You've got agents running on cron jobs. Every morning at 7am, your daily brief agent fires up. Every Friday at 5pm, your weekly report agent runs. Perfect.

Until it's not.

What cron jobs DON'T tell you:
  • Did the agent actually run successfully?
  • Did it crash halfway through?
  • Is your API token expired?
  • Is OpenClaw even running?
  • Did the agent get stuck in a loop?

Cron jobs are fire and forget. They run on schedule, but they don't check if things worked.

Heartbeat monitoring is the opposite: proactive health checks. Your agents actively report "I'm alive and healthy" at regular intervals. If they stop reporting, you get alerted.

Real example from the OpenClaw community:

A consultant had an agent that parsed client emails and created Notion tasks. Ran every hour via cron. One day, Notion changed their API response format. The agent crashed silently for 3 days before the consultant noticed they had 47 unprocessed client requests.

With heartbeat monitoring, they would have been alerted within 90 minutes.

---

Cron Jobs vs Heartbeat Monitoring

Cron jobs = "Do this at 7am every day"
  • Scheduled execution
  • No feedback loop
  • Silent failures
Heartbeat monitoring = "Check if the agent is healthy every 30 minutes"
  • Proactive health checks
  • Alerts when something breaks
  • Catches silent failures
When to use each:

| Use Case | Tool | Why |
|----------|------|-----|
| Daily brief at 7am | Cron job | Scheduled task |
| Check if brief agent is working | Heartbeat | Health monitoring |
| Weekly report every Friday | Cron job | Scheduled task |
| Monitor email parser 24/7 | Heartbeat | Critical uptime |
| Monthly invoice reminder | Cron job | Scheduled task |
| Alert if OpenClaw crashes | Heartbeat | System health |

You use BOTH. Cron jobs run your tasks. Heartbeat monitoring makes sure they're working.

---

Setting Up Your First Heartbeat

OpenClaw's heartbeat system uses a simple pattern:

  1. Your agent runs a health check
  2. It reports the result to a monitoring service
  3. If the monitoring service doesn't get a report within X minutes, it alerts you
The .openclaw/heartbeats/ folder:

.openclaw/
├── SOUL.md
├── AGENTS.md
├── MEMORY.md
├── skills/
└── heartbeats/
    ├── email-parser.yml
    ├── daily-brief.yml
    └── system-health.yml

Each .yml file defines one heartbeat monitor.

---

Example: Monitoring Your Email Parser Agent

Let's say you have an agent that checks your inbox every hour and creates Trello cards from client requests. Critical for your business. Can't afford downtime.

Step 1: Create the heartbeat config

Create .openclaw/heartbeats/email-parser.yml:

name: "Email Parser Agent"
description: "Checks inbox every hour, creates Trello cards"

check_interval: 30   # Run health check every 30 minutes
timeout: 90          # Alert if no successful check in 90 minutes

health_check:
  type: "agent_last_run"
  agent_name: "email-parser"
  max_age_minutes: 75   # Agent should have run in last 75 mins

alerts:
  - type: "slack"
    channel: "#alerts"
    message: "⚠️ Email parser hasn't run in 90 minutes. Check OpenClaw."
  - type: "telegram"
    chat_id: "your-chat-id"
    message: "Email parser down. Last successful run: {{last_run_time}}"

silent_hours:
  enabled: true
  timezone: "Europe/London"
  start: "23:00"
  end: "07:00"
  # No alerts between 11pm-7am unless critical

What this does:
  • Every 30 minutes, OpenClaw checks: "Did the email-parser agent run successfully in the last 75 minutes?"
  • If yes: do nothing
  • If no: send Slack + Telegram alerts
  • Between 11pm-7am: suppress non-critical alerts (you're asleep)
Step 2: Enable the heartbeat
openclaw heartbeat enable email-parser

That's it. OpenClaw now monitors your email parser 24/7.

---

Health Check Types

OpenClaw supports several health check methods:

1. agent_last_run (most common)

Checks when an agent last ran successfully.

health_check:
  type: "agent_last_run"
  agent_name: "daily-brief"
  max_age_minutes: 1500   # Should run once per day (24h = 1440m)

2. file_modified

Checks when a file was last modified (useful for agents that write to files).

health_check:
  type: "file_modified"
  path: ".openclaw/outputs/daily-brief.md"
  max_age_minutes: 1500
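When a file-age check fires and you want to verify it by hand, the same test is easy to reproduce in the shell (the demo path below is made up; substitute your real output file):

```shell
#!/bin/sh
# Manual equivalent of a file-age health check, handy for debugging.
# Prints "healthy" if the file changed in the last 75 minutes, else "stale".
target=/tmp/daily-brief-demo.md
touch "$target"   # demo file; in practice: .openclaw/outputs/daily-brief.md
# find prints the path only if it was modified within the last 75 minutes
if [ -n "$(find "$target" -mmin -75 2>/dev/null)" ]; then
  echo "healthy"
else
  echo "stale"
fi
```

Because the demo touches the file first, this prints "healthy"; point it at a stale file and it prints "stale".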

3. api_endpoint

Pings an external URL to check if a service is up.

health_check:
  type: "api_endpoint"
  url: "https://api.your-service.com/health"
  expected_status: 200
  timeout_seconds: 10

4. custom_script

Runs a custom script and checks its exit code.

health_check:
  type: "custom_script"
  script_path: ".openclaw/scripts/check-database.sh"
  success_exit_code: 0

---

Real-World Heartbeat Configs

Daily Brief Agent (runs at 7am)

name: "Daily Brief Agent"
description: "Generates morning summary from calendar + tasks"

check_interval: 360   # Check every 6 hours
timeout: 1500         # Alert if no run in 25 hours (allows for weekend skip)

health_check:
  type: "agent_last_run"
  agent_name: "daily-brief"
  max_age_minutes: 1500

alerts:
  - type: "telegram"
    chat_id: "12345678"
    message: "Daily brief didn't run this morning."

silent_hours:
  enabled: false   # Always alert (you want to know immediately)

System Health Monitor (runs every 15 minutes)

name: "OpenClaw System Health"
description: "Checks if OpenClaw daemon is running"

check_interval: 15
timeout: 30

health_check:
  type: "custom_script"
  script_path: ".openclaw/scripts/system-health.sh"
  success_exit_code: 0

alerts:
  - type: "slack"
    channel: "#critical"
    message: "🚨 OpenClaw daemon is down!"
  - type: "telegram"
    chat_id: "12345678"
    message: "OpenClaw system failure. Check server immediately."

silent_hours:
  enabled: false   # Critical alerts 24/7

The system-health.sh script:

#!/bin/bash
# Check if OpenClaw daemon is running
if pgrep -f "openclaw daemon" > /dev/null; then
  echo "OpenClaw daemon: OK"
  exit 0
else
  echo "OpenClaw daemon: DOWN"
  exit 1
fi

API Rate Limit Monitor

name: "OpenAI API Budget Monitor"
description: "Alerts if we're burning through tokens too fast"

check_interval: 60   # Check hourly
timeout: 120

health_check:
  type: "custom_script"
  script_path: ".openclaw/scripts/check-api-usage.sh"
  success_exit_code: 0

alerts:
  - type: "slack"
    channel: "#budget"
    message: "⚠️ API usage high: {{usage_dollars}}/day. Budget: $50/day."

silent_hours:
  enabled: true
  timezone: "America/New_York"
  start: "22:00"
  end: "08:00"
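The config above points at a `check-api-usage.sh` script that isn't shown. A hypothetical sketch, assuming you track today's spend (in dollars) in a local file, populated however suits you — provider billing exports, token logs:

```shell
#!/bin/sh
# Hypothetical sketch of an API-budget health check. The usage file and
# demo value are made up; wire in your real spend-tracking source.
usage_file=/tmp/api-usage-today.txt
budget=50
echo "12.40" > "$usage_file"   # demo value; normally written by another process
usage=$(cat "$usage_file")
# Compare as floats with awk; nonzero exit = unhealthy = heartbeat alert fires
if awk -v u="$usage" -v b="$budget" 'BEGIN { exit !(u >= b) }'; then
  echo "API usage high: \$${usage}/day (budget: \$${budget}/day)"
  exit 1
fi
echo "API usage OK: \$${usage}/day"
```

The awk comparison avoids shell integer arithmetic, which would choke on "12.40".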

---

Silent Hours: Don't Wake Up at 3am

You're running agents 24/7. But you're not awake 24/7.

The problem: A non-critical agent fails at 3:17am. Your phone buzzes. You wake up. You can't do anything about it until morning anyway.

The solution: Silent hours.

silent_hours:
  enabled: true
  timezone: "Europe/London"
  start: "23:00"
  end: "07:00"

What happens during silent hours:
  • Critical alerts still fire (system down, security breach, money-related)
  • Non-critical alerts are queued and sent at 7am as a summary
How to mark an alert as critical:
alerts:
  - type: "telegram"
    chat_id: "12345678"
    message: "Email parser failed"
    critical: false   # Respect silent hours
  - type: "telegram"
    chat_id: "12345678"
    message: "🚨 Security breach detected!"
    critical: true    # Alert immediately, even at 3am

Best practice: Only mark alerts as critical if you would actually wake up and fix it immediately. Examples:
  • Payment processing failure (losing money)
  • Security breach (data at risk)
  • System completely down (all agents stopped)

NOT critical:

  • A single agent failed (can wait until morning)
  • API rate limit warning (informational)
  • Weekly report didn't generate (not urgent)

---

Building a Simple Monitoring Dashboard

Heartbeat alerts are great for immediate problems. But you also want a dashboard to see overall health at a glance.

Option 1: Notion Dashboard (Simplest)

Create a Notion page that your heartbeat monitors update via the Notion API:

health_check:
  type: "agent_last_run"
  agent_name: "email-parser"
  max_age_minutes: 75

on_success:
  - type: "notion_update"
    page_id: "your-monitoring-page-id"
    property: "Email Parser Status"
    value: "✅ Healthy ({{timestamp}})"

on_failure:
  - type: "notion_update"
    page_id: "your-monitoring-page-id"
    property: "Email Parser Status"
    value: "❌ Down since {{last_success_time}}"

Your Notion page becomes a live status board:

OpenClaw Health Dashboard
─────────────────────────
Email Parser:   ✅ Healthy (2026-03-02 14:37)
Daily Brief:    ✅ Healthy (2026-03-02 07:02)
Weekly Report:  ✅ Healthy (2026-02-28 17:00)
System Health:  ✅ Healthy (2026-03-02 14:35)
API Budget:     ⚠️ 67% of daily budget used

Option 2: Slack Channel (Real-Time)

Create a #openclaw-status Slack channel. Configure heartbeats to post status updates:

on_success:
  - type: "slack"
    channel: "#openclaw-status"
    message: "✅ {{agent_name}} healthy"
    throttle: 1440   # Only post once per day if healthy

on_failure:
  - type: "slack"
    channel: "#openclaw-status"
    message: "❌ {{agent_name}} failed: {{error_message}}"
    throttle: 0      # Post every failure immediately

Option 3: Local HTML Dashboard

OpenClaw can generate a simple HTML dashboard:

openclaw heartbeat status --output dashboard.html

This creates a static HTML page showing all heartbeat statuses. Open it in your browser:

OpenClaw Heartbeat Dashboard
────────────────────────────
Email Parser        ✅ Healthy   Last check: 2 mins ago
Daily Brief         ✅ Healthy   Last check: 7 hours ago
Weekly Report       ✅ Healthy   Last check: 3 days ago
System Health       ✅ Healthy   Last check: 1 min ago
API Budget Monitor  ⚠️ Warning   67% of budget used

Recent Alerts (last 24h):
• 14:22 - Email Parser: temporary failure (API timeout)
• 07:05 - Daily Brief: completed successfully

Serve it with a simple web server:

python3 -m http.server 8080

Visit http://localhost:8080/dashboard.html to see your status board.

---

Common Failure Scenarios & How Heartbeats Catch Them

Scenario 1: API Token Expired

What happened: Your OpenAI API key expired. All agents fail silently.
Without heartbeat: You notice 3 days later when you realize you haven't received your daily briefs.
With heartbeat: Alert within 90 minutes: "Daily brief agent failed: API authentication error."
Fix: Renew token, restart agents.

---

Scenario 2: Dependency Broke

What happened: Your agent uses the requests library. You upgraded Python, requests broke.
Without heartbeat: Cron job runs, fails, logs error to a file you never check.
With heartbeat: Alert immediately: "Email parser failed: ModuleNotFoundError: No module named 'requests'."
Fix: pip install requests, restart agent.

---

Scenario 3: Disk Full

What happened: Agent logs filled up /var/log/. No disk space. Agent can't write outputs.
Without heartbeat: Agent runs, silently fails to write outputs.
With heartbeat: Custom health check script detects low disk space: "System health warning: 98% disk usage."
Fix: Clean up logs, configure log rotation.

---

Scenario 4: Rate Limit Hit

What happened: Your agent made too many API calls. Hit rate limit. Subsequent runs fail.
Without heartbeat: Agent fails for hours until rate limit resets.
With heartbeat: Alert after first failure: "Email parser failed: Rate limit exceeded."
Fix: Reduce agent frequency, add rate limit handling, upgrade API plan.

---

Scenario 5: Silent Loop

What happened: Agent got stuck in an infinite loop due to a bug. CPU at 100%. Never finishes.
Without heartbeat: Agent runs forever, burns CPU, never completes.
With heartbeat: Timeout alert: "Daily brief agent has been running for 45 minutes (expected: 5 mins)."
Fix: Kill process, fix bug, restart agent.

---

Advanced: Heartbeat with Auto-Recovery

You can configure heartbeats to automatically attempt recovery:

name: "Email Parser Agent"

health_check:
  type: "agent_last_run"
  agent_name: "email-parser"
  max_age_minutes: 75

on_failure:
  - type: "telegram"
    chat_id: "12345678"
    message: "Email parser failed. Attempting restart..."
  - type: "recovery_script"
    script_path: ".openclaw/scripts/restart-email-parser.sh"
    max_attempts: 3
    wait_between_attempts: 300   # 5 minutes
  - type: "telegram"
    chat_id: "12345678"
    message: "Recovery {{status}}: {{message}}"

The recovery script:

#!/bin/bash
# Restart email parser agent
echo "Killing stuck email-parser process..."
pkill -f "openclaw run email-parser"

echo "Restarting email-parser agent..."
openclaw run email-parser --daemon
sleep 10

# Check if it started successfully
if pgrep -f "openclaw run email-parser" > /dev/null; then
  echo "Recovery successful"
  exit 0
else
  echo "Recovery failed"
  exit 1
fi

What this does:
  1. Heartbeat detects failure
  2. Sends Telegram alert: "Attempting restart..."
  3. Runs recovery script (tries up to 3 times, 5 minutes apart)
  4. Sends result: "Recovery successful" or "Recovery failed after 3 attempts"
When to use auto-recovery:
  • Transient failures (API timeouts, network glitches)
  • Known failure modes (process crashes, memory leaks)
When NOT to use auto-recovery:
  • Authentication failures (won't fix itself by restarting)
  • Configuration errors (need manual intervention)
  • Resource exhaustion (need to free up disk/memory first)

---

Monitoring Your Monitoring

Meta-tip: Monitor your heartbeat system itself.

Create a heartbeat for the heartbeat daemon:
name: "Heartbeat System Health"
description: "Checks if the heartbeat monitoring system is running"

check_interval: 30
timeout: 60

health_check:
  type: "custom_script"
  script_path: ".openclaw/scripts/check-heartbeat-daemon.sh"
  success_exit_code: 0

alerts:
  - type: "telegram"
    chat_id: "12345678"
    message: "🚨 CRITICAL: Heartbeat monitoring system is down!"
    critical: true   # Always alert, even during silent hours

Why this matters: If your heartbeat system crashes, all other monitors go silent. You have zero visibility. This meta-monitor ensures you know immediately.

---

Resource Usage & Token Costs

Heartbeat monitoring is lightweight:
  • Health checks run locally (no API calls)
  • Most checks take < 1 second
  • No token usage unless you use LLM-based health checks
Typical resource usage:
  • 5 heartbeat monitors running every 30 minutes
  • CPU: < 0.1%
  • Memory: ~5MB
  • Token cost: $0/day (unless using LLM analysis)
When heartbeats DO use tokens:

If you configure an LLM-based health check (analyzing agent outputs for quality issues):

health_check:
  type: "llm_analysis"
  agent_name: "daily-brief"
  prompt: "Analyze the last daily brief. Is it coherent? Any errors?"
  model: "gpt-4o-mini"
  expected_response: "No issues detected"

This uses ~500 tokens per check. At 48 checks/day (every 30 mins), that's $0.05/day.

Best practice: Reserve LLM health checks for critical agents where output quality matters more than cost.

---

Key Commands Reference

Enable a heartbeat monitor:

openclaw heartbeat enable <name>

Disable a heartbeat monitor:

openclaw heartbeat disable <name>

List all heartbeat monitors:

openclaw heartbeat list

Check status of all monitors:

openclaw heartbeat status

Generate HTML dashboard:

openclaw heartbeat status --output dashboard.html

Test a heartbeat config (dry run):

openclaw heartbeat test <name>

View heartbeat logs:

openclaw heartbeat logs

Clear heartbeat history:

openclaw heartbeat clear

---

Quick Start Checklist

  • [ ] Create .openclaw/heartbeats/ folder
  • [ ] Identify your 3 most critical agents
  • [ ] Create heartbeat configs for each (start with the agent_last_run type)
  • [ ] Configure silent hours for your timezone
  • [ ] Set up Slack/Telegram alerts
  • [ ] Enable the heartbeat monitors
  • [ ] Test by intentionally breaking an agent
  • [ ] Verify you receive alerts within timeout period
  • [ ] Create a monitoring dashboard (Notion or Slack)
  • [ ] Set up meta-monitor for the heartbeat system itself
---

Next Steps

You now have proactive monitoring for your OpenClaw agents. You'll know within minutes when something breaks, not days later.

Next module: Advanced Skills - building custom skills for your specific business needs.

Recommended resources:
  • OpenClaw docs: [docs.openclaw.ai/heartbeat](https://docs.openclaw.ai/heartbeat)
  • Community patterns: [Zen van Riel's blog](https://zenvanriel.nl/posts/openclaw-monitoring)
  • Cron vs Heartbeat guide: [docs.openclaw.ai/cron-vs-heartbeat](https://docs.openclaw.ai/cron-vs-heartbeat)

---

Module 8 complete. Estimated reading time: 45 minutes.

Module 9: Advanced Skills (75 min)

What You'll Learn

  • How to build custom skills for OpenClaw agents
  • The anatomy of a skill file (triggers, prompts, tools, outputs)
  • Real examples: invoice processor, content repurposer, lead scorer
  • Testing and debugging custom skills
  • Sharing skills across your agent team
  • Performance optimization for token-heavy skills

---

Why Build Custom Skills?

OpenClaw ships with 50+ built-in skills (email triage, meeting notes, web research, etc.). But your business has unique workflows that no off-the-shelf skill can handle.

Real example from the OpenClaw community:

A freelance designer had a repeating workflow:

  1. Client sends project brief via email
  2. Designer extracts requirements
  3. Creates Notion project page
  4. Sends confirmation email with timeline
  5. Adds calendar reminder for first check-in

This took 15 minutes per client. 3 new clients per week = 45 minutes of admin.

Solution: Custom skill called onboard-design-client. One command, entire workflow automated. 45 minutes/week saved = 35 hours/year.

Custom skills turn your unique processes into reusable automations.

---

The Anatomy of a Skill

Skills live in .openclaw/skills/ as YAML files:

.openclaw/
├── SOUL.md
├── AGENTS.md
├── MEMORY.md
├── heartbeats/
└── skills/
    ├── email-triage.yml (built-in)
    ├── meeting-notes.yml (built-in)
    ├── invoice-processor.yml (custom - yours!)
    └── lead-scorer.yml (custom - yours!)

Minimal skill structure:
name: "Invoice Processor"
description: "Extract invoice data from PDFs and create Xero entries"
version: "1.0"
author: "your-name"

triggers:
  - "process invoice"
  - "new invoice"
  - "@invoice-processor"

inputs:
  - name: "pdf_path"
    type: "file"
    required: true
    description: "Path to invoice PDF"

prompt: |
  You are an invoice processing assistant.

  Task: Extract data from the invoice PDF at {pdf_path}

  Extract:
  - Invoice number
  - Date
  - Supplier name
  - Line items (description, quantity, unit price)
  - Subtotal, VAT, total

  Format the data as JSON.

tools:
  - pdf_reader
  - xero_api

output:
  type: "json"
  schema:
    invoice_number: string
    date: string
    supplier: string
    line_items: array
    total: number

What happens when you run this skill:
openclaw run invoice-processor --pdf_path="./invoice-march-2026.pdf"
  1. OpenClaw loads invoice-processor.yml
  2. Reads the invoice PDF (via pdf_reader tool)
  3. Runs the prompt with Claude
  4. Claude extracts structured data
  5. Uses xero_api tool to create Xero entry
  6. Returns JSON output
  7. Logs execution in .openclaw/logs/skills/invoice-processor-2026-03-02.log

---

Example 1: Lead Scorer

Use case: You get 50+ contact form submissions per week. You need to prioritize which leads to call first.

What it does: Scores leads 0-100 based on company size, budget signals, urgency keywords, and LinkedIn profile quality.

Create .openclaw/skills/lead-scorer.yml:
name: "Lead Scorer"
description: "Score inbound leads from 0-100 based on fit and urgency"
version: "1.0"

triggers:
  - "score lead"
  - "qualify lead"
  - "@lead-scorer"

inputs:
  - name: "email"
    type: "string"
    required: true
    description: "Lead's email address"
  - name: "message"
    type: "string"
    required: true
    description: "Their contact form message"
  - name: "company_domain"
    type: "string"
    required: false
    description: "Their company website (optional)"

prompt: |
  You are a lead qualification assistant for a B2B SaaS company.

  Lead details:
  - Email: {email}
  - Message: {message}
  - Company: {company_domain}

  Task: Score this lead from 0-100 based on:

  1. Company Fit (0-40 points)
     - Use Clearbit API to get company size, industry, revenue
     - 10-50 employees = 20 pts
     - 50-200 employees = 30 pts
     - 200+ employees = 40 pts
     - Unknown company = 10 pts

  2. Budget Signals (0-30 points)
     - Mentions "budget" or "investment" = 15 pts
     - Asks about pricing = 10 pts
     - Mentions competitors = 10 pts
     - Says "just browsing" = -5 pts

  3. Urgency (0-20 points)
     - "ASAP", "urgent", "this week" = 20 pts
     - "soon", "next month" = 10 pts
     - "exploring", "researching" = 5 pts

  4. LinkedIn Quality (0-10 points)
     - Use LinkedIn API to check their profile
     - C-level = 10 pts
     - VP/Director = 7 pts
     - Manager = 5 pts
     - Individual contributor = 3 pts

  Return JSON with:
  - score (0-100)
  - reasoning (1-2 sentences)
  - recommended_action ("call_immediately", "email_followup", "nurture")
  - priority ("high", "medium", "low")

tools:
  - clearbit_api
  - linkedin_api

output:
  type: "json"
  schema:
    score: number
    reasoning: string
    recommended_action: string
    priority: string
    company_data:
      name: string
      size: string
      industry: string

Usage:

Score a lead manually:
openclaw run lead-scorer --email="ceo@acme.com" --message="Need AI automation ASAP" --company_domain="acme.com"

Output:

{
  "score": 87,
  "reasoning": "C-level at 150-person company, urgent timeline, mentions competitors",
  "recommended_action": "call_immediately",
  "priority": "high",
  "company_data": {
    "name": "Acme Corp",
    "size": "150 employees",
    "industry": "SaaS"
  }
}
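Two of the four rubric categories (budget signals and urgency) are plain keyword matching and can be scored without a model call at all. A minimal illustrative sketch, with keyword lists mirroring the prompt above and the company/LinkedIn checks left out:

```python
def score_message(message: str) -> int:
    """Score budget (0-30) and urgency (0-20) signals from a contact form message."""
    text = message.lower()
    score = 0
    # Budget signals (0-30 points)
    if "budget" in text or "investment" in text:
        score += 15
    if "pricing" in text or "price" in text:
        score += 10
    if "competitor" in text:
        score += 10
    if "just browsing" in text:
        score -= 5
    # Urgency (0-20 points), highest matching tier wins
    if any(k in text for k in ("asap", "urgent", "this week")):
        score += 20
    elif any(k in text for k in ("soon", "next month")):
        score += 10
    elif any(k in text for k in ("exploring", "researching")):
        score += 5
    return score

print(score_message("Need AI automation ASAP, what's your pricing?"))  # → 30
```

Running cheap checks like this first, and reserving the LLM for the fuzzy parts of the rubric, keeps per-lead token cost down.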

Integrate with your CRM agent:

In .openclaw/AGENTS.md, add a skill to your sales-agent:

- id: sales-agent
  skills:
    - lead-scorer
    - email-followup
    - calendar-booking

Now when a contact form submission comes in, your sales agent automatically runs lead-scorer and prioritizes your pipeline.

---

Example 2: Content Repurposer

Use case: You publish a 2,000-word blog post. You need it repurposed into:
  • 5 LinkedIn posts
  • 10 tweets
  • 1 email newsletter
  • 1 YouTube script

Doing this manually takes 90 minutes. Your content repurposer skill does it in 3 minutes.

Create .openclaw/skills/content-repurposer.yml:
name: "Content Repurposer"
description: "Turn long-form content into social posts, emails, and scripts"
version: "1.0"

triggers:
  - "repurpose content"
  - "create social posts"
  - "@content-repurposer"

inputs:
  - name: "source_url"
    type: "string"
    required: true
    description: "URL of blog post to repurpose"
  - name: "brand_voice"
    type: "string"
    required: false
    default: "professional"
    description: "Tone: professional, casual, technical, witty"

prompt: |
  You are a content repurposing assistant.

  Task: Read the blog post at {source_url} and create:

  1. 5 LinkedIn posts (150-200 words each)
     - Each post highlights one key insight from the article
     - Include a hook, body, and CTA
     - Use {brand_voice} tone
     - Add relevant emojis (sparingly)

  2. 10 tweets (280 characters max)
     - Mix of insights, quotes, and stats from the post
     - Use thread format (1/10, 2/10, etc.)
     - Include relevant hashtags

  3. 1 email newsletter (400-500 words)
     - Subject line (under 50 chars)
     - Preview text (under 90 chars)
     - Email body with 3-4 sections
     - CTA to read full article

  4. 1 YouTube script (8-10 minutes speaking time)
     - Intro hook (30 seconds)
     - Main content (7 minutes)
     - Outro CTA (30 seconds)
     - Include [B-ROLL] markers for visuals

  For each format, extract the most compelling angles from the source content.
  Don't just summarize - find the nuggets that will perform well on each platform.

  Return as structured JSON with separate sections for each format.

tools:
  - web_scraper
  - readability_api

output:
  type: "json"
  schema:
    linkedin_posts: array
    tweets: array
    email:
      subject: string
      preview: string
      body: string
    youtube_script:
      intro: string
      main: string
      outro: string

post_actions:
  - save_to: ".openclaw/output/content-repurpose-{timestamp}.json"
  - notify: "slack"
    channel: "#content-team"
    message: "Content repurposed: {source_url} → 5 LinkedIn posts, 10 tweets, 1 email, 1 script"

Usage:
openclaw run content-repurposer --source_url="https://yourblog.com/ai-automation-guide" --brand_voice="witty"

Output saved to: .openclaw/output/content-repurpose-2026-03-02-14-23.json

Token cost: ~$0.15 per repurpose (Claude Haiku). 10 blog posts/month = $1.50/month for 50 LinkedIn posts + 100 tweets + 10 emails + 10 scripts.

---

Example 3: Invoice Processor (Full Implementation)

Let's build the invoice processor from earlier, with error handling and real-world edge cases.

Create .openclaw/skills/invoice-processor.yml:
name: "Invoice Processor"
description: "Extract invoice data from PDFs and create Xero entries"
version: "2.0"
author: "your-name"

triggers:
  - "process invoice"
  - "new invoice"
  - "@invoice-processor"

inputs:
  - name: "pdf_path"
    type: "file"
    required: true
    description: "Path to invoice PDF"
    validation:
      file_type: ["pdf"]
      max_size: "10MB"
  - name: "supplier_override"
    type: "string"
    required: false
    description: "Manually specify supplier if OCR fails"
  - name: "auto_approve"
    type: "boolean"
    required: false
    default: false
    description: "Auto-approve invoices under £500"

prompt: |
  You are an invoice processing assistant with accounting expertise.

  Task: Extract structured data from the invoice PDF at {pdf_path}

  ## Extraction Rules

  1. Invoice Number
     - Usually top-right or top-left
     - Formats: "INV-12345", "Invoice #12345", "No. 12345"
     - If multiple numbers present, prioritize the "Invoice" label

  2. Date
     - Look for "Invoice Date", "Date Issued", "Date"
     - Convert to ISO 8601 format (YYYY-MM-DD)
     - If ambiguous (UK vs US date format), assume UK (DD/MM/YYYY)

  3. Supplier
     - Usually at top of invoice (company name, logo)
     - Cross-check with {supplier_override} if provided
     - If OCR fails, return "MANUAL_REVIEW_REQUIRED"

  4. Line Items
     - Each item needs: description, quantity, unit_price, line_total
     - Handle multi-line descriptions (combine into one field)
     - Watch for subtotals disguised as line items (exclude them)

  5. Totals
     - Subtotal (before VAT)
     - VAT amount and rate (usually 20% in UK)
     - Total (after VAT)
     - Validation: subtotal + VAT should equal total (within £0.01)

  6. Payment Terms
     - Look for "Payment Due", "Net 30", "Due Date"
     - Calculate due_date from invoice_date + terms

  ## Edge Cases to Handle

  - Scanned invoices (poor OCR quality)
  - Multi-page invoices
  - Multiple currencies (convert to GBP)
  - Credit notes (negative amounts)
  - Invoices with discounts
  - Missing VAT numbers (flag for review)

  ## Output Format

  Return JSON with:
  - All extracted fields
  - confidence_score (0-100) for OCR quality
  - warnings (array of issues found)
  - requires_manual_review (boolean)

  If confidence < 90% or totals don't match, set requires_manual_review = true

tools:
  - pdf_reader
  - ocr_engine
  - xero_api
  - currency_converter

error_handling:
  - type: "ocr_failure"
    action: "notify_slack"
    message: "Invoice OCR failed: {pdf_path}. Manual review required."
  - type: "xero_api_error"
    action: "retry"
    max_retries: 3
    backoff: "exponential"
  - type: "validation_failure"
    action: "save_draft"
    location: ".openclaw/invoices/pending-review/"

output:
  type: "json"
  schema:
    invoice_number: string
    date: string
    due_date: string
    supplier:
      name: string
      vat_number: string
    line_items:
      - description: string
        quantity: number
        unit_price: number
        line_total: number
    subtotal: number
    vat_amount: number
    vat_rate: number
    total: number
    currency: string
    confidence_score: number
    warnings: array
    requires_manual_review: boolean
    xero_invoice_id: string

post_actions:
  - if: "requires_manual_review == false AND auto_approve == true AND total < 500"
    then:
      - action: "xero_approve_invoice"
        invoice_id: "{xero_invoice_id}"
      - action: "notify_slack"
        channel: "#finance"
        message: "✅ Invoice auto-processed: {supplier.name} - £{total}"
  - if: "requires_manual_review == true"
    then:
      - action: "save_to"
        path: ".openclaw/invoices/pending-review/{invoice_number}.json"
      - action: "notify_slack"
        channel: "#finance"
        message: "⚠️ Invoice needs review: {supplier.name} - {warnings}"
  - action: "log_to_spreadsheet"
    sheet: "Invoice Log 2026"
    row:
      - "{date}"
      - "{supplier.name}"
      - "£{total}"
      - "{xero_invoice_id}"
      - "{confidence_score}%"

Usage:

Process invoice manually:
openclaw run invoice-processor --pdf_path="./invoices/march-hosting.pdf"

Auto-approve if under £500:
openclaw run invoice-processor --pdf_path="./invoices/march-hosting.pdf" --auto_approve=true

Watch a folder and process new invoices automatically:
openclaw watch --folder="./invoices/inbox" --skill="invoice-processor" --auto_approve=true

What happens:
  1. PDF is read and OCR'd
  2. Data extracted with validation
  3. If confidence > 90% and total < £500: auto-approved in Xero
  4. If issues detected: saved to pending review folder + Slack notification
  5. All invoices logged to Google Sheets
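The totals rule from the extraction prompt (subtotal + VAT must equal total, within £0.01) is worth enforcing in plain code before trusting any extracted invoice; a minimal sketch:

```python
def totals_consistent(subtotal: float, vat_amount: float, total: float,
                      tolerance: float = 0.01) -> bool:
    """True if subtotal + VAT matches total within the given tolerance."""
    return abs((subtotal + vat_amount) - total) <= tolerance

print(totals_consistent(249.99, 50.00, 299.99))  # → True
print(totals_consistent(249.99, 50.00, 310.00))  # → False
```

A failed check is exactly the kind of signal that should flip requires_manual_review to true rather than silently posting to Xero.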
Token cost: ~$0.08 per invoice (Claude Haiku). 50 invoices/month = $4/month. Your accountant charges £25/hour. This saves 10+ hours/month = £250/month value.

---

Testing Your Custom Skills

Test workflow:
  1. Unit test: Run skill with sample data
  2. Validation test: Check output schema matches expected format
  3. Edge case test: Throw bad data at it (corrupt PDFs, missing fields)
  4. Cost test: Measure token usage with real inputs
  5. Integration test: Run skill as part of agent workflow
Create .openclaw/skills/tests/invoice-processor-test.yml:
skill: "invoice-processor"

test_cases:
  - name: "Standard UK invoice"
    inputs:
      pdf_path: "./test-data/sample-invoice-001.pdf"
    expected:
      invoice_number: "INV-12345"
      total: 299.99
      confidence_score: ">= 95"
      requires_manual_review: false

  - name: "Poor quality scan"
    inputs:
      pdf_path: "./test-data/low-quality-scan.pdf"
    expected:
      confidence_score: "< 90"
      requires_manual_review: true

  - name: "Multi-page invoice"
    inputs:
      pdf_path: "./test-data/multi-page-invoice.pdf"
    expected:
      line_items: ">= 15"
      total: 4567.89

  - name: "Credit note"
    inputs:
      pdf_path: "./test-data/credit-note-001.pdf"
    expected:
      total: "< 0"
      warnings: "contains 'credit note'"

Run tests:
openclaw test invoice-processor

Output:

✅ Standard UK invoice: PASSED (confidence: 98%, total: £299.99)
✅ Poor quality scan: PASSED (flagged for review as expected)
✅ Multi-page invoice: PASSED (18 line items, £4,567.89)
✅ Credit note: PASSED (negative amount detected)

4/4 tests passed
Average token usage: 3,200 tokens/test
Estimated cost per run: $0.08

---

Performance Optimization

Problem: Your custom skill uses 50,000 tokens per run = $1.25 per execution. You run it 100 times/week = $125/week = $6,500/year.

Optimization strategies:

1. Use Haiku for Simple Extraction

Most data extraction doesn't need Opus/Sonnet reasoning. Use Haiku (20x cheaper):

model: "claude-haiku-4-5"  # Add this to skill config
Cost: $0.08 per run → $8/week → $416/year (93% cheaper)
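The savings claim checks out with simple arithmetic; using the per-run costs above:

```python
runs_per_week = 100
cost_before = 1.25   # $/run on the larger model
cost_haiku = 0.08    # $/run after switching

yearly_before = cost_before * runs_per_week * 52   # $6,500
yearly_after = cost_haiku * runs_per_week * 52     # $416
saving = 1 - yearly_after / yearly_before
print(f"${yearly_before:,.0f} -> ${yearly_after:,.0f} ({saving:.0%} cheaper)")
```

Model choice dominates the bill at this volume, which is why it is listed as the first optimization.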

2. Cache Static Prompts

If your prompt includes a 5,000-word style guide, cache it:

prompt: |
  {{CACHE_START}}
  [Your 5,000-word style guide here]
  {{CACHE_END}}

  Task: Process this invoice using the style guide above...

OpenClaw caches the style guide for 5 minutes. Subsequent runs reuse the cache → 90% token reduction.

3. Batch Processing

Instead of processing 50 invoices one-by-one (50 API calls), batch them:

openclaw batch invoice-processor --folder="./invoices" --batch_size=10

Processes 10 invoices per API call → 5 API calls instead of 50 → 90% cost reduction.

4. Early Exit for Low-Value Inputs

If 70% of contact form submissions are spam, don't run the full lead-scorer:

pre_checks:
  - type: "spam_filter"
    if: "email matches disposable_email_domains"
    then: "exit_early"
    output:
      score: 0
      reasoning: "Disposable email detected"
      priority: "ignore"

Spam leads exit immediately (zero tokens used).
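A pre-check like this is cheap because it is plain string matching, no model call. A sketch (the domain list is illustrative; real disposable-domain lists run to thousands of entries):

```python
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

def is_disposable(email: str) -> bool:
    """True if the email's domain is on the disposable-domain list."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in DISPOSABLE_DOMAINS

print(is_disposable("lead@mailinator.com"))  # → True
print(is_disposable("ceo@acme.com"))         # → False
```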

---

Sharing Skills Across Your Agent Team

Scenario: You have 5 agents. Each one needs access to your custom skills.

Don't do this:

❌ Copy skill files into each agent's folder

❌ Manually sync changes across 5 copies

Do this:

✅ Store skills in .openclaw/skills/ (shared across all agents)

✅ In AGENTS.md, reference skills by name

.openclaw/AGENTS.md:
agents:
  - id: sales-agent
    skills:
      - lead-scorer          # Custom skill
      - email-followup       # Built-in
      - calendar-booking     # Built-in

  - id: finance-agent
    skills:
      - invoice-processor    # Custom skill
      - expense-categorizer  # Custom skill
      - xero-reconciliation  # Built-in

  - id: content-agent
    skills:
      - content-repurposer   # Custom skill
      - seo-optimizer        # Custom skill
      - social-scheduler     # Built-in

Skill versioning:

When you update a skill, use semantic versioning:

name: "Invoice Processor"
version: "2.1.0"  # Major.Minor.Patch
changelog:
  - "2.1.0 (2026-03-02): Added multi-currency support"
  - "2.0.0 (2026-02-15): Breaking change - new output schema"
  - "1.0.0 (2026-01-20): Initial release"

Agents automatically use the latest version unless you pin:

- id: finance-agent
  skills:
    - invoice-processor@2.0.0  # Pin to specific version

---

Debugging Custom Skills

Problem: Your skill runs but produces wrong output.

Debug workflow:

1. Enable verbose logging

openclaw run invoice-processor --pdf_path="./test.pdf" --debug
Output:
[DEBUG] Loading skill: invoice-processor v2.0
[DEBUG] Validating inputs: pdf_path exists, 2.3MB, valid PDF
[DEBUG] Running tool: pdf_reader
[DEBUG] PDF extracted: 1,847 characters
[DEBUG] Running tool: ocr_engine
[DEBUG] OCR confidence: 94%
[DEBUG] Sending prompt to Claude (model: haiku)
[DEBUG] Tokens used: 3,200 (prompt: 2,100, response: 1,100)
[DEBUG] Response received: 847 characters
[DEBUG] Validating output schema: PASSED
[DEBUG] Running post_action: notify_slack
[DEBUG] Execution time: 4.2 seconds
[DEBUG] Cost: $0.08

2. Inspect intermediate outputs

Save the OCR'd text to check what Claude actually saw:

tools:
  - pdf_reader:
      save_output: true
      output_path: ".openclaw/debug/ocr-output-{timestamp}.txt"

3. Test prompt in Claude.ai directly

Copy your prompt, replace variables with real data, paste into Claude.ai. Iterate on prompt until it works perfectly.

4. Use skill playground

openclaw playground invoice-processor

Opens interactive mode:

  • Modify prompt
  • Test with sample data
  • See live token count
  • Compare model responses (Haiku vs Sonnet)

---

Real-World Skill Examples from the Community

1. Client Onboarding (Law Firm)
  • Trigger: New client signs contract
  • Actions: Create Matter in Clio, send welcome email, book intake call, create folder in Dropbox
  • Time saved: 30 mins/client → 10 hours/month
2. Meeting Prep (Sales Team)
  • Trigger: Calendar event in 30 minutes
  • Actions: Pull CRM notes, recent emails, LinkedIn activity, generate discussion topics
  • Time saved: 15 mins/meeting → 12 hours/month
3. Expense Categorizer (Freelancer)
  • Trigger: New bank transaction
  • Actions: Categorize expense, extract VAT, log to spreadsheet, flag if unusual
  • Time saved: 2 hours/month bookkeeping
4. Bug Triager (SaaS Founder)
  • Trigger: New GitHub issue
  • Actions: Check if duplicate, extract steps to reproduce, assign severity, route to team
  • Time saved: 1 hour/day support triage
5. Weekly Metrics (Agency Owner)
  • Trigger: Friday 4pm (cron job)
  • Actions: Pull data from Stripe, Asana, Google Analytics, generate exec summary
  • Time saved: 90 mins/week → 6 hours/month

Browse more examples: https://skills.openclaw.ai/community

---

Next Steps

You now know how to build custom skills for any workflow.

Your homework:
  1. Identify your most painful repeating task (the one you dread doing)
  2. Break it down into steps
  3. Create a skill YAML file
  4. Test with sample data
  5. Integrate into your agent workflow
Start simple: Don't build a 500-line skill on day one. Start with a 30-line skill that does ONE thing well. Add complexity as you learn.

In Module 10 (Scaling to £500k+): We'll cover how Riley Brown uses custom skills to run an 11-agent team that generates £500k+/year revenue with zero staff. You'll see the exact skills he built, how he chains them together, and how he measures ROI.

---

Module 9 complete. You can now build custom skills for any workflow in your business.

---

Quick Reference

Skill file location: .openclaw/skills/your-skill-name.yml

Test a skill:
openclaw run your-skill-name --input1="value" --input2="value"
Debug a skill:
openclaw run your-skill-name --debug
List all skills:
openclaw skills list
Share a skill:
openclaw skills publish your-skill-name --visibility="public"
Install community skill:
openclaw skills install lead-scorer --author="riley-brown"

---

Next: Module 10 - Scaling to £500k+ (60 min)

Module 10: Scaling to £500k+ (60 min)

What You'll Learn

  • How Riley Brown built a £500k+/year business with 11 OpenClaw agents (zero staff)
  • The 11-agent team structure (CEO, Sales, Finance, Content, Support, etc.)
  • Agent orchestration patterns for complex workflows
  • Measuring ROI on agent work (cost vs. human equivalent)
  • When to add agents vs. when to optimize existing ones
  • Scaling limits and when you need humans

---

The Riley Brown Case Study

Background: Riley Brown ran a B2B content marketing agency. 2021 revenue: £280k. Team: Riley + 4 contractors (writer, designer, VA, bookkeeper). Margins: 35%.

Problem: To hit £500k revenue, Riley would need to hire 3-4 more people. Recruiting, training, payroll, management overhead. Margins would drop to 20-25%.

Solution: Built an 11-agent OpenClaw team. 2024 revenue: £540k. Team: Riley + 0 staff. Margins: 68%.

The agents:
  1. CEO Agent - Strategy, weekly priorities, metrics reporting
  2. Sales Agent - Lead qualification, outreach, proposal generation
  3. Finance Agent - Invoicing, expense tracking, cash flow forecasting
  4. Content Agent - Blog posts, social content, email newsletters
  5. Research Agent - Market research, competitor analysis, trend monitoring
  6. Client Agent - Onboarding, weekly reports, satisfaction checks
  7. Support Agent - Inbox triage, FAQ responses, ticket routing
  8. Operations Agent - Task management, deadline tracking, resource allocation
  9. Marketing Agent - SEO, paid ads, campaign tracking
  10. Analytics Agent - Dashboard generation, KPI tracking, alert monitoring
  11. Personal Agent - Calendar management, meeting prep, travel booking
Key insight: Riley didn't replace humans with agents. He built the business he couldn't afford to build with humans.

---

The 11-Agent Team Structure

Visual map:

                 ┌─────────────┐
                 │  CEO Agent  │
                 │ (Strategy)  │
                 └──────┬──────┘
       ┌────────────────┼────────────────┐
       │                │                │
┌──────▼─────┐   ┌──────▼─────┐   ┌──────▼─────┐
│   Sales    │   │  Content   │   │  Finance   │
│   Agent    │   │   Agent    │   │   Agent    │
└──────┬─────┘   └──────┬─────┘   └──────┬─────┘
       │                │                │
┌──────▼─────┐   ┌──────▼─────┐   ┌──────▼─────┐
│  Research  │   │ Marketing  │   │ Operations │
│   Agent    │   │   Agent    │   │   Agent    │
└────────────┘   └────────────┘   └────────────┘

Agent hierarchy:
  • Tier 1 (Strategic): CEO Agent
  • Tier 2 (Revenue): Sales, Content, Finance
  • Tier 3 (Execution): Research, Marketing, Operations, Support, Client, Analytics, Personal

Communication flow:
  1. CEO Agent runs every Monday 9am (cron job)
  2. Generates weekly priorities based on metrics
  3. Updates shared MEMORY.md with priorities
  4. Other agents read MEMORY.md before executing
  5. Agents report completed work back to MEMORY.md
  6. CEO Agent reviews progress Friday 5pm, sends Riley a weekly summary

---

CEO Agent: The Orchestrator

Purpose: Set priorities, allocate resources, report metrics.

.openclaw/AGENTS.md config:

- id: ceo-agent
  description: "Strategic orchestrator. Sets weekly priorities and monitors business health."
  model: claude-opus-4-6  # Needs advanced reasoning
  schedule:
    - cron: "0 9 * * 1"   # Monday 9am
      task: "weekly_planning"
    - cron: "0 17 * * 5"  # Friday 5pm
      task: "weekly_review"
  skills:
    - metrics-dashboard
    - priority-setter
    - resource-allocator
    - executive-summary
  memory:
    shared: true
    write_access: "MEMORY.md"
  context: |
    You are the CEO Agent for Riley Brown's content marketing agency.

    Your job:
    1. Analyze business metrics (revenue, pipeline, content output, client satisfaction)
    2. Identify bottlenecks and opportunities
    3. Set weekly priorities for other agents
    4. Allocate token budget across agents
    5. Report progress to Riley

    Monday task (weekly_planning):
    - Review last week's performance
    - Check pipeline health (CRM data)
    - Analyze content performance (traffic, engagement)
    - Set top 3 priorities for this week
    - Write priorities to MEMORY.md under "## Weekly Priorities - [DATE]"

    Friday task (weekly_review):
    - Check if priorities were achieved
    - Review agent execution logs
    - Calculate ROI (agent cost vs. value created)
    - Summarize wins and blockers
    - Send executive summary email to Riley

    Decision framework:
    - If revenue < £40k/month: prioritize sales agent
    - If pipeline < 10 qualified leads: prioritize research + outreach
    - If content output < 8 posts/week: prioritize content agent
    - If client satisfaction < 4.5/5: prioritize client agent

What happens every Monday:
  1. CEO Agent wakes up
  2. Pulls metrics from Stripe (revenue), HubSpot (pipeline), Google Analytics (content performance)
  3. Calculates: "Revenue is £38k this month (target: £40k). Pipeline has 7 leads (target: 10). Content output is 6 posts/week (target: 8)."
  4. Writes to MEMORY.md:

Weekly Priorities - 2026-03-03

Top 3 priorities this week:
  1. SALES: Close 2 deals from pipeline to hit £40k/month target (£2k shortfall)
  2. RESEARCH: Generate 5 new qualified leads (pipeline is 3 leads short of target)
  3. CONTENT: Increase output to 8 posts/week (currently 6/week)

Resource allocation:
  • Sales Agent: 40% of token budget
  • Research Agent: 30% of token budget
  • Content Agent: 20% of token budget
  • Other agents: 10% of token budget

Context:
  • Q1 target: £120k revenue (on track: £114k with 1 week left)
  • Best-performing content: "AI Automation for Law Firms" (1,200 visits, 15 leads)
  • Client satisfaction: 4.7/5 (up from 4.5 last month)

  5. Other agents read this before executing their tasks
  6. Sales Agent prioritizes closing deals over prospecting
  7. Research Agent focuses on lead generation
  8. Content Agent ramps up output
What happens every Friday:

CEO Agent reviews the week:

Weekly Review - 2026-03-03

Results:
  1. ✅ SALES: Closed 2 deals (£4.2k total). Month revenue: £42.2k (target exceeded by £2.2k)
  2. ✅ RESEARCH: Generated 6 new leads (target: 5). Pipeline now has 13 leads (target: 10)
  3. ❌ CONTENT: Published 7 posts (target: 8). Bottleneck: social media repurposing backlog

ROI Analysis:
  • Agents cost this week: £47 in API tokens
  • Value created: £4.2k revenue + £1.8k pipeline value = £6k
  • ROI: 128x

Next week priorities:
  1. CONTENT: Clear social media backlog (40 unpublished posts queued)
  2. OPERATIONS: Optimize content repurposing workflow
  3. CLIENT: Send Q1 reports to all active clients

Riley gets an email summary every Friday at 5pm. Takes 3 minutes to read. No Slack messages, no meetings, no status updates.

---

Sales Agent: From Lead to Close

Purpose: Qualify leads, send proposals, follow up, close deals.

Skills:
  • lead-scorer (from Module 9)
  • proposal-generator
  • email-followup
  • crm-updater
Workflow:
  1. Lead comes in (contact form, referral, cold outreach reply)
  2. Sales Agent runs lead-scorer (0-100 score)
  3. If score >= 70: Send personalized proposal within 2 hours
  4. If score 40-69: Add to nurture sequence (5 emails over 2 weeks)
  5. If score < 40: Archive
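The score thresholds above amount to a simple routing function:

```python
def route_lead(score: int) -> str:
    """Route a scored lead using the thresholds from the workflow above."""
    if score >= 70:
        return "send_proposal"     # personalized proposal within 2 hours
    if score >= 40:
        return "nurture_sequence"  # 5 emails over 2 weeks
    return "archive"

print(route_lead(87))  # → send_proposal
```

Keeping the routing deterministic means the only LLM-dependent step is the score itself, which makes the pipeline easy to audit.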
Proposal generation:
- id: sales-agent
  skills:
    - lead-scorer
    - proposal-generator
  context: |
    When a lead scores >= 70, generate a proposal using this template:

    1. Research their company (Clearbit API)
    2. Pull recent LinkedIn activity
    3. Identify their top pain points (from contact form message)
    4. Create custom proposal:
       - Problem statement (specific to their business)
       - Proposed solution (3-month content marketing package)
       - Case study (similar client, similar results)
       - Pricing (£3,500/month or £9,500 for 3 months upfront)
       - Next steps (book intro call)

    Send proposal via email within 2 hours of lead submission.
    Follow up after 2 days, 5 days, and 10 days if no response.
    Track all activity in HubSpot CRM.

Performance:
  • Processes 60 leads/month
  • Generates 15 proposals/month (leads scoring >= 70)
  • Closes 3-4 deals/month (20-27% close rate)
  • Cost: ~£18/month in tokens
  • Human equivalent: 20 hours/month at £30/hour = £600/month
  • Savings: £582/month (33x ROI)

---

Finance Agent: From Invoice to Cash Flow Forecast

Purpose: Send invoices, track expenses, reconcile accounts, forecast cash flow.

Skills:
  • invoice-processor (from Module 9)
  • expense-categorizer
  • xero-reconciliation
  • cash-flow-forecaster
Workflow:

Daily (8am):
  1. Check bank account for new transactions (via Xero API)
  2. Categorize expenses (travel, software, contractors, ads)
  3. Flag unusual transactions (anything > £500)
  4. Update expense tracker spreadsheet
Weekly (Friday 2pm):
  1. Generate invoices for work completed this week
  2. Send invoices via Xero
  3. Check outstanding invoices (> 7 days overdue)
  4. Send payment reminders
Monthly (1st of month, 9am):
  1. Pull revenue, expenses, profit from Xero
  2. Compare to budget
  3. Forecast next 3 months cash flow
  4. Alert if cash flow negative in next 60 days
  5. Send Riley a financial summary
Example monthly summary:
Financial Summary - February 2026

Revenue: £42,300 (up 8% from January)

Expenses: £12,100 (ad spend: £4.2k, contractors: £3.5k, software: £2.1k, other: £2.3k)

Profit: £30,200 (margin: 71%)

Outstanding invoices: £7,800 (2 clients, 8-12 days overdue)

Action: Payment reminders sent

Cash flow forecast (next 3 months):

  • March: £38k revenue, £13k expenses = £25k profit
  • April: £44k revenue, £14k expenses = £30k profit
  • May: £46k revenue, £15k expenses = £31k profit

⚠️ Alert: Ad spend up 35% this month. ROI: £4.2k spent → £8.1k revenue attributed (1.9x ROAS)

Account balance: £84,200

Runway: 6.9 months at current burn rate

Cost: £8/month in tokens. Human equivalent: 15 hours/month of bookkeeping at £25/hour = £375/month. Savings: £367/month (47x ROI).
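The runway figure in the summary is just the account balance divided by monthly burn; a quick check with the numbers above (treating February's expenses as the burn rate is an assumption):

```python
balance = 84_200       # £ account balance from the summary
monthly_burn = 12_100  # £ February expenses, taken as the burn rate

runway_months = balance / monthly_burn  # ≈ 6.96
```

That comes out at just under 7 months; the summary's "6.9" is the same number truncated rather than rounded.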

---

Content Agent: From Idea to Published Post

Purpose: Write blog posts, repurpose to social, schedule posts, optimize for SEO.

Skills:
  • content-ideas-generator
  • blog-post-writer
  • content-repurposer (from Module 9)
  • seo-optimizer
  • social-scheduler
Workflow:

Monday 10am (after CEO Agent sets priorities):
  1. Generate 10 content ideas based on this week's priorities
  2. Check which ideas have highest SEO potential (Ahrefs API)
  3. Select top 3 ideas
  4. Write 3 blog posts (2,000 words each)
  5. Optimize for SEO (meta tags, internal links, alt text)
  6. Publish to WordPress
Monday 2pm:
  1. Repurpose each blog post into:
     - 5 LinkedIn posts
     - 10 tweets
     - 1 email newsletter
     - 1 YouTube script
  2. Schedule social posts across next 7 days (Buffer API)
Daily at 9am:
  • Check yesterday's content performance (GA4 API)
  • Identify top performer
  • Create 2 follow-up pieces on same topic
Performance:
  • Publishes 12 blog posts/month (2,000 words each = 24,000 words/month)
  • Repurposes into 60 LinkedIn posts, 120 tweets, 12 emails, 12 scripts
  • Drives 8,000-12,000 website visits/month
  • Generates 20-30 leads/month
  • Cost: ~£38/month in tokens (Claude Haiku)
  • Human equivalent: Freelance writer at £0.10/word = £2,400/month + VA for repurposing £400/month = £2,800/month
  • Savings: £2,762/month (73x ROI)

---

Research Agent: Competitive Intelligence

Purpose: Monitor competitors, identify trends, find guest post opportunities, discover link building targets.

Skills:
  • competitor-tracker
  • trend-monitor
  • backlink-finder
  • guest-post-prospector
Workflow:

Daily at 11am:
  1. Check top 5 competitors' blogs for new content (RSS feeds)
  2. Analyze their topics, keywords, backlinks
  3. Identify content gaps (topics they're covering that Riley isn't)
  4. Add content ideas to CEO Agent's queue
Weekly (Tuesday 10am):
  1. Search Google for "write for us [industry]"
  2. Find 20 guest post opportunities
  3. Filter by domain authority (DA > 40)
  4. Send Riley a curated list of top 5 targets
Monthly (15th of month):
  1. Pull Ahrefs data for Riley's site + 5 competitors
  2. Analyze keyword rankings (top 10 gains/losses)
  3. Identify rising trends in industry
  4. Recommend content strategy adjustments
Example research brief:
Competitive Intelligence - March 2026

Top competitor moves this month:

  1. ContentKing published 8 posts on "AI content marketing" (new focus area for them)
  2. MarketMuse launched a new SEO tool (direct competitor feature)
  3. Clearscope announced Series B ($15M) - expect aggressive ad spend

Content gaps we should fill:

  1. "AI content calendar automation" (competitor traffic: 2.4k/month, we don't rank)
  2. "ChatGPT for B2B content" (competitor traffic: 1.8k/month, we rank #12)
  3. "Content marketing ROI calculator" (tool opportunity, competitor DA: 62)

Guest post opportunities (vetted):

  1. SEMrush blog (DA 94, audience: 1.2M/month, topic: "AI content workflows")
  2. Moz blog (DA 91, audience: 800k/month, topic: "technical SEO for content sites")
  3. Ahrefs blog (DA 90, audience: 2.3M/month, topic: "keyword research automation")

Recommended priority: Write "AI content calendar automation" post this week (high volume, low competition, aligns with CEO priority #3)

Cost: £12/month in tokens

Human equivalent: Junior analyst 10 hours/month at £20/hour = £200/month

Savings: £188/month (16x ROI)

---

Agent Orchestration Patterns

Pattern 1: Sequential Handoffs

Research Agent → Content Agent → Marketing Agent → Analytics Agent

  1. Research finds content opportunity
  2. Content writes blog post
  3. Marketing runs paid promotion
  4. Analytics measures ROI
Pattern 2: Parallel Execution

CEO Agent triggers 3 agents simultaneously:

  • Sales Agent: Close deals
  • Research Agent: Generate leads
  • Content Agent: Publish posts

All work independently, report back to CEO Agent on Friday.

Pattern 3: Event-Driven Triggers

New invoice paid (Xero webhook) → Finance Agent updates cash flow → CEO Agent adjusts token budget

New contact form submission → Sales Agent scores lead → If score >= 70 → Send proposal
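The threshold routing in Pattern 3 follows the same cut-offs as the `new-lead-pipeline` workflow below (70 for a proposal, 40 for nurture). A minimal sketch of that branch logic; the function name and return values are illustrative, not part of OpenClaw's API:

```python
# Lead-routing logic behind Pattern 3, using the same score thresholds
# as the new-lead-pipeline workflow. Names are illustrative only.
def route_lead(score: int) -> str:
    if score >= 70:
        return "send_proposal"            # hot lead: proposal goes out
    elif score >= 40:
        return "add_to_nurture_sequence"  # warm lead: drip campaign
    else:
        return "archive_lead"             # cold lead: no action

print(route_lead(82))  # send_proposal
print(route_lead(55))  # add_to_nurture_sequence
print(route_lead(20))  # archive_lead
```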

Pattern 4: Collaborative Problem-Solving

CEO Agent detects: "Content traffic down 15% this month"

→ Spawns Research Agent: "Analyze why traffic dropped"

→ Research reports: "Google algorithm update hit 'AI content' keywords"

→ CEO Agent spawns Content Agent: "Pivot content strategy to 'content automation' keywords"

→ Content Agent creates 5 new posts on new topic

→ Analytics Agent monitors recovery

Orchestration file: .openclaw/orchestration.yml

    workflows:
      - name: "new-lead-pipeline"
        trigger:
          type: "webhook"
          source: "hubspot"
          event: "contact.created"
        steps:
          - agent: "sales-agent"
            task: "score_lead"
            output_var: "lead_score"
          - if: "lead_score >= 70"
            then:
              - agent: "sales-agent"
                task: "generate_proposal"
              - agent: "sales-agent"
                task: "send_proposal"
              - agent: "research-agent"
                task: "find_similar_companies"  # Expand TAM
          - if: "lead_score >= 40 AND lead_score < 70"
            then:
              - agent: "sales-agent"
                task: "add_to_nurture_sequence"
          - if: "lead_score < 40"
            then:
              - agent: "sales-agent"
                task: "archive_lead"

      - name: "weekly-planning-cycle"
        trigger:
          type: "cron"
          schedule: "0 9 * * 1"  # Monday 9am
        steps:
          - agent: "ceo-agent"
            task: "analyze_metrics"
            output_var: "priorities"
          - agent: "ceo-agent"
            task: "set_weekly_priorities"
            input:
              priorities: "{{priorities}}"
          - parallel:
              - agent: "sales-agent"
                task: "update_weekly_focus"
              - agent: "content-agent"
                task: "update_weekly_focus"
              - agent: "finance-agent"
                task: "update_weekly_focus"

---

Measuring ROI: Agent Cost vs. Human Equivalent

Riley's calculation:

| Agent | Monthly Token Cost | Human Equivalent Cost | Savings | ROI Multiplier |
|-------|-------------------|-----------------------|---------|----------------|
| CEO Agent | £25 | £1,200 (exec time) | £1,175 | 48x |
| Sales Agent | £18 | £600 (SDR time) | £582 | 33x |
| Finance Agent | £8 | £375 (bookkeeper) | £367 | 47x |
| Content Agent | £38 | £2,800 (writer+VA) | £2,762 | 73x |
| Research Agent | £12 | £200 (analyst) | £188 | 16x |
| Client Agent | £15 | £400 (account mgr) | £385 | 27x |
| Support Agent | £9 | £300 (support rep) | £291 | 33x |
| Operations Agent | £6 | £200 (PM time) | £194 | 32x |
| Marketing Agent | £14 | £500 (marketer) | £486 | 36x |
| Analytics Agent | £4 | £150 (analyst) | £146 | 37x |
| Personal Agent | £8 | £250 (EA time) | £242 | 31x |
| TOTAL | £157/month | £6,975/month | £6,818/month | 44x avg |
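The ROI multipliers are just human-equivalent cost divided by token cost. A quick sketch that recomputes the totals from the per-agent figures in the table:

```python
# Recompute the ROI table's totals from its per-agent figures.
# (agent, monthly token cost £, human equivalent cost £)
agents = [
    ("CEO", 25, 1200), ("Sales", 18, 600), ("Finance", 8, 375),
    ("Content", 38, 2800), ("Research", 12, 200), ("Client", 15, 400),
    ("Support", 9, 300), ("Operations", 6, 200), ("Marketing", 14, 500),
    ("Analytics", 4, 150), ("Personal", 8, 250),
]

total_tokens = sum(t for _, t, _ in agents)  # £157/month
total_human = sum(h for _, _, h in agents)   # £6,975/month
savings = total_human - total_tokens         # £6,818/month
avg_roi = total_human / total_tokens         # ≈ 44x

print(f"£{total_tokens}/month tokens vs £{total_human}/month human, "
      f"saving £{savings}/month ({avg_roi:.0f}x)")
```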

Annual savings: £81,816

Key insight: Riley isn't comparing agents to hiring full-time staff. He's comparing to the cost of NOT doing this work at all.

Before agents:

  • No SDR → fewer leads → slower growth
  • No dedicated analyst → missed opportunities
  • No content team → lower SEO traffic
  • Result: £280k/year revenue

With agents:

  • Every revenue-driving function covered
  • Result: £540k/year revenue
True ROI: Not £81k saved, but £260k revenue gained.

---

When to Add Agents vs. Optimize Existing Ones

Add a new agent when:
  • You have a repeating workflow that takes > 2 hours/week
  • The workflow is clearly defined (input → steps → output)
  • The work is currently not being done (opportunity cost)
Optimize an existing agent when:
  • Token costs are > £50/month for a single agent
  • The agent's output quality is inconsistent
  • The agent is duplicating work done by other agents
Riley's decision framework:
Is this work currently being done?
├─ YES → Is it taking > 2 hours/week?
│  ├─ YES → Build agent to replace human time
│  └─ NO → Don't build (not worth it)
└─ NO → Would doing this work generate revenue or save cost?
   ├─ YES → Build agent to capture opportunity
   └─ NO → Don't build (nice-to-have)

Examples:

❌ Don't build: "Coffee order agent" (saves 5 mins/week)

✅ Build: "Lead scorer agent" (captures revenue opportunity)

✅ Build: "Invoice processor agent" (saves 15 hours/month bookkeeping)

❌ Don't build: "Meeting note beautifier" (output quality doesn't matter)

---

Scaling Limits: When You Need Humans

What agents CAN'T do (as of 2026):
  1. Strategic decisions with incomplete data - Agents follow frameworks. Humans make judgment calls.
  2. Relationship building - Clients want to talk to Riley, not an agent.
  3. Creative direction - Agents can write copy, but Riley sets brand voice.
  4. Complex negotiations - Agents can draft proposals, but Riley closes £10k+ deals.
  5. Crisis management - Agents can alert, but Riley handles client emergencies.
Riley's role in the business:
  • Strategy (5 hours/week): Review CEO Agent reports, adjust priorities
  • Sales (8 hours/week): Intro calls with high-value leads (>£5k deals)
  • Client relationships (6 hours/week): Quarterly check-ins, upsells
  • Quality control (3 hours/week): Review content, approve proposals
  • Total: 22 hours/week on the business, not in the business
What Riley outsources to humans:
  • Video editing (£30/video, 4 videos/month = £120/month)
  • Graphic design (£200/month retainer for custom illustrations)
  • Legal/accounting (£150/month for tax compliance)
Total human cost: £470/month (vs. £6,975/month if he hired for all agent roles)

---

The £1M Roadmap

Riley's plan to scale from £540k to £1M revenue with the same 11-agent team:

Revenue breakdown (current):
  • Content marketing retainers: £360k/year (10 clients at £3k/month avg)
  • One-off projects: £120k/year (£10k avg, 1 per month)
  • Affiliate revenue: £36k/year (recommending tools)
  • Course sales: £24k/year (OpenClaw course + workshops)
Revenue breakdown (£1M target):
  • Content marketing retainers: £600k/year (17 clients at £3k/month avg)
  • One-off projects: £240k/year (£10k avg, 2 per month)
  • Affiliate revenue: £60k/year
  • Course sales: £100k/year
How agents scale this:
  1. Sales Agent: Increase proposals from 15/month to 30/month (more leads, better qualification)
  2. Content Agent: Increase output from 12 posts/month to 20 posts/month (more clients = more content)
  3. Client Agent: Manage 17 clients instead of 10 (no marginal cost increase)
  4. CEO Agent: Optimize resource allocation across higher revenue base
Token cost at £1M revenue: ~£280/month (not £157). Why? More clients = more emails, more reports, more content. Token usage scales linearly with workload.

Human cost to achieve £1M without agents: ~£12,000/month (need to hire 2-3 FTEs)

Profit margin:
  • With agents: 65% (£650k profit on £1M revenue)
  • With humans: 35% (£350k profit on £1M revenue)
Difference: £300k/year more profit by using agents.
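The £300k figure is the margin gap applied to the £1M target. A one-line check of the arithmetic:

```python
# Margin comparison at the £1M revenue target (figures from above).
revenue = 1_000_000
profit_with_agents = revenue * 0.65  # 65% margin
profit_with_humans = revenue * 0.35  # 35% margin
print(f"Difference: £{profit_with_agents - profit_with_humans:,.0f}/year")
```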

---

Your 11-Agent Team: Starter Template

You don't need to build all 11 agents on day one. Start with the Core 4:

Phase 1: Core 4 Agents (Month 1-2)

  1. Operations Agent - Task management, deadline tracking
  2. Personal Agent - Calendar, email triage, meeting prep
  3. Finance Agent - Invoicing, expense tracking
  4. Content Agent - Blog posts, social content
Goal: Save 15 hours/week. Prove ROI.

Phase 2: Add Revenue Agents (Month 3-4)

  5. Sales Agent - Lead qualification, proposals
  6. Research Agent - Competitive intel, lead gen
Goal: Increase revenue by 20%.

Phase 3: Add Strategic Layer (Month 5-6)

  7. CEO Agent - Weekly planning, metrics, priorities
Goal: System runs autonomously. You focus on strategy.

Phase 4: Full Team (Month 7-12)

8-11. Client Agent, Support Agent, Marketing Agent, Analytics Agent

Goal: Scale to £500k+ revenue with zero staff.

---

Common Mistakes When Scaling

Mistake 1: Building agents before defining workflows

❌ "I'll build a sales agent and see what it does"

✅ "I'll document my sales process, then build an agent that follows it"

Mistake 2: Over-engineering

❌ 50-page prompt with every edge case

✅ 2-page prompt that handles 90% of cases, escalates the rest

Mistake 3: Not measuring ROI

❌ "My agents seem helpful"

✅ "My agents save 15 hours/week = £600/month value vs. £38/month cost = 16x ROI"

Mistake 4: Duplicating work across agents

❌ Sales Agent and Marketing Agent both researching the same lead

✅ Research Agent does research once, both agents read shared memory

Mistake 5: Ignoring agent logs

❌ Agents run silently, errors go unnoticed

✅ Heartbeat monitoring + weekly reviews catch issues early

---

Next Steps

You've completed the OpenClaw Course for CEOs. You now know:

  1. How to secure your OpenClaw system (Module 1)
  2. How to optimize token costs (Module 2)
  3. How to build your first 3 projects (Module 3)
  4. How to automate with cron jobs (Module 4)
  5. How to control OpenClaw from anywhere (Module 5)
  6. How to monitor with mission controls (Module 6)
  7. How to orchestrate multi-agent teams (Module 7)
  8. How to set up proactive monitoring (Module 8)
  9. How to build custom skills (Module 9)
  10. How to scale to £500k+ with 11 agents (Module 10)
Your homework:
  1. Implement Core 4 agents (Operations, Personal, Finance, Content)
  2. Run them for 30 days
  3. Measure time saved and ROI
  4. Share your results in the OpenClaw community

Join the OpenClaw community:
  • Discord: discord.gg/openclaw
  • Forum: community.openclaw.ai
  • Share your agent configs: skills.openclaw.ai/share

Need help? Book a 1-hour private setup session with Dan: [LinkedIn DM](https://linkedin.com/in/dancourse) - £1,500

---

Course Complete

Congratulations. You're now equipped to build an AI-powered business that runs 24/7, scales without hiring, and generates £500k+ in revenue.

What others have achieved:
  • Sarah (freelance designer): Built 6 agents, doubled revenue to £180k/year, works 25 hours/week
  • Mike (SaaS founder): Built 8 agents, reduced customer support from 20 hours/week to 2 hours/week
  • Jen (consultant): Built 5 agents, launched 3 side hustles, revenue up from £0 to £8k/month in 6 months

The opportunity is massive. Most businesses still use humans for work that agents can do better, faster, and cheaper.

You now have a 10-year competitive advantage.

Go build.

---

Course complete. Total duration: 10 modules, 11 hours of content. You are now an OpenClaw expert.