You've built your first 3 OpenClaw projects. They work. But there's a problem: you still have to REMEMBER to run them.
Daily brief at 8am? You type openclaw run daily-brief.
Weekly report on Friday? You type openclaw run weekly-report.
Monthly invoice reminders? You... forget half the time.
This is not automation. This is just delegating tasks to yourself.
This module teaches you cron jobs - the system that turns your OpenClaw agents from on-demand assistants into autonomous employees that work 24/7 without you lifting a finger.

---
A cron job is a scheduled task that runs automatically at specific times. Think of it as setting an alarm for your computer.
Instead of you typing openclaw run daily-brief every morning, you tell your computer:
> "Every day at 8am, run this command for me."
Your computer does it. Rain or shine. Weekends included (unless you tell it otherwise).
OpenClaw offers TWO ways to schedule autonomous work:
- Cron jobs (this module)

---
Let's build a daily brief agent that summarizes:
Create the agent file:
cd ~/.openclaw/agents
nano daily-brief.md
Paste this agent definition:
Daily Brief Agent
Role
You are my morning briefing assistant. Your job is to scan my email, calendar, and task list, then send me a concise daily brief via Telegram.
Data Sources
- Gmail: Unread emails from last 24 hours (exclude newsletters)
- Google Calendar: Today's events (next 12 hours)
- Notion: Tasks with status "In Progress" or due today
Output Format
Send a Telegram message (use /send-telegram tool):
---
Daily Brief - [Date]
📧 Emails (X unread):
- [Sender]: [Subject] - [1-line summary]
(Max 5 emails. If more, say "...and N more")
📅 Today's Calendar:
- [Time] - [Event title] - [Location if applicable]
(All events for today)
✅ Tasks Due Today:
- [Task title] - [Status]
🔥 Priority Action:
[The ONE thing that absolutely must get done today]
---
Rules
- Keep it under 500 words
- No fluff or motivational quotes
- If calendar empty, say "Clear calendar today"
- If no urgent emails, say "No urgent emails"
- Priority action must be SPECIFIC (not "check emails")
Save and exit (Ctrl+X, Y, Enter).
Before scheduling it, verify it works:
openclaw run daily-brief
Check your Telegram. You should receive a brief within 30 seconds.
Common issues:
- Gmail or Calendar not connected: run openclaw config add-gmail and openclaw config add-calendar
- Missing API keys: check ~/.openclaw/.env

Edit your cron schedule:
crontab -e
Add this line (replace YOUR_USERNAME with your actual username):
0 8 * * * /usr/local/bin/openclaw run daily-brief >> /Users/YOUR_USERNAME/.openclaw/logs/daily-brief.log 2>&1
What this means:
- 0 8 * * * = "At 8:00am, every day of the month, every month, every day of the week"
- /usr/local/bin/openclaw = Full path to the openclaw command
- run daily-brief = Run the agent we just created
- >> /Users/YOUR_USERNAME/.openclaw/logs/daily-brief.log = Append output to a log file
- 2>&1 = Capture errors too

Save and exit (:wq in vim, or Ctrl+X in nano).
List your active cron jobs:
crontab -l
You should see your daily-brief line.
To test immediately (don't wait until 8am), manually trigger the cron job command:

```bash
# Manually trigger the cron job command
/usr/local/bin/openclaw run daily-brief >> ~/.openclaw/logs/daily-brief.log 2>&1

# Check the log
tail ~/.openclaw/logs/daily-brief.log
```
If you see output and your Telegram message arrived, it's working.
---
Cron uses 5 fields to define schedules:
* * * * *
│ │ │ │ │
│ │ │ │ └─── Day of week (0-7, where 0 and 7 = Sunday)
│ │ │ └───── Month (1-12)
│ │ └─────── Day of month (1-31)
│ └───────── Hour (0-23)
└─────────── Minute (0-59)
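To sanity-check an expression before trusting your schedule to it, you can test whether a given time matches. Below is a minimal matcher sketch: it supports `*`, lists, ranges, and `*/step`, but deliberately ignores cron's special OR rule for when both day fields are restricted.

```python
from datetime import datetime

def field_matches(field: str, value: int, lo: int, hi: int) -> bool:
    """Check one cron field against a value (supports *, lists, ranges, */step)."""
    for part in field.split(","):
        expr, _, step = part.partition("/")
        step = int(step) if step else 1
        if expr == "*":
            start, end = lo, hi
        elif "-" in expr:
            start, end = (int(x) for x in expr.split("-"))
        else:
            start = end = int(expr)
        if start <= value <= end and (value - start) % step == 0:
            return True
    return False

def cron_matches(spec: str, dt: datetime) -> bool:
    """True if dt satisfies a 5-field cron spec (minute hour dom month dow)."""
    minute, hour, dom, month, dow = spec.split()
    weekday = (dt.weekday() + 1) % 7  # convert Mon=0 (Python) to Sun=0 (cron)
    return (field_matches(minute, dt.minute, 0, 59)
            and field_matches(hour, dt.hour, 0, 23)
            and field_matches(dom, dt.day, 1, 31)
            and field_matches(month, dt.month, 1, 12)
            # cron accepts both 0 and 7 for Sunday
            and (field_matches(dow, weekday, 0, 7)
                 or (weekday == 0 and field_matches(dow, 7, 0, 7))))
```

For example, `cron_matches("0 9 * * 1-5", some_datetime)` is True only at 9:00am Monday-Friday.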
| Schedule | Cron Syntax | Description |
|----------|-------------|-------------|
| Every hour | 0 * * * * | At minute 0 of every hour |
| Every 15 minutes | */15 * * * * | At :00, :15, :30, :45 |
| Daily at 8am | 0 8 * * * | 8:00am every day |
| Weekdays at 9am | 0 9 * * 1-5 | 9am Mon-Fri only |
| First of month | 0 9 1 * * | 9am on the 1st |
| Last day of month | 0 17 28-31 * * | 5pm on the 28th-31st (fires each of those days; cron has no true "last day" field) |
| Every Monday at 10am | 0 10 * * 1 | Monday weekly report |
| Twice daily | 0 8,17 * * * | 8am and 5pm |
Example: "every Friday at 5pm" breaks down as:
- Minute: 0 (at the top of the hour)
- Hour: 17 (5pm in 24-hour format)
- Day of month: * (any day)
- Month: * (every month)
- Day of week: 5 (Friday)
Put together: 0 17 * * 5

---
Create ~/.openclaw/agents/weekly-report.md:
Weekly Business Report Agent
Role
Compile a weekly summary of business metrics and send to my Telegram.
Data Sources
- Gmail: Count of emails sent/received this week
- CRM (Notion): Deals closed, pipeline value, new leads
- Calendar: Total meeting hours this week
- Finance (Stripe API): Revenue this week vs last week
Output Format
Weekly Report - Week of [Date]
📊 Metrics:
- Revenue: £X,XXX (↑/↓ Y% vs last week)
- Deals closed: N
- Pipeline value: £X,XXX
- New leads: N
⏰ Time Spent:
- Meetings: X hours
- Emails: X sent, Y received
🎯 Next Week Focus:
[Top 3 priorities based on pipeline and deadlines]
Cron job:
0 17 * * 5 /usr/local/bin/openclaw run weekly-report >> ~/.openclaw/logs/weekly-report.log 2>&1
---
Create ~/.openclaw/agents/invoice-reminders.md:
Invoice Reminder Agent
Role
Check for unpaid invoices older than 30 days and draft reminder emails.
Data Sources
- Accounting system (Xero/QuickBooks API): Invoices with status "Sent" or "Overdue"
- Gmail: Check if reminder already sent in last 14 days (avoid duplicate reminders)
Process
- Query invoices older than 30 days with status != "Paid"
- For each invoice:
- Check if reminder sent in last 14 days (search Gmail Sent folder)
- If NOT sent, draft reminder email
- Send drafts to my Telegram for approval (DO NOT auto-send)
Email Template
Subject: Reminder: Invoice #[NUMBER] - £[AMOUNT] outstanding
Hi [Client Name],
I hope you're well. I'm following up on Invoice #[NUMBER] for £[AMOUNT], issued on [DATE].
This invoice is now [DAYS] days overdue. Could you please confirm when payment will be made?
If you've already paid, please let me know so I can update my records.
Thanks,
Dan
Cron job:
0 9 25 * * /usr/local/bin/openclaw run invoice-reminders >> ~/.openclaw/logs/invoice-reminders.log 2>&1
---
Edit ~/.openclaw/agents/daily-brief.md and add:
Silent Hours
- Do NOT run between 7pm (19:00) and 8am (08:00)
- Do NOT run on Saturday or Sunday
- If triggered during silent hours, exit immediately with log: "Skipped - silent hours"
Implementation
Before processing, check:
```python
import datetime
import sys

def log(message):
    # Stand-in for the agent's logging; prints to the run log
    print(message)

now = datetime.datetime.now()
hour = now.hour
day = now.strftime("%A")

if hour < 8 or hour >= 19:
    log("Skipped - outside working hours")
    sys.exit()

if day in ["Saturday", "Sunday"]:
    log("Skipped - weekend")
    sys.exit()
```
Cron job (runs every day at 8am, but agent self-filters weekends):
```bash
0 8 * * * /usr/local/bin/openclaw run daily-brief >> ~/.openclaw/logs/daily-brief.log 2>&1
```
---
Part 5: Troubleshooting Cron Jobs
Problem 1: Cron Job Doesn't Run
Symptoms: It's 8am, no daily brief arrived.
Diagnosis:
```bash
sudo launchctl list | grep cron           # macOS
systemctl status cron                     # Linux
crontab -l
tail -f /var/log/syslog | grep CRON       # Linux
tail -f /var/log/system.log | grep cron   # macOS
```
Common causes:
- Cron daemon not running (restart with sudo systemctl restart cron on Linux)
- Wrong time zone (cron uses system time; check with date)
- Syntax error in crontab (validate at crontab.guru)
---
Problem 2: Cron Runs But Agent Fails
Symptoms: Cron triggers, but no Telegram message. Log shows errors.
Diagnosis:
```bash
tail -50 ~/.openclaw/logs/daily-brief.log
/usr/local/bin/openclaw run daily-brief
```
Common causes:
- Missing environment variables (cron doesn't load .bashrc or .zshrc)
- API tokens not accessible (cron runs with a limited PATH and environment)
- Wrong file permissions (agent file not readable)
Fix: Add environment variables to crontab
```bash
crontab -e
```
Add these lines at the TOP of your crontab (before any cron jobs):
```cron
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
HOME=/Users/YOUR_USERNAME
ANTHROPIC_API_KEY=sk-ant-your-key-here
```
Now your cron jobs have access to the same environment as your terminal sessions.
---
Problem 3: Cron Job Runs Multiple Times
Symptoms: You get 3 daily briefs at 8am instead of 1.
Diagnosis:
```bash
crontab -l | grep daily-brief
```
Cause: You accidentally added the same cron job multiple times.
Fix: Edit your crontab and delete the duplicate lines:
```bash
crontab -e
```
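If you'd rather not hunt for duplicates by eye, exact duplicate lines can be filtered out mechanically. A small sketch using the classic `awk '!seen[$0]++'` idiom, which keeps the first occurrence of each line:

```shell
# dedupe: drop exact duplicate lines, keeping the first occurrence of each
dedupe() {
    awk '!seen[$0]++'
}
```

Then `crontab -l | dedupe` previews the cleaned list; once it looks right, `crontab -l | dedupe | crontab -` reinstalls it. Always preview first - `crontab -` replaces your whole crontab.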
---
Part 6: Advanced Patterns
Pattern 1: Conditional Execution (Only Run If Data Changed)
Use case: Weekly report only runs if there's new data (avoid empty reports).
Add this to your weekly-report.md agent:
```markdown
Before generating the report, check this week's data sources. If there is no new activity:
- Log: "No significant activity this week - report skipped"
- Exit without sending Telegram message
```
Your cron job runs every Friday, but the agent decides whether to actually send a report.
---
Pattern 2: Retry on Failure
Use case: If Gmail API is down at 8am, retry at 8:30am and 9am.
Create a wrapper script: ~/.openclaw/scripts/daily-brief-retry.sh
```bash
#!/bin/bash
LOG_FILE="$HOME/.openclaw/logs/daily-brief.log"

/usr/local/bin/openclaw run daily-brief >> "$LOG_FILE" 2>&1

if [ $? -ne 0 ]; then
    echo "First attempt failed. Retrying in 30 minutes..." >> "$LOG_FILE"
    sleep 1800  # 30 minutes
    /usr/local/bin/openclaw run daily-brief >> "$LOG_FILE" 2>&1
fi
```
Make it executable:
```bash
chmod +x ~/.openclaw/scripts/daily-brief-retry.sh
```
Update crontab to use the wrapper:
```bash
0 8 * * * /Users/YOUR_USERNAME/.openclaw/scripts/daily-brief-retry.sh
```
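If one retry isn't enough, the same idea generalises to a small helper function. This is a sketch - the try count and delay are illustrative knobs, not OpenClaw settings:

```shell
# retry MAX DELAY CMD...: run CMD up to MAX times, sleeping DELAY seconds between tries
retry() {
    max=$1; delay=$2; shift 2
    i=1
    while [ "$i" -le "$max" ]; do
        "$@" && return 0
        echo "Attempt $i failed; retrying in ${delay}s" >&2
        sleep "$delay"
        i=$((i + 1))
    done
    return 1
}
```

Inside the wrapper script you could then write `retry 3 1800 /usr/local/bin/openclaw run daily-brief >> "$LOG_FILE" 2>&1` to attempt the brief three times, half an hour apart.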
---
Pattern 3: Staggered Schedules (Avoid API Rate Limits)
Use case: You have 5 agents that all query Gmail API. Running them simultaneously hits rate limits.
Solution: Stagger by 10 minutes.
```cron
0 8 * * *  /usr/local/bin/openclaw run daily-brief
10 8 * * * /usr/local/bin/openclaw run email-triage
20 8 * * * /usr/local/bin/openclaw run task-review
30 8 * * * /usr/local/bin/openclaw run calendar-prep
40 8 * * * /usr/local/bin/openclaw run meeting-notes-summary
```
Each agent runs 10 minutes apart. No rate limit collisions.
---
Part 7: Maintenance & Monitoring
Log Rotation (Prevent Disk Space Issues)
Cron job logs grow over time. Set up automatic log rotation:
Create ~/.openclaw/scripts/rotate-logs.sh:
```bash
#!/bin/bash
LOG_DIR="$HOME/.openclaw/logs"

# Compress logs older than 7 days
find "$LOG_DIR" -name "*.log" -mtime +7 -exec gzip {} \;

# Delete compressed logs older than 30 days
find "$LOG_DIR" -name "*.log.gz" -mtime +30 -delete
```
Make executable:
```bash
chmod +x ~/.openclaw/scripts/rotate-logs.sh
```
Add to crontab (runs daily at 2am):
```bash
0 2 * * * /Users/YOUR_USERNAME/.openclaw/scripts/rotate-logs.sh
```
---
Weekly Cron Health Check
Create an agent that verifies your cron jobs are running correctly:
~/.openclaw/agents/cron-health-check.md:
```markdown
Verify all cron jobs executed in the last 7 days. Alert if any failed.

For each log file in ~/.openclaw/logs/:
- Check last modified date (should be within 7 days)
- Check for error patterns ("failed", "timeout", "exception")

Alert format:

Job: [name]
Last run: [date]
Status: FAILED
Error: [first line of error]
See full log: ~/.openclaw/logs/[name].log

Run every Sunday at 6pm.
```
Crontab:
```bash
0 18 * * 0 /usr/local/bin/openclaw run cron-health-check >> ~/.openclaw/logs/cron-health-check.log 2>&1
```
---
Create an agent that runs every Friday at 4pm and asks you via Telegram:
> "What were your 3 biggest wins this week?"
You reply via Telegram. The agent:
- Uses the /send-telegram tool to prompt for wins via Telegram
- Runs on a cron schedule for "every Friday at 4pm" (0 16 * * 5)

---
✅ Cron job fundamentals: Schedule syntax, testing, troubleshooting
✅ Real-world templates: Daily briefs, weekly reports, invoice reminders
✅ Advanced patterns: Conditional execution, retries, staggered schedules
✅ Maintenance: Log rotation, health checks, silent hours
---
Your OpenClaw agents run on your computer. They work 24/7 via cron jobs. But there's a limitation: you can only interact with them when you're sitting at your desk.
Client calls while you're out? You can't ask your agent to pull their file.
Idea hits you on a walk? You can't tell your content agent to draft it.
Emergency at 11pm? You can't trigger your monitoring agent from bed.
This is the missing piece: gateways. Gateways turn your OpenClaw agents into teammates you can message from your phone, just like texting a colleague. Telegram, Slack, iMessage, WhatsApp - you pick the app you already use.

---
Riley Brown (creator of OpenClaw) runs 11 cloud agents + 1 Mac Mini agent. Without gateways, he'd need to remote desktop into his computer every time he wanted something. With gateways, he just texts his agents.
OpenClaw supports 4 gateway platforms:
| Platform | Best For | Setup Time |
|----------|----------|------------|
| Telegram | Personal use, fastest setup | 15 min |
| Slack | Team collaboration, company workspace | 25 min |
| iMessage | Apple ecosystem, minimal friction | 40 min (Mac only) |
| WhatsApp | International teams, non-technical users | 35 min |
This module covers Telegram (easiest) and Slack (most common for teams). iMessage and WhatsApp follow similar patterns - see docs.openclaw.ai/gateways for full guides.

---
Message @BotFather on Telegram and send /newbot. When prompted:
- Bot name: "My OpenClaw Assistant" (display name)
- Bot username: "myopenclawbot" (must end in bot, must be unique)

BotFather replies with a token like 1234567890:ABCdefGHIjklMNOpqrsTUVwxyz. Copy it.

Tell OpenClaw about your Telegram bot:
openclaw gateway add telegram
When prompted, paste your bot token and allow all commands (you can restrict later).

OpenClaw will create ~/.openclaw/gateways/telegram.yml:
platform: telegram
bot_token: "1234567890:ABCdefGHIjklMNOpqrsTUVwxyz"
allowed_users: []
allowed_commands: all
webhook_url: null
polling_interval: 2
openclaw gateway start telegram
You should see:
✓ Telegram gateway started
✓ Polling for messages every 2 seconds
✓ Send /start to your bot to begin
Send /start to your bot. It replies:

Welcome! Your chat ID is 123456789.
Available commands:
/run [agent-name] - Run an agent
/list - List available agents
/status - Check agent status
/logs [agent-name] - View recent logs
/help - Show all commands
Copy your chat ID (the number in the welcome message).
Stop the gateway (Ctrl+C in the terminal running it).
Edit the gateway config:
nano ~/.openclaw/gateways/telegram.yml
Add your chat ID to allowed_users:
allowed_users:
- 123456789
Save (Ctrl+X, Y, Enter).
Restart the gateway:
openclaw gateway start telegram
From Telegram on your phone, send:
/run daily-brief
Within 5 seconds, your bot should reply with your daily brief (the agent from Module 4).
If it works, you're done. Your agents are now phone-accessible.

---
Typing /run send-proposal-to John Smith on mobile is tedious. Shortcuts fix this.
Edit your gateway config:
nano ~/.openclaw/gateways/telegram.yml
Add a shortcuts section:
shortcuts:
brief: "run daily-brief"
proposals: "run proposal-generator"
leads: "run lead-qualifier"
report: "run weekly-report"
Restart the gateway. Now you can type:
/brief
Instead of:
/run daily-brief
You can pass parameters to agents:
shortcuts:
pitch: "run proposal-generator --client=$1 --budget=$2"
Usage:
/pitch "Acme Corp" "15000"
This runs:
openclaw run proposal-generator --client="Acme Corp" --budget="15000"
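The $1/$2 substitution behaves like simple positional templating. A sketch of that expansion logic, purely illustrative (this is not OpenClaw's actual parser):

```python
def expand_shortcut(template: str, args: list) -> str:
    """Replace $1, $2, ... in a shortcut template with positional arguments."""
    # Substitute highest numbers first so $10 isn't clobbered by $1
    for i in range(len(args), 0, -1):
        template = template.replace(f"${i}", args[i - 1])
    return template
```

So the template "run proposal-generator --client=$1 --budget=$2" with arguments ["Acme Corp", "15000"] expands to the full command shown above.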
---
When you close your terminal, the gateway stops. For 24/7 access, run it as a system service.
Create the service file:
nano ~/Library/LaunchAgents/com.openclaw.telegram-gateway.plist
Paste (replace YOUR_USERNAME with your actual username):
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.openclaw.telegram-gateway</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/openclaw</string>
        <string>gateway</string>
        <string>start</string>
        <string>telegram</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/Users/YOUR_USERNAME/.openclaw/logs/telegram-gateway.log</string>
    <key>StandardErrorPath</key>
    <string>/Users/YOUR_USERNAME/.openclaw/logs/telegram-gateway-error.log</string>
</dict>
</plist>
```
Load the service:
launchctl load ~/Library/LaunchAgents/com.openclaw.telegram-gateway.plist
The gateway now runs 24/7, even after reboots.
To stop it:
launchctl unload ~/Library/LaunchAgents/com.openclaw.telegram-gateway.plist
For Linux, create /etc/systemd/system/openclaw-telegram.service:
[Unit]
Description=OpenClaw Telegram Gateway
After=network.target
[Service]
Type=simple
User=YOUR_USERNAME
ExecStart=/usr/local/bin/openclaw gateway start telegram
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
Enable and start:
sudo systemctl enable openclaw-telegram
sudo systemctl start openclaw-telegram
---
In your Slack app settings, add these bot token scopes:
- chat:write (send messages)
- commands (handle slash commands)
- files:write (send files)
- channels:read (list channels)
- groups:read (list private channels)
Install the app to your workspace and copy the Bot User OAuth Token (starts with xoxb-). Then register it:

openclaw gateway add slack

When prompted, paste the xoxb- token.

In Slack app settings, create a Slash Command:
- Command: /openclaw
- Request URL: https://your-tailscale-url/webhook/slack (we'll set this up in Part 6)
- Short description: "Run OpenClaw agents"
- Usage hint: [agent-name] [args]
Start the Slack gateway:
openclaw gateway start slack
In any Slack channel, type:
/openclaw run daily-brief
The bot should reply with your brief in a thread (visible only to you).
You can route channels to specific agents:
Edit ~/.openclaw/gateways/slack.yml:
channel_agents:
C01234ABC: sales-agent # #sales channel
C56789XYZ: support-agent # #support channel
G98765DEF: proposal-generator # #proposals private channel
Get channel IDs: right-click channel → View channel details → scroll to bottom.
Now when anyone in #sales types /openclaw run, it automatically uses the sales-agent.
---
Your OpenClaw agents run on your laptop/Mac Mini at home. When you're on the road, you can't reach them directly (they're behind your home router).
Tailscale creates a secure private network between your devices. It's like your devices are on the same Wi-Fi, even when they're not.
macOS:
brew install tailscale
Linux:
curl -fsSL https://tailscale.com/install.sh | sh
Windows: Download from https://tailscale.com/download
sudo tailscale up
This opens a browser to log in. Use Google/Microsoft/GitHub account (or create Tailscale account).
Your device is now on your Tailscale network.
tailscale ip
You'll see something like: 100.64.0.5
This is your device's private IP on the Tailscale network.
OpenClaw includes a built-in webhook server for receiving gateway commands.
Start the webhook server:
openclaw serve --port 8080
Expose it on Tailscale:
tailscale serve https / http://localhost:8080
You'll see:
Available within your Tailscale network at:
https://your-machine-name.tailnet-name.ts.net
This URL is now accessible from any device logged into your Tailscale network (your phone, laptop, tablet).
If you want Telegram/Slack to push messages to your server (instead of polling), update your gateway configs:
Telegram:

webhook_url: "https://your-machine-name.tailnet-name.ts.net/webhook/telegram"
polling_interval: null
Slack:
Use the same URL in your Slack app's Slash Commands → Request URL.
Install Tailscale app on your phone (iOS/Android).
Log in with the same account.
Now your phone can reach your OpenClaw instance via https://your-machine-name.tailnet-name.ts.net.
---
Your gateway configs should ALWAYS have allowed_users or allowed_channels defined.
Bad (anyone who finds your bot can use it):
allowed_users: []
Good (only you can use them):
allowed_users:
- 123456789
Some agents should be read-only (daily-brief, status-check). Others can write (proposal-generator, email-sender).
In your agent definition (~/.openclaw/agents/agent-name.md), specify:
permissions:
read: [gmail, calendar, notion]
write: []
OpenClaw will enforce this. If the agent tries to send an email but write: [gmail] isn't listed, it fails.
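That enforcement amounts to an allow-list check. The following is a hypothetical sketch of what such a gate might look like, mirroring the agent-file structure above - it is not OpenClaw's actual implementation:

```python
# Hypothetical permission gate; keys mirror the agent-file "permissions" block.
def is_allowed(permissions: dict, action: str, service: str) -> bool:
    """Allow an action only if the service appears under that action's list."""
    return service in permissions.get(action, [])

perms = {"read": ["gmail", "calendar", "notion"], "write": []}
```

With this `perms`, `is_allowed(perms, "read", "gmail")` passes, while `is_allowed(perms, "write", "gmail")` fails - so an attempted email send would be rejected.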
If you run agents for work AND personal projects, use separate gateways:
This prevents accidentally running your work proposal-generator for a personal project (or vice versa).
Check who's using your agents:
tail -f ~/.openclaw/logs/telegram-gateway.log
Look for:
Set up alerts (Module 8: Heartbeat Monitoring) to notify you of suspicious activity.
Telegram and Slack tokens don't expire, but you should rotate them periodically:
- Generate a new token (@BotFather /token for Telegram, or the Slack app settings)
- Update ~/.openclaw/gateways/*.yml with the new token

---
Practice commands:
- /pitch "Acme Corp" "Website redesign" "15000"
- /run meeting-notes

---
Problem: You send /run daily-brief in Telegram, but get no response.
Diagnosis:
```bash
# Check if gateway is running
ps aux | grep "openclaw gateway"

# Check logs
tail -50 ~/.openclaw/logs/telegram-gateway.log
```
Common causes:
- Gateway not started (run openclaw gateway start telegram)
- Your chat ID missing from allowed_users

Problem: The bot replies that the agent was not found.

```bash
# List available agents
openclaw list agents

# Check agent file exists
ls ~/.openclaw/agents/
```

Fix: Verify agent name matches filename (e.g., daily-brief.md = agent name daily-brief).
Problem: Can't reach https://your-machine-name.tailnet-name.ts.net from phone.
Diagnosis:
```bash
# Check Tailscale status
tailscale status

# Verify serve is running
curl http://localhost:8080
```
Common causes:
- openclaw serve not running (start it)
- Firewall blocking port 8080 (sudo ufw allow 8080 on Linux)

Problem: /openclaw run daily-brief in Slack shows a "dispatch failed" error.
Diagnosis:
Check Slack app event logs. Common causes:
- Request URL wrong or unreachable (should be https://your-tailscale-url/webhook/slack)
- Slack gateway not running (start with openclaw gateway start slack)

---
Before moving on, verify:
- /run daily-brief works from your phone
- tailscale serve is running
- https://your-machine-name.tailnet-name.ts.net is reachable from your phone
- /openclaw run daily-brief works in Slack

---
In Module 6: Mission Controls, you'll build a Notion dashboard that shows:
This turns your phone-accessible agents into a monitored AI team with real-time visibility.
But first, make sure your Telegram gateway is working. Everything from here builds on mobile access.
---
| Command | Description |
|---------|-------------|
| /run [agent] | Run an agent |
| /list | List all available agents |
| /status | Check gateway status |
| /logs [agent] | View recent logs for an agent |
| /help | Show all commands |
Start gateway
openclaw gateway start telegram
Stop gateway
(Ctrl+C if running in terminal, or:)
launchctl unload ~/Library/LaunchAgents/com.openclaw.telegram-gateway.plist
View gateway logs
tail -f ~/.openclaw/logs/telegram-gateway.log
List active gateways
openclaw gateway list
Test gateway manually
openclaw gateway test telegram
Check Tailscale status
tailscale status
Restart Tailscale serve
tailscale serve https / http://localhost:8080
Key files:
- ~/.openclaw/gateways/telegram.yml
- ~/.openclaw/gateways/slack.yml
- ~/.openclaw/logs/telegram-gateway.log
- ~/Library/LaunchAgents/com.openclaw.telegram-gateway.plist

---
You now have phone-in-pocket AI agents. Next: visibility into what they're doing.

---
By the end of this module, you'll have a centralized Notion dashboard that gives you complete visibility into your OpenClaw operations:
Think of this as your "mission control" - one place to see everything happening in your AI team.
---
Without visibility, your AI agents are a black box. You don't know:
Andrew Chen (consultant, 5 agents running) discovered he was spending £340/month on an email agent that was stuck in a loop, retrying the same failed API call 400+ times per day. He only noticed when his Anthropic bill arrived.
With a dashboard: He would have seen the spike in tokens within 2 hours and killed the loop before it cost £300.

---
Your mission control setup has 3 components:
┌─────────────────────────────────────┐
│ Notion Dashboard (read) │
│ ┌─────────────────────────────┐ │
│ │ Agent Activity │ │
│ │ Cost Tracker │ │
│ │ Performance Metrics │ │
│ │ Health Status │ │
│ └─────────────────────────────┘ │
└────────────▲────────────────────────┘
│
│ (write updates)
│
┌────────────┴────────────────────────┐
│ Status Reporter Agent │
│ - Runs every 5 minutes (cron) │
│ - Queries .openclaw/state/ │
│ - Posts to Notion API │
└────────────▲────────────────────────┘
│
│ (read state)
│
┌────────────┴────────────────────────┐
│ OpenClaw Agents │
│ .openclaw/state/*.json │
└─────────────────────────────────────┘
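For reference, a state file the reporter consumes might look like the following - the field names here are an assumption; they mirror the dashboard columns this module tracks:

```json
{
  "name": "email-triage",
  "status": "Idle",
  "last_run": "2024-03-08T08:05:12Z",
  "duration_seconds": 12.3,
  "tokens_used": 1840,
  "success_rate_24h": 97,
  "current_task": "None",
  "last_error": null
}
```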
---
Create a Notion integration named "OpenClaw Mission Control" with these capabilities:
- ✅ Read content
- ✅ Update content
- ✅ Insert content
Save the integration token to ~/.openclaw/credentials/notion-token.txt (NOT in your git repo):

mkdir -p ~/.openclaw/credentials
echo "your_integration_token_here" > ~/.openclaw/credentials/notion-token.txt
chmod 600 ~/.openclaw/credentials/notion-token.txt
| Property Name | Type | Description |
|--------------|------|-------------|
| Agent Name | Title | Name of the agent (e.g. "Email Triage") |
| Status | Select | Running / Idle / Failed / Paused |
| Last Run | Date | When the agent last executed |
| Duration | Number | How long the last run took (seconds) |
| Tokens Used | Number | Total tokens consumed in last run |
| Cost (£) | Formula | prop("Tokens Used") * 0.000015 |
| Success Rate | Number | % of successful runs (last 24h) |
| Current Task | Text | What the agent is working on |
| Last Error | Text | Most recent error message (if any) |
- Click "Share" in the top right
- Search for "OpenClaw Mission Control"
- Click "Invite"
- Open the database as a full page
- Copy the URL: https://www.notion.so/yourworkspace/abc123?v=xyz
- The database ID is the abc123 part (32 characters)
- Save it: echo "your_database_id" > ~/.openclaw/credentials/notion-db-id.txt
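If you'd rather script that extraction step, the ID can be pulled out of the URL programmatically. A small sketch - it assumes the ID is a 32-character lowercase-hex run, which holds for standard Notion database URLs:

```python
import re

def notion_db_id(url: str) -> str:
    """Extract the 32-hex-character database ID from a Notion database URL."""
    match = re.search(r"[0-9a-f]{32}", url)
    if not match:
        raise ValueError("no 32-character database ID found in URL")
    return match.group(0)
```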
---
This agent runs every 5 minutes, checks what your other agents are doing, and updates the Notion dashboard.
Create .openclaw/agents/status-reporter.json:
{
"name": "status-reporter",
"description": "Updates Notion dashboard with agent activity and performance metrics",
"model": "claude-haiku-4",
"schedule": "*/5 * * * *",
"tools": ["bash", "read"],
"memory_scope": "isolated",
"max_tokens": 1000,
"system_prompt": "You are a status reporter. Read agent state files from .openclaw/state/ and update the Notion dashboard. Be concise - you run every 5 minutes."
}
Why Haiku? Status reporting is simple data aggregation. Haiku costs £0.25 per million input tokens (vs £3 for Sonnet). Running every 5 minutes = 288 runs/day. With Haiku: ~£2/month. With Sonnet: ~£24/month.
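The arithmetic behind those estimates can be checked with a quick sketch. The per-run token count here is an assumption (roughly the max_tokens budget above), not a billed figure:

```python
# Rough cost model for the status reporter's monthly spend
runs_per_day = 24 * 60 // 5      # every 5 minutes -> 288 runs/day
tokens_per_run = 1_000           # assumed: roughly the max_tokens budget

def monthly_cost(price_per_million_gbp: float, days: int = 30) -> float:
    """Estimated monthly spend in £ at a given per-million-token price."""
    tokens = runs_per_day * days * tokens_per_run
    return tokens / 1_000_000 * price_per_million_gbp
```

At £0.25 per million tokens this comes out around £2.16 for a 30-day month, matching the ~£2 figure; the Sonnet estimate scales linearly with the price.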
Create ~/.openclaw/skills/update-notion-dashboard.sh:
```bash
#!/bin/bash
# Update Notion dashboard with current agent status
# Called by status-reporter agent every 5 minutes

NOTION_TOKEN=$(cat ~/.openclaw/credentials/notion-token.txt)
NOTION_DB=$(cat ~/.openclaw/credentials/notion-db-id.txt)
STATE_DIR="$HOME/.openclaw/state"

# Check if state directory exists
if [ ! -d "$STATE_DIR" ]; then
    echo "Error: State directory not found at $STATE_DIR"
    exit 1
fi

# Process each agent's state file
for state_file in "$STATE_DIR"/*.json; do
    [ -e "$state_file" ] || continue

    AGENT_NAME=$(jq -r '.name' "$state_file")
    STATUS=$(jq -r '.status' "$state_file")
    LAST_RUN=$(jq -r '.last_run' "$state_file")
    DURATION=$(jq -r '.duration_seconds' "$state_file")
    TOKENS=$(jq -r '.tokens_used' "$state_file")
    SUCCESS_RATE=$(jq -r '.success_rate_24h' "$state_file")
    CURRENT_TASK=$(jq -r '.current_task' "$state_file")
    LAST_ERROR=$(jq -r '.last_error // "None"' "$state_file")

    # Search for existing row in Notion
    SEARCH_RESPONSE=$(curl -s -X POST \
      "https://api.notion.com/v1/databases/$NOTION_DB/query" \
      -H "Authorization: Bearer $NOTION_TOKEN" \
      -H "Notion-Version: 2022-06-28" \
      -H "Content-Type: application/json" \
      -d '{
        "filter": {
          "property": "Agent Name",
          "title": {
            "equals": "'"$AGENT_NAME"'"
          }
        }
      }')

    PAGE_ID=$(echo "$SEARCH_RESPONSE" | jq -r '.results[0].id // empty')

    # Build the update payload
    PAYLOAD=$(cat <<EOF
{
  "properties": {
    "Agent Name": {
      "title": [{"text": {"content": "$AGENT_NAME"}}]
    },
    "Status": {
      "select": {"name": "$STATUS"}
    },
    "Last Run": {
      "date": {"start": "$LAST_RUN"}
    },
    "Duration": {
      "number": $DURATION
    },
    "Tokens Used": {
      "number": $TOKENS
    },
    "Success Rate": {
      "number": $SUCCESS_RATE
    },
    "Current Task": {
      "rich_text": [{"text": {"content": "$CURRENT_TASK"}}]
    },
    "Last Error": {
      "rich_text": [{"text": {"content": "$LAST_ERROR"}}]
    }
  }
}
EOF
)

    if [ -n "$PAGE_ID" ]; then
        # Update existing row
        curl -s -X PATCH \
          "https://api.notion.com/v1/pages/$PAGE_ID" \
          -H "Authorization: Bearer $NOTION_TOKEN" \
          -H "Notion-Version: 2022-06-28" \
          -H "Content-Type: application/json" \
          -d "$PAYLOAD" > /dev/null
    else
        # Create new row (splice the parent field into the payload)
        curl -s -X POST \
          "https://api.notion.com/v1/pages" \
          -H "Authorization: Bearer $NOTION_TOKEN" \
          -H "Notion-Version: 2022-06-28" \
          -H "Content-Type: application/json" \
          -d '{"parent": {"database_id": "'"$NOTION_DB"'"}, '"${PAYLOAD#\{}" > /dev/null
    fi
done

echo "Dashboard updated successfully at $(date)"
```
Make it executable:
chmod +x ~/.openclaw/skills/update-notion-dashboard.sh
```bash
# Run manually to verify it works
~/.openclaw/skills/update-notion-dashboard.sh
```
Check your Notion dashboard - you should see rows populated.
Troubleshooting:
- No state files yet? Run an agent (e.g. openclaw agent run email-triage) first.
- Notion API errors? Re-check the token in ~/.openclaw/credentials/notion-token.txt.

---
Token usage in Notion is useful, but you want to see actual £ costs from your API providers.
Create ~/.openclaw/skills/fetch-api-costs.sh:
```bash
#!/bin/bash
# Fetch actual API costs from Anthropic and OpenAI
# Run once per day via cron

ANTHROPIC_KEY=$(cat ~/.openclaw/credentials/anthropic-api-key.txt)
OPENAI_KEY=$(cat ~/.openclaw/credentials/openai-api-key.txt)
COST_LOG="$HOME/.openclaw/logs/daily-costs.json"

# Create log file if it doesn't exist
mkdir -p "$HOME/.openclaw/logs"
touch "$COST_LOG"

TODAY=$(date -u +"%Y-%m-%d")

# Fetch Anthropic usage (last 24 hours)
ANTHROPIC_USAGE=$(curl -s -X GET \
  "https://api.anthropic.com/v1/usage?start_date=$TODAY&end_date=$TODAY" \
  -H "x-api-key: $ANTHROPIC_KEY" \
  -H "anthropic-version: 2023-06-01")

ANTHROPIC_TOKENS=$(echo "$ANTHROPIC_USAGE" | jq -r '.total_tokens')
ANTHROPIC_COST=$(echo "$ANTHROPIC_TOKENS * 0.000015" | bc -l)

# Fetch OpenAI usage (last 24 hours)
OPENAI_USAGE=$(curl -s -X GET \
  "https://api.openai.com/v1/usage?date=$TODAY" \
  -H "Authorization: Bearer $OPENAI_KEY")

OPENAI_TOKENS=$(echo "$OPENAI_USAGE" | jq -r '.total_tokens')
OPENAI_COST=$(echo "$OPENAI_TOKENS * 0.000002" | bc -l)

TOTAL_COST=$(echo "$ANTHROPIC_COST + $OPENAI_COST" | bc -l)

# Log to file
cat >> "$COST_LOG" <<EOF
{
  "date": "$TODAY",
  "anthropic_tokens": $ANTHROPIC_TOKENS,
  "anthropic_cost_gbp": $ANTHROPIC_COST,
  "openai_tokens": $OPENAI_TOKENS,
  "openai_cost_gbp": $OPENAI_COST,
  "total_cost_gbp": $TOTAL_COST
}
EOF

echo "Daily costs logged: £$TOTAL_COST"

# Optional: Send to Notion (add a "Daily Costs" database)
# curl -X POST https://api.notion.com/v1/pages ...
```
Make it executable:
chmod +x ~/.openclaw/skills/fetch-api-costs.sh
Run this once per day at 23:55 (just before midnight):
crontab -e
Add:
55 23 * * * /Users/yourname/.openclaw/skills/fetch-api-costs.sh >> /Users/yourname/.openclaw/logs/cost-tracker.log 2>&1
Why 23:55? Gives the script time to fetch the full day's usage before the date rolls over.
---
---
Want to know immediately when an agent fails? Add alerts to your status reporter.
Modify update-notion-dashboard.sh to add this after the Notion update:
```bash
# Alert on failures
if [ "$STATUS" = "Failed" ]; then
    SLACK_WEBHOOK=$(cat ~/.openclaw/credentials/slack-webhook.txt)
    curl -s -X POST "$SLACK_WEBHOOK" \
      -H "Content-Type: application/json" \
      -d '{
        "text": "🚨 Agent Failed: '"$AGENT_NAME"'",
        "blocks": [
          {
            "type": "section",
            "text": {
              "type": "mrkdwn",
              "text": "Agent: '"$AGENT_NAME"'\nError: '"$LAST_ERROR"'\nLast Run: '"$LAST_RUN"'"
            }
          }
        ]
      }'
fi
```
Get a Slack webhook:
- Create an incoming webhook in your Slack app settings
- Save the webhook URL to ~/.openclaw/credentials/slack-webhook.txt

Add this to fetch-api-costs.sh:
```bash
# Alert if daily cost exceeds £20
if (( $(echo "$TOTAL_COST > 20" | bc -l) )); then
    echo "Warning: High API costs today (£$TOTAL_COST)" | \
      mail -s "OpenClaw Cost Alert" your-email@example.com
fi
```
---
If you're running OpenClaw for multiple clients, you want to bill them accurately.
Solution: Tag agents by project in their config:

{
"name": "client-acme-email-triage",
"tags": ["client:acme", "billable"],
...
}
Then in fetch-api-costs.sh, group costs by tag:
```bash
# Sum tokens by client tag (-s slurps all state files into one array)
ACME_TOKENS=$(jq -s '[.[] | select(.tags[]? == "client:acme") | .tokens_used] | add' \
  "$STATE_DIR"/*.json)

ACME_COST=$(echo "$ACME_TOKENS * 0.000015" | bc -l)
echo "Client ACME: £$ACME_COST"
```
Add this to your invoice: "AI Automation Services: £X.XX (based on metered usage)"
Want to cap your monthly spend at £500?
Create ~/.openclaw/skills/check-monthly-budget.sh:
```bash
#!/bin/bash
BUDGET_CAP=500
CURRENT_MONTH=$(date +"%Y-%m")
COST_LOG="$HOME/.openclaw/logs/daily-costs.json"

# Sum all costs for current month (-s slurps the appended JSON objects into one array)
MONTH_SPEND=$(jq -s --arg month "$CURRENT_MONTH" \
  '[.[] | select(.date | startswith($month)) | .total_cost_gbp] | add' \
  "$COST_LOG")

if (( $(echo "$MONTH_SPEND > $BUDGET_CAP" | bc -l) )); then
    echo "🚨 Budget exceeded: £$MONTH_SPEND / £$BUDGET_CAP"

    # Pause all non-critical agents
    openclaw agent pause --tag non-critical

    # Send alert
    echo "Monthly budget exceeded. All non-critical agents paused." | \
      mail -s "OpenClaw Budget Alert" your-email@example.com
fi
```
Run this daily:
0 9 * * * /Users/yourname/.openclaw/skills/check-monthly-budget.sh
Track how your agents improve over time.
In Notion, add a Performance Log database:
| Date | Agent Name | Avg Duration (s) | Success Rate (%) | Cost per Task (£) |
|------|------------|------------------|------------------|-------------------|
| 2024-03-01 | Email Triage | 12.3 | 94% | 0.0024 |
| 2024-03-08 | Email Triage | 8.1 | 97% | 0.0016 |
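The percentages in an insight like this are worth double-checking straight from the two rows; a quick shell computation:

```shell
# Recompute the week-over-week deltas from the two Email Triage rows.
SPEEDUP=$(awk 'BEGIN { printf "%.0f", (12.3 - 8.1) / 12.3 * 100 }')
COST_DROP=$(awk 'BEGIN { printf "%.0f", (0.0024 - 0.0016) / 0.0024 * 100 }')
echo "${SPEEDUP}% faster, ${COST_DROP}% cheaper"   # 34% faster, 33% cheaper
```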
Insight: After optimizing the system prompt, email triage got 34% faster and 33% cheaper.
Log this weekly via a cron job:
#!/bin/bash
# Log weekly performance baselines to Notion
WEEK=$(date +"%Y-W%V")
for state_file in ~/.openclaw/state/*.json; do
# Calculate weekly averages
AVG_DURATION=$(jq -r '.duration_seconds' "$state_file")
SUCCESS_RATE=$(jq -r '.success_rate_24h' "$state_file")
# ... post to Notion Performance Log database
done
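The "post to Notion" step could look roughly like this. The database ID, token variable, and property names are placeholders matching the Performance Log table above; the Notion pages endpoint and `Notion-Version` header are the standard Notion API shape:

```shell
# Hypothetical sketch of the "post to Notion" step: build the JSON body for
# the Notion pages API. NOTION_DB_ID, NOTION_TOKEN and the property names
# are placeholders, not OpenClaw-provided values.
NOTION_DB_ID="${NOTION_DB_ID:-your-database-id}"
WEEK="2026-W10"; AGENT="email-triage"; AVG_DURATION="8.1"; SUCCESS_RATE="97"
PAYLOAD=$(cat <<EOF
{
  "parent": {"database_id": "$NOTION_DB_ID"},
  "properties": {
    "Date": {"title": [{"text": {"content": "$WEEK"}}]},
    "Agent Name": {"rich_text": [{"text": {"content": "$AGENT"}}]},
    "Avg Duration (s)": {"number": $AVG_DURATION},
    "Success Rate (%)": {"number": $SUCCESS_RATE}
  }
}
EOF
)
# curl -s -X POST "https://api.notion.com/v1/pages" \
#   -H "Authorization: Bearer $NOTION_TOKEN" \
#   -H "Notion-Version: 2022-06-28" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```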
---
prop("Tokens Used") * 0.000015prop("Tokens Used") * 0.00000025prop("Tokens Used") * 0.00003If you use multiple models, add a "Model" property and use:
if(prop("Model") == "Sonnet", prop("Tokens Used") * 0.000015,
if(prop("Model") == "Haiku", prop("Tokens Used") * 0.00000025, 0))
ls -la ~/.openclaw/state/
openclaw agent run email-triage
cat ~/.openclaw/state/email-triage.json
tail -f ~/.openclaw/logs/status-reporter.log
---
Before moving to Module 7, verify:
---
You now have complete visibility into your AI operations:
---
Right now, each agent works independently. But what if you want agents to collaborate?
Examples:
In Module 7, you'll learn:
---
Module 6 complete. You now have mission control.
Next: [Module 7 - Multi-Agent Orchestration →](module-07-multi-agent-orchestration.md)
---
By the end of this module, you'll have a coordinated team of specialized AI agents that work together autonomously:
These agents share memory, communicate results, and execute in parallel - just like a real team.
---
You start with a single AI assistant that handles everything:
Sarah runs a marketing agency. She started with one agent doing everything. After 3 weeks:
---
Best for: Independent workflows with minimal overlap
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ Email Agent │ │ Content │ │ Social Media │
│ │ │ Agent │ │ Agent │
│ Runs: Hourly │ │ Runs: Daily │ │ Runs: 3x/day │
└──────────────┘ └──────────────┘ └──────────────┘
│ │ │
└────────────────────┴─────────────────────┘
│
┌─────────▼─────────┐
│ Shared MEMORY.md │
│ (read/append only)│
└────────────────────┘
How it works:
~/.openclaw/MEMORY.md
Best for: Complex workflows requiring task routing and dependency management
┌────────────────────────────────────┐
│ Coordinator Agent │
│ - Reads incoming requests │
│ - Routes to right specialist │
│ - Tracks completion │
│ - Synthesizes results │
└────────┬───────────┬───────────────┘
│ │
┌────▼────┐ ┌────▼────┐ ┌────▼────┐
│ Writer │ │Researcher│ │Designer │
│ Agent │ │ Agent │ │ Agent │
└─────────┘ └──────────┘ └─────────┘
│ │ │
└───────────┴───────────────┘
│
┌──────────▼──────────┐
│ Task Queue (Notion)│
│ Shared MEMORY.md │
└─────────────────────┘
How it works:
- [tag: writer] Write product launch blog post
- [tag: designer] Create 5 social media graphics
- [tag: researcher] Find 10 relevant subreddits for launch announcement
Best for: Multi-stage workflows where each step depends on the previous
Input → [Research] → [Writer] → [Editor] → [Publisher] → Output
↓ ↓ ↓ ↓
MEMORY.md MEMORY.md MEMORY.md MEMORY.md
How it works:
---
The key to multi-agent coordination is shared memory - a single source of truth that all agents can read and update.
OpenClaw's MEMORY.md is designed for multi-agent workflows:
Location:~/.openclaw/MEMORY.md
Format:
Shared Memory
2026-02-28 09:15 - Email Agent
Triaged 12 new messages:
- 3 urgent (responded immediately)
- 5 scheduled for review (added to Notion)
- 4 archived (newsletters, receipts)
2026-02-28 09:30 - Research Agent
Competitor monitoring:
- Acme Corp launched new pricing (£99/mo → £79/mo)
- Startup X raised £2M Series A
- Industry report: 43% YoY growth in AI automation
2026-02-28 10:00 - Content Agent
Published: "5 Ways to Automate Customer Support"
- URL: https://yourblog.com/automate-support
- Word count: 1,200
- SEO: optimized for "customer support automation"
- Next: Social agent should promote this
2026-02-28 10:30 - Social Agent
Read MEMORY.md → saw new blog post
Posted to Twitter: [link to tweet]
Scheduled LinkedIn post for 2pm
Key principles:
Each agent's SOUL.md should reference the shared memory:
Email Agent - SOUL.md
Your Job
Triage Dan's inbox every hour. Archive spam, respond to urgent, log everything.
Reading Shared Memory
Before processing email, read ~/.openclaw/MEMORY.md (last 24 hours only).
- If Content Agent published a blog post → don't archive related replies
- If Social Agent scheduled a campaign → expect related emails
- If Research Agent flagged a competitor → prioritize their newsletters
Writing to Shared Memory
After every run, append your results:
- How many emails processed
- Any urgent actions taken
- Items that other agents should know about
Format:
[timestamp] - Email Agent
[summary of what you did]
Token optimization tip: Don't load the entire MEMORY.md on every request. Use tail -50 ~/.openclaw/MEMORY.md to read only recent entries (last ~6 hours).
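The append/tail convention can be wrapped in two tiny helpers that any agent's shell hooks can use. `MEMORY_FILE` and the function names are illustrative, not OpenClaw built-ins:

```shell
# Illustrative helpers (not OpenClaw built-ins) for the shared-memory pattern.
MEMORY_FILE="${MEMORY_FILE:-$HOME/.openclaw/MEMORY.md}"

memory_append() {  # usage: memory_append "Email Agent" "Triaged 12 messages"
  # Matches the "[timestamp] - Agent Name" entry format used above.
  printf '%s - %s\n%s\n\n' "$(date '+%Y-%m-%d %H:%M')" "$1" "$2" >> "$MEMORY_FILE"
}

memory_recent() {  # read only recent entries to keep token usage down
  tail -50 "$MEMORY_FILE"
}
```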
---
Let's build a real multi-agent system: a content factory that researches, writes, edits, and publishes automatically.
~/.openclaw/agents/research-agent/SOUL.md
Research Agent - SOUL.md
Your Job
Monitor competitors, trending topics, and industry news. Find content opportunities.
Schedule
Run daily at 9am via cron:
0 9 * * * openclaw run research-agent
Tools You Have
- Web search (Google, Bing, Twitter)
- RSS feed reader (Feedly API)
- Competitor blogs (saved in AGENTS.md)
- Subreddit monitoring (r/entrepreneur, r/startups)
What You Research
- Competitor blog posts (last 24 hours)
- Trending topics on Twitter (our industry hashtags)
- Subreddit discussions (upvotes > 100)
- Google Trends (rising queries related to our keywords)
Output Format
Append to ~/.openclaw/MEMORY.md:
[timestamp] - Research Agent
Trending topics:
- [topic 1] - [why it's relevant]
- [topic 2] - [why it's relevant]
Competitor activity:
- [competitor] published "[title]" - [key takeaway]
Content opportunities:
- [topic] - [angle we could take]
TASK FOR WRITER AGENT: Write about [topic] from [angle]
Agent config: ~/.openclaw/agents/research-agent/AGENTS.md
name: research-agent
model: claude-sonnet-4.5 # Haiku is too weak for research
max_tokens: 4000
temperature: 0.3 # Low creativity, factual research
timeout: 300 # 5 mins max (web searches can be slow)
tools:
- web_search
- rss_reader
- file_write # For appending to MEMORY.md
credentials:
- twitter_api_key: ~/.openclaw/credentials/twitter-readonly.txt
- feedly_api_key: ~/.openclaw/credentials/feedly.txt
~/.openclaw/agents/writer-agent/SOUL.md
Writer Agent - SOUL.md
Your Job
Read research from MEMORY.md, write blog posts, save drafts to Notion.
Schedule
Run daily at 10am (1 hour after Research Agent):
0 10 * * * openclaw run writer-agent
Workflow
- Read ~/.openclaw/MEMORY.md → find entries tagged "TASK FOR WRITER AGENT"
- If task found → write 1,200-word blog post on that topic
- Save draft to Notion (database: Blog Drafts)
- Append to MEMORY.md: "Draft ready: [title] - Notion ID: [id]"
Writing Style
- Conversational, practical, real examples
- Start with a problem (reader's pain point)
- 3-5 sections with clear H2 headings
- End with actionable next steps
- SEO: Include target keyword 5-7 times naturally
Quality Bar
- No fluff, no obvious statements
- Real examples (not hypothetical)
- Specific numbers (not "many" or "often")
- Cite sources when making claims
Output Format
Append to MEMORY.md:
[timestamp] - Writer Agent
Wrote: "[blog post title]"
- Topic: [from research]
- Word count: [count]
- Notion ID: [id]
- Status: Draft (needs review)
TASK FOR EDITOR AGENT: Review and publish [Notion ID]
~/.openclaw/agents/editor-agent/SOUL.md
Editor Agent - SOUL.md
Your Job
Review drafts from Writer Agent, improve quality, mark as ready to publish.
Schedule
Run daily at 11am:
0 11 * * * openclaw run editor-agent
Workflow
- Read ~/.openclaw/MEMORY.md → find "TASK FOR EDITOR AGENT"
- Load draft from Notion using the Notion ID
- Review and edit:
- Fix grammar/spelling
- Improve clarity (remove jargon)
- Add examples where needed
- Verify links work
- Check SEO (keyword density, meta description)
- Update Notion status: "Ready to Publish"
- Append to MEMORY.md confirmation
Editing Principles
- Shorter sentences (max 25 words)
- Active voice ("we built" not "was built by us")
- Remove hedging ("possibly", "might", "could")
- Add concrete numbers (not "many users" → "450 users")
Output Format
[timestamp] - Editor Agent
Edited: "[title]"
- Changes: [summary of edits]
- Status: Ready to publish
- Notion ID: [id]
TASK FOR PUBLISHER AGENT: Publish [Notion ID] today at 2pm
~/.openclaw/agents/publisher-agent/SOUL.md
Publisher Agent - SOUL.md
Your Job
Publish approved blog posts to WordPress, schedule social promotion.
Schedule
Run daily at 2pm:
0 14 * * * openclaw run publisher-agent
Workflow
- Read MEMORY.md → find "TASK FOR PUBLISHER AGENT"
- Load post from Notion (verify status = "Ready to Publish")
- Post to WordPress via API
- Create 3 social media posts (Twitter, LinkedIn, Facebook)
- Schedule social posts (today 3pm, tomorrow 10am, next week)
- Update Notion status: "Published"
- Log results to MEMORY.md
WordPress Setup
API endpoint: https://yourblog.com/wp-json/wp/v2/posts
Auth: ~/.openclaw/credentials/wordpress-token.txt
Categories: Auto-tag based on content (use AI to suggest 2-3 categories)
Featured image: Use OpenAI DALL-E to generate (store in WordPress media library)
Social Media Templates
Twitter: "[Compelling hook question]
[1-sentence value prop]
Read more: [link]"
LinkedIn: Longer format (3 paragraphs), professional tone
Facebook: Casual, question-based, emoji
Output Format
[timestamp] - Publisher Agent
Published: "[title]"
- URL: [wordpress URL]
- Social: Scheduled 3 posts (Twitter 3pm, LinkedIn 10am tomorrow, FB next Mon)
- Status: Complete
---
For more complex coordination (not just sequential pipeline), use a Notion database as a shared task queue.
Before Running Your Main Job
- Check Notion Task Queue for tasks assigned to you:
- Filter: Assigned To = [your agent name]
- Filter: Status = pending
- Sort: Priority (urgent first), then Created At (oldest first)
- If tasks found:
- Update Status → in-progress
- Complete the task
- Log result in Result field
- Update Status → completed
- Fill Completed At timestamp
- Append to MEMORY.md
- If no tasks:
- Run your scheduled job (as normal)
~/.openclaw/agents/coordinator-agent/SOUL.md
Coordinator Agent - SOUL.md
Your Job
Receive requests from Dan (via Telegram), break into tasks, assign to specialists.
How You're Triggered
Dan sends a message to your Telegram bot (see Module 5):
"/run Launch new feature: AI-powered analytics"
Workflow
- Analyze the request → determine what work is needed
- Break into discrete tasks
- Write tasks to Notion Task Queue with appropriate assignments
- Notify Dan: "Created 4 tasks, estimated completion: 6 hours"
- Monitor task completion (check every 30 mins via cron)
- When all tasks complete → synthesize results, notify Dan
Task Breakdown Logic
Request: "Launch new feature: AI-powered analytics"
Tasks you create:
- [research-agent, high] Research competitor analytics features (30 mins)
- [writer-agent, normal] Write feature announcement blog post (60 mins)
- [designer-agent, high] Create 5 social graphics for launch (45 mins)
- [publisher-agent, normal] Schedule blog + social posts for Friday 9am (15 mins)
Notion Task Creation
For each task, create a row in Agent Task Queue:
- Task: "[clear description]"
- Assigned To: [agent name]
- Status: pending
- Priority: [based on urgency/dependencies]
- Dependencies: "Waiting on: [task ID]" (if applicable)
- Notes: [any context the agent needs]
Monitoring
Run every 30 mins via cron:
*/30 * * * * openclaw run coordinator-agent --mode check
In check mode:
- Query Notion for your active requests
- If all tasks completed → synthesize results, notify Dan
- If any task failed → escalate to Dan with error details
- If task stuck (in-progress > 2 hours) → flag as potential issue
---
Problem: all agents run at the same time
0 10 * * * openclaw run writer-agent
0 10 * * * openclaw run designer-agent
0 10 * * * openclaw run research-agent
Fix: stagger the pipeline so each stage runs after the previous one
0 9 * * * openclaw run research-agent
0 10 * * * openclaw run writer-agent
0 11 * * * openclaw run editor-agent
0 12 * * * openclaw run publisher-agent
---
Riley Brown (solo consultant, AI automation expert) runs his entire business with 11 AI agents. Here's his architecture:
Cloud Agents (Hosted on Railway, always-on):
> "I don't have one smart agent. I have 11 dumb agents that each do ONE thing really well. The magic is in the coordination, not the individual agent intelligence."
Sarah runs a content marketing agency (3 clients, £8k MRR). She uses 4 agents in a pipeline:
Pipeline:
---
In each agent's write script:
(
flock -x 200 # Exclusive lock
echo "## $(date -Iseconds) - Agent Name" >> ~/.openclaw/MEMORY.md
echo "Entry text here" >> ~/.openclaw/MEMORY.md
) 200>/tmp/openclaw-memory.lock
0 9 * * * openclaw run research-agent # 9:00
5 9 * * * openclaw run writer-agent # 9:05
10 9 * * * openclaw run social-agent # 9:10
- Before starting a task, check if status is already "in-progress" (another agent grabbed it)
- Use Notion API transactions (if supported) or add "claimed_by" field
Problem 2: Lost context (agents don't see each other's work)
Symptoms:
- Add to SOUL.md: "First action: read ~/.openclaw/MEMORY.md (last 100 lines)"
- Research agent tags entries: [tag: trending-topics]
- Writer agent searches for: grep "\[tag: trending-topics\]" MEMORY.md
- Don't rely on MEMORY.md alone for critical state
- Use Notion databases for: published posts, scheduled content, completed tasks
Problem 3: Dependency deadlocks
Symptoms:
In coordinator-agent SOUL.md:
If a task is pending for > 2 hours:
- Check if assigned agent is healthy (last run time in MEMORY.md)
- If agent hasn't run in > 24 hours → reassign task to backup agent or escalate to Dan
- Update task status: "stalled - reassigned"
- Monitor last run time for each agent
- Alert if any agent hasn't reported in > expected interval
- Automated restart via launchd (macOS) or systemd (Linux)
Problem 4: Token cost explosion
Symptoms:
Don't read the whole file:
tail -100 ~/.openclaw/MEMORY.md # Last 100 lines only (~2KB vs 50KB)
- Email triage: Haiku is enough (5x cheaper than Sonnet)
- Research, writing, editing: Use Sonnet (quality matters)
- See Module 2 for model selection guide
- Email agents → ~/.openclaw/memory/email-memory.md
- Content agents → ~/.openclaw/memory/content-memory.md
- Cross-reference when needed: "See content-memory.md for blog posts"
---
Build a simple 2-agent system:
~/.openclaw/agents/research-agent/ (SOUL.md, AGENTS.md, cron entry)
~/.openclaw/agents/writer-agent/ (SOUL.md, AGENTS.md, cron entry)
Set up a Notion task queue and test with one agent:
Extend your Module 6 Notion dashboard to track multi-agent health:
- Properties: Agent Name, Last Run (timestamp), Status (healthy/stale/failed), Last Entry (text from MEMORY.md)
- Reads MEMORY.md
- For each agent, finds most recent entry
- Updates Notion with timestamp
- Flags agents that haven't run in > expected interval
---
The difference between a solo agent and an AI team is leverage. One agent can save you 5 hours/week. A coordinated team of 5 agents can save you 25 hours/week.
Let's build your team.
---
You've got agents running on cron jobs. Every morning at 7am, your daily brief agent fires up. Every Friday at 5pm, your weekly report agent runs. Perfect.
Until it's not.
What cron jobs DON'T tell you:
Cron jobs are fire and forget. They run on schedule, but they don't check if things worked.
Heartbeat monitoring is the opposite: proactive health checks. Your agents actively report "I'm alive and healthy" at regular intervals. If they stop reporting, you get alerted.
Real example from the OpenClaw community:
A consultant had an agent that parsed client emails and created Notion tasks. It ran every hour via cron. One day, Notion changed their API response format. The agent crashed silently for 3 days before the consultant noticed they had 47 unprocessed client requests.
With heartbeat monitoring, they would have been alerted within 90 minutes.
---
| Use Case | Tool | Why |
|----------|------|-----|
| Daily brief at 7am | Cron job | Scheduled task |
| Check if brief agent is working | Heartbeat | Health monitoring |
| Weekly report every Friday | Cron job | Scheduled task |
| Monitor email parser 24/7 | Heartbeat | Critical uptime |
| Monthly invoice reminder | Cron job | Scheduled task |
| Alert if OpenClaw crashes | Heartbeat | System health |
You use BOTH. Cron jobs run your tasks. Heartbeat monitoring makes sure they're working.
---
OpenClaw's heartbeat system uses a simple pattern:
Heartbeat configs live in the .openclaw/heartbeats/ folder:
.openclaw/
├── SOUL.md
├── AGENTS.md
├── MEMORY.md
├── skills/
└── heartbeats/
├── email-parser.yml
├── daily-brief.yml
└── system-health.yml
Each .yml file defines one heartbeat monitor.
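Under the hood, a file-based health check boils down to a stale-file test. A standalone sketch of the idea (illustrative, not OpenClaw's actual implementation; `check_file_age` is a hypothetical helper):

```shell
# Standalone sketch of a file_modified-style check: healthy if the file
# changed within max_age_minutes, otherwise stale or missing.
check_file_age() {
  local path="$1" max_age_minutes="$2"
  [ -e "$path" ] || { echo "MISSING"; return 1; }
  # find prints the path only if it is OLDER than max_age_minutes
  if [ -n "$(find "$path" -mmin +"$max_age_minutes")" ]; then
    echo "STALE"; return 1
  fi
  echo "OK"
}

check_file_age "$(mktemp)" 60   # freshly created file: prints OK
```

An alerting wrapper would fire only on the non-zero exit path.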
---
Let's say you have an agent that checks your inbox every hour and creates Trello cards from client requests. Critical for your business. Can't afford downtime.
Step 1: Create the heartbeat config
Create .openclaw/heartbeats/email-parser.yml:
name: "Email Parser Agent"
description: "Checks inbox every hour, creates Trello cards"
check_interval: 30 # Run health check every 30 minutes
timeout: 90 # Alert if no successful check in 90 minutes
health_check:
type: "agentlastrun"
agent_name: "email-parser"
maxageminutes: 75 # Agent should have run in last 75 mins
alerts:
- type: "slack"
channel: "#alerts"
message: "⚠️ Email parser hasn't run in 90 minutes. Check OpenClaw."
- type: "telegram"
chat_id: "your-chat-id"
message: "Email parser down. Last successful run: {{lastruntime}}"
silent_hours:
enabled: true
timezone: "Europe/London"
start: "23:00"
end: "07:00"
# No alerts between 11pm-7am unless critical
What this does:
openclaw heartbeat enable email-parser
That's it. OpenClaw now monitors your email parser 24/7.
---
OpenClaw supports several health check methods:
agent_last_run (most common)
Checks when an agent last ran successfully.
health_check:
type: "agentlastrun"
agent_name: "daily-brief"
maxageminutes: 1500 # Should run once per day (24h = 1440m)
file_modified
Checks when a file was last modified (useful for agents that write to files).
health_check:
type: "file_modified"
path: ".openclaw/outputs/daily-brief.md"
max_age_minutes: 1500
api_endpoint
Pings an external URL to check if a service is up.
health_check:
type: "api_endpoint"
url: "https://api.your-service.com/health"
expected_status: 200
timeout_seconds: 10
custom_script
Runs a custom script and checks its exit code.
health_check:
type: "custom_script"
script_path: ".openclaw/scripts/check-database.sh"
success_exit_code: 0
---
name: "Daily Brief Agent"
description: "Generates morning summary from calendar + tasks"
check_interval: 360 # Check every 6 hours
timeout: 1500 # Alert if no run in 25 hours (allows for weekend skip)
health_check:
type: "agentlastrun"
agent_name: "daily-brief"
maxageminutes: 1500
alerts:
- type: "telegram"
chat_id: "12345678"
message: "Daily brief didn't run this morning."
silent_hours:
enabled: false # Always alert (you want to know immediately)
name: "OpenClaw System Health"
description: "Checks if OpenClaw daemon is running"
check_interval: 15
timeout: 30
health_check:
type: "custom_script"
script_path: ".openclaw/scripts/system-health.sh"
success_exit_code: 0
alerts:
- type: "slack"
channel: "#critical"
message: "🚨 OpenClaw daemon is down!"
- type: "telegram"
chat_id: "12345678"
message: "OpenClaw system failure. Check server immediately."
silent_hours:
enabled: false # Critical alerts 24/7
The system-health.sh script:
#!/bin/bash
# Check if OpenClaw daemon is running
if pgrep -f "openclaw daemon" > /dev/null; then
echo "OpenClaw daemon: OK"
exit 0
else
echo "OpenClaw daemon: DOWN"
exit 1
fi
name: "OpenAI API Budget Monitor"
description: "Alerts if we're burning through tokens too fast"
check_interval: 60 # Check hourly
timeout: 120
health_check:
type: "custom_script"
script_path: ".openclaw/scripts/check-api-usage.sh"
success_exit_code: 0
alerts:
- type: "slack"
channel: "#budget"
message: "⚠️ API usage high: {{usage_dollars}}/day. Budget: $50/day."
silent_hours:
enabled: true
timezone: "America/New_York"
start: "22:00"
end: "08:00"
---
You're running agents 24/7. But you're not awake 24/7.
The problem: A non-critical agent fails at 3:17am. Your phone buzzes. You wake up. You can't do anything about it until morning anyway.
The solution: Silent hours.
silent_hours:
enabled: true
timezone: "Europe/London"
start: "23:00"
end: "07:00"
What happens during silent hours:
alerts:
- type: "telegram"
chat_id: "12345678"
message: "Email parser failed"
critical: false # Respect silent hours
- type: "telegram"
chat_id: "12345678"
message: "🚨 Security breach detected!"
critical: true # Alert immediately, even at 3am
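The window test itself is simple but easy to get wrong when it wraps midnight (23:00 to 07:00 spans two calendar days). A sketch of the logic with a hypothetical helper, hours in 24-hour local time:

```shell
# Returns success (0) when the hour falls inside the silent window,
# correctly handling windows that wrap midnight such as 23:00-07:00.
in_silent_hours() {  # usage: in_silent_hours <hour> <start> <end>
  local h=$1 start=$2 end=$3
  if [ "$start" -le "$end" ]; then
    # Normal window within one day, e.g. 09:00-17:00
    [ "$h" -ge "$start" ] && [ "$h" -lt "$end" ]
  else
    # Wrapping window, e.g. 23:00-07:00: late evening OR early morning
    [ "$h" -ge "$start" ] || [ "$h" -lt "$end" ]
  fi
}

in_silent_hours 3 23 7 && echo "suppress non-critical alert"   # prints the message
```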
Best practice: Only mark alerts as critical if you would actually wake up and fix it immediately. Examples:
NOT critical:
---
Heartbeat alerts are great for immediate problems. But you also want a dashboard to see overall health at a glance.
Option 1: Notion Dashboard (Simplest)
Create a Notion page that your heartbeat monitors update via the Notion API:
health_check:
type: "agentlastrun"
agent_name: "email-parser"
maxageminutes: 75
on_success:
- type: "notion_update"
page_id: "your-monitoring-page-id"
property: "Email Parser Status"
value: "✅ Healthy ({{timestamp}})"
on_failure:
- type: "notion_update"
page_id: "your-monitoring-page-id"
property: "Email Parser Status"
value: "❌ Down since {{lastsuccesstime}}"
Your Notion page becomes a live status board:
OpenClaw Health Dashboard
─────────────────────────
Email Parser: ✅ Healthy (2026-03-02 14:37)
Daily Brief: ✅ Healthy (2026-03-02 07:02)
Weekly Report: ✅ Healthy (2026-02-28 17:00)
System Health: ✅ Healthy (2026-03-02 14:35)
API Budget: ⚠️ 67% of daily budget used
Option 2: Slack Channel (Real-Time)
Create a #openclaw-status Slack channel. Configure heartbeats to post status updates:
on_success:
- type: "slack"
channel: "#openclaw-status"
message: "✅ {{agent_name}} healthy"
throttle: 1440 # Only post once per day if healthy
on_failure:
- type: "slack"
channel: "#openclaw-status"
message: "❌ {{agentname}} failed: {{errormessage}}"
throttle: 0 # Post every failure immediately
Option 3: Local HTML Dashboard
OpenClaw can generate a simple HTML dashboard:
openclaw heartbeat status --output dashboard.html
This creates a static HTML page showing all heartbeat statuses. Open it in your browser:
OpenClaw Heartbeat Dashboard
────────────────────────────
Email Parser ✅ Healthy Last check: 2 mins ago
Daily Brief ✅ Healthy Last check: 7 hours ago
Weekly Report ✅ Healthy Last check: 3 days ago
System Health ✅ Healthy Last check: 1 min ago
API Budget Monitor ⚠️ Warning 67% of budget used
Recent Alerts (last 24h):
• 14:22 - Email Parser: temporary failure (API timeout)
• 07:05 - Daily Brief: completed successfully
Serve it with a simple web server:
python3 -m http.server 8080
Visit http://localhost:8080/dashboard.html to see your status board.
---
---
Scenario: your email parser's script imports the requests library. You upgraded Python, and requests broke.
Without heartbeat: Cron job runs, fails, logs error to a file you never check.
With heartbeat: Alert immediately: "Email parser failed: ModuleNotFoundError: No module named 'requests'."
Fix: pip install requests, restart agent.
---
Scenario: old logs filled up /var/log/. No disk space. Agent can't write outputs.
Without heartbeat: Agent runs, silently fails to write outputs.
With heartbeat: Custom health check script detects low disk space: "System health warning: 98% disk usage."
Fix: Clean up logs, configure log rotation.
---
---
---
You can configure heartbeats to automatically attempt recovery:
name: "Email Parser Agent"
health_check:
type: "agentlastrun"
agent_name: "email-parser"
maxageminutes: 75
on_failure:
- type: "telegram"
chat_id: "12345678"
message: "Email parser failed. Attempting restart..."
- type: "recovery_script"
script_path: ".openclaw/scripts/restart-email-parser.sh"
max_attempts: 3
wait_between_attempts: 300 # 5 minutes
- type: "telegram"
chat_id: "12345678"
message: "Recovery {{status}}: {{message}}"
The recovery script:
#!/bin/bash
# Restart email parser agent
echo "Killing stuck email-parser process..."
pkill -f "openclaw run email-parser"
echo "Restarting email-parser agent..."
openclaw run email-parser --daemon
sleep 10
# Check if it started successfully
if pgrep -f "openclaw run email-parser" > /dev/null; then
echo "Recovery successful"
exit 0
else
echo "Recovery failed"
exit 1
fi
What this does:
---
Meta-tip: Monitor your heartbeat system itself.
Create a heartbeat for the heartbeat daemon:
name: "Heartbeat System Health"
description: "Checks if the heartbeat monitoring system is running"
check_interval: 30
timeout: 60
health_check:
type: "custom_script"
script_path: ".openclaw/scripts/check-heartbeat-daemon.sh"
success_exit_code: 0
alerts:
- type: "telegram"
chat_id: "12345678"
message: "🚨 CRITICAL: Heartbeat monitoring system is down!"
critical: true # Always alert, even during silent hours
Why this matters: If your heartbeat system crashes, all other monitors go silent. You have zero visibility. This meta-monitor ensures you know immediately.
---
If you configure an LLM-based health check (analyzing agent outputs for quality issues):
health_check:
type: "llm_analysis"
agent_name: "daily-brief"
prompt: "Analyze the last daily brief. Is it coherent? Any errors?"
model: "gpt-4o-mini"
expected_response: "No issues detected"
This uses ~500 tokens per check. At 48 checks/day (every 30 mins), that's $0.05/day.
Best practice: Reserve LLM health checks for critical agents where output quality matters more than cost.
---
Enable a heartbeat monitor
openclaw heartbeat enable
Disable a heartbeat monitor
openclaw heartbeat disable
List all heartbeat monitors
openclaw heartbeat list
Check status of all monitors
openclaw heartbeat status
Generate HTML dashboard
openclaw heartbeat status --output dashboard.html
Test a heartbeat config (dry run)
openclaw heartbeat test
View heartbeat logs
openclaw heartbeat logs
Clear heartbeat history
openclaw heartbeat clear
---
- Created the .openclaw/heartbeats/ folder
- Configured at least one monitor (agent_last_run type)
---
You now have proactive monitoring for your OpenClaw agents. You'll know within minutes when something breaks, not days later.
Next module: Advanced Skills - building custom skills for your specific business needs.
Recommended resources:
---
Module 8 complete. Estimated reading time: 45 minutes.
---
OpenClaw ships with 50+ built-in skills (email triage, meeting notes, web research, etc.). But your business has unique workflows that no off-the-shelf skill can handle.
Real example from the OpenClaw community:
A freelance designer had a repeating workflow:
This took 15 minutes per client. 3 new clients per week = 45 minutes of admin.
Solution: A custom skill called onboard-design-client. One command, entire workflow automated. 45 minutes/week saved ≈ 39 hours/year.
Custom skills turn your unique processes into reusable automations.
---
Skills live in .openclaw/skills/ as YAML files:
.openclaw/
├── SOUL.md
├── AGENTS.md
├── MEMORY.md
├── heartbeats/
└── skills/
├── email-triage.yml (built-in)
├── meeting-notes.yml (built-in)
├── invoice-processor.yml (custom - yours!)
└── lead-scorer.yml (custom - yours!)
Minimal skill structure:
name: "Invoice Processor"
description: "Extract invoice data from PDFs and create Xero entries"
version: "1.0"
author: "your-name"
triggers:
- "process invoice"
- "new invoice"
- "@invoice-processor"
inputs:
- name: "pdf_path"
type: "file"
required: true
description: "Path to invoice PDF"
prompt: |
You are an invoice processing assistant.
Task: Extract data from the invoice PDF at {pdf_path}
Extract:
- Invoice number
- Date
- Supplier name
- Line items (description, quantity, unit price)
- Subtotal, VAT, total
Format the data as JSON.
tools:
- pdf_reader
- xero_api
output:
type: "json"
schema:
invoice_number: string
date: string
supplier: string
line_items: array
total: number
What happens when you run this skill:
openclaw run invoice-processor --pdf_path="./invoice-march-2026.pdf"
1. OpenClaw loads invoice-processor.yml
2. Reads the PDF (pdf_reader tool)
3. Extracts the structured data
4. Uses the xero_api tool to create the Xero entry
5. Logs the run to .openclaw/logs/skills/invoice-processor-2026-03-02.log
---
Create .openclaw/skills/lead-scorer.yml:
name: "Lead Scorer"
description: "Score inbound leads from 0-100 based on fit and urgency"
version: "1.0"
triggers:
- "score lead"
- "qualify lead"
- "@lead-scorer"
inputs:
- name: "email"
type: "string"
required: true
description: "Lead's email address"
- name: "message"
type: "string"
required: true
description: "Their contact form message"
- name: "company_domain"
type: "string"
required: false
description: "Their company website (optional)"
prompt: |
You are a lead qualification assistant for a B2B SaaS company.
Lead details:
- Email: {email}
- Message: {message}
- Company: {company_domain}
Task: Score this lead from 0-100 based on:
1. Company Fit (0-40 points)
- Use Clearbit API to get company size, industry, revenue
- 10-50 employees = 20 pts
- 50-200 employees = 30 pts
- 200+ employees = 40 pts
- Unknown company = 10 pts
2. Budget Signals (0-30 points)
- Mentions "budget" or "investment" = 15 pts
- Asks about pricing = 10 pts
- Mentions competitors = 10 pts
- Says "just browsing" = -5 pts
3. Urgency (0-20 points)
- "ASAP", "urgent", "this week" = 20 pts
- "soon", "next month" = 10 pts
- "exploring", "researching" = 5 pts
4. LinkedIn Quality (0-10 points)
- Use LinkedIn API to check their profile
- C-level = 10 pts
- VP/Director = 7 pts
- Manager = 5 pts
- Individual contributor = 3 pts
Return JSON with:
- score (0-100)
- reasoning (1-2 sentences)
- recommended_action ("call_immediately", "email_followup", "nurture")
- priority ("high", "medium", "low")
tools:
- clearbit_api
- linkedin_api
output:
type: "json"
schema:
score: number
reasoning: string
recommended_action: string
priority: string
company_data:
name: string
size: string
industry: string
Usage:
Score a lead manually
openclaw run lead-scorer --email="ceo@acme.com" --message="Need AI automation ASAP" --company_domain="acme.com"
Output:
{
"score": 87,
"reasoning": "C-level at 150-person company, urgent timeline, mentions competitors",
"recommendedaction": "callimmediately",
"priority": "high",
"company_data": {
"name": "Acme Corp",
"size": "150 employees",
"industry": "SaaS"
}
}
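If you ever want to cross-check the model's arithmetic, the rubric reduces to capped component sums. A deterministic sketch with hypothetical helpers (not part of the skill itself):

```shell
# Cap each rubric component at its maximum, then sum to a 0-100 score.
clamp() { [ "$1" -gt "$2" ] && echo "$2" || echo "$1"; }

score_lead() {  # usage: score_lead <fit> <budget> <urgency> <linkedin>
  # Caps match the rubric: fit <= 40, budget <= 30, urgency <= 20, linkedin <= 10
  echo $(( $(clamp "$1" 40) + $(clamp "$2" 30) + $(clamp "$3" 20) + $(clamp "$4" 10) ))
}

score_lead 30 25 20 10   # a strong lead: prints 85
```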
Integrate with your CRM agent:
In .openclaw/AGENTS.md, add a skill to your sales-agent:
- id: sales-agent
skills:
- lead-scorer
- email-followup
- calendar-booking
Now when a contact form submission comes in, your sales agent automatically runs lead-scorer and prioritizes your pipeline.
---
Doing this manually takes 90 minutes. Your content repurposer skill does it in 3 minutes.
Create .openclaw/skills/content-repurposer.yml:
name: "Content Repurposer"
description: "Turn long-form content into social posts, emails, and scripts"
version: "1.0"
triggers:
- "repurpose content"
- "create social posts"
- "@content-repurposer"
inputs:
- name: "source_url"
type: "string"
required: true
description: "URL of blog post to repurpose"
- name: "brand_voice"
type: "string"
required: false
default: "professional"
description: "Tone: professional, casual, technical, witty"
prompt: |
You are a content repurposing assistant.
Task: Read the blog post at {source_url} and create:
1. 5 LinkedIn posts (150-200 words each)
- Each post highlights one key insight from the article
- Include a hook, body, and CTA
- Use {brand_voice} tone
- Add relevant emojis (sparingly)
2. 10 tweets (280 characters max)
- Mix of insights, quotes, and stats from the post
- Use thread format (1/10, 2/10, etc.)
- Include relevant hashtags
3. 1 email newsletter (400-500 words)
- Subject line (under 50 chars)
- Preview text (under 90 chars)
- Email body with 3-4 sections
- CTA to read full article
4. 1 YouTube script (8-10 minutes speaking time)
- Intro hook (30 seconds)
- Main content (7 minutes)
- Outro CTA (30 seconds)
- Include [B-ROLL] markers for visuals
For each format, extract the most compelling angles from the source content.
Don't just summarize - find the nuggets that will perform well on each platform.
Return as structured JSON with separate sections for each format.
tools:
- web_scraper
- readability_api
output:
type: "json"
schema:
linkedin_posts: array
tweets: array
email:
subject: string
preview: string
body: string
youtube_script:
intro: string
main: string
outro: string
post_actions:
- save_to: ".openclaw/output/content-repurpose-{timestamp}.json"
- notify: "slack"
channel: "#content-team"
message: "Content repurposed: {source_url} → 5 LinkedIn posts, 10 tweets, 1 email, 1 script"
Usage:
openclaw run content-repurposer --source_url="https://yourblog.com/ai-automation-guide" --brand_voice="witty"
Output saved to:
.openclaw/output/content-repurpose-2026-03-02-14-23.json
Token cost: ~$0.15 per repurpose (Claude Haiku). 10 blog posts/month = $1.50/month for 50 LinkedIn posts + 100 tweets + 10 emails + 10 scripts.
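Before anything downstream publishes from the saved file, it pays to check the JSON against the declared schema. A hedged Python sketch (the example payload is invented; real output files will differ):

```python
import json

# Made-up example of a saved output file; real runs will differ.
raw = json.dumps({
    "linkedin_posts": [f"Post {i}" for i in range(1, 6)],
    "tweets": [f"{i}/10 ..." for i in range(1, 11)],
    "email": {"subject": "Repurpose once, publish everywhere",
              "preview": "One blog post, four channels",
              "body": "..."},
    "youtube_script": {"intro": "...", "main": "...", "outro": "..."},
})

def validate(payload: dict) -> list[str]:
    """Return schema problems; an empty list means the output is usable."""
    problems = []
    if len(payload.get("linkedin_posts", [])) != 5:
        problems.append("expected 5 LinkedIn posts")
    if len(payload.get("tweets", [])) != 10:
        problems.append("expected 10 tweets")
    if len(payload.get("email", {}).get("subject", "")) > 50:
        problems.append("email subject over 50 chars")
    for part in ("intro", "main", "outro"):
        if part not in payload.get("youtube_script", {}):
            problems.append(f"youtube_script missing {part}")
    return problems

issues = validate(json.loads(raw))  # empty list: safe to schedule
```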
---
Let's build the invoice processor from earlier, with error handling and real-world edge cases.
Create .openclaw/skills/invoice-processor.yml:
name: "Invoice Processor"
description: "Extract invoice data from PDFs and create Xero entries"
version: "2.0"
author: "your-name"
triggers:
- "process invoice"
- "new invoice"
- "@invoice-processor"
inputs:
- name: "pdf_path"
type: "file"
required: true
description: "Path to invoice PDF"
validation:
file_type: ["pdf"]
max_size: "10MB"
- name: "supplier_override"
type: "string"
required: false
description: "Manually specify supplier if OCR fails"
- name: "auto_approve"
type: "boolean"
required: false
default: false
description: "Auto-approve invoices under £500"
prompt: |
You are an invoice processing assistant with accounting expertise.
Task: Extract structured data from the invoice PDF at {pdf_path}
## Extraction Rules
1. Invoice Number
- Usually top-right or top-left
- Formats: "INV-12345", "Invoice #12345", "No. 12345"
- If multiple numbers present, prioritize "Invoice" label
2. Date
- Look for "Invoice Date", "Date Issued", "Date"
- Convert to ISO 8601 format (YYYY-MM-DD)
- If ambiguous (UK vs US date format), assume UK (DD/MM/YYYY)
3. Supplier
- Usually at top of invoice (company name, logo)
- Cross-check with {supplier_override} if provided
- If OCR fails, return "MANUAL_REVIEW_REQUIRED"
4. Line Items
- Each item needs: description, quantity, unit_price, line_total
- Handle multi-line descriptions (combine into one field)
- Watch for subtotals disguised as line items (exclude them)
5. Totals
- Subtotal (before VAT)
- VAT amount and rate (usually 20% in UK)
- Total (after VAT)
- Validation: subtotal + VAT should equal total (within £0.01)
6. Payment Terms
- Look for "Payment Due", "Net 30", "Due Date"
- Calculate due_date from invoice_date + terms
## Edge Cases to Handle
- Scanned invoices (poor OCR quality)
- Multi-page invoices
- Multiple currencies (convert to GBP)
- Credit notes (negative amounts)
- Invoices with discounts
- Missing VAT numbers (flag for review)
## Output Format
Return JSON with:
- All extracted fields
- confidence_score (0-100) for OCR quality
- warnings (array of issues found)
- requires_manual_review (boolean)
If confidence < 90% or totals don't match, set requires_manual_review = true
tools:
- pdf_reader
- ocr_engine
- xero_api
- currency_converter
error_handling:
- type: "ocr_failure"
action: "notify_slack"
message: "Invoice OCR failed: {pdf_path}. Manual review required."
- type: "xero_api_error"
action: "retry"
max_retries: 3
backoff: "exponential"
- type: "validation_failure"
action: "save_draft"
location: ".openclaw/invoices/pending-review/"
output:
type: "json"
schema:
invoice_number: string
date: string
due_date: string
supplier:
name: string
vat_number: string
line_items:
- description: string
quantity: number
unit_price: number
line_total: number
subtotal: number
vat_amount: number
vat_rate: number
total: number
currency: string
confidence_score: number
warnings: array
requires_manual_review: boolean
xero_invoice_id: string
post_actions:
- if: "requires_manual_review == false AND auto_approve == true AND total < 500"
then:
- action: "xero_approve_invoice"
invoice_id: "{xero_invoice_id}"
- action: "notify_slack"
channel: "#finance"
message: "✅ Invoice auto-processed: {supplier.name} - £{total}"
- if: "requires_manual_review == true"
then:
- action: "save_to"
path: ".openclaw/invoices/pending-review/{invoice_number}.json"
- action: "notify_slack"
channel: "#finance"
message: "⚠️ Invoice needs review: {supplier.name} - {warnings}"
- action: "log_to_spreadsheet"
sheet: "Invoice Log 2026"
row:
- "{date}"
- "{supplier.name}"
- "£{total}"
- "{xero_invoice_id}"
- "{confidence_score}%"
Usage:
Process invoice manually
openclaw run invoice-processor --pdf_path="./invoices/march-hosting.pdf"
Auto-approve if under £500
openclaw run invoice-processor --pdf_path="./invoices/march-hosting.pdf" --auto_approve=true
Watch a folder and process new invoices automatically
openclaw watch --folder="./invoices/inbox" --skill="invoice-processor" --auto_approve=true
What happens:
---
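The gating logic above (totals must reconcile to within £0.01, and OCR confidence below 90% forces review) is the heart of the skill, and it is worth sanity-checking outside OpenClaw before you trust auto-approval. A minimal Python sketch with invented figures (field names follow the output schema):

```python
# Field names follow the invoice-processor output schema; figures are invented.
def needs_review(inv: dict) -> bool:
    """Apply the skill's gating rules: totals must reconcile to within
    £0.01 and OCR confidence must be at least 90%."""
    totals_ok = abs(inv["subtotal"] + inv["vat_amount"] - inv["total"]) <= 0.01
    confident = inv["confidence_score"] >= 90
    return not (totals_ok and confident)

clean = {"subtotal": 249.99, "vat_amount": 50.00, "total": 299.99,
         "confidence_score": 98}
blurry_scan = {"subtotal": 100.00, "vat_amount": 20.00, "total": 120.00,
               "confidence_score": 74}
mismatched = {"subtotal": 100.00, "vat_amount": 20.00, "total": 125.00,
              "confidence_score": 99}
```

Only `clean` would be eligible for auto-approval; the other two land in pending-review, which is exactly what the test cases below verify end to end.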
.openclaw/skills/tests/invoice-processor-test.yml:
skill: "invoice-processor"
test_cases:
- name: "Standard UK invoice"
inputs:
pdf_path: "./test-data/sample-invoice-001.pdf"
expected:
invoice_number: "INV-12345"
total: 299.99
confidence_score: ">= 95"
requires_manual_review: false
- name: "Poor quality scan"
inputs:
pdf_path: "./test-data/low-quality-scan.pdf"
expected:
confidence_score: "< 90"
requires_manual_review: true
- name: "Multi-page invoice"
inputs:
pdf_path: "./test-data/multi-page-invoice.pdf"
expected:
line_items: ">= 15"
total: 4567.89
- name: "Credit note"
inputs:
pdf_path: "./test-data/credit-note-001.pdf"
expected:
total: "< 0"
warnings: "contains 'credit note'"
Run tests:
openclaw test invoice-processor
Output:
✅ Standard UK invoice: PASSED (confidence: 98%, total: £299.99)
✅ Poor quality scan: PASSED (flagged for review as expected)
✅ Multi-page invoice: PASSED (18 line items, £4,567.89)
✅ Credit note: PASSED (negative amount detected)
4/4 tests passed
Average token usage: 3,200 tokens/test
Estimated cost per run: $0.08
---
Most data extraction doesn't need Opus/Sonnet reasoning. Use Haiku (20x cheaper):
model: "claude-haiku-4-5" # Add this to skill config
Cost: $0.08 per run → $8/week → $416/year (93% cheaper)
If your prompt includes a 5,000-word style guide, cache it:
prompt: |
{{CACHE_START}}
[Your 5,000-word style guide here]
{{CACHE_END}}
Task: Process this invoice using the style guide above...
OpenClaw caches the style guide for 5 minutes. Subsequent runs reuse the cache → 90% token reduction.
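The savings fall out of simple arithmetic. A back-of-envelope Python sketch (the token counts are assumptions for illustration; real provider pricing may also charge a residual fee for reading cached content):

```python
# Assumed token counts for illustration; real prompts will differ.
style_guide_tokens = 7_000   # the ~5,000-word cached style guide
task_tokens = 500            # the per-run task portion
runs = 10                    # runs inside the 5-minute cache window

uncached = runs * (style_guide_tokens + task_tokens)
# First run pays for everything; later runs resend only the task portion.
cached = (style_guide_tokens + task_tokens) + (runs - 1) * task_tokens
savings = 1 - cached / uncached  # 0.84 with these numbers
```

With these numbers savings is 84%; as runs in the window grow, it approaches the style guide's share of the prompt (about 93% here), which is where the ~90% figure comes from.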
Instead of processing 50 invoices one-by-one (50 API calls), batch them:
openclaw batch invoice-processor --folder="./invoices" --batch_size=10
Processes 10 invoices per API call → 5 API calls instead of 50 → 90% cost reduction.
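Batching is just chunking the work queue; the call-count math is the whole trick. A minimal Python sketch of the splitting step (the `openclaw batch` internals are not shown in this course, so this only illustrates the arithmetic):

```python
def chunk(items: list, batch_size: int) -> list[list]:
    """Split items into consecutive batches of at most batch_size."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

invoices = [f"invoice-{n:03}.pdf" for n in range(50)]
batches = chunk(invoices, 10)   # 5 batches -> 5 API calls instead of 50
```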
If 70% of contact form submissions are spam, don't run the full lead-scorer:
pre_checks:
- type: "spam_filter"
if: "email matches disposable_email_domains"
then: "exit_early"
output:
score: 0
reasoning: "Disposable email detected"
priority: "ignore"
Spam leads exit immediately (zero tokens used).
---
Don't do this:
❌ Copy skill files into each agent's folder
❌ Manually sync changes across 5 copies
Do this:
✅ Store skills in .openclaw/skills/ (shared across all agents)
✅ In AGENTS.md, reference skills by name
.openclaw/AGENTS.md:
agents:
- id: sales-agent
skills:
- lead-scorer # Custom skill
- email-followup # Built-in
- calendar-booking # Built-in
- id: finance-agent
skills:
- invoice-processor # Custom skill
- expense-categorizer # Custom skill
- xero-reconciliation # Built-in
- id: content-agent
skills:
- content-repurposer # Custom skill
- seo-optimizer # Custom skill
- social-scheduler # Built-in
Skill versioning:
When you update a skill, use semantic versioning:
name: "Invoice Processor"
version: "2.1.0" # Major.Minor.Patch
changelog:
- "2.1.0 (2026-03-02): Added multi-currency support"
- "2.0.0 (2026-02-15): Breaking change - new output schema"
- "1.0.0 (2026-01-20): Initial release"
Agents automatically use the latest version unless you pin:
- id: finance-agent
skills:
- invoice-processor@2.0.0 # Pin to specific version
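Pin resolution is worth understanding precisely: an unpinned reference gets the newest version, a pinned one gets exactly what it names. A Python sketch against a hypothetical in-memory registry (OpenClaw's real resolver isn't shown in this course, but pinning semantics work like this):

```python
# Hypothetical in-memory registry for illustration.
REGISTRY = {"invoice-processor": ["1.0.0", "2.0.0", "2.1.0"]}

def resolve(ref: str) -> tuple[str, str]:
    """Resolve 'name' or 'name@version' to a concrete (name, version)."""
    name, _, pinned = ref.partition("@")
    versions = REGISTRY[name]
    if pinned:
        if pinned not in versions:
            raise ValueError(f"{name}@{pinned} not found")
        return name, pinned
    # Latest = highest semantic version, compared numerically.
    return name, max(versions, key=lambda v: tuple(map(int, v.split("."))))
```

Note the numeric comparison: "2.1.0" beats "2.0.0", and "10.0.0" would beat both, which a plain string sort gets wrong.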
---
openclaw run invoice-processor --pdf_path="./test.pdf" --debug
Output:
[DEBUG] Loading skill: invoice-processor v2.0
[DEBUG] Validating inputs: pdf_path exists, 2.3MB, valid PDF
[DEBUG] Running tool: pdf_reader
[DEBUG] PDF extracted: 1,847 characters
[DEBUG] Running tool: ocr_engine
[DEBUG] OCR confidence: 94%
[DEBUG] Sending prompt to Claude (model: haiku)
[DEBUG] Tokens used: 3,200 (prompt: 2,100, response: 1,100)
[DEBUG] Response received: 847 characters
[DEBUG] Validating output schema: PASSED
[DEBUG] Running post_action: notify_slack
[DEBUG] Execution time: 4.2 seconds
[DEBUG] Cost: $0.08
Save the OCR'd text to check what Claude actually saw:
tools:
- pdf_reader:
save_output: true
output_path: ".openclaw/debug/ocr-output-{timestamp}.txt"
Copy your prompt, replace variables with real data, paste into Claude.ai. Iterate on prompt until it works perfectly.
openclaw playground invoice-processor
Opens interactive mode:
---
Browse more examples: https://skills.openclaw.ai/community
---
You now know how to build custom skills for any workflow.
Your homework:
---
Module 9 complete. You can now build custom skills for any workflow in your business.
---
Create a skill:
.openclaw/skills/your-skill-name.yml
Test a skill:
openclaw run your-skill-name --input1="value" --input2="value"
Debug a skill:
openclaw run your-skill-name --debug
List all skills:
openclaw skills list
Share a skill:
openclaw skills publish your-skill-name --visibility="public"
Install community skill:
openclaw skills install lead-scorer --author="riley-brown"
---
Next: Module 10 - Scaling to £500k+ (60 min)
---
┌─────────────┐
│ CEO Agent │
│ (Strategy) │
└──────┬──────┘
│
┌──────────────┼──────────────┐
│ │ │
┌──────▼─────┐ ┌─────▼──────┐ ┌────▼──────┐
│ Sales │ │ Content │ │ Finance │
│ Agent │ │ Agent │ │ Agent │
└──────┬─────┘ └─────┬──────┘ └────┬──────┘
│ │ │
┌──────▼─────┐ ┌─────▼──────┐ ┌────▼──────┐
│ Research │ │ Marketing │ │Operations │
│ Agent │ │ Agent │ │ Agent │
└────────────┘ └────────────┘ └───────────┘
Agent hierarchy:
---
.openclaw/AGENTS.md config:
- id: ceo-agent
description: "Strategic orchestrator. Sets weekly priorities and monitors business health."
model: claude-opus-4-6 # Needs advanced reasoning
schedule:
- cron: "0 9 * * 1" # Monday 9am
task: "weekly_planning"
- cron: "0 17 * * 5" # Friday 5pm
task: "weekly_review"
skills:
- metrics-dashboard
- priority-setter
- resource-allocator
- executive-summary
memory:
shared: true
write_access: "MEMORY.md"
context: |
You are the CEO Agent for Riley Brown's content marketing agency.
Your job:
1. Analyze business metrics (revenue, pipeline, content output, client satisfaction)
2. Identify bottlenecks and opportunities
3. Set weekly priorities for other agents
4. Allocate token budget across agents
5. Report progress to Riley
Monday task (weekly_planning):
- Review last week's performance
- Check pipeline health (CRM data)
- Analyze content performance (traffic, engagement)
- Set top 3 priorities for this week
- Write priorities to MEMORY.md under "## Weekly Priorities - [DATE]"
Friday task (weekly_review):
- Check if priorities were achieved
- Review agent execution logs
- Calculate ROI (agent cost vs. value created)
- Summarize wins and blockers
- Send executive summary email to Riley
Decision framework:
- If revenue < £40k/month: prioritize sales agent
- If pipeline < 10 qualified leads: prioritize research + outreach
- If content output < 8 posts/week: prioritize content agent
- If client satisfaction < 4.5/5: prioritize client agent
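The decision framework is effectively a priority function over four metrics. A direct Python translation (thresholds copied from the context above; running the rules in order and returning every one that fires is my assumption):

```python
# Thresholds copied from the CEO Agent's decision framework above.
def weekly_priority(m: dict) -> list[str]:
    """Return the agents to prioritize this week, in rule order."""
    priorities = []
    if m["monthly_revenue"] < 40_000:
        priorities.append("sales-agent")
    if m["qualified_leads"] < 10:
        priorities.append("research-agent")
    if m["posts_per_week"] < 8:
        priorities.append("content-agent")
    if m["client_satisfaction"] < 4.5:
        priorities.append("client-agent")
    return priorities

# Invented metrics for a week where revenue, pipeline, and output all lag.
metrics = {"monthly_revenue": 38_000, "qualified_leads": 7,
           "posts_per_week": 6, "client_satisfaction": 4.7}
```

With these invented metrics the function returns sales, research, and content agents, in that order.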
What happens every Monday:
Weekly Priorities - 2026-03-03
Top 3 priorities this week:
- SALES: Close 2 deals from pipeline to hit £40k/month target (£2k shortfall)
- RESEARCH: Generate 5 new qualified leads (pipeline is 3 leads short of target)
- CONTENT: Increase output to 8 posts/week (currently 6/week)
Resource allocation:
- Sales Agent: 40% of token budget
- Research Agent: 30% of token budget
- Content Agent: 20% of token budget
- Other agents: 10% of token budget
Context:
- Q1 target: £120k revenue (on track: £114k with 1 week left)
- Best-performing content: "AI Automation for Law Firms" (1,200 visits, 15 leads)
- Client satisfaction: 4.7/5 (up from 4.5 last month)
CEO Agent reviews the week:
Weekly Review - 2026-03-07
Results:
- ✅ SALES: Closed 2 deals (£4.2k total). Month revenue: £42.2k (target exceeded by £2.2k)
- ✅ RESEARCH: Generated 6 new leads (target: 5). Pipeline now has 13 leads (target: 10)
- ❌ CONTENT: Published 7 posts (target: 8). Bottleneck: social media repurposing backlog
ROI Analysis:
- Agents cost this week: £47 in API tokens
- Value created: £4.2k revenue + £1.8k pipeline value = £6k
- ROI: 128x
Next week priorities:
- CONTENT: Clear social media backlog (40 unpublished posts queued)
- OPERATIONS: Optimize content repurposing workflow
- CLIENT: Send Q1 reports to all active clients
Riley gets an email summary every Friday at 5pm. Takes 3 minutes to read. No Slack messages, no meetings, no status updates.
---
Skills:
- lead-scorer (from Module 9) - scores leads 0-100
- proposal-generator
- email-followup
- crm-updater
.openclaw/AGENTS.md:
- id: sales-agent
skills:
- lead-scorer
- proposal-generator
context: |
When a lead scores >= 70, generate a proposal using this template:
1. Research their company (Clearbit API)
2. Pull recent LinkedIn activity
3. Identify their top pain points (from contact form message)
4. Create custom proposal:
- Problem statement (specific to their business)
- Proposed solution (3-month content marketing package)
- Case study (similar client, similar results)
- Pricing (£3,500/month or £9,500 for 3 months upfront)
- Next steps (book intro call)
Send proposal via email within 2 hours of lead submission.
Follow up after 2 days, 5 days, and 10 days if no response.
Track all activity in HubSpot CRM.
Performance:
---
Skills:
- invoice-processor (from Module 9)
- expense-categorizer
- xero-reconciliation
- cash-flow-forecaster
Sample monthly output:
Financial Summary - February 2026
Revenue: £42,300 (up 8% from January)
Expenses: £12,100 (ad spend: £4.2k, contractors: £3.5k, software: £2.1k, other: £2.3k)
Profit: £30,200 (margin: 71%)
Outstanding invoices: £7,800 (2 clients, 8-12 days overdue)
Action: Payment reminders sent
Cash flow forecast (next 3 months):
- March: £38k revenue, £13k expenses = £25k profit
- April: £44k revenue, £14k expenses = £30k profit
- May: £46k revenue, £15k expenses = £31k profit
⚠️ Alert: Ad spend up 35% this month. ROI: £4.2k spent → £8.1k revenue attributed (1.9x ROAS)
Account balance: £84,200
Runway: 6.9 months at current burn rate
Cost: £8/month in tokens
Human equivalent: 15 hours/month bookkeeping at £25/hour = £375/month
Savings: £367/month (47x ROI)
---
Skills:
- content-ideas-generator
- blog-post-writer
- content-repurposer (from Module 9)
- seo-optimizer
- social-scheduler
Each blog post becomes:
- 5 LinkedIn posts
- 10 tweets
- 1 email newsletter
- 1 YouTube script
---
Skills:
- competitor-tracker
- trend-monitor
- backlink-finder
- guest-post-prospector
Sample monthly output:
Competitive Intelligence - March 2026
Top competitor moves this month:
- ContentKing published 8 posts on "AI content marketing" (new focus area for them)
- MarketMuse launched a new SEO tool (direct competitor feature)
- Clearscope announced Series B ($15M) - expect aggressive ad spend
Content gaps we should fill:
- "AI content calendar automation" (competitor traffic: 2.4k/month, we don't rank)
- "ChatGPT for B2B content" (competitor traffic: 1.8k/month, we rank #12)
- "Content marketing ROI calculator" (tool opportunity, competitor DA: 62)
Guest post opportunities (vetted):
- SEMrush blog (DA 94, audience: 1.2M/month, topic: "AI content workflows")
- Moz blog (DA 91, audience: 800k/month, topic: "technical SEO for content sites")
- Ahrefs blog (DA 90, audience: 2.3M/month, topic: "keyword research automation")
Recommended priority: Write "AI content calendar automation" post this week (high volume, low competition, aligns with CEO priority #3)
Cost: £12/month in tokens
Human equivalent: Junior analyst 10 hours/month at £20/hour = £200/month
Savings: £188/month (16x ROI)
---
Pattern 1: Sequential Pipeline
Research Agent → Content Agent → Marketing Agent → Analytics Agent
Pattern 2: Parallel Execution
CEO Agent triggers 3 agents simultaneously:
All work independently, report back to CEO Agent on Friday.
Pattern 3: Event-Driven Triggers
New invoice paid (Xero webhook) → Finance Agent updates cash flow → CEO Agent adjusts token budget
New contact form submission → Sales Agent scores lead → If score >= 70 → Send proposal
Pattern 4: Collaborative Problem-Solving
CEO Agent detects: "Content traffic down 15% this month"
→ Spawns Research Agent: "Analyze why traffic dropped"
→ Research reports: "Google algorithm update hit 'AI content' keywords"
→ CEO Agent spawns Content Agent: "Pivot content strategy to 'content automation' keywords"
→ Content Agent creates 5 new posts on new topic
→ Analytics Agent monitors recovery
Orchestration file: .openclaw/orchestration.yml
workflows:
- name: "new-lead-pipeline"
trigger:
type: "webhook"
source: "hubspot"
event: "contact.created"
steps:
- agent: "sales-agent"
task: "score_lead"
output_var: "lead_score"
- if: "lead_score >= 70"
then:
- agent: "sales-agent"
task: "generate_proposal"
- agent: "sales-agent"
task: "send_proposal"
- agent: "research-agent"
task: "find_similar_companies" # Expand TAM
- if: "lead_score >= 40 AND lead_score < 70"
then:
- agent: "sales-agent"
task: "add_to_nurture_sequence"
- if: "lead_score < 40"
then:
- agent: "sales-agent"
task: "archive_lead"
- name: "weekly-planning-cycle"
trigger:
type: "cron"
schedule: "0 9 * * 1" # Monday 9am
steps:
- agent: "ceo-agent"
task: "analyze_metrics"
output_var: "priorities"
- agent: "ceo-agent"
task: "set_weekly_priorities"
input:
priorities: "{{priorities}}"
- parallel:
- agent: "sales-agent"
task: "update_weekly_focus"
- agent: "content-agent"
task: "update_weekly_focus"
- agent: "finance-agent"
task: "update_weekly_focus"
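The three score branches in the new-lead pipeline form a simple router. Sketched in Python (the task names mirror the workflow's branches; the dispatcher itself is a hypothetical stand-in, not OpenClaw internals):

```python
# Hypothetical dispatcher mirroring the new-lead-pipeline branches.
def route_lead(lead_score: int) -> list[str]:
    """Map a lead score onto the tasks the workflow would run."""
    if lead_score >= 70:
        return ["generate_proposal", "send_proposal", "find_similar_companies"]
    if lead_score >= 40:
        return ["add_to_nurture_sequence"]
    return ["archive_lead"]
```

Note the boundary values: a score of exactly 70 gets the full proposal path, exactly 40 enters the nurture sequence, and 39 and below is archived.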
---
| Agent | Monthly Token Cost | Human Equivalent Cost | Savings | ROI Multiplier |
|-------|-------------------|-----------------------|---------|----------------|
| CEO Agent | £25 | £1,200 (exec time) | £1,175 | 48x |
| Sales Agent | £18 | £600 (SDR time) | £582 | 33x |
| Finance Agent | £8 | £375 (bookkeeper) | £367 | 47x |
| Content Agent | £38 | £2,800 (writer+VA) | £2,762 | 73x |
| Research Agent | £12 | £200 (analyst) | £188 | 16x |
| Client Agent | £15 | £400 (account mgr) | £385 | 27x |
| Support Agent | £9 | £300 (support rep) | £291 | 33x |
| Operations Agent | £6 | £200 (PM time) | £194 | 32x |
| Marketing Agent | £14 | £500 (marketer) | £486 | 36x |
| Analytics Agent | £4 | £150 (analyst) | £146 | 37x |
| Personal Agent | £8 | £250 (EA time) | £242 | 31x |
| TOTAL | £157/month | £6,975/month | £6,818/month | 44x avg |
Annual savings: £81,816
Key insight: Riley isn't comparing agents to hiring full-time staff. He's comparing to the cost of NOT doing this work at all.
Before agents:
With agents:
---
Is this work currently being done?
│
├─ YES → Is it taking > 2 hours/week?
│ │
│ ├─ YES → Build agent to replace human time
│ └─ NO → Don't build (not worth it)
│
└─ NO → Would doing this work generate revenue or save cost?
│
├─ YES → Build agent to capture opportunity
└─ NO → Don't build (nice-to-have)
Examples:
❌ Don't build: "Coffee order agent" (saves 5 mins/week)
✅ Build: "Lead scorer agent" (captures revenue opportunity)
✅ Build: "Invoice processor agent" (saves 15 hours/month bookkeeping)
❌ Don't build: "Meeting note beautifier" (output quality doesn't matter)
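The decision tree collapses to one small function. A Python rendering (the two-hour threshold is the document's; the boolean inputs are the questions you answer about the task):

```python
# The 2-hour threshold comes from the decision tree above.
def should_build_agent(already_being_done: bool,
                       hours_per_week: float,
                       creates_value: bool) -> bool:
    """Mirror the build/don't-build decision tree."""
    if already_being_done:
        return hours_per_week > 2       # worth replacing human time?
    return creates_value                # worth capturing the opportunity?

# Invoice processor: ~3.5 h/week of bookkeeping already happening -> build.
# Coffee order agent: done today, but only ~0.1 h/week -> skip.
# Lead scorer: not done today, but would capture revenue -> build.
```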
---
---
Riley's plan to scale from £540k to £1M revenue with the same 11-agent team:
Revenue breakdown (current):
---
8-11. Client Agent, Support Agent, Marketing Agent, Analytics Agent
Goal: Scale to £500k+ revenue with zero staff.
---
Mistake 1: No documented process
❌ "I'll build a sales agent and see what it does"
✅ "I'll document my sales process, then build an agent that follows it"
Mistake 2: Over-engineering
❌ 50-page prompt with every edge case
✅ 2-page prompt that handles 90% of cases, escalates the rest
Mistake 3: Not measuring ROI
❌ "My agents seem helpful"
✅ "My agents save 15 hours/week = £600/month value vs. £38/month cost = 16x ROI"
Mistake 4: Duplicating work across agents
❌ Sales Agent and Marketing Agent both researching the same lead
✅ Research Agent does research once, both agents read shared memory
Mistake 5: Ignoring agent logs
❌ Agents run silently, errors go unnoticed
✅ Heartbeat monitoring + weekly reviews catch issues early
---
You've completed the OpenClaw Course for CEOs. You now know:
---
Congratulations. You're now equipped to build an AI-powered business that runs 24/7, scales without hiring, and generates £500k+ in revenue.
What others have achieved:
You now have a 10-year competitive advantage.
Go build.
---
Course complete. Total duration: 10 modules, 11 hours of content. You are now an OpenClaw expert.