I Replaced 3 Cloudflare Workers with a Single AI Agent Runtime — Here's What I Learned
I moved my repo janitor, blog reviewer, and morning brief from Cloudflare Workers to an AI agent running cron jobs. Lower cost, simpler architecture, and the agent actually thinks before it acts.
Yesterday I had three Cloudflare Workers running on cron timers. Today I have zero — and my agent fleet still runs every 4 hours. Here's what I learned from the migration, and why this pattern might be the future of developer automation.
The Three Workers
Each worker was a separate TypeScript project, deployed independently to Cloudflare:
- Repo Janitor — scanned my GitHub repos for bugs, security gaps, and missing docs. Opened labeled issues and draft PRs for safe fixes. Also drafted blog posts from HN + GitHub activity.
- Blog Reviewer — picked up bot/blog-* draft PRs, ran an SEO + editorial pass, rewrote the markdown, and squash-merged if publishable.
- Morning Brief — fetched weather, GitHub PRs, notifications, commits, and HN top stories. Delivered a Telegram digest every morning at 7 AM Berlin time.
On paper, Workers are perfect for this: serverless, cheap, cron-triggered. In practice, I was maintaining three wrangler.jsonc files, three package.json files, three sets of secrets, and three CI/CD pipelines. For what was essentially the same architecture repeated three times.
The Alternative: Let the Agent Do It
I use OpenClaw as my AI agent runtime. It has a built-in cron scheduler that spawns isolated agent turns — letting the LLM itself execute a task on a schedule.
Instead of writing code that calls the GitHub API, parses HN, and formats Telegram messages, I wrote a single prompt:
REPO JANITOR SCAN — Use the gh CLI to check recent commits in lina and luna-restaurant. Fetch source files, analyze for bugs/security/refactor. Create labeled issues. For ketchalegendblog, fetch HN + GitHub activity and draft a blog post as posts/YYYY-MM-DD-slug.md. Open a draft PR. Report to Telegram.
That's it. No node_modules. No wrangler deploy. No secret rotation across three projects.
Why It Works Better Than I Expected
1. The agent reasons, not just parses
My old Worker used a DeepSeek API call to analyze code diffs and generate findings. But the call was narrow — it got a diff, it returned structured JSON. No context about the project's history, no awareness of related issues.
The cron-based agent reads the full source files, considers the repo's description and structure, and reasons about whether a finding is actually worth opening an issue. It skips noise. In my first 24 hours running both side by side, the agent opened 60% fewer issues — but every single one was actionable.
2. Zero infrastructure drift
Three Workers meant three places where a secret could expire, a dependency could break, or a Cloudflare runtime change could cause a regression. Now I have one place: the OpenClaw config. Six cron job definitions, all managed from the same session.
3. gh CLI is better than raw API calls
I used to maintain a github.ts library with token handling, pagination, error wrapping. Now I write:
gh api "repos/ketchalegend/lina/git/trees/main?recursive=1"
And the agent parses the output. The gh CLI handles auth, pagination, rate limiting — for free.
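Inside a standalone orchestrator, the same call stays a thin wrapper. A minimal sketch, assuming gh is installed and authenticated via `gh auth login` (`gh_json` and `tree_paths` are hypothetical helper names, not part of any library):

```python
import json
import subprocess

def gh_json(endpoint: str) -> dict:
    """Call `gh api` and parse the JSON response; gh itself
    handles tokens, pagination, and rate limiting."""
    result = subprocess.run(
        ["gh", "api", endpoint],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def tree_paths(tree: dict) -> list[str]:
    """Extract file paths from a git/trees API payload."""
    return [entry["path"] for entry in tree.get("tree", [])]

# Example: list every file on the default branch
# tree = gh_json("repos/ketchalegend/lina/git/trees/main?recursive=1")
# paths = tree_paths(tree)
```

The win is that auth and pagination never appear in this code at all; they live in the CLI.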
4. Blog quality went up
The old blog writer called DeepSeek with a prompt template. It returned markdown. The new approach lets the agent research the topic, read my existing posts for voice matching, and iterate. The drafts feel less like templates and more like something I'd actually write.
The Pipeline: Four Agents, One Workflow
The blog workflow alone is a four-agent pipeline, each with a single responsibility:
Blog Writer ──→ Blog Reviewer ──→ Blog Implementer ──→ Blog Merger
(creates PR) (posts review) (commits fixes) (squash+merge)
- Writer — drafts the post from HN + GitHub activity, opens a draft PR. Never reviews.
- Reviewer — runs an SEO + editorial audit, posts a review with specific change requests. Never commits.
- Implementer — reads the review, applies every requested change to the PR branch. Never reviews or merges.
- Merger — validates that review + implementation are complete, checks draft status and SEO metadata, then squash-merges and deletes the branch.
Splitting them this way does two things. First, each agent has a narrow, testable job: if the reviewer starts committing code, something is wrong. Second, the agents can run on different schedules. Staggering the reviewer to run 2 hours after the writer means fresh drafts always get picked up.
What's Not Ideal
Latency. A Worker responds in under a second. An agent turn takes 30–120 seconds depending on how much analysis it does. For a cron job, this doesn't matter. For a user-facing API, it would be unusable.
Cost. Workers free tier is generous. Agent API calls aren't free — each janitor scan burns tokens. But the cost is in the same ballpark as paying for the DeepSeek API key I was already using, plus Cloudflare's paid plan when I'd eventually outgrow the free tier.
Determinism. A Worker always does exactly what you code. An agent sometimes interprets instructions differently. I had to add explicit safety rules: "All PRs must be drafts. Never auto-merge janitor PRs. Never modify production branches."
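Prompt rules alone are soft; a guard the tooling runs before any write makes them hard. A sketch (`assert_draft` and `is_draft` are hypothetical helpers; `isDraft` is a real field exposed by `gh pr view --json`):

```python
import json
import subprocess

def is_draft(pr_payload: dict) -> bool:
    """True if a `gh pr view --json isDraft` payload marks the PR a draft."""
    return bool(pr_payload.get("isDraft"))

def assert_draft(pr_number: int, repo: str) -> None:
    """Hard stop before any janitor write: refuse non-draft PRs."""
    out = subprocess.run(
        ["gh", "pr", "view", str(pr_number), "--repo", repo,
         "--json", "isDraft"],
        capture_output=True, text=True, check=True,
    )
    if not is_draft(json.loads(out.stdout)):
        raise RuntimeError(f"PR #{pr_number} is not a draft; refusing to touch it")
```

This way a misread instruction fails loudly instead of merging something it shouldn't.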
The Pattern
The key insight: cron-triggered LLM agents replace the orchestration layer, not the compute layer. The agent isn't doing heavy data processing — it's reading data, thinking about it, and calling external tools (gh, curl, git). The heavy lifting is delegated to battle-tested CLIs.
This pattern extends beyond what I built:
- Instead of a Worker that scrapes and classifies support tickets, an agent reads the ticket, thinks about categorization, and assigns labels.
- Instead of a Worker that monitors GitHub stars and sends alerts, an agent reads the star history, identifies interesting patterns, and writes a summary.
- Instead of a Worker that checks CI failures and pings Slack, an agent reads the failed build log, diagnoses the issue, and opens a GitHub issue with context.
The Code
You don't need OpenClaw for this. Any cron scheduler + any LLM API works:
```python
# Pseudocode — real implementation uses OpenClaw cron + agentTurn payloads
import subprocess
from openai import OpenAI

client = OpenAI()  # any OpenAI-compatible endpoint works

def janitor_scan():
    # Fetch recent commits via the gh CLI
    diff = subprocess.run(["gh", "api", "repos/owner/repo/commits"],
                          capture_output=True, text=True, check=True)
    # Let the LLM analyze (JANITOR_PROMPT and parse_findings elided)
    response = client.chat.completions.create(
        model="deepseek-v4-pro",
        messages=[{"role": "system", "content": JANITOR_PROMPT},
                  {"role": "user", "content": diff.stdout}],
    )
    # Create issues based on findings
    for finding in parse_findings(response):
        subprocess.run(["gh", "issue", "create", "--repo", "owner/repo",
                        "--title", finding.title, "--body", finding.body,
                        "--label", "bot:janitor"], check=True)
```
The key is that the LLM does the thinking, and the CLI tools do the doing.
What I'll Do Differently
This is day one. Here is what I am already planning:
Stagger the jobs. Right now the reviewer and implementer run on the same 4-hour cadence. The reviewer should run 2 hours after the janitor so blog drafts are always fresh when reviewed.
Add state persistence. The writer needs to track which HN stories and starred repos it has already covered. Otherwise it will re-draft the same topics. A simple JSON file in the repo tracks covered keys.
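The JSON-file approach needs almost no machinery. A sketch, assuming a repo-local file (the path and helper names are hypothetical):

```python
import json
from pathlib import Path

STATE = Path("state/covered.json")  # hypothetical location in the repo

def load_covered() -> set[str]:
    """Keys (HN story IDs, repo names, slugs) already written about."""
    if STATE.exists():
        return set(json.loads(STATE.read_text()))
    return set()

def filter_new(candidates: list[str], covered: set[str]) -> list[str]:
    """Drop anything the writer has already covered, preserving order."""
    return [c for c in candidates if c not in covered]

def mark_covered(keys: list[str]) -> None:
    """Persist newly covered keys back to the state file."""
    merged = load_covered() | set(keys)
    STATE.parent.mkdir(parents=True, exist_ok=True)
    STATE.write_text(json.dumps(sorted(merged), indent=2))
```

Committing the state file alongside the drafts keeps the dedupe history in version control for free.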
Dead-letter handling. If an agent turn fails (API error, timeout), the pipeline stalls silently. Adding a retry with backoff and a notification when a job fails three times in a row would catch this.
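The retry-then-alert behavior fits in a small wrapper. A sketch where `job` and `notify` are caller-supplied callables (for example, an agent-turn trigger and a Telegram send, both hypothetical here):

```python
import time

def run_with_retry(job, notify, max_attempts: int = 3, base_delay: float = 30.0):
    """Run `job`, retrying with exponential backoff on failure.
    After `max_attempts` consecutive failures, call `notify` with a
    dead-letter message and re-raise instead of stalling silently."""
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception as exc:
            if attempt == max_attempts:
                notify(f"job failed {max_attempts}x in a row: {exc}")
                raise
            # 30s, 60s, 120s, ... between attempts
            time.sleep(base_delay * 2 ** (attempt - 1))
```

The notification on the third failure is what turns a silent stall into something a human actually sees.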
The Takeaway
Serverless functions solved the "I don't want to manage servers" problem. AI agent cron jobs solve the "I don't want to write the same orchestration code in three different Worker projects" problem.
The future of developer automation isn't about writing less code — it's about writing code that writes itself. Or better yet, agents that think before they act.
Built with OpenClaw, DeepSeek, GitHub CLI, and a healthy skepticism of vendor lock-in. All bot issues and PRs are labeled bot:janitor — feel free to mass-close anything you don't want.