- The 5 Things That Will Break When You Upgrade (Fix in 60 Seconds)
- The 93% Cost Cut: GPT-5.4-nano Token Routing Masterclass
- The Search Stack You Actually Need
- 30+ Security Fixes: OpenClaw Burns the Legacy Boats
- Model Upgrades: MiniMax Benchmarks + Anthropic Vertex
- Everything Else That's New
- Upgrade Checklist
1. The 5 Things That Will Break When You Upgrade
Run openclaw update without reading this and you will hit at least one of these. Each fix takes under 60 seconds.
```shell
openclaw doctor --fix
```
This single command handles breaks #1, #3, and partially #4 automatically. Run it before you touch anything else.
Break #1: Chrome Extension Relay Removed
The legacy Chrome extension relay path is gone. If your config has driver: "extension" or browser.relayBindHost, it will fail silently.
Fix: openclaw doctor --fix migrates host-local browser config to existing-session or user automatically. Nothing manual required.
Break #2: .moltbot and .clawdbot State Directories Dead
If you have state files under ~/.moltbot, they will not be found. Auto-detection and migration fallback are gone.
Fix: Move your state manually:
```shell
mv ~/.moltbot ~/.openclaw

# Or set the paths explicitly:
export OPENCLAW_STATE_DIR=~/.openclaw
export OPENCLAW_CONFIG_PATH=~/.openclaw/openclaw.json
```
Break #3: CLAWDBOT_* and MOLTBOT_* Environment Variables Removed
Every old compatibility env name is gone. If you have scripts, LaunchAgents, or cron jobs that set these, they silently do nothing now.
Replacement map:
```shell
# Old                 → New
CLAWDBOT_API_KEY      → OPENCLAW_API_KEY
CLAWDBOT_PORT         → OPENCLAW_PORT
CLAWDBOT_STATE_DIR    → OPENCLAW_STATE_DIR
MOLTBOT_CONFIG_PATH   → OPENCLAW_CONFIG_PATH
MOLTBOT_LOG_LEVEL     → OPENCLAW_LOG_LEVEL
```
Search your shell profile, LaunchAgents, and any startup scripts:

```shell
grep -r "CLAWDBOT\|MOLTBOT" ~/.zshrc ~/Library/LaunchAgents/
```
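If you want more than raw grep hits, a short script can report which replacement each legacy variable maps to. A minimal sketch, not an OpenClaw tool -- the mapping mirrors the replacement table above, and the sample profile text is illustrative:

```python
import re

# Old → new env var names, per the replacement map above.
RENAMES = {
    "CLAWDBOT_API_KEY": "OPENCLAW_API_KEY",
    "CLAWDBOT_PORT": "OPENCLAW_PORT",
    "CLAWDBOT_STATE_DIR": "OPENCLAW_STATE_DIR",
    "MOLTBOT_CONFIG_PATH": "OPENCLAW_CONFIG_PATH",
    "MOLTBOT_LOG_LEVEL": "OPENCLAW_LOG_LEVEL",
}

LEGACY = re.compile(r"\b(?:CLAWDBOT|MOLTBOT)_[A-Z_]+\b")

def find_legacy_vars(text: str) -> list[tuple[str, str]]:
    """Return (old, suggested_new) pairs for every legacy var found in text."""
    hits = []
    for match in LEGACY.finditer(text):
        old = match.group(0)
        hits.append((old, RENAMES.get(old, "no direct replacement -- check the docs")))
    return hits

# Illustrative shell-profile snippet:
profile = 'export CLAWDBOT_API_KEY="sk-..."\nexport MOLTBOT_LOG_LEVEL=debug\n'
for old, new in find_legacy_vars(profile):
    print(f"{old} -> {new}")
```

Run it over `~/.zshrc` and anything in `~/Library/LaunchAgents/` and fix each hit as it surfaces.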
Break #4: nano-banana-pro Image Generation Path Dead
The bundled nano-banana-pro skill wrapper is removed. Any config referencing it produces no output and no error.
Fix -- exact JSON to add to your config:
```json
{
  "agents": {
    "defaults": {
      "imageGenerationModel": {
        "primary": "google/gemini-3-pro-image-preview"
      }
    }
  }
}
```
Break #5: Plugin SDK Breaking Change -- No Compatibility Shim
The legacy openclaw/extension-api is removed with zero compatibility shim. Any plugin importing from it is broken with no fallback.
Fix: Migrate all imports to the new public SDK surface:
```ts
// Old (broken):
import { something } from 'openclaw/extension-api'

// New:
import { something } from 'openclaw/plugin-sdk/core'
import { something } from 'openclaw/plugin-sdk/testing'
```
If you use community plugins, check their repos for a 2026.3.22 compatibility update before upgrading.
2. The 93% Cost Cut: GPT-5.4-nano Token Routing Masterclass
The earlier version of this article said "90% cost cut." That was wrong. Here are the real numbers:
| Model | Input / 1M tokens | Output / 1M tokens | vs Sonnet |
|---|---|---|---|
| Claude Sonnet 4.6 | $3.00 | $15.00 | baseline |
| GPT-5.4-mini | $0.40 | $1.60 | ~87% cheaper |
| GPT-5.4-nano | $0.20 | $1.25 | 93% cheaper |
But the bigger story is the context window. GPT-5.4-nano has a 400K token context window. That changes what cron jobs can do.
300 pages of logs. An entire codebase. A week of Telegram history. All of it fits in a single Nano call at $0.20 per million input tokens. You can now run deep log analysis in a cron job without paying frontier-model prices.
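To put numbers on that, here is the per-call arithmetic using the prices from the table above. The token counts are illustrative (a near-full-context log dump plus a short summary):

```python
# Prices in USD per 1M tokens, from the pricing table above.
PRICES = {
    "claude-sonnet-4.6": {"input": 3.00, "output": 15.00},
    "gpt-5.4-nano": {"input": 0.20, "output": 1.25},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of a single call in USD."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A near-full-context Nano call: 380K tokens of logs in, a 2K-token summary out.
nano = call_cost("gpt-5.4-nano", 380_000, 2_000)
sonnet = call_cost("claude-sonnet-4.6", 380_000, 2_000)
print(f"nano:   ${nano:.4f}")    # about $0.08 per call
print(f"sonnet: ${sonnet:.4f}")  # about $1.17 per call
print(f"savings: {1 - nano / sonnet:.0%}")
```

Run nightly, that is the difference between roughly $2.40 and $35 a month for the same job.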
Here is the exact High/Low Reasoning Split config. Copy this, paste it into your openclaw.json, and your token costs restructure immediately:
```json
{
  "agents": {
    "list": [
      {
        "id": "research-worker",
        "model": "anthropic/claude-sonnet-4-6",
        "thinking": "on",
        "description": "Deep research, complex decisions, code review"
      },
      {
        "id": "ops-cron",
        "model": "openai/gpt-5.4-nano",
        "thinking": "off",
        "description": "Status checks, log summaries, routine cron jobs"
      },
      {
        "id": "status-checker",
        "model": "openai/gpt-5.4-nano",
        "thinking": "off",
        "description": "Health checks, balance monitors, quick lookups"
      },
      {
        "id": "data-enrichment",
        "model": "deepseek/deepseek-chat",
        "thinking": "off",
        "description": "Bulk data processing, enrichment batches"
      }
    ]
  }
}
```
The rule: if the task is deterministic and the input is structured, use Nano. If the task requires judgment, use Sonnet. You will never need to pay Opus prices for anything that runs in a cron.
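That rule is simple enough to write down as a dispatch function. This is not an OpenClaw API, just an illustration of the decision table; the agent ids match the config above:

```python
from dataclasses import dataclass

@dataclass
class Task:
    deterministic: bool       # same input → same expected output?
    structured_input: bool    # logs, JSON, tables vs. open-ended prose
    bulk_batch: bool = False  # high-volume enrichment work

def route(task: Task) -> str:
    """Map a task to an agent id from the High/Low Reasoning Split config."""
    if task.bulk_batch:
        return "data-enrichment"   # deepseek-chat: cheap bulk processing
    if task.deterministic and task.structured_input:
        return "ops-cron"          # gpt-5.4-nano: no reasoning needed
    return "research-worker"       # sonnet with thinking: judgment calls

print(route(Task(deterministic=True, structured_input=True)))    # ops-cron
print(route(Task(deterministic=False, structured_input=False)))  # research-worker
```

Everything that runs unattended should fall through the first two branches; if a cron job keeps landing on `research-worker`, rethink the job.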
3. The Search Stack You Actually Need
The original article got the search provider framing wrong on three counts. Here is the corrected breakdown:
| Provider | Type | Install | Use it for |
|---|---|---|---|
| Brave Search | Native, bundled default | None -- already there | General search. Privacy-first. No API key needed. This is the recommended default. |
| Exa | Plugin | `openclaw plugins install exa` | Neural/semantic search. When keyword search fails, Exa finds the right result by meaning. |
| Tavily | Skill / MCP-based | `openclaw plugins install tavily` | RAG-optimized. Returns structured excerpts built for injection into agent context. |
| Firecrawl | Plugin + autonomous agent | `openclaw plugins install firecrawl` | Multi-step autonomous web research. Not a scraper. See below. |
The Firecrawl Reveal -- What Everyone Missed
Firecrawl is the only search provider in this list that operates as an autonomous research agent via firecrawl_agent. The difference is significant.
Basic search (Exa, Tavily, Brave): You give it a query. It returns results. Your agent reads and synthesizes them.
Firecrawl agent: You give it a goal. It navigates, extracts, structures, and returns the finished output. Your agent does not coordinate the steps -- Firecrawl does.
An example goal you can hand it:

> Using firecrawl_agent, find the pricing plans for the top 5 CRM tools and return them as a JSON comparison table with columns: name, starter_price, pro_price, enterprise_price, free_tier.
Firecrawl navigates to each site, extracts the pricing tables autonomously, and returns structured JSON. No URLs needed. No coordination required.
This is the capability that separates basic AI search from autonomous research. Use Exa or Tavily for lookups. Use Firecrawl when you want a finished output, not raw results.
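If you feed that finished output into downstream automation, validate its shape before trusting it. A minimal sketch, assuming the column names from the example prompt above; the sample row is made up:

```python
import json

# Columns requested in the example prompt above.
EXPECTED_KEYS = {"name", "starter_price", "pro_price", "enterprise_price", "free_tier"}

def validate_pricing_rows(raw: str) -> list[dict]:
    """Parse agent output and reject rows missing the requested columns."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of rows")
    for i, row in enumerate(rows):
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {i} missing columns: {sorted(missing)}")
    return rows

# Hypothetical sample row, for illustration only.
sample = '[{"name": "ExampleCRM", "starter_price": 12, "pro_price": 49, "enterprise_price": null, "free_tier": true}]'
print(len(validate_pricing_rows(sample)))  # 1
```

A dozen lines of validation is cheap insurance against an agent quietly returning prose instead of JSON.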
4. 30+ Security Fixes: OpenClaw Burns the Legacy Boats
The original article said "eight security fixes." The real count is 30+ patches across five distinct attack surfaces. This is not a maintenance release. It is an enterprise-grade security overhaul.
| Attack Surface | What was patched | Why it matters |
|---|---|---|
| Sandbox | JVM injection blocked (MAVEN_OPTS, SBT_OPTS, GRADLE_OPTS, ANT_OPTS). glibc tunable exploitation blocked (GLIBC_TUNABLES). .NET dependency hijack blocked. | OpenClaw running on your Mac Mini can no longer be used as a vector to inject malicious JVM or runtime configuration by a compromised exec environment. |
| Network | SSRF pinning on explicit-proxy. Bonjour/DNS-SD fail-closed on unresolved endpoints. Gateway probe caller-timeout honored. | A malicious local service could no longer steer routing or SSH auto-target selection through DNS-SD hints. |
| Identity | iOS setup codes bound to intended node profile. Device token rotation hardened. Pairing bootstrap rejection for scope escalation. | A compromised QR pairing flow can no longer request broader scopes than the intended node allows. |
| Encoding | Hidden Hangul Unicode filler code points blocked in exec approval prompts across gateway/chat and macOS native approval UI. | Visually empty Unicode padding can no longer hide a different command behind the one you approved. |
| Data access | Windows UNC/file:// media path injection blocked. jq removed from safe-bin allowlist. | jq -n env could dump every host secret to stdout. It is no longer trusted by default. |
The sandbox hardening is the most important patch for Mac Mini operators. Your personal machine is now protected against JVM and glibc injection through the exec environment -- attack vectors that were theoretically exploitable before this release.
OpenClaw is not just patching bugs. It is systematically closing the gap between "runs on developer hardware" and "safe to run on a machine that matters."
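The Hangul-filler trick from the Encoding row is easy to demonstrate: code points such as U+3164 (HANGUL FILLER) and U+FFA0 (HALFWIDTH HANGUL FILLER) render as blank but count as real characters, so a command can carry an invisible payload past a visual review. A minimal detector sketch -- not OpenClaw's actual implementation:

```python
# Code points that render as visually empty but are real characters.
FILLER_CODEPOINTS = {
    0x115F,  # HANGUL CHOSEONG FILLER
    0x1160,  # HANGUL JUNGSEONG FILLER
    0x3164,  # HANGUL FILLER
    0xFFA0,  # HALFWIDTH HANGUL FILLER
}

def hidden_filler_positions(text: str) -> list[int]:
    """Indices of invisible Hangul filler characters in an approval prompt."""
    return [i for i, ch in enumerate(text) if ord(ch) in FILLER_CODEPOINTS]

safe = "git status"
sneaky = "git status\u3164\u3164# actually something else"
print(hidden_filler_positions(safe))    # []
print(hidden_filler_positions(sneaky))  # [10, 11]
```

Both strings look identical in most terminals; only the second carries the invisible padding that this release now rejects in approval prompts.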
5. Model Upgrades: Real Benchmarks, Real Context
MiniMax M2.7 -- What It Actually Is
The original article called MiniMax M2.7 "the best local-first option." That was wrong. MiniMax M2.7 is a 230 billion parameter Mixture-of-Experts cloud model. It is not local. You are calling their API.
But here is why you still care about it:
| Model | PinchBench | SWE-Pro | Context |
|---|---|---|---|
| Claude Opus 4.6 | 87.4% | ~60% | 200K |
| MiniMax M2.7 | 86.2% | 56.22% | TBC |
| GPT-5.4 | ~85% | ~55% | 128K |
MiniMax M2.7 benchmarks within 1.2 points of Claude Opus 4.6 on PinchBench. That is not a local model. That is a frontier-class cloud model from a Chinese AI company. A highspeed variant offers lower latency at reduced cost.
```shell
minimax/minimax-m2.7
minimax/minimax-m2.7-highspeed   # FastMode: /fast or params.fastMode
```
Anthropic Vertex Native Support
Claude can now be routed natively through Google Vertex AI. This matters for two specific use cases:
- Enterprise billing: Claude usage billed through your Google Cloud account instead of Anthropic directly
- Data residency: Your prompts stay within your GCP region -- relevant for compliance-sensitive deployments
```json
{
  "models": {
    "providers": {
      "anthropic-vertex": {
        "gcpProject": "your-project-id",
        "gcpRegion": "us-central1"
      }
    }
  }
}
```
6. Everything Else Worth Knowing
ClawHub Plugin Marketplace
OpenClaw now prefers ClawHub before npm for named packages. One command, no hunting:
```shell
openclaw plugins install <name>
openclaw skills search <keyword>
```
Per-Agent Reasoning
Assign thinking depth at the agent level. Combined with the routing config above, this is how you stop paying for reasoning you do not need.
48-Hour Agent Timeout
Default went from 600 seconds to 48 hours. Overnight agents now actually run overnight.
/btw Side Questions
```
/btw what model am I running?
/btw how much context is left?
```
Quick answers without touching your session context. Dismisses automatically.
OpenShell and SSH Sandbox Backends
Pluggable sandbox backends replace Docker-only. OpenShell adds mirror and remote workspace modes. SSH backend adds secret-backed key, certificate, and known_hosts support.
Browser Expansion
Brave, Edge, and any Chromium browser now supported via userDataDir:
```json
{
  "browser": {
    "profiles": {
      "brave": {
        "userDataDir": "~/Library/Application Support/BraveSoftware/Brave-Browser"
      }
    }
  }
}
```
Gateway Cold Start Fixed
WhatsApp and Discord were taking 30+ seconds to start. Bundled plugins now load from compiled dist entries. Cold start drops to seconds. The "Unknown model" error on first message after boot is also fixed.
Android SMS and Call Log Search
sms.search and callLog.search are now available for Android nodes. Your agent can now search your texts and call history through the gateway.
7. Upgrade Checklist -- In This Order
1. Run `openclaw update`.
2. Run `openclaw doctor --fix` -- fixes breaks #1, #3, and partially #4 automatically.
3. Search for old env vars: `grep -r "CLAWDBOT\|MOLTBOT" ~/.zshrc ~/Library/LaunchAgents/`
4. If you have `~/.moltbot`, move it to `~/.openclaw`.
5. If you use custom plugins, check they are migrated off `openclaw/extension-api`.
6. Add the image generation config if you use image generation.
7. Copy the High/Low Reasoning Split config into your `openclaw.json`.
8. Run `openclaw plugins install exa tavily firecrawl`.
9. Test one agent turn to confirm everything works.
Prefer to delegate? Paste this prompt into your agent:

> Help me upgrade to OpenClaw 2026.3.22 without breaking anything. Do these steps without asking for confirmation on each:
> 1. Run: `openclaw doctor --fix`
> 2. Search my shell config for CLAWDBOT or MOLTBOT env vars and tell me what you find
> 3. Add the High/Low Reasoning Split config to my openclaw.json (research → Sonnet, cron → gpt-5.4-nano)
> 4. Run: `openclaw plugins install exa tavily firecrawl`
> 5. Add the imageGenerationModel config for gemini-3-pro-image-preview
> 6. Report what was changed and what needs my attention
Get the next breakdown before it goes public
GnawClaw publishes technically accurate, copy-paste-ready guides for every major OpenClaw release. No fluff, no summaries -- implementation playbooks.