LLM Supervisor

Automatically switches between cloud and local LLMs on rate limits, with user confirmation required for local code generation.

Install
$clawhub install llm-supervisor-agent


Automatically switches OpenClaw between cloud and local Ollama models when rate limits occur.

Features

  • Detects rate-limit / overload errors from Anthropic/OpenAI

  • Switches to a local Ollama fallback model

  • Requires explicit confirmation before local code generation

  • Supports manual override commands

Commands

  • /llm status

  • /llm switch cloud

  • /llm switch local
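The switching behavior described above can be sketched roughly as follows. This is an illustrative model, not the plugin's actual implementation: the class name, the default cloud model name (claude-sonnet), and the error-marker list are assumptions.

```python
class LLMSupervisor:
    """Routes requests to a cloud model, falling back to local Ollama on rate limits."""

    # Substrings that typically appear in Anthropic/OpenAI rate-limit or
    # overload errors (HTTP 429, Anthropic's 529 "overloaded" responses).
    RATE_LIMIT_MARKERS = ("rate_limit", "overloaded", "429", "529")

    def __init__(self, cloud_model="claude-sonnet", local_model="qwen2.5:7b"):
        self.cloud_model = cloud_model
        self.local_model = local_model
        self.active = "cloud"

    def is_rate_limited(self, error: str) -> bool:
        msg = error.lower()
        return any(marker in msg for marker in self.RATE_LIMIT_MARKERS)

    def handle_error(self, error: str) -> str:
        # Automatic fallback: cloud -> local on a detected rate-limit error.
        if self.active == "cloud" and self.is_rate_limited(error):
            self.active = "local"
        return self.active

    def switch(self, target: str) -> str:
        # Backs the manual /llm switch cloud|local commands.
        if target not in ("cloud", "local"):
            raise ValueError(f"unknown target: {target}")
        self.active = target
        return self.active

    def status(self) -> str:
        # Backs /llm status.
        model = self.cloud_model if self.active == "cloud" else self.local_model
        return f"{self.active} ({model})"
```

Switching back to cloud is manual (`/llm switch cloud`) in this sketch; the supervisor only auto-switches in the cloud-to-local direction.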

Default Local Model

  • qwen2.5:7b (must be available locally, e.g. via ollama pull qwen2.5:7b)

Safety

Local code generation requires the user to type:

CONFIRM LOCAL CODE
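The gate can be expressed as an exact-match check. The confirmation phrase comes from this README; the function name and surrounding API are illustrative assumptions.

```python
# Exact phrase required by the plugin (from the README above).
CONFIRM_PHRASE = "CONFIRM LOCAL CODE"

def may_generate_code_locally(user_input: str) -> bool:
    """Only an exact, case-sensitive match unlocks local code generation.

    Leading/trailing whitespace is forgiven; anything else is rejected.
    """
    return user_input.strip() == CONFIRM_PHRASE
```

A case-sensitive exact match keeps the confirmation deliberate: a casual "yes" or a lowercase variant does not unlock local code generation.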