This is a browser simulation of exactly what happens inside EvoClaw when your agent receives a message. Pick a scenario, press Play, and watch all 6 stages run live — intercept, scoring, skill injection, cloud training, skill evolution, and model weight hot-swap.
This simulates a real conversation between a user and the OpenClaw agent. EvoClaw intercepts every message and response through its transparent proxy — zero extra latency added to the user.
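The intercept stage can be sketched as a thin wrapper around the agent: it forwards every message unchanged and records the turn for later scoring. Names like `EvoProxy` and `trajectory` are illustrative assumptions, not EvoClaw's real internals.

```python
# Minimal sketch of a transparent intercepting proxy (hypothetical API).
class EvoProxy:
    """Wraps an agent callable; records every turn without altering it."""

    def __init__(self, agent):
        self.agent = agent
        self.trajectory = []  # recorded (message, response) pairs

    def __call__(self, message: str) -> str:
        response = self.agent(message)                # forward unchanged
        self.trajectory.append((message, response))   # record for later scoring
        return response                               # user sees the same response

echo_agent = lambda m: m.upper()
proxy = EvoProxy(echo_agent)
print(proxy("hello"))  # → HELLO
```

Because recording happens after the response is returned from the underlying agent call, the user-visible path adds no extra model calls.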
After each response, a judge LLM rates it on a 0.0–1.0 scale. Scores above 0.7 are good. Below 0.3 triggers Skill Evolution. These scores weight how much each turn influences the gradient update.
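The scoring thresholds above can be expressed in a few lines. This is a sketch under stated assumptions: the threshold constants come from the text, but the function names and the normalized weighting scheme are illustrative, not EvoClaw's actual update rule.

```python
GOOD, FAIL = 0.7, 0.3  # thresholds from the text

def classify(reward: float) -> str:
    """Map a 0.0-1.0 judge score to an action."""
    if reward > GOOD:
        return "good"
    if reward < FAIL:
        return "evolve"   # triggers Skill Evolution
    return "neutral"

def turn_weights(rewards):
    """Normalize rewards so higher-scored turns influence the update more."""
    total = sum(rewards) or 1.0
    return [r / total for r in rewards]
```

A turn scored 0.9 would contribute three times the gradient weight of a turn scored 0.3 under this scheme.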
EvoClaw retrieves the most relevant skills from its bank based on the conversation content and injects them into the system prompt. The agent immediately becomes more capable — before any retraining.
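Retrieval-then-injection can be sketched as ranking the skill bank by relevance to the conversation and appending the top matches to the system prompt. The keyword-overlap scorer below is a deliberately simple stand-in; the real system presumably uses embeddings, and every name here is an assumption.

```python
def relevance(skill_text: str, convo: str) -> float:
    """Toy relevance score: fraction of skill words present in the conversation."""
    a, b = set(skill_text.lower().split()), set(convo.lower().split())
    return len(a & b) / (len(a) or 1)

def inject_skills(system_prompt: str, skill_bank: list, convo: str, k: int = 2) -> str:
    """Append the k most relevant skills to the system prompt."""
    top = sorted(skill_bank, key=lambda s: relevance(s, convo), reverse=True)[:k]
    return system_prompt + "\n\n# Skills\n" + "\n".join(f"- {s}" for s in top)
```

Because injection only edits the prompt, the capability gain is immediate and requires no weight update.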
When the buffer fills (32 turns by default), EvoClaw submits a cloud LoRA training job to Tinker. The training runs remotely — your machine just sends the data and receives updated weights.
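The buffer-and-flush behavior can be sketched as below. The 32-turn default is from the text; `submit_job` stands in for whatever call actually submits the LoRA job to Tinker, whose API is not shown here.

```python
BUFFER_SIZE = 32  # default from the text

class TrainingBuffer:
    """Collects turns; flushes to a remote LoRA training job when full."""

    def __init__(self, submit_job):
        self.turns = []
        self.submit_job = submit_job  # hypothetical: submits a cloud job, returns a job id

    def add(self, turn):
        self.turns.append(turn)
        if len(self.turns) >= BUFFER_SIZE:
            job_id = self.submit_job(self.turns)  # training runs remotely
            self.turns.clear()                    # local machine keeps nothing heavy
            return job_id
        return None
```

The local side only ships data and later receives updated weights; no training compute runs on the user's machine.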
When the agent fails (reward < 0.3), EvoClaw sends the full trajectory to a skill-generation LLM. It analyzes what went wrong and creates a new, targeted skill that gets added to the bank permanently.
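The failure-triggered path can be sketched as a single guard: if the reward falls below 0.3, hand the trajectory to a skill-generation LLM and persist the result. `generate_skill` is a hypothetical callable standing in for that LLM call.

```python
FAIL_THRESHOLD = 0.3  # from the text

def maybe_evolve(trajectory, reward, skill_bank, generate_skill):
    """On failure, generate a targeted skill and add it to the bank permanently."""
    if reward < FAIL_THRESHOLD:
        new_skill = generate_skill(trajectory)  # LLM analyzes what went wrong
        skill_bank.append(new_skill)            # permanent addition to the bank
        return new_skill
    return None
```

Successful turns leave the bank untouched, so the bank grows only from mistakes.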
Once training completes, updated LoRA weights are pushed directly to the Tinker sampling endpoint and swapped in with zero service interruption. The cycle repeats automatically.
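A zero-interruption swap reduces to atomically replacing a reference to the current adapter, so in-flight requests finish on the old weights while new requests pick up the new ones. This is a local sketch of the pattern only; it does not reflect Tinker's actual endpoint mechanics.

```python
import threading

class AdapterSlot:
    """Holds the current LoRA adapter identifier; swaps are atomic."""

    def __init__(self, path: str):
        self._path = path
        self._lock = threading.Lock()

    def swap(self, new_path: str) -> None:
        with self._lock:
            self._path = new_path  # readers never see a half-updated state

    @property
    def path(self) -> str:
        return self._path

slot = AdapterSlot("lora-v1")
slot.swap("lora-v2")  # serving continues throughout
```

Each new request reads `slot.path` at dispatch time, which is what makes the cycle repeatable without downtime.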
The simulation above shows how EvoClaw works. Now talk to the real agent — it learns from every message and remembers across sessions.
ASK EVOCLAW →