OpenHands (v0) All-hands AI v0 — autonomous software engineer agent. Event-sourced state, microagents, controller-level guardrails.
The v0 of OpenHands: event-sourced from the start, with controller-level guardrails and microagents (skills triggered by event patterns). Read this as the “before” half of an interesting v0→v1 evolution; pair it with openhands-2 and use the v0↔v1 diff to see what the team chose to refactor.
The first-pass shape of an event-sourced agent. Coarser separations than v1, but the core idea is already in place.
The controller / runtime split. v1 makes this cleaner; v0 shows the seam where they realized the split mattered.
Microagents as event subscribers. Same pattern that survives in v1.
The actionable lessons are in v1; this codebase is most useful as the contrast that makes v1’s choices visible.
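The “microagents as event subscribers” pattern above can be sketched in a few lines. This is an illustrative stand-in, not the OpenHands API: the names `Event`, `Microagent`, and `EventBus` are hypothetical, and the idea is only that each microagent registers a trigger predicate over the event stream and contributes extra instructions when an event matches.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    kind: str      # e.g. "user_message", "command_output"
    content: str

@dataclass
class Microagent:
    name: str
    trigger: Callable[[Event], bool]  # predicate over events
    prompt: str                       # instructions injected when triggered

class EventBus:
    """Hypothetical dispatcher: fan each event out to matching microagents."""

    def __init__(self) -> None:
        self.microagents: list[Microagent] = []

    def register(self, agent: Microagent) -> None:
        self.microagents.append(agent)

    def dispatch(self, event: Event) -> list[str]:
        # Collect the prompts of every microagent whose trigger fires;
        # a controller would splice these into the next model call.
        return [m.prompt for m in self.microagents if m.trigger(event)]

bus = EventBus()
bus.register(Microagent(
    name="github",
    trigger=lambda e: e.kind == "user_message" and "github.com" in e.content,
    prompt="When working with GitHub URLs, prefer the gh CLI.",
))

hits = bus.dispatch(Event("user_message", "Clone https://github.com/x/y"))
# hits now holds the github microagent's prompt
```

The design choice worth noticing is that microagents never call the model themselves; they only shape the context the controller assembles, which is why the pattern survives the v0→v1 refactor unchanged.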
The full per-document analysis lives below: the original numbered analyses, rendered as styled HTML. Pick a section to study in depth.
Insights from this project
Non-obvious tricks pulled out as standalone study cards.
Concepts touched

Agent loop: The skeleton every agent shares (read state, ask the model, parse, act, repeat) and how the wiring choices shape every other system around it.
Prompt caching: Pay 10–50% of input cost on cached tokens. The art is choosing where the static-vs-dynamic boundary lives.
Memory compression: Long sessions overflow the context window. The good implementations don't summarize; they enumerate what to keep.
Multi-agent coordination: When one agent isn't enough, there are three questions to answer for any review pipeline, planner-executor flow, or critic loop.
Sandboxing: Limit what the tool layer can do regardless of what the agent intends. Docker, firewall, process limits, or all three.
Guardrails: Layered defenses (prompt, schema, controller, sandbox), each catching a different class of failure. The story you tell auditors.
Provider abstraction: Decouple agent code from a single LLM vendor. Three patterns observed; each pays a different tax.
Tool calling formats: How the agent and the model agree on "I want to run this function." Get this wrong and you're locked into one provider, can't stream cleanly, or pay for trailing hallucination.
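The agent-loop skeleton named above (read state, ask the model, parse, act, repeat) can be sketched as follows. Everything here is an assumption-level stand-in: `fake_llm` and the `ACT:`/`FINISH:` action format are invented for illustration and do not correspond to any particular provider's API or to OpenHands' actual wire format.

```python
def fake_llm(history: list[dict]) -> str:
    # Stand-in model: finishes as soon as it has seen one tool result.
    if any(turn["role"] == "tool" for turn in history):
        return "FINISH: done"
    return "ACT: echo hello"

def run_tool(command: str) -> str:
    # Stand-in tool layer; a real runtime would sandbox this.
    return f"ran {command!r}"

def agent_loop(task: str, llm, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]        # read state
    for _ in range(max_steps):
        reply = llm(history)                             # ask the model
        if reply.startswith("FINISH:"):                  # parse: terminal action
            return reply.removeprefix("FINISH: ")
        command = reply.removeprefix("ACT: ")            # parse: tool action
        result = run_tool(command)                       # act
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "tool", "content": result})  # repeat with new state
    return "step limit reached"

print(agent_loop("say hello", fake_llm))  # → done
```

The wiring choice the blurb alludes to is visible even at this scale: whether history is an append-only event list (as in OpenHands) or a mutable message array determines how caching, compression, and replay can later be bolted on.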