February 5, 2026 | Posted by Benji Taylor
Introducing Agentation 2.0
A new way for humans and AI to collaborate on UI
Since launch, Agentation has already become part of how many developers work with AI on UI, with over 1.8k GitHub stars and hundreds of thousands of installs via npm.
Version 1 was annotate, copy, paste. You’d annotate something, copy the structured output, hand it to your agent. Good context, but it required a manual handoff every time.
Version 2 is annotate and collaborate. Your agent sees your annotations directly. It has the full picture: what you’re pointing at, what you said, what’s pending across your whole site. You work together until it’s fixed.
MCP Integration
The Model Context Protocol server is the biggest addition in 2.0. It’s what makes the direct connection possible.
With MCP, agents can fetch your current annotations, acknowledge them, ask follow-up questions, resolve issues with summaries, or dismiss feedback with reasons. Your annotations flow directly into the agent’s context.
The server runs locally and supports multiple interfaces: MCP tools for direct agent integration, an HTTP API for custom workflows, and Server-Sent Events for real-time updates. It’s designed to work with Claude Code and any MCP-compatible client.
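The Server-Sent Events interface means real-time consumers need very little code. Here’s a minimal sketch of parsing an SSE message into an event object; the endpoint path and the event’s field names (`id`, `page`, `status`) are illustrative assumptions, not Agentation’s actual wire format:

```typescript
// Hypothetical shape of an annotation event pushed over SSE.
// Field names are illustrative, not Agentation's published schema.
interface AnnotationEvent {
  id: string;
  page: string;
  status: "pending" | "acknowledged" | "resolved" | "dismissed";
}

// Parse one SSE message block ("data: {...}") into an event.
function parseAnnotationEvent(message: string): AnnotationEvent | null {
  const dataLine = message
    .split("\n")
    .find((line) => line.startsWith("data:"));
  if (!dataLine) return null;
  return JSON.parse(dataLine.slice("data:".length).trim());
}

// Subscribing is then a matter of pointing EventSource (or fetch with
// a streaming reader) at the local server, e.g.:
//   const source = new EventSource("http://localhost:PORT/events");
//   source.onmessage = (e) => console.log(parseAnnotationEvent(`data: ${e.data}`));
```

In the browser, `EventSource` handles reconnection and message framing for you; the parser above is only needed if you consume the raw stream yourself.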
Here’s what the workflow looks like:
You: “What annotations do I have?”
Agent: “3 annotations: button on /checkout, contrast on /settings, typo on /about.”
You: “Fix the button”
Agent: “Left-align or center with the form?”
You: “Center”
Agent: “Done. Marked as resolved.”
Sessions & Smart Filtering
Every page now gets its own session, and every annotation carries rich metadata: when it was created, when it was last updated, its current status, and who resolved it. This unlocks entirely new ways to work with feedback.
Ask your agent things like:
- “What feedback has been waiting the longest?”
- “Show me just the blocking issues”
- “Which pages have unresolved annotations?”
- “What did I mark as a question vs a fix request?”
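Queries like these map onto straightforward filters over the session metadata. A sketch of two of them, assuming hypothetical field names (`createdAt`, `status`, `page`) rather than the real schema:

```typescript
// Illustrative annotation metadata; field names are assumptions,
// not Agentation's published schema.
interface Annotation {
  id: string;
  page: string;
  status: "pending" | "acknowledged" | "resolved" | "dismissed";
  createdAt: number; // epoch ms
}

// "What feedback has been waiting the longest?"
function oldestPending(annotations: Annotation[]): Annotation | undefined {
  return annotations
    .filter((a) => a.status === "pending")
    .sort((a, b) => a.createdAt - b.createdAt)[0];
}

// "Which pages have unresolved annotations?"
function pagesWithUnresolved(annotations: Annotation[]): string[] {
  const pages = annotations
    .filter((a) => a.status !== "resolved" && a.status !== "dismissed")
    .map((a) => a.page);
  return [...new Set(pages)];
}
```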
Status transitions are first-class too. When an agent starts working on your feedback, it can mark it as acknowledged so you know it’s in progress. When it’s done, it resolves with a summary. If it decides not to act, it dismisses with a reason. Every state change is timestamped, so you always know the full history.
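The lifecycle described above can be modeled as a small state machine where every transition is timestamped and resolved/dismissed states carry a note. A sketch with hypothetical names, not Agentation’s internal implementation:

```typescript
type Status = "pending" | "acknowledged" | "resolved" | "dismissed";

// One timestamped state change in an annotation's history.
interface StatusChange {
  status: Status;
  at: number; // epoch ms
  note?: string; // resolution summary or dismissal reason
}

// Append a transition; resolving or dismissing requires a note.
function transition(
  history: StatusChange[],
  status: Status,
  note?: string,
  now: number = Date.now(),
): StatusChange[] {
  if ((status === "resolved" || status === "dismissed") && !note) {
    throw new Error(`${status} requires a summary or reason`);
  }
  return [...history, { status, at: now, note }];
}
```

Keeping the history as an append-only list is what makes “you always know the full history” cheap: the current status is just the last entry.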
Standardized Schema
We’ve published a formal Annotation Format Schema that defines exactly how annotations are structured. The schema makes annotations portable across tools and predictable for anything that consumes them.
The schema includes intent and severity fields, so you can flag something as a blocking bug vs a minor suggestion, or distinguish between “fix this” and “I have a question about this.” Agents can use these signals to prioritize work automatically.
JSON Schema and TypeScript definitions are both available. If you’re building tools that consume annotations, the schema is your starting point.
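As an illustration of how intent and severity can drive prioritization, here’s a sketch of an agent-side sort. The field names and allowed values below are assumptions for the example; the published schema is the source of truth:

```typescript
// Illustrative subset of an annotation with intent and severity.
// Names and values are assumed, not copied from the published schema.
interface AnnotatedFeedback {
  intent: "fix" | "question";
  severity: "blocking" | "major" | "minor";
  comment: string;
}

const severityRank = { blocking: 0, major: 1, minor: 2 } as const;

// Sort fix requests ahead of questions, and blockers first within each.
function prioritize(items: AnnotatedFeedback[]): AnnotatedFeedback[] {
  return [...items].sort(
    (a, b) =>
      (a.intent === "question" ? 1 : 0) - (b.intent === "question" ? 1 : 0) ||
      severityRank[a.severity] - severityRank[b.severity],
  );
}
```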
Webhooks
Webhooks let you subscribe to annotation events and push them anywhere. Configure a URL, and every annotation gets delivered as a structured JSON payload.
Some workflows you could build:
- GitHub Issues: Automatically create issues from annotations, labeled by severity. Pair with a GitHub Action that triggers Claude Code to fix them.
- Slack alerts: Post blocking issues to a channel with a “Fix it” button that invokes your agent.
- Linear sync: Turn annotations into tickets, with component paths pre-filled so engineers know exactly where to look.
- Review dashboard: Aggregate feedback across your team into a single view, sorted by age and severity.
The schema is stable enough to build on. If you can receive a POST request, you can integrate Agentation into your workflow.
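For example, mapping a delivery onto a GitHub-issue-style record takes only a few lines. The payload fields below (`page`, `comment`, `severity`, `componentPath`) are a hypothetical shape for illustration; the schema docs define the real one:

```typescript
// Assumed webhook payload fields; consult the schema for the real shape.
interface WebhookPayload {
  page: string;
  comment: string;
  severity: "blocking" | "major" | "minor";
  componentPath?: string;
}

// Turn one annotation delivery into an issue title plus labels.
function toIssue(payload: WebhookPayload): { title: string; labels: string[] } {
  const where = payload.componentPath ?? payload.page;
  return {
    title: `[${payload.severity}] ${payload.comment} (${where})`,
    labels: ["agentation", `severity:${payload.severity}`],
  };
}
```

Wire this up behind any endpoint that accepts the POST, hand the result to the GitHub API, and the issue-per-annotation workflow above is essentially done.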
React Component Detection
When you hover over an element in a React app, Agentation now shows the full component hierarchy. Not just the DOM element, but the actual components from your codebase.
This makes it dramatically easier for AI agents to find the right file. Instead of searching for a class name that might be generated, they can search for ProductCard or CheckoutButton, the names you actually use.
The detection adapts to your output format: disabled in Compact mode, framework-filtered in Standard, CSS-correlated in Detailed, and everything (including internals) in Forensic.
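Conceptually, the per-mode behavior is a filter over the detected component stack. A simplified sketch with hypothetical names (the real detection reads React internals, and Detailed’s CSS correlation is omitted here; this only illustrates the filtering step):

```typescript
type OutputMode = "compact" | "standard" | "detailed" | "forensic";

interface DetectedComponent {
  name: string; // e.g. "ProductCard"
  internal: boolean; // framework/internal wrapper vs. app component
}

// Illustrative filtering: nothing in Compact, app components in
// Standard and Detailed, everything (including internals) in Forensic.
function componentsFor(
  stack: DetectedComponent[],
  mode: OutputMode,
): string[] {
  if (mode === "compact") return [];
  if (mode === "forensic") return stack.map((c) => c.name);
  return stack.filter((c) => !c.internal).map((c) => c.name);
}
```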
What’s Next
Agentation is still new. The vision is a world where UI feedback loops shrink from hours to seconds. Point at something, say what’s wrong, and watch it get fixed in real time.
If you haven’t tried Agentation yet, install it and see how it changes the way you work with AI agents. If you’re already using it, update to 2.0 and let us know what you think.