Anthropic's Conway Is an Always-On AI Agent. Here's Why That Changes Everything.
A code leak revealed Anthropic is building Conway — an AI agent that never sleeps. Extensions, webhooks, background browsing. This isn't a chatbot anymore.

On March 28, 2026, a developer combing through Anthropic's public JavaScript bundles found something the company hadn't announced yet. Buried in minified code were references to a system called "Conway" — an internal project name for what appears to be Anthropic's first always-on AI agent platform. The leaked code referenced extensions, webhook listeners, background browsing sessions, and persistent task queues. This wasn't a chatbot upgrade. It was an entirely different category of product.
Anthropic hasn't confirmed the Conway name or provided an official timeline. But the code artifacts, combined with recent API changes and job postings for "agent infrastructure engineers," paint a clear picture: the company behind Claude is building an agent that runs continuously, acts on triggers, and maintains state across sessions without user prompting.
If that sounds familiar, it should. Open-source platforms like OpenClaw and managed services like RapidClaw have been deploying always-on agents to Telegram and Discord for months. Conway's emergence validates a thesis the agent community has been arguing since late 2025: the future of AI isn't conversations. It's continuous, autonomous operation.

What is Anthropic Conway?#
Based on the leaked code references and corroborating signals from Anthropic's recent API updates, Conway appears to be an agent runtime that sits on top of Claude. Rather than waiting for a user to open a chat window and type a message, Conway agents run persistently in the background, responding to events, executing scheduled tasks, and browsing the web autonomously.
The code references suggest several core components:
- Extensions system — A plugin architecture that lets Conway agents connect to external services. References to Gmail, Google Calendar, Slack, and GitHub extensions were found in the bundle. This mirrors the MCP (Model Context Protocol) approach Anthropic has already shipped, but with tighter integration into a persistent runtime.
- Webhook listeners — Conway agents can register HTTP endpoints that trigger agent actions when external events occur. A Stripe payment comes in, a GitHub issue gets labeled, a calendar event starts in 15 minutes — the agent reacts without human prompting.
- Background browsing — References to headless browser sessions suggest Conway can research topics, monitor web pages, and extract data autonomously. This goes beyond the computer-use capabilities Anthropic demonstrated in late 2025 by making browsing a background process rather than a user-supervised activity.
- Persistent task queues — Conway appears to maintain a queue of pending actions that survive across sessions. An agent can plan a sequence of steps, execute some now, and schedule others for later.
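Read literally, those four components could compose into a single agent definition. The sketch below is pure speculation to make the shape concrete — every class and field name is invented for illustration, not recovered from the leaked bundle:

```python
from dataclasses import dataclass, field

# Hypothetical composition of the four leaked components.
# None of these names appear in Anthropic's actual code.

@dataclass
class Extension:
    name: str            # e.g. "gmail", "slack"
    scopes: list[str]    # permissions the agent may exercise

@dataclass
class WebhookTrigger:
    path: str            # HTTP endpoint the runtime would register
    event: str           # e.g. "stripe.payment_succeeded"

@dataclass
class ScheduledTask:
    cron: str            # standard cron expression
    instruction: str     # natural-language task for the model

@dataclass
class AgentConfig:
    instructions: str
    extensions: list[Extension] = field(default_factory=list)
    webhooks: list[WebhookTrigger] = field(default_factory=list)
    schedule: list[ScheduledTask] = field(default_factory=list)

agent = AgentConfig(
    instructions="Triage my inbox and flag anything from investors.",
    extensions=[Extension("gmail", ["read", "label"])],
    webhooks=[WebhookTrigger("/hooks/stripe", "stripe.payment_succeeded")],
    schedule=[ScheduledTask("0 7 * * *", "Prepare my morning briefing.")],
)
```

The point of the sketch: an always-on agent is configuration plus a persistent runtime, not a prompt. Everything above survives between sessions.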
The name "Conway" likely references John Horton Conway, the mathematician behind the Game of Life — a cellular automaton where simple rules produce complex emergent behavior. The metaphor is apt: Conway agents appear designed to follow simple instruction sets that produce sophisticated autonomous behavior over time.
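The metaphor is easy to verify: the Game of Life's entire rule set fits in a few lines, yet it produces gliders, oscillators, and self-replicating patterns. A minimal step function over a sparse set of live cells:

```python
from collections import Counter

def life_step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Advance Conway's Game of Life by one generation."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next step if it has exactly 3 live neighbours,
    # or 2 live neighbours and was already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates between a horizontal and a vertical bar of three.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Two rules, one loop, emergent behavior — the property an agent runtime wants from simple standing instructions.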
What is an always-on AI agent?#
An always-on AI agent is fundamentally different from a chatbot. A chatbot waits for you. An agent works for you, continuously, whether you're at your desk or asleep.
The distinction matters because it changes the entire interaction model. With ChatGPT or Claude's current chat interface, each session starts with little carried-over context beyond limited memory features. You open the app, explain what you need, get a response, close the app. The AI does nothing between sessions. It's reactive, largely stateless, and idle 99.9% of the time.
An always-on agent flips this. It runs 24/7, maintains persistent memory of your preferences and past interactions, monitors data sources for relevant changes, and proactively delivers information or takes action based on predefined triggers. You don't ask it to check your email — it's already triaging your inbox when you wake up. You don't ask it to research competitors — it's been tracking their pricing pages overnight and has a summary ready.
This is the model that separates AI agents from ChatGPT at a fundamental level. ChatGPT is a brilliant tool you pick up and put down. An always-on agent is a team member that never clocks out.

The always-on paradigm has been proven in production by open-source projects. OpenClaw agents, deployed through platforms like RapidClaw, already run persistently on Telegram and Discord — delivering morning briefings, triaging emails, monitoring competitors, and managing workflows around the clock. Conway would bring this paradigm to Anthropic's ecosystem with the weight of Claude's reasoning capabilities behind it.
The architecture behind Conway#
The leaked code and Anthropic's recent infrastructure moves suggest Conway's architecture follows a pattern familiar to anyone who's worked with agent frameworks:
```
User Config + Instructions
            |
            v
Conway Runtime (persistent process)
            |
      ┌─────┴─────┐
      |           |
  Webhook     Scheduled
  Triggers    Tasks (cron)
      |           |
      v           v
  Claude API (reasoning engine)
            |
            v
Extensions (Gmail, Slack, GitHub, Browse)
            |
            v
Action Execution + Memory Write
```
This is not a novel architecture. It's the same event-driven agent loop that OpenClaw implements, that LangGraph enables, and that dozens of agent frameworks have converged on. What makes Conway notable is not architectural innovation but rather distribution and trust. Anthropic has 200 million+ users across Claude's consumer and API products. If Conway ships as a first-party feature, it instantly becomes the most widely available always-on agent platform in existence.
The webhook system deserves particular attention. Current Claude usage requires the user to initiate every interaction. Webhooks invert this — external systems push events to the agent, and the agent decides how to respond. This is the difference between checking your dashboard every hour and getting a Telegram message the moment something needs your attention.
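The inversion is easiest to see in code. Below is a toy push model: a queue stands in for the webhook endpoint and a string append stands in for the model call. This is the general pattern, not Conway's implementation — every name here is hypothetical:

```python
import queue
import threading
import time

events: "queue.Queue[dict]" = queue.Queue()

def on_webhook(payload: dict) -> None:
    """What an HTTP handler would do: enqueue the event and return at once."""
    events.put(payload)

def agent_loop(handled: list, stop: threading.Event) -> None:
    """The agent consumes events as they arrive -- it never waits on the user."""
    while not stop.is_set():
        try:
            event = events.get(timeout=0.1)
        except queue.Empty:
            continue
        # In a real runtime, this is where the model would be invoked
        # to decide how to respond to the incoming event.
        handled.append(f"reacted to {event['type']}")

stop = threading.Event()
handled: list[str] = []
worker = threading.Thread(target=agent_loop, args=(handled, stop))
worker.start()

on_webhook({"type": "stripe.payment_succeeded"})  # external system pushes
time.sleep(0.3)
stop.set()
worker.join()
```

The user never appears in the loop: the external system pushes, the runtime reacts, and the human only sees the result.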
Conway vs. the competition: where does it fit?#
The always-on agent space is no longer theoretical. Multiple platforms ship persistent agents today. Here's how Conway likely stacks up based on the leaked capabilities:
| Feature | Conway (leaked) | ChatGPT | Claude (current) | OpenClaw / RapidClaw |
|---|---|---|---|---|
| Always-on execution | Yes | No | No | Yes |
| Persistent memory | Yes | Limited (GPT memory) | Limited (project context) | Yes (structured memory) |
| Webhook triggers | Yes | No | No | Yes (via ClawFlows) |
| Scheduled tasks | Likely | No | No | Yes (cron-based) |
| External integrations | Extensions system | GPTs/plugins | MCP servers | MCP + native integrations |
| Messaging deployment | Unknown | No | No | Telegram, Discord |
| Background browsing | Yes | No | Limited (artifacts) | Via browser skills |
| Open source | No | No | No | Yes (OpenClaw core) |
| Self-hostable | No | No | No | Yes |
| Available today | No | N/A | N/A | Yes |
The comparison reveals Conway's positioning: it's Anthropic's answer to the agent gap. Claude is widely regarded as one of the strongest reasoning models available, but Anthropic has lacked the runtime infrastructure to compete with platforms that deploy persistent agents. Conway closes that gap.
But the table also reveals Conway's likely limitations. It almost certainly won't be open source. It probably won't be self-hostable. And it may not deploy to messaging platforms like Telegram where users actually spend their time. These are the same constraints that limit ChatGPT's agent ambitions and the same gaps that open-source alternatives fill.

What Conway gets right#
Three aspects of Conway's apparent design stand out as genuinely forward-thinking.
First, the extensions model is cleaner than plugins. OpenAI's plugin system was a well-documented mess — inconsistent APIs, security concerns, and poor discoverability. Anthropic's MCP protocol already provides a more elegant abstraction for tool use. Conway extensions appear to build on MCP rather than reinventing it, which means the existing ecosystem of MCP servers could work with Conway out of the box. That's a smart architectural bet.
Second, webhook triggers solve the "last mile" problem. The biggest complaint about AI assistants is that you have to remember to use them. Webhooks mean the agent activates itself when relevant events occur. Your agent doesn't wait for you to ask "did anyone reply to my proposal?" — it watches your inbox and tells you the moment a reply arrives. This is the interaction pattern that drives retention in always-on agent platforms.
Third, background browsing as a first-class capability. Most agent platforms treat web browsing as an afterthought — a tool the agent can invoke when asked. Conway appears to treat it as a continuous capability, with agents able to maintain browsing sessions over time. This enables use cases like monitoring a competitor's pricing page daily, tracking regulatory filings, or watching for restocks on specific product pages.
What Conway might get wrong#
Conway also carries risks that the broader agent community has already encountered and, in some cases, solved.
Closed ecosystem lock-in. If Conway only runs on Anthropic's infrastructure, users have zero portability. Your agent configuration, memory, and workflows become hostage to Anthropic's pricing and policy decisions. The people who've stopped using ChatGPT and built their own agents did so partly to escape exactly this kind of dependency.
No messaging platform deployment. The leaked code shows no references to Telegram, Discord, or Slack deployment — only to those platforms as extensions Conway can read from. There's a massive difference between an agent that can read your Slack messages and an agent that lives in your Slack as a team member you can @ mention. Always-on agents that deploy directly to messaging platforms see dramatically higher engagement because they meet users where they already are, rather than requiring users to visit another dashboard.
Pricing uncertainty. Always-on agents consume compute continuously. They're not burst workloads like chat — they're persistent processes that eat tokens around the clock. Anthropic will need to figure out a pricing model that makes 24/7 agent operation affordable for individual users, not just enterprise customers. OpenClaw-based platforms have solved this with tiered credit systems and efficient model routing, but a first-party Anthropic solution may default to premium pricing that puts always-on agents out of reach for freelancers and small teams.
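The cost concern is concrete back-of-envelope math. Assume, purely for illustration, an agent that wakes every 15 minutes, burns ~2,000 input and ~500 output tokens per wake, at example rates of $3 and $15 per million input and output tokens — these numbers are invented for the exercise, not Anthropic's pricing:

```python
WAKES_PER_DAY = 24 * 4               # one wake every 15 minutes
IN_TOKENS, OUT_TOKENS = 2_000, 500   # tokens per wake (illustrative)
IN_RATE, OUT_RATE = 3.0, 15.0        # $ per million tokens (illustrative)

def monthly_cost(wakes_per_day: int, tokens_in: int, tokens_out: int) -> float:
    """Rough monthly spend for a continuously waking agent, over 30 days."""
    per_wake = (tokens_in * IN_RATE + tokens_out * OUT_RATE) / 1_000_000
    return round(wakes_per_day * per_wake * 30, 2)
```

Under these assumptions the agent costs about $39/month in raw tokens before any platform margin — which is why model routing and tiered credits matter if always-on agents are to undercut a ~$19/month price point.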
Safety at scale. Anthropic is the "safety-first" AI lab. An always-on agent that browses the web, executes webhooks, and acts autonomously raises the safety bar significantly compared to a chatbot that only responds to direct prompts. The company will likely ship Conway with aggressive guardrails, which could limit its practical utility compared to more permissive open-source alternatives.
What this means for the agent ecosystem#
Conway's emergence confirms what builders in the agent space have known: always-on is the inevitable direction for AI. The chatbot era — where AI waits passively for human input — is transitioning into the agent era, where AI operates continuously on behalf of its users.
This has several downstream effects:
Validation for open-source agent platforms. OpenClaw, the leading open-source agent framework, has been deploying always-on agents since early 2026. Conway entering the space validates the architecture and use cases that the open-source community pioneered. It also means more developers will be searching for "always-on AI agent" — and many of them will discover that open-source options already exist and are production-ready.
Pressure on OpenAI. ChatGPT has no always-on agent capability. OpenAI's Agents SDK is a developer framework, not a consumer product. If Anthropic ships Conway as a consumer feature inside Claude, OpenAI will be forced to respond. The agent race between the two largest AI companies will accelerate.
Enterprise demand spike. Companies that have been experimenting with agents will see Anthropic's entry as a signal to invest seriously. Gartner's recent projection of a 1,445% surge in multi-agent system deployments by 2028 was already aggressive. Conway could pull that timeline forward.
Hybrid architectures will win. The most likely outcome isn't that Conway replaces open-source agents or vice versa. It's that sophisticated users run Conway for tasks that benefit from Claude's reasoning depth while using open-source agents for tasks that require platform flexibility, messaging deployment, and cost control. The agent stack of 2027 will be heterogeneous.
Frequently Asked Questions#
What is Anthropic Conway?#
Conway is the internal project name for Anthropic's always-on AI agent platform, discovered through leaked code references in March 2026. It appears to be a persistent agent runtime built on top of Claude that supports extensions, webhook triggers, background browsing, and scheduled tasks. Anthropic has not officially announced Conway or provided a release timeline.
What is an always-on AI agent?#
An always-on AI agent is an AI system that runs continuously rather than only responding when a user opens a chat window. It maintains persistent memory, monitors data sources, responds to triggers and webhooks, and can execute tasks on a schedule. Unlike chatbots, always-on agents operate autonomously between user interactions — triaging emails, monitoring competitors, delivering briefings, and managing workflows around the clock.
How is Conway different from Claude?#
Claude is a conversational AI that responds to direct prompts in a chat interface. Conway appears to be a runtime layer on top of Claude that enables persistent, autonomous operation. Where Claude waits for you to ask a question, a Conway agent would proactively monitor your data sources, react to events via webhooks, and deliver information without being prompted. Think of Claude as the brain and Conway as the body that lets it act independently.
Can I use an always-on AI agent today?#
Yes. Open-source platforms like OpenClaw and managed services like RapidClaw already deploy always-on AI agents. RapidClaw provisions agents to Telegram and Discord in under 60 seconds, with persistent memory, morning briefings, email triage, and scheduled workflows included. Conway would be Anthropic's entry into a category that open-source tools have served for months.
Will Conway be free?#
Pricing hasn't been announced. Given that always-on agents consume compute continuously, it's likely Conway will be a premium feature, potentially available to Claude Pro or Team subscribers. For cost-effective always-on agents today, open-source and managed platforms offer tiered pricing starting well below what a first-party Anthropic solution is likely to cost.
Conway is a significant signal, but it remains vaporware until Anthropic ships it. The always-on agent paradigm it validates is already real and production-tested. If you've been waiting for permission from a major AI lab to take always-on agents seriously, consider this your signal.
If you want an always-on AI agent working for you today — not in six months when Conway might launch — RapidClaw deploys a personal AI agent to your Telegram in under 60 seconds. Morning briefings, email triage, competitor monitoring, and scheduled workflows, all running 24/7 on your behalf. No waitlist. No leaked code archaeology required.
Related Posts#

OpenAI Agents SDK vs Claude Agent SDK: A Founder's Honest Take
Comparing OpenAI Agents SDK and Claude Agent SDK from a founder who uses both. Handoffs vs MCP, tracing vs control, and which one to pick in 2026.

Solopreneurs Using AI Agents Report 340% Revenue Increases. Here's What They're Actually Doing.
An Indie Hackers survey found solo operators running AI agents averaged 340% revenue growth with zero increase in working hours. Here's the playbook they're using.

Mizuho's Agent Factory: One Bank's Plan to Mass-Produce 10,000 AI Agents
Japan's third-largest bank cut agent development time by 70% and is scaling to 10,000 autonomous AI agents across operations. Their 'Agent Factory' approach is becoming the enterprise blueprint.