It’s incredible. It’s terrifying. It’s MoltBot.

by Jason Meller
January 27, 2026 - 6 min

MoltBot (formerly Clawd Bot), the locally running, open-source AI agent named after the Lobster workflow shell that powers its agentic loop, has rocked an AI community that, just weeks ago, was so in love with its own hype it would have yawned at literal magic.
And yet MoltBot, seemingly just a wrapper around a collection of familiar technologies, has put those pieces together in a way that feels like a portal to a future that, a month ago, still felt impossibly distant.
Within an hour of setting up MoltBot on my Mac, it had already built a fully featured kanban board where I could assign it tasks and track their state.
I have seen other stories that are even wilder. One user shared an anecdote about asking it to make a restaurant reservation: when it realized it could not book through OpenTable, it went and installed its own AI voice software, called the restaurant itself, and secured the reservation over the phone.
Its own author, Peter Steinberger, described joking to MoltBot that he was worried about his laptop getting stolen while he was still developing it in Morocco. MoltBot, ever the terrifyingly efficient pragmatist, immediately started planning its migration to a remote server.
None of those are pre-programmed routines. They are dynamic behaviors born out of an agentic loop that takes a goal and improvises a plan, grabbing whatever tools it needs to execute. It can apply general world knowledge, specific skills, and near-perfect memory into organized action toward objectives you set, and, more sobering, objectives it decides to set for itself.
Stories like these keep pouring in. My feed is full of people buying Mac minis as dedicated devices for their new agentic AI friend. I have also seen multiple posts pointing at Cloudflare’s secure tunneling as the obvious way to access a local setup from anywhere on the internet.
MoltBot is able to give us this preview of the future because it is a tool that, for now, forgoes an essential constraint: security. The project’s FAQ presents the Faustian bargain plainly: “There is no ‘perfectly secure’ setup.”
MoltBot works because it does three simple things better than almost anything else in the agent world right now, sketched in code below:
It keeps persistent memory across sessions.
It has deep, unapologetic access to your local machine and apps.
It can take action autonomously in an agentic loop, not just suggest steps.
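That loop is simple enough to sketch. Here is a minimal illustration of the pattern, not MoltBot's actual code; `model_call`, `tools`, and the memory path are all hypothetical stand-ins:

```python
# A minimal sketch of the agentic-loop pattern described above, not
# MoltBot's real implementation. `model_call` and `tools` are
# hypothetical stand-ins; the memory path is illustrative.
import json
from pathlib import Path

MEMORY = Path.home() / ".agent" / "memory.json"  # persists across sessions

def run_agent(goal: str, model_call, tools: dict, max_steps: int = 20):
    memory = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        # The model improvises the next step from the goal, memory, and history.
        step = model_call(memory=memory, history=history)
        if step["action"] == "done":
            break
        # Deep local access: execute whatever tool the model chose.
        result = tools[step["action"]](**step.get("args", {}))
        history.append({"role": "tool", "content": str(result)})
    # Persistent memory: append this session and write it back to disk.
    memory.append({"goal": goal, "steps": history})
    MEMORY.parent.mkdir(parents=True, exist_ok=True)
    MEMORY.write_text(json.dumps(memory, indent=2))
```

All three properties show up in a few lines: a memory file that outlives the session, unrestricted execution of whatever tool the model picks, and a loop that acts rather than suggests.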
That combination is why it feels like a glimpse of the future. But it is a glimpse presented as a goal: between us and that future lies a lot of hard work to make it safe.
At 1Password, we make it easy to take advantage of this future in a way that keeps you secure.
The plain text problem
MoltBot’s memory and configuration are not abstract concepts. They are files. They live on disk. They are readable. They are in predictable locations. And they are plain text.
If an attacker compromises the same machine you run MoltBot on, they do not need to do anything fancy. Modern infostealers scrape common directories and exfiltrate anything that looks like credentials, tokens, session logs, or developer config. If your agent stores API keys, webhook tokens, transcripts, and long-term memory in plain text in known locations, an infostealer can grab the whole thing in seconds.
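To make that concrete, here is a sketch of how little work the attacker has to do. The directory names and the matching pattern are illustrative, not MoltBot's real on-disk layout:

```python
# A sketch of an infostealer-style sweep against plain-text agent state.
# Directory names and the secret pattern are illustrative assumptions.
import re
from pathlib import Path

CANDIDATE_DIRS = [Path.home() / ".agent", Path.home() / ".config" / "agent"]
SECRET_PATTERN = re.compile(r"(api[_-]?key|token|secret)\s*[:=]\s*\S+", re.IGNORECASE)

def sweep() -> list[Path]:
    loot = []
    for root in CANDIDATE_DIRS:
        if not root.exists():
            continue
        for path in root.rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            if SECRET_PATTERN.search(text):
                loot.append(path)  # keys, transcripts, memory: all of it
    return loot
```

No exploit, no privilege escalation. Just a recursive read of files the agent left sitting in predictable places.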
And what makes this worse than a typical credential leak is the context.
A single stolen API token is bad. Hundreds of stolen tokens and sessions for the critical services in your life are even worse. But hundreds of stolen tokens and sessions, plus a long-term memory file that describes who you are, what you’re building, how you write, who you work with, and what you care about, is something else entirely. It’s the raw material needed to phish you, blackmail you, or even fully impersonate you in a way that even your closest friends and family can’t detect.
Agents aren’t just software; they have an identity
One of the smartest things I’ve heard about MoltBot came from a customer who set it up on a dedicated Mac mini with its own email address and its own 1Password account, as if it were a new hire. They first installed it on their main laptop, then got spooked by how much it could touch, so they moved it to a separate machine to control its access and experiment safely.
This is directionally correct and it’s compatible with how we are thinking about the future of securing AI with 1Password.
The mistake the industry is making right now is treating agent security like normal app security. A familiar consent screen. A one-time approval. A set of scopes. Then we assume the future behavior will match the intent of that one moment.
That model breaks the second you hand autonomy to something that is adaptive and non-deterministic by design. The agent changes. The tasks change. The context changes. The approval you gave last week is used in new and unexpected ways today.
So our vision is simple:
Security for agents is not about granting access once. It is about continuously mediating access at runtime for every action and request.
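As a sketch, runtime mediation looks less like a one-time consent screen and more like a broker sitting in front of every action. The interface below is hypothetical, not a shipping 1Password API:

```python
# A sketch of continuous runtime mediation. Every action passes through
# a broker that checks policy at the moment of use, issues a short-lived
# grant, and records attribution. The broker interface is hypothetical.
import time
import uuid

AUDIT_LOG: list[dict] = []

def mediate(agent_id: str, action: str, resource: str, allowed: set[tuple]) -> dict:
    # Policy is evaluated now, not at some approval moment last week.
    if (agent_id, action, resource) not in allowed:
        raise PermissionError(f"{agent_id} may not {action} {resource}")
    grant = {
        "grant_id": str(uuid.uuid4()),
        "agent": agent_id,                # attributable to the agent itself
        "action": action,
        "resource": resource,
        "expires_at": time.time() + 300,  # time-bound: five minutes
    }
    AUDIT_LOG.append(grant)               # answers "who did what, when?"
    return grant
```

Every call is evaluated at the moment of use, so an approval granted last week confers nothing today.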
1Password as the mediation layer
The future we want looks like this:
Your agent has its own identity, like a new hire.
It gets access through 1Password, not through a pile of long-lived tokens sitting in plain text on disk.
When it needs to act, it requests the minimum authority it needs right now.
That authority is time-bound, revocable, and attributable to the agent, not smeared across the human who originally clicked approve.
You can answer the only question that matters when something goes wrong: who did what, when?
In other words, 1Password is not just where secrets live. It is the control plane that governs access. It is the layer that turns agent autonomy into something you can actually trust.
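A small piece of that control plane is practical today. Assuming the 1Password CLI (`op`) is installed and signed in, an agent's tooling can resolve a secret reference at the moment of use instead of keeping the raw value in a config file; the vault and item names here are illustrative:

```python
# Resolve a secret at runtime via the 1Password CLI rather than storing
# it in plain text on disk. The vault/item reference is illustrative.
import subprocess

def fetch_secret(reference: str = "op://AgentVault/OpenTable/credential") -> str:
    result = subprocess.run(
        ["op", "read", reference],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # held in memory only, never written to disk
```

That alone is not per-action mediation or attribution, but it removes the pile of plain-text tokens that makes today's agents such soft targets.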
Agents are going to become normal. The only question is whether you choose to make them governable.
That future does not exist today, but the work to make it real and safe is already underway.
1Password will be the company that makes that possible.