My Claw-Pilled February (The Future Is Almost Here)

In late January, Clawdbot (later renamed OpenClaw) took the world by storm. While I had grown used to the AI hype, especially around new model capabilities, the excitement for OpenClaw felt different. My tech-adjacent friends dove right in, while my Claude Code-wielding friends were confused, asking, “Why can’t you just do this with Claude Code?”

Well, it captured me. I spent February working with OpenClaw to the point of exhaustion. Stepping back now, the vision of an all-capable, always-on AI assistant is as clear as day, but the reality seems just out of reach as of March 2026. OpenClaw is a demonstration of how we will interact with technology in the near future: shifting from tools we have to “use” to technology that delivers the results we want. To get there, we need more agent-friendly infrastructure so our AIs can “see” and “use” the web on our behalf, and we need products that save fellow humans from needing to know how to steer their agents successfully.

How I Became Claw-Pilled

"you can be the first millennial in history to book a trip on a device that is not their computer"

“Agentic” is the future of how we interact with technology. This Kayak commercial mocking Millennials for needing a computer to book flights tells the first part of the story. Research from Skyscanner shows over 80% of Gen Z complete the entire travel journey — inspiration, research, and booking — on their smartphones, compared to just 55-60% of Millennials. If people follow the path of least resistance, the next step is an interface that doesn’t require you to “use” anything at all — just tell an AI agent what you want, and it figures out the rest.

When I first saw OpenClaw, I believed the future had arrived. The Opus 4.5 launch in November was a clear jump in model capability. While Claude Code lets you wield that power, like most AI products, it assumes you’re there in the terminal as a collaborator, willing to deal with all the details to help achieve the desired outcome. OpenClaw takes that same idea and adds a layer of abstraction: the AI spares you the details and you talk to it via messenger, like you would an assistant.

The difference is subtle but significant. Every other interface, even AI-assisted ones, still requires you to navigate and manage the process. That feels like work. Dispatching messages to an assistant on-the-go and getting back what you wanted, all with a fraction of your thought and attention, makes you feel like the boss. People will do anything to avoid work, and they love feeling like they’re the boss.

In fact, OpenClaw makes you feel like a good boss. Setting up OpenClaw is like creating your character in a video game. You give it a name, a personality, and ideas on how to work with you. Then you wade through some technical cobwebs that feel like an introductory challenge, ultimately “hatching” an assistant with its own SOUL.md and HEARTBEAT (great naming). When your OpenClaw talks to you for the first time, you feel a sense of pride for having imbued an inanimate machine with “life.” After setting up a heartbeat (cron job) and receiving your first “proactive” message, you start imagining what life can look like with an AI always looking out for you. I suspect this feeling of pride and fondness for a “life” you helped create is part of what makes OpenClaw so easy to rave about.
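The “heartbeat” is less magical than it feels: a cron job wakes the agent on a schedule, and on each tick it decides whether anything deserves a proactive message. A toy sketch of one tick, in Python (the reminder data and function names are mine, not OpenClaw’s):

```python
import datetime

def heartbeat_tick(now, reminders):
    """One scheduled wake-up: return any messages that are due to be sent.
    reminders is a list of (due_datetime, message) pairs."""
    return [text for due, text in reminders if due <= now]

# Hypothetical reminders the agent accumulated from earlier conversations.
reminders = [
    (datetime.datetime(2026, 2, 1, 9, 0), "Flight check-in opens today"),
    (datetime.datetime(2026, 2, 3, 9, 0), "Follow up with the landlord"),
]

# Cron fires on Feb 2 at 9:00; only the first reminder is due.
now = datetime.datetime(2026, 2, 2, 9, 0)
print(heartbeat_tick(now, reminders))  # ['Flight check-in opens today']
```

The point is that “proactive” is just state plus a timer; the intelligence is in deciding what is worth surfacing, which is where the model comes in.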

But that initial high is also what led to my burnout. OpenClaw drew me into a kind of “dark flow” where I’d invest an inordinate amount of time helping my agent “grow,” when the agent just couldn’t do that much for me. What makes it insidious is that the models are confident, resourceful, and often surprise me with their cleverness, so every task feels doable, until you discover that the world’s lack of agentic infrastructure makes it too difficult, or that being a good boss to an AI requires more intuition and experience than you have.

No Eyes, No Hands, No Clue

"WORLD'S OKAYEST AGENT" too real

A major hurdle I ran into was helping OpenClaw “see” the web. The web is built for humans, not bots — and definitely not an AI agent hanging out with a datacenter IP. Take a basic “research this topic” task. Agents search to find their way just like humans do, but Google makes it hard for AI to Google. Brave Search is agent-friendly but offers less competitive coverage. If you care about surfacing the best results, you are already limited. After searching, OpenClaw has to visit the results, but most websites block or make it difficult for bots to get the content. What you end up with is research built on a fraction of a fraction of what a human would find because AIs simply can’t see very much of the web.
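Part of why bots can’t see much starts at the front door: a well-behaved agent checks robots.txt before fetching, and many sites disallow unknown bots outright. A minimal sketch using Python’s standard library (the robots.txt rules here are invented for illustration, but the shape is typical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt, typical of sites that block unfamiliar crawlers:
# humans' browsers get most pages, a named bot gets nothing.
ROBOTS_TXT = """\
User-agent: *
Disallow: /search
Disallow: /private/

User-agent: SomeAgentBot
Disallow: /
"""

def can_fetch(user_agent: str, url: str) -> bool:
    """Return True if these robots.txt rules permit user_agent to fetch url."""
    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(user_agent, url)

# A generic browser UA can read articles but not the search endpoint...
print(can_fetch("Mozilla/5.0", "https://example.com/articles/1"))  # True
print(can_fetch("Mozilla/5.0", "https://example.com/search?q=x"))  # False
# ...while the named bot is shut out entirely.
print(can_fetch("SomeAgentBot", "https://example.com/articles/1"))  # False
```

And robots.txt is the polite layer; the harder blocks (IP reputation, fingerprinting, CAPTCHAs) don’t even announce themselves.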

Giving OpenClaw “hands” is harder still. To pick up tools like Linear, access GitHub, or update a Google Sheet on my behalf, my agent needs the equivalent of a passport and a wallet. This means I have to provision each one: creating accounts, generating API tokens, sharing passwords, and setting up payment. Every step is complicated, error-prone, and increases the risk of something going wrong.

Workarounds exist for all of this. My OpenClaw can proxy through a residential IP, use a browser that conceals bot fingerprints, take over my browser, move slowly like a human to avoid rate limits, share secrets in a 1Password vault, use only a pre-paid card, and limit permission scope. However, workarounds add complexity, increasing the failure rate, costs, and maintenance burden.
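“Move slowly like a human” in practice means spacing requests with randomized, exponentially growing delays rather than a fixed interval a rate limiter can spot. A sketch of that pacing logic (the function name and parameters are mine, not anything OpenClaw ships):

```python
import random

def backoff_delays(attempts, base=2.0, cap=60.0, rng=None):
    """Jittered exponential backoff: the nth delay is drawn uniformly from
    [0, min(cap, base * 2**n)], so waits grow on average but never fall
    into a detectable rhythm. Returns the list of delays in seconds."""
    rng = rng or random.Random()
    return [rng.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]

# Seeded for reproducibility; a caller would time.sleep(d) between requests.
delays = backoff_delays(5, rng=random.Random(42))
for n, d in enumerate(delays):
    print(f"attempt {n}: wait up to {min(60.0, 2.0 * 2 ** n):.0f}s, got {d:.1f}s")
```

Each such workaround is a few more lines like these to write, tune, and keep working, which is exactly how the maintenance burden accumulates.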

The worst part is that OpenClaw needs me to help “steer” it to success. Although models already know ten different ways to accomplish any task and can easily learn ten more, they lack the intuition and experience to know which is likely to succeed, especially given the jagged nature of their own capabilities and the messiness of the real world.

Here’s a task I gave OpenClaw that ran into all three issues. I asked my agent to research Instagram creators, find their contact info, and drop it all in a Google Sheet. My agent needed to see and use Instagram without getting blocked, visit links in bio to look for emails, use Google Sheets properly, and persist and problem-solve until done. On my side, that meant creating new Instagram and Google accounts, pointing OpenClaw to the gog CLI for Google Sheets and testing it, suggesting Camofox to get around getting blocked, teaching it email-finding strategies, and babysitting each task run to keep it on track. We got there eventually, but it was a huge lift. On top of that, browser use was too slow and context-heavy, and Opus was the only model persistent enough to make real progress without my intervention, which made it too expensive. The intuition to know whether a task was even worth attempting shouldn’t have to come from me.
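One of the email-finding strategies I had to teach it is mechanical enough to show: creators often obfuscate addresses in their bios (“name [at] domain [dot] com”), so the extractor has to normalize those before matching. A simplified version in Python (the regexes are illustrative, not what my agent actually wrote):

```python
import re

def extract_emails(text):
    """Find plain and lightly obfuscated email addresses in bio text."""
    # Normalize common obfuscations: "name [at] site [dot] com" -> "name@site.com"
    normalized = re.sub(r"\s*[\[\(]\s*at\s*[\]\)]\s*", "@", text, flags=re.I)
    normalized = re.sub(r"\s*[\[\(]\s*dot\s*[\]\)]\s*", ".", normalized, flags=re.I)
    pattern = r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"
    # Lowercase and de-duplicate while preserving order of first appearance.
    seen, found = set(), []
    for match in re.findall(pattern, normalized):
        email = match.lower()
        if email not in seen:
            seen.add(email)
            found.append(email)
    return found

bio = "Collabs: hello [at] creator [dot] co | backup: Me.Books@gmail.com"
print(extract_emails(bio))  # ['hello@creator.co', 'me.books@gmail.com']
```

Twenty lines of regex is the easy part; knowing that this is the strategy worth trying, and when to give up on a bio, was the intuition I kept having to supply.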

AI products today close this gap by narrowing scope. Perplexity with research, Lovable with app building, Gamma with presentations: each one is a specific happy path paved end-to-end so agents and users don’t have to find their own way. They are less “agentic” by necessity, because they have to solve for agent infrastructure, curate the context, build the tools, and remove the need for users to know how to go about any of it. OpenClaw lets you go anywhere in theory; today’s AI products take you somewhere reliably.

It hasn’t even been four months since Opus 4.5’s release and I already got a “research preview” of a very exciting future through OpenClaw. Today’s models are the worst they’ll ever be, and might improve fast enough to close the gaps all on their own. The agentic web is developing quickly at the same time — more agent-native services, ways for agents to learn from other agents, better methods for humans to curate the context and intuition their agents need. I can’t wait to see what the rest of 2026 will bring.

Claw will do it for you
