Jump to content




OpenClaw is a major leap forward for AI—and a cybersecurity nightmare

Featured Replies

rssImage-27bafe169b537b56476e0592170b0576.webp

Cybersecurity researchers have discovered roughly 1,000 unprotected gateways to OpenClaw, an open-source and proactive AI agent that can be controlled through text conversations with apps like WhatsApp or Telegram. The gateways were found on the open internet, allowing anyone to access users’ personal information. One white hat hacker also reportedly gamed OpenClaw’s skills system, which lets users add plugins for tasks like web automation or system control, to reach the top of the rankings and be downloaded by users around the world. The skill itself was innocuous, but it exploited a security vulnerability that someone more nefarious could have used to cause serious harm.

Access to those gateways would allow hackers to reach the same files and content OpenClaw can access, meaning full read and write control over a user’s computer and any connected accounts, including email addresses and phone numbers. A number of incidents exploiting those vulnerabilities have already been reported.

OpenClaw, originally called Clawdbot, was released in November 2025 by Peter Steinberger, an Austrian-born, London-based developer best known for creating a tool that lets apps display and edit PDFs natively. The launch followed a wave of advances in AI’s ability to interact with files that began in late 2025.

Late last year, many people began experimenting with Anthropic’s Claude Code, an agentic AI that links to a computer’s file system through the terminal or command line and responds to conversational prompts to build large projects independently, with some oversight. The tool excited many users but also discouraged others who were uncomfortable working in a non-graphical interface.

In response, Anthropic set Claude Code to work autonomously on a sibling product, Claude Work, which layers a more user-friendly interface on top. While it has gained some traction, it is a third-party product built by a developer outside Anthropic that has captured the most attention.

Steinberger’s OpenClaw mimics the best features of Claude Code, but with more functionality and the ability to proactively work on tasks without being prompted.

That proactivity is a key differentiator between the tool, which was forced to rename itself Moltbot and then OpenClaw last week after a request from Anthropic, and other AI systems. Its potential has energized the tech sector, driven a spike in Mac Mini sales as a popular way to host the agent, and come to dominate certain corners of X and Reddit.

The problem is that the very thing that makes OpenClaw so appealing, the ability to oversee an eager AI assistant without specialist coding knowledge and with an easy setup, is also what makes it so concerning. “I love it, yet [I’m] instantly filled with fear,” says Jake Moore, a cybersecurity expert at Eset. Moore says users are so excited by the idea of OpenClaw as a personal assistant that they are granting it unrestricted access to their digital lives, sometimes while hosting their instances on incorrectly configured virtual private servers. That leaves them vulnerable to hacking.

“Opening private messages and emails to any new technology comes with a risk and when we don’t fully understand those risks, we could be walking into a new era of putting efficiency before security and privacy,” Moore warns. The same access that makes OpenClaw powerful is also what makes it dangerous if it is compromised. “If one of the devices Clawdbot is running on is compromised, an attacker would then gain access to everything including full history and highly sensitive information,” he says.

Steinberger did not respond to multiple interview requests, but he has published extensive security documentation for Moltbot online, even if many users may not incorporate it into their setups. That concerns cybersecurity experts. “Developments like Clawdbot are so seductive but a gift to the bad guys,” says Alan Woodward, a professor of cybersecurity at the University of Surrey in the U.K. “With great power comes great responsibility and machines are not responsible,” he says. “Ultimately the user is.”

The way OpenClaw operates, running without oversight and acting as an always-on assistant, may cause users to forget that responsibility until it is too late. Some have already demonstrated that Moltbot can be vulnerable to prompt injection attacks, in which harmful instructions are embedded in websites or emails in the hope that AI agents will absorb and follow them. “I wonder who these users think will be blamed when agentic AI empties their account or posts hateful thoughts,” Woodward says.

View the full article





Important Information

We have placed cookies on your device to help make this website better. You can adjust your cookie settings, otherwise we'll assume you're okay to continue.

Account

Navigation

Search

Configure browser push notifications

Chrome (Android)
  1. Tap the lock icon next to the address bar.
  2. Tap Permissions → Notifications.
  3. Adjust your preference.
Chrome (Desktop)
  1. Click the padlock icon in the address bar.
  2. Select Site settings.
  3. Find Notifications and adjust your preference.