We are living through a massive shift in how we interact with computers. If you have been following the tech news lately you might have heard of OpenClaw (previously known and ClawdBot before Anthropic's lawyers got involved and then briefly MoltBot). It is an open source AI agent that is taking the world by storm. It was built by a single developer in a few weeks and it shows just how much power one person can have with the right tools. It is exciting and terrifying all at the same time.
OpenClaw is an autonomous AI agent that runs locally on your own machine. Unlike a chatbot where you just talk back and forth OpenClaw can actually do things. It has "hands" in the form of tools. It can access your file system. It can run terminal commands. It can open a browser to search the web or click buttons. It connects to messaging apps like WhatsApp or Telegram so you can send it instructions from anywhere.
You can ask it to "plan a holiday to Disneyland" and it won't just give you a list of hotels. It can go to booking sites and check availability. It can put the dates in your calendar. It can draft emails to travel agents. It can apply for leave. It is basically a digital intern that lives on your laptop and never sleeps.
It is incredible to think that a project this powerful was started by just one person empowered with AI coding tools. This is the kind of innovation we need to support. It shows that the barrier to entry for building world changing software has basically vanished. For InfoSec professionals and IT as a whole, this is a huge wake up call and a classic example of the power of a good developer augmented by AI superpowers.
We are no longer just securing static applications. We are securing dynamic agents that act on their own and change over time. For traditional software makers, this is a very scary time as their entire market could be disrupted by one person any time. This idea was one of the primary reasons for the massive tech stocks sell off in early February.
There are several risks we need to talk about. OpenClaw is amazing but it is also dangerous if you do not treat it with respect. The developer of openClaw is very clear that security was not his top priority. This is very much still in Proof-of-Concept phase and should not be given access to anything sensitive, and only tested on isolated single purpose computers!
Do not install it on your work computer or any computer you use for anything sensitive such as email, banking, even personal budgeting.
When you run OpenClaw you are effectively giving an AI model full control over your machine. It inherits your permissions. If you can delete a file so can OpenClaw. If you can email your boss so can OpenClaw.
The biggest risk is what we call the "lethal trifecta" which is access to private data plus external communication plus access to untrusted content.
Imagine you ask OpenClaw to summarise a website. That website could contain hidden instructions that tell the bot to find your passwords file and send it to a remote server. Because OpenClaw has permission to read files and access the internet it could theoretically do this without you ever knowing.
OpenClaw has access to an ever-growing library of skills written by other people. Many of these are malicious and actually send your information to other 3rd parties without your knowledge or use your computer for other tasks than what you think it's being used for.
This brings me to the most important rule. You must never under any circumstances give OpenClaw or any similar agent access to any work devices or corporate data.
Do not install it on your work laptop. Do not let it read your work email. Do not give it API keys to company cloud accounts.
If an agent like this is compromised on a work machine the attacker doesn't just get into your computer. They could get into our network. They can move laterally and cause massive damage. Until we have better sandboxing and enterprise grade controls these tools must stay strictly on your personal devices and completely separate from any work data.
OpenClaw is a great thing for the InfoSec and Cyber-Security space because it is open source. We can see the code. We can look at exactly how it executes commands and where the vulnerabilities are. We can understand it. We can work to fix it.
My real worry is not OpenClaw. My worry is the hundreds of closed source "AI employees" and software being sold to companies right now. Vendors are rushing to sell AI agents that do the exact same things as OpenClaw but we can't see into the black box. We can't see what problems the tools may have. We don't know what they're doing with our data. We don't know if they have backdoors. We can't see how they've instructed or trained the models. We don't know if their security practices are any good.
With OpenClaw the risks are right there in the open for us to study and mitigate. That transparency is the only way we will ever truly secure the future of AI.
The Way Forward
If you are fully aware of the risks, and are technically skilled enough to understand the implications of these kinds of tools, go ahead any play with them but don;t give them access to anything sensative,
And obviously, keep it far, far away from the corporate network.
We try hard to be the security team that enables innovation by understanding it and how we can use these amazing tool safely. We don't need to spread fear. We just need to explain that you wouldn't give a random stranger the keys to your house or the office and you shouldn't give them to an AI agent either. Not yet anyway.