TOO INDEPENDENT AGENTS ARE DANGEROUS – the case of ClawdBot / MoltBot / OpenClaw

Carefully designed and tightly supervised agents are an excellent tool for building automations that previously required massive amounts of code or were simply unsolvable. The semantic flexibility, decisionmaking ability, and domain understanding of language models allow them to perform assigned tasks with very little instruction.
However, AI agents must not be allowed to operate too independently. Although we imagine they handle assigned tasks in a humanlike manner, their behaviour contains many hidden threats that must be understood. Would you allow a person whose commitment to and understanding of your instructions is extremely fragile to manage your own or your company’s sensitive matters without supervision or restrictions? A wrong kind of input, whether intentional or accidental, may cause an agent to leak sensitive information to the world.
At the end of January 2026, a highly autonomous personal assistant agent named ClawdBot rose to massive public attention and popularity. After a naming dispute, it was first renamed MoltBot and then finally OpenClaw. The idea behind this agent is to run on a personal computer around the clock as a general assistant for all kinds of tasks the device owner requests. To achieve this, the agent also continuously monitors the user’s activities to learn how to serve them better.
After an initially enthusiastic and hypefilled reception, problems have begun to accumulate around OpenClaw, issues security experts warned about from the very beginning of its rise in popularity. The agent is apparently quite capable of helping users with their everyday tasks and, based on observations from monitoring the user, even invents helpful actions without explicit instructions. But to perform such tasks, the agent must be granted full system privileges, which contains a major risk.
An agent running with full system privileges can do anything the owner can do on their computer. In fact, with the knowledge base provided by a large language model, it can do much more than most computer users themselves. This is a hacker’s dream and a security professional’s nightmare.
Although the developer of the agent software, Peter Steinberg, warns about the dangers of the software and unskilled installations, eager experimenters have ignored the warnings. The most serious threat is socalled prompt injection, where a malicious actor manages to alter the agent’s operating instructions (its prompt) and harness it for their own purposes.
One of the biggest weaknesses of current language model technology is that models cannot reliably distinguish which inputs are instructions and which are data. For example, a cleverly crafted attack text hidden in an email read by the agent may cause it to interpret the email not as data but as a new instruction, after which it might send sensitive information back to the attacker. With systemlevel access, the agent can reach everything: stored passwords, personal data, API keys, banking credentials, etc.
Although OpenClaw was developed mainly as a personal assistant for private individuals, companies should not ignore the risks associated with agents. As agentbased automation tools become more common, the people building automated workflows often lack sufficient understanding of the related security requirements.
The easier it seems to build automation with generalpurpose agents, the more likely it is that agents become too autonomous, and therefore unpredictably dangerous. Agentbased workflows must be carefully designed, and the access rights required for each step must be precisely limited regarding company data and communications.
At Ai4Value, we have invested especially in AI security. Contact us if you have concerns about AI security solutions for your company. We also offer a training package on the topic titled Secure use of AI.