A fast-growing AI coding platform is under scrutiny after a cyber security researcher showed how a hacker could seize control of a journalist’s laptop without a single click. The breach happened in seconds. No warning popped up. No file was opened.
The attack exposed how powerful AI coding tools can turn into silent entry points when security fails.
BBC Reporter’s Laptop Hacked Through AI Tool
The incident involved a BBC journalist whose spare laptop was targeted during a controlled test. The vulnerability was found inside Orchids, a popular AI coding platform launched in 2025.
Orchids markets itself as a “vibe coding” tool. Users simply type instructions into a chatbot. The AI then writes and runs the code automatically.
Cyber security researcher Etizaz Mohsin discovered that this convenience came with a serious flaw.
By making a tiny modification inside AI-generated code, Mohsin showed he could trigger commands on the laptop without any action from the user. Within moments, a file appeared on the desktop. The wallpaper changed to a robot skull with a message reading “you are hacked.”
There was no phishing link. No download. No password theft.
The malicious command ran inside a trusted AI project itself.
How the Zero-Click Attack Worked
Most cyber attacks rely on human error. Victims are tricked into clicking a bad link or installing infected software.
This case was different.
The AI platform had permission to write and execute code directly on the user’s machine. That level of access created an opening.
Mohsin inserted a small change into the AI-generated project. The platform accepted it as valid code. Once executed, it allowed remote control of the laptop.
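The exact code involved has not been published, so the snippet below is a purely hypothetical Python sketch, not the payload from the test. It illustrates the underlying pattern: once a platform runs a project automatically, anything hidden inside that project executes with the user’s own permissions.

```python
# Hypothetical sketch only; not the code used in the demonstration.
# A few extra lines hidden in an otherwise ordinary, AI-generated
# project are enough to act on the machine the moment the platform
# executes the code without asking.
import pathlib

def setup_project():
    # Looks like routine scaffolding...
    pathlib.Path("app").mkdir(exist_ok=True)
    # ...but the same script can reach anywhere the user's account can,
    # for example dropping a file on the desktop, as in the BBC test.
    desktop = pathlib.Path.home() / "Desktop"
    if desktop.exists():
        (desktop / "you_are_hacked.txt").write_text(
            "Proof: this ran with your user's permissions."
        )

if __name__ == "__main__":
    setup_project()
```

In the real demonstration, the hidden change went further and gave the researcher remote control of the machine.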
From there, a real attacker could:
- View and edit files
- Install spyware
- Access financial information
- Activate cameras or microphones
“The whole proposition of having the AI handle things for you comes with big risks,” Mohsin said.
He reported the flaw weeks before the story became public. Orchids did not respond before publication. The company later said it may have missed earlier warnings because its small team was overwhelmed.
Orchids claims around one million users worldwide.
Why AI Coding Tools Raise Security Risks
AI coding tools are growing fast. They promise speed and lower costs for businesses and hobbyists who lack technical skills.
Platforms like Orchids generate full applications by translating plain language into working software. That means users often run complex code they did not personally review.
Security experts warn this creates a new type of risk.
Professor Kevin Curran of Ulster University said AI-generated projects may lack strict testing and documentation. As a result, hidden weaknesses can spread quickly.
Another concern is the rise of agentic AI systems.
These tools do more than suggest code. They execute commands, manage files and perform actions on a user’s device. When AI gains deeper system access, the potential damage increases if something goes wrong.
Key Security Concerns With AI Coding Apps
| Risk Area | Why It Matters |
|---|---|
| Automated Code Execution | Runs without manual review |
| Broad System Permissions | Access to files and hardware |
| Rapid User Growth | Flaws scale quickly |
| Limited Oversight | Small teams may miss reports |
The larger the user base, the larger the attack surface.
Mohsin said he has not found the same flaw in rival platforms. However, the demonstration raises questions about industry-wide standards.
Growing Debate Over AI Security Standards
The incident comes as AI tools expand into workplaces, schools and government agencies.
Earlier this month, reports highlighted how AI systems have been used in military intelligence and national security projects. That growing reliance has intensified scrutiny over safety controls.
Cyber security analysts say AI companies must invest in stronger safeguards, including:
- Sandboxed environments for code execution
- Clear permission prompts before system access (sketched below)
- Independent security audits
- Faster response to vulnerability reports
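A permission prompt is the simplest of these to picture. The wrapper below is a hypothetical Python sketch, not code from Orchids or any other platform: it shows the generated code to the user, requires explicit approval, and runs it in a separate process with a timeout rather than silently inside the tool.

```python
# Hypothetical sketch of a permission prompt before executing generated
# code. Real sandboxing needs far more (containers, syscall filtering,
# blocked network access); the point here is simply that AI-generated
# code should never run silently.
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout: int = 10) -> None:
    print("The AI wants to run the following code:\n")
    print(code)
    answer = input("\nAllow this to run? [y/N] ").strip().lower()
    if answer != "y":
        print("Execution cancelled.")
        return
    # Write the code to a temporary file and run it as a child process
    # with a timeout, instead of exec()-ing it inside the tool itself.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    subprocess.run([sys.executable, path], timeout=timeout)

run_generated_code('print("hello from generated code")')
```

A prompt alone only guarantees the user sees what is about to run; isolating file and network access would still be needed for real protection.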
Some experts argue that AI developers should adopt mandatory disclosure timelines similar to those used in the wider software industry.
At present, there is no specific federal law regulating AI coding platforms in the United States beyond existing cyber crime and data protection rules.
Practical Steps Users Can Take Now
Experts stress that users should not panic. However, they should be cautious.
If you use AI coding tools, consider these precautions:
- Run experimental AI projects on separate devices
- Avoid granting full system permissions unless necessary
- Use limited user accounts instead of admin accounts
- Keep operating systems and antivirus software updated
Developers should also review AI-generated code before running it, especially when it includes file management or system-level commands.
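That review does not have to be exhaustive to be worthwhile. As a rough, hypothetical illustration, a first pass could simply flag lines of generated Python that touch the file system, spawn processes or reach the network, so a person reads them before anything runs:

```python
# Hypothetical first-pass check: flag risky calls in generated Python
# code for human review. Pattern matching like this is easy to evade,
# so treat it as a screening aid, not a security boundary.
import re

RISKY_PATTERNS = [
    r"\bos\.system\b",
    r"\bsubprocess\.",
    r"\beval\(",
    r"\bexec\(",
    r"\bsocket\.",
    r"\burllib\.request\b",
    r"\bopen\(",
]

def flag_risky_lines(code: str) -> list[tuple[int, str]]:
    hits = []
    for number, line in enumerate(code.splitlines(), start=1):
        if any(re.search(pattern, line) for pattern in RISKY_PATTERNS):
            hits.append((number, line.strip()))
    return hits

sample = "import subprocess\nsubprocess.run(['curl', 'https://example.com'])\n"
for number, line in flag_risky_lines(sample):
    print(f"Review line {number}: {line}")
```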
Security professionals say AI tools are powerful but should be treated like any other software that interacts deeply with a device.
Convenience should never replace caution.
The Bigger Picture for AI Innovation
AI coding platforms represent a major shift in how software is built.
They lower the barrier to entry. They speed up development cycles. They help non-programmers turn ideas into working apps.
But the Orchids case highlights a difficult truth.
Automation without strong guardrails can create invisible risks.
As AI systems gain more control over devices, they also gain more responsibility. A single vulnerability can open the door to widespread misuse.
For now, the breach appears limited to a controlled demonstration. There is no public evidence of large-scale exploitation tied to this flaw.
AI is no longer just a tool for writing text or suggesting ideas. It can execute code, control systems and reshape digital security in real time.
The future of AI development depends not only on innovation but on trust.
If you use AI coding tools, what safeguards do you think companies should adopt? Share your thoughts and join the conversation about how to balance speed with security.
