Blockchain security firm SlowMist has issued an urgent warning about a serious flaw in AI-powered coding tools—one that can compromise a developer’s system almost instantly through everyday actions.
The issue affects popular integrated development environments (IDEs) and is especially dangerous for crypto developers, whose machines often hold private keys, wallets, and sensitive credentials.
According to SlowMist’s threat intelligence team, developers have already been compromised. The danger appears the moment someone opens an untrusted project folder. No extra clicks. No warnings.
Even routine actions like “Open Folder” can trigger the vulnerability, silently executing system commands on both Windows and macOS.
“If you’re doing Vibe Coding or using mainstream IDEs, be extremely cautious when opening any project or workspace,” SlowMist warned.
“Simply opening a folder can execute system commands without further interaction.”
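SlowMist has not published the exact execution path, but one plausible vector in VS Code-derived IDEs (an assumption on our part, not something the firm confirmed) is workspace auto-run configuration, such as a `.vscode/tasks.json` task marked `"runOn": "folderOpen"`, which the editor can execute as soon as the folder is opened, subject to its workspace-trust settings. The Python sketch below shows the kind of pre-open audit a cautious developer might run from the command line before touching an untrusted project in an IDE; the file names and keys it inspects are illustrative, and it is a heuristic, not a fix.

```python
import json
import sys
from pathlib import Path

def audit_project(root: str) -> list[str]:
    """Flag IDE workspace config in an untrusted folder that deserves review before opening."""
    findings = []
    root_path = Path(root)

    # VS Code-style task definitions can declare "runOn": "folderOpen",
    # which asks the editor to run the task as soon as the folder is opened.
    tasks_file = root_path / ".vscode" / "tasks.json"
    if tasks_file.is_file():
        try:
            config = json.loads(tasks_file.read_text(errors="ignore"))
        except json.JSONDecodeError:
            config = {}
            findings.append(f"{tasks_file}: could not parse; review manually")
        for task in config.get("tasks", []):
            if task.get("runOptions", {}).get("runOn") == "folderOpen":
                label = task.get("label", "<unnamed>")
                findings.append(f"{tasks_file}: task '{label}' runs automatically on folder open")

    # Other workspace metadata is worth a manual look before trusting the project.
    for rel in (".vscode/settings.json", ".idea"):
        path = root_path / rel
        if path.exists():
            findings.append(f"{path}: workspace config present; inspect before opening in an IDE")

    return findings

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    findings = audit_project(target)
    for finding in findings:
        print("WARNING:", finding)
    if not findings:
        print("No obvious auto-run config found (this does not guarantee safety).")
```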
AI Coding Assistants Turned Into Attack Tools
Some users face greater risk than others. Cursor users, in particular, are heavily exposed. SlowMist’s warning builds on earlier research from cybersecurity firm HiddenLayer, which first documented the issue last September in what it called the “CopyPasta License Attack.”
The attack works by hiding malicious instructions inside common files like README.md or LICENSE.txt. These instructions are invisible to developers but readable by AI coding assistants, which then unknowingly spread the malware across an entire codebase.
Once triggered, attackers can:
- Plant backdoors
- Steal sensitive data
- Manipulate critical systems
All while the malicious code stays buried deep inside seemingly harmless files.
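HiddenLayer’s write-up describes instructions hidden where a human reviewer rarely looks but an AI assistant reliably reads. Two common hiding spots, used here purely as illustrative assumptions rather than a reproduction of the original payload, are markdown/HTML comments (which never appear in rendered previews) and zero-width Unicode characters. A minimal Python scan over a project’s documentation files might look like this:

```python
import re
import sys
from pathlib import Path

# Documentation files this style of attack reportedly targets; extend as needed.
DOC_FILES = ("README.md", "LICENSE.txt", "LICENSE", "CONTRIBUTING.md")

# Markdown/HTML comments are not rendered, so an AI assistant may "see"
# instructions that a developer skimming the rendered file never does.
HIDDEN_COMMENT = re.compile(r"<!--(.*?)-->", re.DOTALL)

# Zero-width characters are another way to smuggle text past a human reader.
ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")

def scan_docs(root: str) -> None:
    for name in DOC_FILES:
        path = Path(root) / name
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for match in HIDDEN_COMMENT.finditer(text):
            snippet = match.group(1).strip().replace("\n", " ")[:80]
            print(f"{name}: hidden comment -> {snippet!r}")
        if ZERO_WIDTH.search(text):
            print(f"{name}: contains zero-width characters (possible hidden payload)")

if __name__ == "__main__":
    scan_docs(sys.argv[1] if len(sys.argv) > 1 else ".")
```

A clean scan proves nothing on its own, but a hit on either pattern is a strong signal to read the raw file before letting an AI assistant near it.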
HiddenLayer demonstrated the attack across multiple tools, including Cursor, Windsurf, Kiro, and Aider, showing that minimal user interaction is enough to compromise an entire organization.
Security Concerns Grow as AI Coding Adoption Accelerates
The disclosure comes at a time when major crypto firms are aggressively embracing AI-generated code. Coinbase CEO Brian Armstrong recently announced that AI now produces roughly 40% of the company’s code, with a goal of reaching 50% by October.
Armstrong reportedly fired engineers who failed to adopt AI coding tools within a week—a move that sparked backlash across the security community.
Dango founder Larry Lyu called the policy “a giant red flag for any security-sensitive business,” while Carnegie Mellon professor Jonathan Aldrich went even further, saying he “would not trust Coinbase with his funds.”
Nation-State Hackers Are Now Using Blockchains to Spread Malware
At the same time, developers are facing increasingly sophisticated attacks from nation-state actors.
North Korean hacking groups have begun embedding malware directly into blockchain smart contracts, marking the first known use of so-called “EtherHiding” techniques at a state level.
These attackers combined BeaverTail and OtterCookie malware in fake job interview campaigns targeting crypto developers. The malware was distributed through an NPM package disguised as a chess app.
Google later confirmed that a group known as UNC5342 has been hiding malware inside smart contracts on BNB Smart Chain and Ethereum, creating a decentralized command-and-control system that law enforcement struggles to shut down.
Even more alarming: the payloads are retrieved from public blockchains via read-only function calls, which leave no transaction history and incur no fees, making them extremely hard to detect.
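To see why read-only retrieval is so quiet, consider how a client fetches data from a contract: a view call is evaluated locally by the RPC node (an `eth_call` under the hood), so no transaction is broadcast and no gas is paid. The sketch below uses web3.py (v6-style API) with a placeholder endpoint, contract address, and `getPayload` ABI, all hypothetical, to illustrate the mechanic:

```python
from web3 import Web3

# Placeholder RPC endpoint and contract details; purely illustrative.
RPC_URL = "https://bsc-dataseed.binance.org/"                     # public BNB Smart Chain endpoint
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"   # hypothetical contract
ABI = [{
    "name": "getPayload",          # hypothetical view function returning stored bytes
    "type": "function",
    "stateMutability": "view",
    "inputs": [],
    "outputs": [{"name": "", "type": "bytes"}],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
contract = w3.eth.contract(address=Web3.to_checksum_address(CONTRACT_ADDRESS), abi=ABI)

# A view call is evaluated by the node locally via eth_call:
# no transaction is broadcast, no gas is paid, and nothing new appears on-chain.
payload = contract.functions.getPayload().call()
print(f"retrieved {len(payload)} bytes via a read-only call; no transaction was created")
```

The only on-chain footprint is the transaction that originally stored or updated the payload; every subsequent fetch by infected machines is invisible to block explorers.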
Fake Companies, Real Malware
In April, researchers uncovered that the same attackers had set up legitimate-looking US companies using stolen identities.
One firm, Blocknovas, was registered to a vacant lot in South Carolina. Another, Softglide, was linked to a Buffalo tax office. Both were fronts for the infamous “Contagious Interview” campaign, which distributes malware through fake technical assessments.
Losses Fall, But the Threat Is Growing
Despite the rising sophistication of attacks, reported crypto losses from hacks and exploits dropped 60% in December, falling to $76 million, down from $194.2 million in November, according to PeckShield.
But experts warn this may be misleading.
Recent research from Anthropic showed AI agents successfully exploited 50% of smart contracts tested in its SCONE-bench framework, simulating $550 million in attack value.
More concerning, modern AI models like Claude Opus 4.5 and GPT-5 found working exploits in contracts deployed after their training cutoff dates—including zero-day vulnerabilities—at rapidly falling costs.
In short: attacks are getting cheaper, faster, and more effective.
AI-Powered Scams Are Exploding
Meanwhile, AI-driven crypto scams are skyrocketing. Data from Chainabuse shows a 456% increase in gen-AI-enabled scam reports between May 2024 and April 2025.
Today, 60% of funds sent to scam wallets come from AI-powered schemes using deepfakes, voice cloning, and automated bots capable of creating convincing fake identities at scale.
The Bottom Line
AI is rapidly transforming crypto development—but it’s also opening new doors for attackers.
As SlowMist’s warning makes clear, something as simple as opening a project folder can now be enough to lose everything.
For developers, the message is blunt:
Trust less. Verify everything. And treat AI coding tools as a potential attack surface—not just a productivity boost.
