Vibe Coding Boom Leaves Security Backdoors Open

Just a few days ago, a major software supply chain attack hit Axios, an important programming library used in millions of apps and websites to fetch and send data.
On March 31, 2026, attackers took over a trusted maintainer account and slipped malicious code into official Axios updates, so when developers installed them, a hidden code quietly deployed malware.
The breach lasted only a few hours but spread rapidly through automated updates, showing how compromising a single widely used dependency can put thousands of applications at risk without ever touching their actual code.
As AI systems become embedded in software development, the threat landscape is expanding. It is no longer limited to just the code developers install that can be poisoned, but also the data and instructions AI systems consume. Attackers now try to manipulate the AI’s behaviour itself, exposing a new kind of attack surface.
A New Age Of Code Injection
Vibe coding or AI-written software has reduced human workload tremendously in scaled-up environments, but as recent incidents have shown, this also introduces severe security vulnerabilities. These are often created by untrained users or AI coding models optimising for speed over security. Applications frequently lack input sanitisation, leaving them vulnerable to data breaches. Security experts peg 60-65% of the vibe code systems to be susceptible to attacks and breaches.
Prompt injection is one of the biggest threats in deployed large language systems (LLM) systems, which lets hackers change the code just through an AI tool query.
Such attacks are not entirely new, but in the context of vibe coding they behave very differently. They do not target the application directly. Instead, they influence the AI systems that generate the application. This is especially true considering how language models trained on platforms like Stack Overflow and GitHub can become vulnerable.
The attack surface here is the knowledge layer that comprises public forums, documentation, blogs, and code snippets on which AI models are trained. Within this layer, malicious or manipulative instructions can be embedded in ways that are difficult to distinguish from legitimate content.
“Prompt injection is surprisingly common. You’ll see phrases like ‘ignore previous instructions’ not just in code, but even on LinkedIn profiles and public websites,” Rahul Poruri, the chief executive officer of the open source community FOSS United, told Inc42.
This creates a fundamentally different security challenge. In traditional systems, malicious code is an artifact that can be scanned, flagged, and removed. In AI systems, the vulnerability may exist as influence rather than code.
Elsewhere, cloud-based security company Safedep’s founder, Abhishek Datta, frames this as a parallel supply chain.
“The threat landscape is expanding with the adoption of AI in software development, especially AI coding agents. We now have MCP servers, skills, plugins, and other components entering from open source ecosystems and being included as part of application code or the AI agent harness itself,” Datta said.
Complicating matters is the fact that these categories of dependencies didn’t exist two years ago, and they carry the same class of supply chain risk. As Datta put it, “This is untrusted code running in trusted contexts, but with less scrutiny and fewer established security controls.”
In other words, there is no clear line between safe and harmful content here. Something that looks like an attack could also just be a normal instruction meant for an AI. The ambiguity increases the severity of the security challenge.
Where Are The Human Checkpoints?
If prompt injection alters how code is generated, the new and emerging class of AI coding agents, which are autonomous AI systems, amplify how quickly those altered patterns make it into production. One of the most underestimated risks is the removal of human friction in dependency decisions.
An AI coding agent can autonomously identify a need, select a package, and install it without any human review. In traditional workflows, even minimal scrutiny existed. A developer would type a package name, glance at documentation, or notice anomalies. With AI agents, that checkpoint disappears.
For instance, Anthropic launched Code Review in Claude Code in March, a new multi-agent system that tries to catch bugs before a human reviewer even sees the code.
Removing the human from the reviewing and validation cycle of software development creates a new class of breach scenario.
An agent may hallucinate a package name or select a typosquatted (unintentional misspelling of domain name to resemble the original) version created by an attacker. The package gets installed, the code runs, and the developer sees a working feature. The vulnerability remains hidden.
Attackers are already adapting to this shift. There are early signs of package names being registered specifically to match patterns that large language models tend to generate, said Safedep’s Datta. “This is effectively a land grab for AI-driven typosquatting.”
The scale of impact is also different. When a compromised package spreads through traditional means, it affects projects that explicitly depend on it. When an AI tool consistently recommends a compromised package, it can propagate across thousands of developers and organisations simultaneously.
“Developers often trust outputs that appear syntactically correct, but these may include insecure logic, outdated libraries, or hidden vulnerabilities,” added Vaibhav Tare, CISO at Fulcrum Digital.
Many of these vulnerabilities are already visible in early vibe-coded applications. Vibe coding encourages speed and abstraction, often at the cost of deep understanding. Experts we spoke to say that the core issue is that vibe coders often don’t have foundational security knowledge, which leads to widespread, systemic weaknesses.
Vibe Coding Exposes Attack Surfaces
The knowledge layer feeding these systems is vast, informal, and difficult to audit. This is particularly true in India. “India has the fastest AI coding tool adoption globally and the broadest community-driven knowledge layer,” Siddon Tang, general manager of Asia Pacific, SVP of engineering and product at open source database company TiDB, said.
India has an estimated 4.3 to 5.8 Mn software developers, making it one of the largest and fastest-growing tech talent pools globally. “That’s a wider, harder-to-audit attack surface,” Tang added.
Overall, vibe coding related vulnerability could represent a material minority of application security incidents, potentially 20 to 30%, according to Biswajeet Mahapatra, principal analyst at Forrester.
This is not considered a concerning figure at the moment, but the way AI has evolved, new attack vectors could surface that could result in deeper damage.
At the moment though, the AI industry at large is bullish about the growth of adoption for AI-assisted development rather than focus on the exploit sophistication and the potential attacks in the future.
One potential weak link is that developers in India rely heavily on community-driven platforms, tutorials, and peer networks.
These sources are now directly influencing AI-generated code, often without any validation layer. In India the problem is particularly acute due to the larger developer base, faster adoption of AI tools among new engineers, and a widespread tech ecosystem.
This increases our exposure to AI-powered cybersecurity incidents, breaches and attacks.
AI coding platforms need to seriously implement guardrails and reeducate developers at a fundamental level to mitigate these compounding effects.
The post Vibe Coding Boom Leaves Security Backdoors Open appeared first on Inc42 Media.


Superadmin 










