
Software security in 2026: What everyone should know about AI-written code
Your new app works great and looks impressive. But what if it's left the door wide open and you don't even know it?
The AI revolution in software development
You describe in plain language what you want the software to do, and AI writes it for you. No weeks of waiting, no technical specifications. It's called “vibecoding” and platforms like Cursor, Replit, or Lovable now allow even non-technical people to build apps in hours.
It sounds appealing, and that's exactly why more and more managers, marketers, and entrepreneurs are trying it. A weekend project, an internal tool, a prototype for investors. But the question is: is that code secure?
AI writes code that works. But working doesn't mean secure.
AI-generated apps often do exactly what you described. Buttons click, data displays. But under the hood, things can look very different. According to an extensive study by Veracode, 45% of AI-generated code contains security vulnerabilities. Nearly one in two apps has a flaw hidden inside.
Research from Stanford University further showed that people using AI believe their code is more secure, even when it is actually less secure. AI gives you a false sense of confidence. The app looks professional, it works – so why doubt?
In February 2026, data leaked from Moltbook, a social network built entirely with vibecoding – 1.5 million API tokens and 35,000 email addresses. The AI set up the database without access control and the founder deployed it as-is. It worked perfectly, after all.
What specifically fails
Unlocked doors
AI creates a functional app but often forgets to define who can see which data or who can delete records. It won't answer those questions because nobody asked them.
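In code, an "unlocked door" is simply a handler that never asks who is calling. The fix is a check like the one below – a minimal sketch with invented names, not code from any real app:

```typescript
// Hypothetical data model – the types and names are illustrative only.
type User = { id: string; role: "admin" | "member" };
type Doc = { id: string; ownerId: string };

// The question AI was never asked: who is allowed to delete this record?
function canDelete(user: User, doc: Doc): boolean {
  return user.role === "admin" || user.id === doc.ownerId;
}

const owner: User = { id: "u1", role: "member" };
const stranger: User = { id: "u2", role: "member" };
const doc: Doc = { id: "r1", ownerId: "u1" };

console.log(canDelete(owner, doc));    // the owner may delete their record
console.log(canDelete(stranger, doc)); // anyone else may not
```

A generated app without this check still "works" – every button does something – which is exactly why the gap goes unnoticed.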
Keys under the doormat
Every app needs credentials for databases, payment gateways, email services. Professional developers store them carefully. AI often leaves them directly in the code, visible to anyone who looks.
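In practice, the fix for "keys under the doormat" is to keep credentials out of the source entirely. A minimal sketch of the pattern, assuming a Node.js runtime (the variable name PAYMENT_API_KEY is invented for illustration):

```typescript
// BAD – what AI-generated code often contains (key invented for illustration):
// const PAYMENT_API_KEY = "sk_live_1234abcd"; // readable by anyone with the source

// BETTER – read the secret from the environment and fail loudly if it is missing:
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// At startup:
// const paymentKey = requireEnv("PAYMENT_API_KEY");
```

Failing loudly at startup matters: a missing key then surfaces during deployment, not as a silent fallback to a secret someone pasted into the code.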
The invisible supply chain
And here's the key point. Modern software is like a car – the manufacturer doesn't make every bolt themselves. 70 to 90% of code in a typical app comes from third-party components. Your app depends on them even if you don't know they exist. And those components are increasingly the target of attacks.
The Axios incident: why even a manager who just vibecodes should worry
On March 31, 2026, one of the largest security incidents in web history took place. The target was the Axios library – a component most web apps use to communicate with servers. 100 million downloads per week. Present in 80% of cloud environments.
Attackers gained access to the library maintainer's account and released an infected version. The malicious code ran within two seconds of installation, downloaded a trojan, gave attackers remote control of the computer, and then covered its tracks.
What does this mean for you if you vibecode? Your weekend app – that internal dashboard or investor prototype – uses dozens to hundreds of similar components. When one of them gets infected, the malicious code runs on your computer. The very same computer where you're logged into company email, CRM, banking, and internal systems. The attacker isn't after your hobby app's data. They're after everything you have in your browser and on your disk.
The infected Axios version was available for only two to three hours. That was enough. And this was an attack on a component used by professional teams with security processes in place. Imagine what happens when attackers infect a smaller, less-watched library – one that AI recommends and you install without thinking.
Why vibecoding makes these risks worse
No human oversight
In traditional development, code passes through multiple people. One writes, another reviews, security tests catch problems. With vibecoding, code goes straight from AI to production without anyone looking at it.
Built by people who don't think about security
Marketing builds a dashboard, sales builds a CRM, HR builds a hiring app. Great. But these “non-technical developers” aren't looking for unlocked doors because they don't even know doors exist. Worse, vibecoding usually happens on a work computer where every company service is logged in.
Every fix adds more problems
The typical flow: generate code, test, tell AI “fix it”, repeat. Researchers at Kaspersky found that after five such iterations, the code contains 37% more critical vulnerabilities than the original version.
AI hallucinates, and attackers exploit it
AI models sometimes recommend a software component that doesn't exist at all. They simply invent a name that sounds plausible. Nearly one in five packages that AI recommends is fabricated. And these hallucinations repeat – AI recommends the same non-existent name again and again.
Attackers haven't missed this. They register those names, fill them with malicious code, and wait. When AI next recommends that same package, the developer downloads malware instead of a useful component. And if AI installs packages automatically without confirmation – which some tools do – you have a problem before you even realize what happened.
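A cheap defence is to look at a package's registry metadata before installing anything AI suggests: a package registered days ago, with one version and a dozen downloads, deserves a human look first. A minimal sketch of that heuristic – the thresholds and the package name are illustrative, not an industry standard:

```typescript
// Registry metadata for a package – the fields mirror what public registries
// expose, but this type and the thresholds below are an illustrative sketch.
type PackageMeta = {
  name: string;
  firstPublished: Date;
  versionCount: number;
  weeklyDownloads: number;
};

// Cheap heuristic: brand-new, barely used packages deserve a human look
// before anyone runs `install` on them.
function looksSuspicious(pkg: PackageMeta, now: Date): boolean {
  const ageDays = (now.getTime() - pkg.firstPublished.getTime()) / 86_400_000;
  return ageDays < 30 || pkg.versionCount < 3 || pkg.weeklyDownloads < 100;
}

const hallucinated: PackageMeta = {
  name: "totally-real-http-utils", // invented name of the kind AI hallucinates
  firstPublished: new Date("2026-03-25"),
  versionCount: 1,
  weeklyDownloads: 12,
};
console.log(looksSuspicious(hallucinated, new Date("2026-04-01"))); // flagged
```

None of this replaces a proper review, but it breaks the reflex of installing whatever name the AI produced.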
What we do differently
We use AI in development. But we treat AI-generated code the way we treat data from an unknown user – carefully:
Mandatory review. AI code goes through an experienced developer's review before it reaches production.
Human oversight over critical areas. Authentication, authorization, data access, payment logic. We don't rely on AI here.
Dependency verification. Every third-party component is vetted. We don't blindly copy what AI recommends.
Isolated environments. AI agents have no access to production data.
Conclusion
The problem isn't that AI writes bad code. The problem is that it writes code that looks good, works well, but isn't secure. And people trust it because they have no reason to doubt.
If you're vibecoding on a company computer, you're risking more than your weekend project. Every installed component is a potential entry point for an attacker. And they aren't after your prototype – they're after your company data, credentials, and systems you have access to.
Companies that invest in a professional approach today and treat AI as an accelerator – not a replacement for expertise – will have the advantage. And those betting that “AI can handle it alone”? Attackers are already sharpening their tools for them.
Written by
Jakub Honíšek
Published
24. 4. 2026