
Vibe Coding – Real Code, Real Risks, or Both?

You may have heard the term “vibe coding,” and the controversy surrounding it may have piqued your interest. But what is it really? And what are the actual security risks?

This blog examines the difference between a non-developer using vibe coding for creative expression and a trained professional using agentic AI to accelerate the coding process. By the end, you’ll understand the difference between those two use cases, the risks of using AI for code generation in any form, and how to mitigate the threats that can arise even from valid AI use.

What is Vibe Coding?

Vibe coding is a controversial approach to software engineering where developers do not deal directly with code. Instead, they describe what they want in natural language and let a generative AI create the product for them. The idea is to stay in the natural flow of creation rather than worrying about the technicalities of properly structured code: you let artificial intelligence do the coding work for you. Originally, this was suggested for weekend code experimentation. However, the practice is now influencing professional development.

Controversies Around the Practice

Vibe coding is controversial because it puts the power of coding into the hands of people who don’t truly understand the code they’re creating. It allows non-coders to turn their creative ideas into working functionality without needing a trained and experienced programmer. They get the benefits of a fully developed application, minus the learning it would otherwise take to achieve those results.

Over the years, there have been many attempts at enabling non-technical users to create functionality using low-code or no-code solutions. In the past, these solutions had built-in guardrails for security, privacy, and data loss prevention. Vibe coding seemingly sidesteps all of this. It allows nontechnical users to get down to the code level and create functionality in a way that lacks safeguards.

Vibe Coding: Non-developer Use vs. Agentic AI in Development Practices

As with any marketing term, “vibe coding” has also been misapplied to programming with an AI coding assistant in agentic mode. The difference is that a trained developer using agentic AI for the purposeful creation of code scrutinizes exactly what the code is doing and how it is doing it. If this is how your developers are using AI, you are a step ahead of the major risks associated with the purest form of vibe coding.

This is where it gets tricky, because the term “vibe coding” is used to describe both use cases: the “non-coder” who uses AI without guardrails, and the seasoned developer who incorporates agentic AI use into their development processes.

From this point forward, when we refer to vibe coding in this blog, we will assume it means your developers using generative, agentic AI for code creation, and that you have an existing process in place for application security during development. If not, pause here and put that process in place first, then come back and continue.

While the unknown can be intimidating, it’s important to understand that vibe coding produces code similar to what human developers create. Developers have been using code generation tools for years, and the resulting code is still very much Java, JavaScript, Python, Ruby, Go, C#, or whatever programming language you ask the AI to generate. For this reason, if you have a process for secure code development, you can apply that exact same process to AI-generated code. The person who generated the code is still the person responsible for it.

The Known Risks of Vibe Coding

Now that we’ve set that baseline, we must acknowledge the basic risks introduced by generative, agentic AI coding. Historically, secure development processes have centered on optimizing programmer output while ensuring code reliability and human accountability. Human developers are evaluated on their performance and attention to detail; an LLM lacks the ability to discern consequences.

AI Hallucinations Lead to Malicious Code Integration

An LLM’s lack of bias can help it generate ideas that seem outside the box, unconstrained by convention. However, its lack of discernment keeps it from knowing which assumptions are safe to make and which leave it vulnerable to attacks like Slopsquatting.

In Slopsquatting, would-be attackers monitor for AI hallucinations that look like legitimate package names, then publish malicious packages under those names. The AI inherently trusts them; after all, it already said the package existed. Why shouldn’t it believe itself? The vibe coder, none the wiser, downloads the malicious package and installs a back door into their systems. If the organization doesn’t have integrated AppSec controls in place, or the developer doesn’t fully vet the package prior to integration, the malicious code may not be discovered until well after it has been deployed across connected systems, or worse, until after a breach.
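One pragmatic safeguard is to gate AI-suggested dependencies through an internal allowlist of packages your team has already vetted, so anything unrecognized, including a hallucinated name, is flagged for human review before it is ever installed. The Python sketch below illustrates the idea; the `VETTED` set and every package name in it are hypothetical placeholders for your organization’s actual reviewed list.

```python
# Sketch only: gate AI-suggested dependencies through an internal allowlist.
# The VETTED set and all package names below are hypothetical examples;
# substitute your organization's actual list of reviewed dependencies.

VETTED = {"requests", "flask", "sqlalchemy"}

def screen_dependencies(requested: list[str]) -> tuple[list[str], list[str]]:
    """Split AI-suggested package names into approved and needs-review lists."""
    approved, needs_review = [], []
    for name in requested:
        # Normalize the way package indexes do: case-insensitive, - and _ equal.
        normalized = name.strip().lower().replace("_", "-")
        (approved if normalized in VETTED else needs_review).append(normalized)
    return approved, needs_review

# The AI assistant suggests three packages; one name is a hallucination.
approved, flagged = screen_dependencies(["requests", "Flask", "reqeusts-pro"])
print(approved)  # ['requests', 'flask']
print(flagged)   # ['reqeusts-pro'] -- stop and vet before installing
```

A gate like this doesn’t prove a package is safe, but it forces the human decision point that Slopsquatting depends on skipping.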

“Easy” Coding Can Lead to Redundancy

Generating code, regardless of the method used, can enhance a programmer’s productivity by increasing the volume of code they produce. A consequence is that the codebase tends to accumulate duplicate functionality, which makes it harder to maintain.

When developers mindfully create code as a team, they build on fully formed architectures where they can reuse packages and containers. The rapid-fire nature of vibe coding doesn’t take existing APIs or libraries into account unless developers specifically direct it to, and even then, AI often follows its own whims. Developers who incorporate generative AI into their coding practices must make it a habit to validate code, eliminate redundancy, and keep the result maintainable.
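As a rough illustration of what redundancy hunting can look like, the Python sketch below hashes function bodies to group structurally identical functions. Real duplicate-detection tools are far more sophisticated, and the sample code being scanned is invented for the example.

```python
import ast
import hashlib
from collections import defaultdict

def duplicate_functions(source: str) -> dict[str, list[str]]:
    """Group function names by a hash of their bodies."""
    groups = defaultdict(list)
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            # Hash the body's AST dump so formatting differences don't matter.
            body = ast.dump(ast.Module(body=node.body, type_ignores=[]))
            digest = hashlib.sha256(body.encode()).hexdigest()[:12]
            groups[digest].append(node.name)
    return {h: names for h, names in groups.items() if len(names) > 1}

# Invented sample: two functions with identical bodies, one distinct.
code = """
def total_price(items):
    return sum(i.price for i in items)

def cart_total(items):
    return sum(i.price for i in items)

def greet(name):
    return f"hello {name}"
"""
for names in duplicate_functions(code).values():
    print("duplicates:", names)  # duplicates: ['total_price', 'cart_total']
```

Even a crude check like this, run over a fast-growing AI-assisted codebase, surfaces candidates for consolidation into a shared library.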

Vibed Code is the Developer’s Responsibility

AI code generation also tends to lack the transparency needed in common production scenarios. Just like when a human receives an instruction to create a feature, generative AI prioritizes the functionality described. However, it doesn’t account for any implicit technical or security needs.

Experience informs a programmer’s discernment for code patterns and helps them choose the most reliable methods for performing tasks. AI code generation generally acts on what is most common. Until AI can take legal responsibility and exercise discernment, it can only serve a developer as a generation tool; the user remains responsible for its quality, correctness, and security.

Vibe Coding and the Human Factor

At the center of vibe coding risk is the false sense of security. Numerous studies (like one from Cornell University) show that developers who lean heavily on AI to generate code are more likely to think they have created quality, secure code than programmers without access to an AI coding assistant. 

Whether or not this assumption aligns with reality, the issue is clear: humans tend to trust AI, even when shown that AI is prone to mistakes. The problem is, no matter what you ask AI, the best it can give you is consensus, not discernment. Like any AI tool, prescriptive use of AI code generation requires human decision-making at the forefront. If your developers are integrating AI into their coding practices, it’s critical that you, and they, understand and mitigate the risks and potential threats. No matter how good the output, we must scrutinize AI-generated code to the same (or greater) degree as traditionally written code.

No matter what you ask AI, the best it can give you is consensus, not discernment.

Alton Crossley, Application Security Engineer, GuidePoint Security

Current AI Risks

Veracode research illustrates a glaring concern: 45% of the AI-generated code examined contained security vulnerabilities. While these vulnerabilities vary widely, a deep concern is AI’s tendency to hallucinate non-existent package dependency names. As demonstrated earlier, this can lead directly to you becoming a victim of Slopsquatting.

Underpinning these alarming trends are AI’s vulnerability to data poisoning, its reliance on outdated packages, advanced typosquatting, and various security anti-patterns. Couple this with the sheer added volume of code, and you can see why the industry is scrambling for answers.

Dealing with Threats

Given the current state of AI-assisted coding, developers often treat the AI as a trusted pair-programming partner. We should treat vibe coders like any other programmer who wants to build an application: their code must follow the proper process of version control and assurance.

Merge reviews are more important than ever, and time should be allotted to improve code before it is merged. This underutilized activity can help engage junior programmers and sharpen their code review skills. Tooling that performs differential scans and decorates merge reviews with additional context can prove a valuable educational tool. The discipline of software engineering matters even more in the context of vibe coding.
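To make the idea of a differential scan concrete, here is a minimal Python sketch that reports only the findings a change introduces, by comparing a head-branch scan result against the base branch. The finding format, a (CWE id, file, function) tuple, is a simplified, hypothetical one; real scanners emit much richer records.

```python
# Sketch: a differential scan keeps merge reviews focused by reporting
# only the findings a change introduces. The finding tuples below
# (CWE id, file, function) are a simplified, hypothetical format.

def new_findings(base_scan: set, head_scan: set) -> set:
    """Return findings present in the head branch but not in the base."""
    return head_scan - base_scan

base = {("CWE-89", "orders.py", "build_query"),
        ("CWE-79", "views.py", "render_comment")}
# The merge request adds a hard-coded credential on top of existing findings.
head = base | {("CWE-798", "config.py", "load_secrets")}

for finding in sorted(new_findings(base, head)):
    print("NEW:", finding)  # NEW: ('CWE-798', 'config.py', 'load_secrets')
```

Surfacing only the delta lets a reviewer, especially a junior one, concentrate on what this merge changed rather than wading through legacy findings.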

While the industry works to improve the initial output of these generative tools, security teams need to lean into assurance processes and test automation to keep up. Development teams must give additional scrutiny to dependency management and to the dependencies a project adopts. Prompts and merge reviews should emphasize the principles that drive securable attributes in software.
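One concrete form that dependency scrutiny can take is hash pinning: verifying a downloaded artifact against a hash recorded in a lockfile before it is used. The Python sketch below shows the core check, with a hypothetical lockfile entry and artifact name.

```python
import hashlib

# Sketch: verify a downloaded artifact against a hash pinned in a lockfile
# before using it. The lockfile entry and artifact name are hypothetical.
LOCKFILE = {
    "example-lib-1.2.0.tar.gz": hashlib.sha256(b"trusted contents").hexdigest(),
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Accept an artifact only if its hash matches the pinned value."""
    pinned = LOCKFILE.get(name)
    return pinned is not None and hashlib.sha256(data).hexdigest() == pinned

print(verify_artifact("example-lib-1.2.0.tar.gz", b"trusted contents"))   # True
print(verify_artifact("example-lib-1.2.0.tar.gz", b"tampered contents"))  # False
```

In practice you would not roll this yourself; pip’s hash-checking mode (`--require-hashes`) performs the same verification as part of installation.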

Developer education is of paramount importance now, because developers gain less education through hands-on coding experience. Combining merge reviews with peer knowledge sharing is highly valuable. Allowing proper time for design and architecture also raises the value of cross-cutting solutions that prevent controls like input filtering or authorization from being bypassed. Centralizing those controls in the architecture means they no longer need to be restated in every prompt, simplifying high-risk code operations.

Conclusion

The AI code generation landscape is quickly evolving. Ultimately, vibe coding produces code. Code scanning, testing, version control, or rejection all apply, just as they would with any code. We remain in control. And, we can make a significant impact on our organization’s security posture by following secure coding best practices. This is a reminder that we need to have security assurance and education in place throughout the development environment.

Watch this blog for our next vibe coding post. We will discuss methods for getting securable code from vibe coding, along with tooling types that can help you keep up with this new evolution of code creation.

Wondering if your application security passes the vibe check? You can find out with GuidePoint Security’s application security architecture reviews and source code review tactical assessments.