Top Highlights
- Tool Utilization: AI tools like vibe coding are not inherently good or bad; their effectiveness depends on human oversight and security measures.
- Human-Centric Approach: Successful implementation of AI-assisted coding requires prioritizing human involvement to validate and verify code, thus preventing security vulnerabilities.
- Security at Inception: Organizations should adopt a "Secure at Inception" strategy, integrating AI with built-in security features to identify and mitigate vulnerabilities early in the development process.
- Ongoing Human Oversight: Despite advancements in AI code checkers, humans must continue to monitor and verify outputs to ensure security, emphasizing a "trust but verify" approach.
A tool can be used well or poorly, but much of the time it is neither inherently good nor bad.
Take vibe coding, the act of using natural language to instruct an LLM to generate code. Applied poorly, it is risky: models hallucinate, frequently introduce security vulnerabilities, and aren't great at patching them up on their own. But when humans are put at the center and code is verified and checked, the models can, in theory, enhance production output (albeit not replace a person entirely).
If you’re at an organization and you just got a mandate from up high telling you that you need to implement AI-assisted development tools, one of your first questions might (hopefully) be: How can I do this securely?
There are many answers, but three of them come down to putting people at the center, emphasizing security from the beginning, and accounting for inherent unpredictability.
People First
Snyk chief technology officer (CTO) Danny Allan tells Dark Reading that in the past three months, “I have not talked to a customer that’s not using AI coding tools.”
“There is zero question in my mind that 100% of developers are going to be using AI-assisted coding tools,” he says. “It’s not a question of whether they’re going to use it or not. It’s more a question of how they use it.”
While this is, of course, anecdotal, it’s clear that AI-assisted coding has quickly found a presence in many organizations. While the technology is still nascent and not capable of singlehandedly taking over application development, Allan says he’s seen some of the best use cases in prototyping and greenfield applications, where software is conceptualized for further development or refinement.
AI-generated code, as it exists now, introduces significant software vulnerabilities when left unchecked. In Veracode's "2025 GenAI Code Security Report," researchers found that AI-generated code introduced notable vulnerabilities in 45% of tested tasks. A Georgetown University white paper published in November 2024, meanwhile, detailed how AI-generated code can introduce risks to the software supply chain.
While that doesn’t mean organizations are wrong to utilize AI-assisted coding tools, it does mean a human (or multiple humans) needs to be somewhere in that pipeline.
Wiz security researcher Hillai Ben-Sasson says that although vibe coding can be useful, it should be framed as a tool for humans to use. In other words, someone should be there to verify the code being generated.
“There has to be a human in the loop. If an application is fully vibe-coded by a person who can’t even look at the code, they can’t see where the code is secure or not, and then no one is there to take responsibility for it,” he says. “A model or vibe-coding platform can’t take responsibility for a security failure.”
The World of AI Code Checkers
If an organization is going to apply vibe coding at any scale, Allan recommends what he describes as securing at inception, or attempting to ensure that code is developed with security in mind. “If you can use agents and guardrails within the AI, that is the path forward,” he says.
This month, Snyk announced a “Secure at Inception” capability in its platform that scans generated and executed code in real time for vulnerabilities. Its product joins a growing field of AI code remediation tools, such as Veracode’s Veracode Fix product and Legit Security’s AI remediation capabilities.
Chris Wysopal, founder and chief security evangelist at Veracode, tells Dark Reading that to appropriately implement something like vibe coding, you have to improve your entire security program.
In application security, organizations run into security debt: an organization finds a vulnerability and doesn't fix it, and as this happens repeatedly over time, the application becomes increasingly insecure. That problem gets even worse when the organization builds apps faster with fewer developers, so it needs to be addressed on the ground floor.
“The solution is either to train the models on secure code or to figure out how to align and tune them so that they put out more secure code,” he says. “What we think people need to do is use AI to fix the code.”
Similarly, at DEF CON 33, DARPA announced the winners of its two-year AI Cyber Challenge, a research contest to build tools intended to address open source software flaws using AI. All tools developed by the seven finalist teams will be open sourced.
Whether or not any of these checkers solve the emerging problem of AI code security, they are one of the key ways organizations are thinking about addressing it. Much as security keys shift the burden of phishing defense away from users, the idea is in part to offload some of that work from people.
And even with AI code checkers, that’s not to say humans are taken out of the equation entirely. Allan explains that in the spirit of “trust but verify,” code verification will be at least partially conducted by AI, but “humans will end up verifying the verification systems.”
Wysopal says that because AI-generated code is insecure, even if it's getting better in some ways over time, the predictable way AI code generation produces vulnerabilities should be accounted for in the build process, no matter what security solution you're using.
“Build it into the process,” he explains. “It’s to say, ‘I know vibe coding is going to be generating these vulnerabilities.’ So right after it generates the code, test that code and remediate.”
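That "generate, then test, then remediate" step can be wired directly into a build pipeline. The sketch below is purely illustrative and not any vendor's tool: it gates AI-generated code behind a simple pattern scan before acceptance. The pattern list and function names are hypothetical stand-ins for a real static-analysis (SAST) scanner.

```python
# Illustrative gate for AI-generated code: scan before accepting,
# mirroring the advice to test and remediate right after generation.
# RISKY_PATTERNS is a toy stand-in for a real SAST ruleset.

RISKY_PATTERNS = ["eval(", "exec(", "os.system(", "pickle.loads("]


def scan_generated_code(code: str) -> list[str]:
    """Return the risky patterns found in the generated code."""
    return [p for p in RISKY_PATTERNS if p in code]


def accept_or_flag(code: str) -> str:
    """Accept clean code; flag anything with findings for remediation."""
    findings = scan_generated_code(code)
    if findings:
        return "flagged for remediation: " + ", ".join(findings)
    return "accepted"
```

In a real pipeline this check would run automatically in CI on every AI-assisted commit, with flagged code routed back to a human reviewer rather than merged.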
