COMMENTARY
Every single company I speak to has adopted or is adopting AI code generation tools. Cursor’s ARR numbers don’t lie. Some companies are pretending they haven’t adopted the tools, calling projects “just a pilot” or “a few devs experimenting.” Others are leaning all the way in because they have to. Dev teams are demanding speed. Leadership wants productivity gains. And honestly? These tools are good. Too good to ignore.
But here’s the thing: AI-powered development is already here. It’s loud. It’s fast. And it’s not waiting around for security sign-off. The train has left the station. Security’s still checking the schedule. Google says more than 25% of its new code is now generated by AI. If it got there that fast, your org’s not far behind. This isn’t hype. It’s happening.
Let’s not kid ourselves; most developers aren’t generating full-blown apps (yet), especially at midsize and larger companies. For now, it’s auto-complete. Maybe a helper function here and there. Seems safe. Harmless.
But we’ve seen this progression before. Auto-complete becomes “generate block.” “Generate block” becomes “why not ship it?” And suddenly, you’ve got AI-assisted infrastructure changes cruising toward production. There’s a Jira ticket. There’s some plan. But no one’s waiting for a security review. The specs are thin, the pace is insane, and the review process? Skipped, sidelined, or rubber-stamped. Just enough structure to ship, nowhere near enough to secure.
And here’s what makes it even scarier: A lot of these changes aren’t greenfield. They’re not new apps. They’re small adjustments to complex, fragile, existing systems. A new flow here, a helper function there. A tiny tweak that seems low risk, until it breaks something critical or quietly bypasses security controls that were never meant to be touched. And the tools? They simply lack the context to catch it.
We’ve already seen horror stories play out with vibe coders — developers building software with prompts and AI agents — pushing insecure features into production. And we’ve heard of plenty more quietly handled inside enterprises, breaches averted by luck, late-night heroics, or pure chance. This isn’t theoretical. It’s already happening.
Where Does That Leave Product Security?
Let me be crystal clear: These tools don’t know your threat models. They don’t care about your asset inventory. They sure as hell don’t know your compliance requirements. They don’t know that your legacy billing service is basically a haunted house held together by duct tape, broken dreams, and a regex pattern no one understands.
And now the window between “I had an idea” and “it’s in production” is … what? A couple of hours? What used to be design reviews, architecture sessions, a few spicy Slack threads, that’s all gone. It’s now: prompt, pull request, CR, merge. Done.
Manual product security processes? Dead on arrival. If your team still relies on humans to dig through design docs and flag every potential risk before a line of code is written, you’re already underwater. Best case, you slow things down and become that team. Worst case, you get quietly pushed out of the process entirely.
Some companies are trying to solve this by plugging static security rules into their AI dev tools, like that’s going to stop the bleeding. But rules without context don’t make things more secure. They just make the code more bloated. You end up with unnecessary security controls shoved into places they don’t belong, confusing developers and cluttering the codebase. The devs don’t argue. They just ship it and move on. Congrats, you’ve added security theater to the build pipeline.
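To make that concrete, here is a minimal, hypothetical sketch; the rule, the function, and every name in it are invented for illustration and not drawn from any real tool or codebase. It shows the kind of clutter a context-free "always validate input and check authorization" rule produces when the AI assistant applies it to an internal helper that only ever receives pre-validated data from a trusted batch job:

```python
# Hypothetical illustration (names and rule invented for this article):
# a context-free security rule bolted onto an internal-only helper.

import re

def normalize_invoice_ids(invoice_ids: list[str]) -> list[str]:
    """Uppercase, de-duplicate, and sort invoice IDs for a nightly batch job."""
    cleaned = set()
    for invoice_id in invoice_ids:
        # Redundant: the ingestion layer already enforces this format upstream.
        if not re.fullmatch(r"[A-Za-z0-9-]{1,32}", invoice_id):
            raise ValueError(f"invalid invoice id: {invoice_id!r}")
        # Pointless here: there is no user in this code path to authorize,
        # but the blanket rule demanded an authorization check anyway.
        # authorize(current_user, "invoice:read")  # hypothetical call
        cleaned.add(invoice_id.upper())
    return sorted(cleaned)

if __name__ == "__main__":
    print(normalize_invoice_ids(["abc-123", "ABC-123", "xyz-9"]))
```

Nothing in it is exploitable or even wrong in isolation, which is exactly why it sails through review; it just doesn’t make anything meaningfully safer.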
Even when these tools do catch something, let’s not confuse fixing bugs after the fact with building secure systems in the first place.
None of these tools are going to raise a flag when your architecture is fundamentally broken. They won’t catch it when a new feature opens up a nasty compliance gap. They won’t stop a dev from shipping an unauthenticated admin panel to production because the AI said, “Looks good to me.”
They weren’t built for that. And that’s the real problem.
Security teams are staring at a widening gap, and most are still trying to patch it with manual reviews, tribal knowledge, and wishful thinking. But as the pace of development accelerates to absurd speeds, those old methods just don’t scale. They’re already cracking under pressure.
The opportunity? Massive. AI-powered development has created an entirely new class of problems, ones the existing security stack was never built to solve. Securing applications generated at the speed of a prompt requires a new approach, a new mindset, and yes, new tools. Entire categories of security tooling will need to be rebuilt for this world. This isn’t a “feature gap.” It’s a market gap.
Where Do We Go From Here With AI Tools?
Security thinking has to shift upstream. It needs to be seamlessly embedded into planning, not patched onto delivery. It has to evolve from reactive reviews to proactive visibility. It needs to move with the development process, not try to slow it down from the sidelines.
This isn’t about waving red flags after the build pipeline is already green. This is about being in the room when the feature is defined, before a single prompt is typed into the AI tool.
Because the future isn’t coming. It’s already writing itself.
And if security doesn’t learn to speak that language, to exist inside that flow, it won’t just be left behind. It’ll be ignored completely.