The CISO Brief

Dark LLMs: The Unseen Ally of Petty Crime

By Staff Writer | November 26, 2025

Fast Facts

  1. Limited Impact: AI malware-generation tools such as WormGPT and KawaiiGPT aid low-level hackers but have not significantly changed the cyber threat landscape or proven effective in real-world attacks.

  2. Dark LLM Utilization: These tools assist novice hackers by generating basic malicious code and phishing emails, facilitating attacks without advanced sophistication or innovation.

  3. Market Dynamics: The market for dark LLMs is vibrant, with tools sold by subscription and active user communities, yet there is little evidence that widespread adoption has made them effective in cybercrime.

  4. Underwhelming Capabilities: Despite hype, AI malware’s functionality is limited; major advancements in AI do not translate into significant improvements in malware sophistication or effectiveness.


Artificial-intelligence-generated malware hasn’t yet lived up to everyone’s fears, but it is helping script kiddies and foreign language speakers smooth out the kinks in their cyberattacks.

On Nov. 30, 2022, developers in San Francisco released a chatbot that could Google things for you, or write poems like Robert Frost, in fractions of a second. It stoked the imagination. For many in cybersecurity, the implication was obvious: soon large language models (LLMs) would be able to write malware, and even carry out autonomous cyberattacks on behalf of bad actors. And, some argued, a dystopian sci-fi future was already here.

Three years later, it feels like a good time to take stock. In a new blog post, Palo Alto Networks’ Unit 42 reviews two of the leading “dark” LLMs on the market today: WormGPT 4, and KawaiiGPT. What stands out about WormGPT 4 and KawaiiGPT is both how useful they are to low-level hackers, and how totally flaccid they are in every other respect. Both are capable of writing rudimentary malware and grammatically correct phishing emails for hackers operating across language barriers, and generally aiding script kiddies through different phases of an attack chain. And that’s about it.

What Dark LLMs Can Do for Cybercriminals


Every pundit’s prophecy of an AI cyber-pocalypse seemed to have been confirmed when, in the summer of 2023, a malware-as-a-service (MaaS) product called WormGPT hit the underground market.

WormGPT was marketed as a cutting-edge chatbot without all of those pesky guardrails that hackers got snagged on when they tried playing funny with ChatGPT. Allegedly, it was built on the open source LLM GPT-J 6B and trained on phishing, malware, and exploit samples. For tens to hundreds of dollars a month, cybercriminals could use WormGPT to write snippets of basic malicious code and create clean, persuasive phishing messages.

There’s scant evidence that WormGPT had any significant impact on real malicious activity in the wild. But as a proof-of-concept (PoC), it sufficiently spooked the cybersecurity community and inspired a variety of knockoffs in the cyber underground, most notably WormGPT 4.

Like its spiritual predecessor, WormGPT 4 is marketed as “AI without boundaries,” featuring “advanced capabilities [to] generate any content, and access information without limits or censorship.” When Unit 42 researchers prompted WormGPT 4 for resources it could use in ransomware attacks, it generated a hackneyed but grammatically flawless ransom note, and a locker for PDF files that could be configured to attack other file extensions and use Tor for data exfiltration.


[Image: A WormGPT 4-generated ransom note. Source: Palo Alto Networks’ Unit 42]

The researchers also tested out one of WormGPT 4’s competitors, KawaiiGPT. KawaiiGPT drafted competent, if dry, phishing messages and ransom notes, and simple but functional Python scripts for data exfiltration. It could also perform lateral movement on a Linux host.

[Image: KawaiiGPT-generated malware. Source: Palo Alto Networks’ Unit 42]

Are Dark LLMs Actually Having Any Impact on Cybercrime?

KawaiiGPT’s free access and its competence in helping novice hackers through every step of an attack chain have helped it earn a modest following. In a message sent to a 180-member Telegram channel, KawaiiGPT’s creator claimed that the tool has reached more than 500 registered users, around half of whom are active.

WormGPT 4, meanwhile, is sold using a tiered subscription model, but its Telegram community is larger, with more than 500 subscribers.

Oded Vanunu, chief technologist and head of products vulnerability research at Check Point, notes that the market for dark LLMs like these is in some ways flourishing. 

“Hackers are actively competing and developing tools that build on predecessors like WormGPT,” he says. “Commercial dark LLMs are sold for money, [and] skilled actors are building proprietary models and integrating them directly into their local infrastructure using configuration methods, bypassing the commercial market altogether. The market is thus both commercial and privately developed.”


All this might suggest that dark LLMs are having a real impact in the cyber threat landscape today. However, even three years on, researchers seem to lack hard evidence to prove it. “It is nearly impossible to track if dark LLMs are widely adopted or not,” admits Andy Piazza, senior director of threat intelligence for Unit 42, because researchers lack the tools necessary to detect AI’s hand in malicious artifacts, except for those rare cases where the attackers tip their hands.

AI Malware Remains Impotent

For all of the help they provide to low-level hackers, what also stands out about WormGPT 4 and KawaiiGPT is just how technically underwhelming they are, at least compared to popular predictions about AI malware in the media.

Kyle Wilhoit, Unit 42’s director of threat research, points to a few reasons why these tools are lagging. “LLMs still hallucinate, generating plausible-looking but factually incorrect code,” he says, as one example. “The often abstract knowledge necessary to create a fully functioning malware sample is difficult for a dark LLM to construct. I also think that human oversight is still required to check for hallucinations or adapt to network specifics, for example.”

The bottom line, Vanunu says, is that “advancement is slow because AI currently brings no new technological gap or advantage to the fundamental mechanics of the cyberattack process.” As evidenced by their well-worn malware tricks and trite ransom notes, today’s most popular dark LLMs are still just copying from artifacts already available on the Web, rather than producing novel outputs that move the needle.

Thankfully, that means that all of the talk of AI malware versus AI defenses was premature. “The reality is that the vast majority of the dark-LLM generated malware is based on known malware samples,” Piazza says, “which means we have existing tools and signatures in place to detect the common malware techniques.”
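Piazza’s point can be sketched in miniature. Because dark-LLM output largely recycles known malware, even the crudest form of signature matching, comparing a payload’s hash against a database of previously seen samples, still catches it. Everything below is illustrative: the payloads and signature database are hypothetical stand-ins, and real detection engines match byte patterns and behavior (e.g., YARA rules), not just whole-file hashes.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of a payload."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical corpus of previously observed malicious payloads. A real
# engine holds millions of signatures; these byte strings are stand-ins.
KNOWN_SAMPLES = [b"known-malicious-payload-A", b"known-malicious-payload-B"]
SIGNATURE_DB = {sha256_of(sample) for sample in KNOWN_SAMPLES}

def is_known_sample(data: bytes) -> bool:
    """Exact-match detection: flag a payload whose hash is already known."""
    return sha256_of(data) in SIGNATURE_DB

# A dark LLM that regurgitates a known sample verbatim is trivially caught...
print(is_known_sample(b"known-malicious-payload-A"))   # True
# ...while even a one-byte mutation defeats exact hashing, which is why
# real engines also match structural patterns and runtime behavior.
print(is_known_sample(b"known-malicious-payload-A!"))  # False
```

The gap between the two calls is exactly the one the researchers describe: exact-match signatures only work as long as attackers keep reusing known code, which, for now, is what most dark-LLM output does.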



Staff Writer
John Marcelli is a staff writer for the CISO Brief, with a passion for exploring and writing about the ever-evolving world of technology. From emerging trends to in-depth reviews of the latest gadgets, John stays at the forefront of innovation, delivering engaging content that informs and inspires readers. When he's not writing, he enjoys experimenting with new tech tools and diving into the digital landscape.
