Fast Facts
- Limited Impact: AI-generated malware, such as WormGPT and KawaiiGPT, aids low-level hackers but has not significantly changed the cyber threat landscape or proven effective in real-world attacks.
- Dark LLM Utilization: These tools assist novice hackers by generating basic malicious code and phishing emails, facilitating attacks without advanced sophistication or innovation.
- Market Dynamics: The market for dark LLMs is vibrant, with tools available for subscription and a healthy community of users, yet there is little evidence that they are widely adopted or effective in real-world cybercrime.
- Underwhelming Capabilities: Despite the hype, AI malware's functionality is limited; major advancements in AI have not translated into significant improvements in malware sophistication or effectiveness.
Artificial-intelligence-generated malware hasn’t yet lived up to everyone’s fears, but it is helping script kiddies and foreign language speakers smooth out the kinks in their cyberattacks.
On Nov. 30, 2022, developers in San Francisco released a chatbot that could Google things for you, or write poems like Robert Frost, in fractions of a second. It stoked the imagination. For many in cybersecurity, the implication was obvious: soon large language models (LLMs) would be able to write malware, and even carry out autonomous cyberattacks on behalf of bad actors. And, some argued, a dystopian sci-fi future was already here.
Three years later, it feels like a good time to take stock. In a new blog post, Palo Alto Networks’ Unit 42 reviews two of the leading “dark” LLMs on the market today: WormGPT 4 and KawaiiGPT. What stands out about WormGPT 4 and KawaiiGPT is both how useful they are to low-level hackers, and how underwhelming they are in every other respect. Both can write rudimentary malware and grammatically correct phishing emails for hackers operating across language barriers, and generally aid script kiddies through different phases of an attack chain. And that’s about it.
What Dark LLMs Can Do for Cybercriminals
Every pundit’s prophecy of an AI cyber-pocalypse seemed to have been confirmed when, in the summer of 2023, a malware-as-a-service (MaaS) product called WormGPT hit the underground market.
WormGPT was marketed as a cutting-edge chatbot without all of those pesky guardrails that hackers got snagged on when they tried to misuse ChatGPT. Allegedly, it was built on the open-source LLM GPT-J 6B and trained on phishing, malware, and exploit samples. For tens to hundreds of dollars a month, cybercriminals could use WormGPT to write snippets of basic malicious code and create clean, persuasive phishing messages.
There’s scant evidence that WormGPT had any significant impact on real malicious activity in the wild. But as a proof-of-concept (PoC), it sufficiently spooked the cybersecurity community and inspired a variety of knockoffs in the cyber underground, most notably WormGPT 4.
Like its spiritual predecessor, WormGPT 4 is marketed as “AI without boundaries,” featuring “advanced capabilities [to] generate any content, and access information without limits or censorship.” When Unit 42 researchers prompted WormGPT 4 for resources it could use in ransomware attacks, it generated a hackneyed but grammatically flawless ransom note, and a locker for PDF files that could be configured to attack other file extensions and use Tor for data exfiltration.
Source: Palo Alto Networks’ Unit 42
The researchers also tested out one of WormGPT 4’s competitors, KawaiiGPT. KawaiiGPT drafted competent, if dry, phishing messages and ransom notes, and simple but functional Python scripts for data exfiltration. It could also perform lateral movement on a Linux host.
Source: Palo Alto Networks’ Unit 42
Are Dark LLMs Actually Having Any Impact on Cybercrime?
KawaiiGPT’s free access and its competence in helping novice hackers through every step of an attack chain have helped it earn a modest following. In a message sent to a 180-member Telegram channel, KawaiiGPT’s creator claimed that the tool has reached more than 500 registered users, around half of whom are active.
WormGPT 4, meanwhile, is sold using a tiered subscription model, but its Telegram community is larger, with more than 500 subscribers.
Oded Vanunu, chief technologist and head of products vulnerability research at Check Point, notes that the market for dark LLMs like these is in some ways flourishing.
“Hackers are actively competing and developing tools that build on predecessors like WormGPT,” he says. “Commercial dark LLMs are sold for money, [and] skilled actors are building proprietary models and integrating them directly into their local infrastructure using configuration methods, bypassing the commercial market altogether. The market is thus both commercial and privately developed.”
All this might suggest that dark LLMs are having a real impact in the cyber threat landscape today. However, even three years on, researchers seem to lack hard evidence to prove it. “It is nearly impossible to track if dark LLMs are widely adopted or not,” admits Andy Piazza, senior director of threat intelligence for Unit 42, because researchers lack the tools necessary to detect AI’s hand in malicious artifacts, except for those rare cases where the attackers tip their hands.
AI Malware Remains Impotent
For all of the help they provide to low-level hackers, what also stands out about WormGPT 4 and KawaiiGPT is just how technically underwhelming they are, at least compared to popular predictions about AI malware in the media.
Kyle Wilhoit, Unit 42’s director of threat research, points to a few reasons why these tools are lagging. “LLMs still hallucinate, generating plausible-looking but factually incorrect code,” he says, as one example. “The often abstract knowledge necessary to create a fully functioning malware sample is difficult for a dark LLM to construct. I also think that human oversight is still required to check for hallucinations or adapt to network specifics, for example.”
The bottom line, Vanunu says, is that “advancement is slow because AI currently brings no new technological gap or advantage to the fundamental mechanics of the cyberattack process.” As evidenced by their well-worn malware tricks and trite ransom notes, the most popular dark LLMs are still just copying from artifacts already available on the Web, rather than producing novel outputs that move the needle.
Thankfully, that means that all of the talk of AI malware versus AI defenses was premature. “The reality is that the vast majority of the dark-LLM generated malware is based on known malware samples,” Piazza says, “which means we have existing tools and signatures in place to detect the common malware techniques.”
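Because dark-LLM output is largely derived from known malware samples, the simplest existing defense still applies: matching artifacts against a database of signatures for previously seen samples. The sketch below is a minimal, purely illustrative example of hash-based signature matching; the "signature database" entry here is just the SHA-256 of an empty input, used as a stand-in for a real known-bad hash.

```python
# Illustrative sketch of signature-based detection: flag any sample whose
# cryptographic hash matches a database of previously seen malware.
# The hash below is a placeholder (SHA-256 of empty input), not a real signature.
import hashlib

KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(data: bytes) -> str:
    """Compute the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def is_known_malware(sample: bytes) -> bool:
    """Return True if the sample's hash appears in the signature database."""
    return sha256_of(sample) in KNOWN_BAD_HASHES
```

Real products layer fuzzier techniques on top (byte-pattern rules, behavioral heuristics), but the principle is the same: recycled code keeps matching the signatures already on file.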
