Malicious large language models like WormGPT 4 and KawaiiGPT are effectively cybercrime starter kits, letting low-skill attackers churn out ransomware code, phishing emails, and data-exfiltration scripts with minimal effort.

Researchers at Palo Alto Networks Unit 42 put both tools to the test. WormGPT 4 produced functional PowerShell scripts that encrypt files with AES-256, exfiltrate stolen data over Tor, and generate convincing ransom notes that read like the work of a seasoned operator. KawaiiGPT, a free alternative, proved just as capable, generating polished phishing lures and Python scripts for lateral movement that could escalate privileges or steal sensitive files in minutes.

Both tools are thriving in underground Telegram communities, making it easier for inexperienced attackers to scale up operations and avoid the telltale newbie mistakes, like poor grammar, that once made phishing easy to spot.

For SMBs and MSPs, this means tightening your defenses isn't optional. Start by auditing your AI integrations and brushing up on best practices for spotting and blocking these LLM-fueled threats before they encrypt your data and demand bitcoin.