GhostGPT: The Uncensored AI Chatbot Empowering Cybercriminals
A newly launched AI chatbot, GhostGPT, has become a powerful tool for cybercriminals. It enables them to develop malware, execute business email compromise (BEC) scams, and perform other illicit activities with ease.
Unlike mainstream AI systems like ChatGPT, Claude, Google Gemini, and Microsoft Copilot, GhostGPT is an uncensored AI model. It has been deliberately designed to bypass ethical safeguards and security restrictions, making it highly appealing to bad actors. According to Abnormal Security researchers, GhostGPT can generate malicious code and respond to harmful queries that legitimate AI platforms typically block.
Marketed as a tool for creating malware, coding exploits, and crafting phishing emails, GhostGPT has already demonstrated its capabilities by producing a convincing DocuSign phishing email during a test. Abnormal Security first discovered GhostGPT being sold on Telegram in mid-November. Since then, the chatbot has gained traction among cybercriminals. It is offered with three pricing tiers: $50 for one week, $150 for one month, and $300 for three months. The creators of GhostGPT also claim the platform doesn’t log user activity, adding to its appeal for those seeking anonymity.
Rogue AI chatbots like GhostGPT are a growing concern for security professionals. They lower the barrier to entry for cybercrime, enabling even individuals with limited coding knowledge to create sophisticated malware. While earlier malicious AI tools like WormGPT and EscapeGPT struggled to gain widespread adoption, GhostGPT's growing popularity suggests a shift. Researchers suspect it may be a jailbroken version of an existing large language model wrapped in a custom interface.
The creators of GhostGPT have recently become more cautious, deactivating their promotional accounts and shifting to private sales, which makes their operations harder to trace. Even so, the rise of tools like GhostGPT underscores the urgent need for stronger measures to counter the misuse of AI in cybercrime.