GhostGPT, a newly introduced AI chatbot, has become a valuable tool for cybercriminals seeking to develop malware, carry out business email compromise scams, and engage in other illegal activities.
Unlike mainstream AI systems such as ChatGPT, Claude, Google Gemini, and Microsoft Copilot, GhostGPT is an uncensored AI model: it operates without the safety guardrails and ethical constraints those services enforce, much like earlier rogue chatbots such as WormGPT.
GenAI With No Guardrails: Uncensored Behavior
GhostGPT returns unfiltered responses to sensitive or harmful queries that mainstream chatbots would refuse, letting bad actors generate malicious code and fraudulent business emails on demand.
Abnormal Security researchers discovered GhostGPT for sale on a Telegram channel, with pricing ranging from $50 for one week of access to $300 for three months. The chatbot promises fast responses, requires no jailbreak prompts, and claims not to log user activity.
Rogue Chatbots: An Emerging Cybercriminal Problem
Rogue AI chatbots like GhostGPT pose a significant challenge for security organizations because they lower the barrier to cybercrime: attackers with little or no coding skill can generate working malicious code, and more experienced ones can refine existing malware, all without having to jailbreak a mainstream GenAI model.
GhostGPT follows earlier uncensored AI models such as WormGPT and EscapeGPT, which failed to gain lasting traction amid a lack of transparency and unfulfilled promises. GhostGPT’s creators have remained elusive, operating in underground circles and shifting to private sales.