A new generative AI tool known as GhostGPT is raising major cybersecurity concerns after being repurposed for criminal use. Discovered in late 2024, GhostGPT reportedly allows cybercriminals to bypass safeguards found in models like ChatGPT, enabling the creation of malware, phishing content, and step-by-step attack strategies.
Unlike mainstream AI tools, GhostGPT reportedly does not log user activity, offering anonymity for illegal acts. "It's now a case of not if such attacks happen, but when," experts warned, noting its ability to create convincing phishing emails and spoofed login portals.
The UK's 2024 Cyber Security Breaches Survey found phishing remains the most common attack type, affecting 84% of businesses and 83% of charities that reported a breach. GhostGPT also enables malware generation, including polymorphic variants whose creation previously required expert skill.
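Polymorphic malware is dangerous precisely because traditional signature-based scanners key on an exact byte sequence. The minimal sketch below (illustrative only; the payload bytes are placeholders, not real malware) shows how even a trivial mutation produces a completely different fingerprint, defeating hash-based detection:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 fingerprint a signature database might store."""
    return hashlib.sha256(data).hexdigest()

# Placeholder standing in for a malicious binary's bytes.
payload = b"...functionally identical malicious logic..."
# A polymorphic engine might pad, re-encrypt, or reorder the same logic;
# here, appending a few no-op bytes is enough to change the hash.
variant = payload + b"\x90" * 8

# The two samples behave identically, yet their signatures do not match,
# so a scanner looking for the original hash misses the variant.
assert sha256_hex(payload) != sha256_hex(variant)
```

This is why defenders increasingly pair signatures with behavioral and AI-driven detection, which looks at what code does rather than what its bytes are.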
Experts urge firms to adopt multi-factor authentication, AI-powered threat detection, and employee training. As analyst Ryan Estes notes, “Understanding how tools like GhostGPT work… will become a differentiator” in modern cybersecurity defense.
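Of the recommended defenses, multi-factor authentication is the most concrete. As an illustrative sketch (not drawn from the article), the time-based one-time password scheme (TOTP, RFC 6238) behind most authenticator apps can be implemented with Python's standard library alone:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of time steps since the Unix epoch.
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: low nibble of last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

Because the code changes every 30 seconds and is derived from a shared secret, a phishing email that captures only a password is no longer enough on its own, which is the property that makes MFA effective against the attacks described above.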