Generative AI models, including popular chat assistants, face a new and concerning threat: a computer worm capable of infiltrating AI systems. Both OpenAI’s ChatGPT and Google’s Gemini are at risk. In a recent study, a team of researchers demonstrated how this “zero-click” worm can deceive AI email helpers, leading them to unwittingly steal personal information and send unsolicited emails.
How the Worm Operates:
- The worm leverages both text and images to compromise AI models.
- By injecting harmful prompts, it tricks the AI into repeating these prompts and executing harmful actions without any direct user commands.
- The consequences are serious: from phishing attempts to spam emails and the spread of false information.
The Urgent Need for Security:
- The creators of this worm—researchers from Cornell University, Intuit, and Israel’s Technion—have named it “Morris II,” paying homage to an early computer worm.
- Vulnerabilities in AI system design allowed the creation of this worm.
- Similar worms could be weaponized for large-scale attacks, putting more AI tools at risk.
- Rigorous input validation and filtering are crucial to prevent such attacks.
OpenAI’s Response:
- OpenAI is actively working to enhance system security and protection.
- This situation underscores the critical importance of securing AI models against emerging threats.
Remember, no computer system or AI model is entirely immune to viruses. Vigilance and proactive measures are essential to safeguard our digital landscape.