Topological approach detects adversarial attacks in multimodal AI systems

New vulnerabilities have emerged with the rapid advancement and adoption of multimodal foundation AI models, significantly expanding the potential attack surface for cybersecurity threats. Researchers at Los Alamos National Laboratory have put forward a novel framework that identifies adversarial threats to foundation models, artificial intelligence systems that integrate and process both text and image data. The work helps system developers and security experts better understand model vulnerabilities and reinforce resilience against ever more sophisticated attacks.
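The article does not publish the details of the Los Alamos method, but a topological detector of this general kind summarizes the "shape" of a model's embedding space and flags inputs whose local neighborhoods look anomalous. The sketch below is a minimal, hypothetical illustration of that idea using persistent homology via the open-source ripser package; it is not the authors' framework. The neighborhood construction, the total-persistence statistic, and the z-score threshold are all assumptions chosen for demonstration.

```python
# Hypothetical sketch only: a generic persistent-homology anomaly check on
# embedding neighborhoods, NOT the Los Alamos framework (which the article
# does not describe in detail). Assumes the open-source `ripser` package
# (pip install ripser) and precomputed embeddings from a multimodal model.
import numpy as np
from ripser import ripser

def total_persistence(points, maxdim=1):
    """Sum of (death - birth) over all finite bars: a crude scalar summary
    of the 'shape' of a point cloud in homology dimensions 0..maxdim."""
    dgms = ripser(points, maxdim=maxdim)["dgms"]
    total = 0.0
    for dgm in dgms:
        finite = dgm[np.isfinite(dgm[:, 1])]  # drop the infinite H0 bar
        total += float(np.sum(finite[:, 1] - finite[:, 0]))
    return total

def neighborhood_score(center, reference_embs, k):
    """Topological summary of a point together with its k nearest reference
    neighbors. (If the center is itself a reference point, its duplicate
    contributes zero persistence, so the score is unaffected.)"""
    dists = np.linalg.norm(reference_embs - center, axis=1)
    nbrs = reference_embs[np.argsort(dists)[:k]]
    return total_persistence(np.vstack([nbrs, center]))

def flag_adversarial(query_emb, reference_embs, k=32, n_baseline=50,
                     z_thresh=3.0, seed=0):
    """Flag a query whose neighborhood topology is an outlier relative to
    a baseline built from randomly chosen known-clean reference points.

    query_emb:      (d,) embedding of the input under test.
    reference_embs: (N, d) embeddings of known-clean inputs, N >= n_baseline.
    """
    rng = np.random.default_rng(seed)
    score = neighborhood_score(query_emb, reference_embs, k)
    idxs = rng.choice(len(reference_embs), size=n_baseline, replace=False)
    baseline = [neighborhood_score(reference_embs[i], reference_embs, k)
                for i in idxs]
    mu, sigma = np.mean(baseline), np.std(baseline) + 1e-12
    return abs(score - mu) / sigma > z_thresh  # True = suspicious
```

In practice, k and the threshold would be calibrated on held-out clean data, and a serious detector would likely compare full persistence diagrams (and the text and image modalities separately) rather than reducing each neighborhood to a single scalar.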