Dark LLMs: It’s still easy to trick most AI chatbots into providing harmful information, study finds

A group of AI researchers at Ben Gurion University of the Negev, in Israel, has found that despite efforts by large language model (LLM) makers, most commonly available chatbots are still easily tricked into generating harmful and sometimes illegal information.