Dark LLMs: It's still easy to trick most AI chatbots into providing harmful information, study finds

A group of AI researchers at Ben Gurion University of the Negev, in Israel, has found that despite efforts by large language model (LLM) makers, most commonly available chatbots are still easily tricked into generating harmful and sometimes illegal information.