Study says AI chatbots need to fix suicide response, as family sues over ChatGPT role in boy's death

A study of how three popular artificial intelligence chatbots respond to queries about suicide found that they generally avoid answering questions that pose the highest risk to the user, such as requests for specific how-to guidance. But they are inconsistent in their replies to less extreme prompts that could still harm people.