AI can help you die by suicide if you ask the right way, researchers say

Most of the companies behind large language models like ChatGPT claim to have guardrails in place, for understandable reasons. They wouldn’t want their models to, hypothetically, offer users instructions on how to hurt themselves or die by suicide.