Seeking moral advice from large language models comes with risk of hidden biases

More and more people are turning to large language models like ChatGPT for life advice and free therapy, in part because these systems are sometimes perceived as a space free from human biases. A new study published in the Proceedings of the National Academy of Sciences finds otherwise and warns against relying on LLMs to resolve moral dilemmas, as their responses exhibit significant cognitive bias.