AI thinks like us—flaws and all: Study finds ChatGPT mirrors human decision biases in half the tests

Can we really trust AI to make better decisions than humans? A new study says: not always. Researchers have found that OpenAI’s ChatGPT, one of the most advanced and popular AI models, makes the same kinds of decision-making mistakes as humans in some situations, showing biases such as overconfidence and the hot-hand (gambler’s) fallacy, yet behaves unlike humans in others, for example by avoiding base-rate neglect and the sunk-cost fallacy.