Humans and AI models show similar confusion when reading tricky program code

Researchers at Saarland University and the Max Planck Institute for Software Systems have shown for the first time that humans and large language models (LLMs) respond in strikingly similar ways to complex or misleading program code. By comparing the brain activity of study participants with measures of model uncertainty, they found that the two significantly align.