Large language models (LLMs) sometimes learn the wrong lessons, according to an MIT study. Rather than answering a query based on domain knowledge, an LLM could respond by leveraging grammatical patterns it learned during training. This can cause a model to fail unexpectedly when deployed on new tasks.
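To make the failure mode concrete, here is a minimal sketch (not the study's actual method) of how one might probe for this behavior: pair a real domain question with a syntactically identical question whose content words are meaningless, then compare the model's answers. The `query_model` function is a hypothetical placeholder for whatever LLM client is in use.

```python
def query_model(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call; swap in a real client."""
    raise NotImplementedError


def build_probe_pair(real_question: str, nonsense_question: str) -> dict:
    """Bundle a domain question with a syntactically identical nonsense twin."""
    return {"real": real_question, "nonsense": nonsense_question}


probes = [
    build_probe_pair(
        "Where is Paris located?",      # answerable from world knowledge
        "Where is blarkin situated?",   # same grammatical template, meaningless content
    ),
]

# Feed both prompts to query_model: if the model gives a confident,
# domain-style answer to the nonsense twin, that suggests it is matching
# the grammatical pattern rather than drawing on domain knowledge.
for probe in probes:
    for label, prompt in probe.items():
        print(f"[{label}] {prompt}")
```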