LLMs use grammar shortcuts that undermine reasoning, creating reliability risks

Large language models (LLMs) sometimes learn the wrong lessons, according to an MIT study. Rather than answering a query based on domain knowledge, an LLM may instead lean on grammatical patterns it picked up during training. This can cause the model to fail unexpectedly when it is deployed on new tasks.
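One way to make this failure mode concrete is to probe a model with pairs of prompts that share the same syntactic template but differ in content. The sketch below is not the study's code; `query_model` and the example prompts are illustrative placeholders. The idea is simply this: if a model returns a confident, domain-style answer to a grammatically familiar but meaningless prompt, it is likely keying on sentence structure rather than on what it knows.

```python
from typing import Callable, Iterable, List, Tuple


def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; swap in your provider's API."""
    raise NotImplementedError


# Each pair shares a syntactic template (e.g. "Where is <noun phrase> located?").
# The second prompt keeps the grammar but swaps the content words for nonsense,
# so domain knowledge cannot justify a confident answer.
PROMPT_PAIRS: List[Tuple[str, str]] = [
    ("Where is Paris located?", "Where is the quiet blorp located?"),
    ("What does aspirin treat?", "What does flermic glib treat?"),
]


def probe(pairs: Iterable[Tuple[str, str]],
          model: Callable[[str], str] = query_model) -> None:
    for real, nonsense in pairs:
        real_answer = model(real)
        nonsense_answer = model(nonsense)
        # A confident, domain-style answer to the nonsense prompt suggests the
        # model is matching the grammatical pattern, not drawing on knowledge.
        print(f"{real!r} -> {real_answer!r}")
        print(f"{nonsense!r} -> {nonsense_answer!r}")


if __name__ == "__main__":
    probe(PROMPT_PAIRS)
```

A real evaluation would need many template families and a principled way of scoring the responses; the sketch is only meant to show what "answering from grammar rather than knowledge" looks like in practice.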
