Like humans, ChatGPT favors examples and ‘memories,’ not rules, to generate language

A new study led by researchers at the University of Oxford and the Allen Institute for AI (Ai2) has found that large language models (LLMs)—the AI systems behind chatbots like ChatGPT—generalize language patterns in a surprisingly human-like way: through analogy, rather than strict grammatical rules.