Like humans, ChatGPT favors examples and ‘memories,’ not rules, to generate language

A new study led by researchers at the University of Oxford and the Allen Institute for AI (Ai2) has found that large language models (LLMs), the AI systems behind chatbots like ChatGPT, generalize language patterns in a surprisingly human-like way: through analogy, rather than strict grammatical rules.