New insight into why LLMs are not great at cracking passwords

Large language models (LLMs), such as the model underpinning OpenAI’s conversational platform ChatGPT, have proven to perform well on a variety of language and coding tasks. Some computer scientists have recently been exploring the possibility that malicious users and hackers could also use these models to plan cyber-attacks or to access people’s personal data.
