Researcher develops a security-focused large language model to defend against malware

Security was top of mind when Dr. Marcus Botacin, assistant professor in the Department of Computer Science and Engineering, heard about large language models (LLMs) like ChatGPT. LLMs are a type of AI that can quickly craft text. Some LLMs, including ChatGPT, can also generate computer code. Botacin became concerned that attackers would use these capabilities to rapidly write massive amounts of malware.