Commentary on article on coding hate speech offers nuanced look at limits of AI systems

Large language models (LLMs) are artificial intelligence (AI) systems that understand and generate human language by processing large amounts of text. In a new essay, a Carnegie Mellon University researcher critiques an article on using LLMs to code hate speech, offering a nuanced look at the models' limits for analyzing sensitive discourse. The commentary is published in the Journal of Multicultural Discourses.