Commentary on article on coding hate speech offers nuanced look at limits of AI systems

Large language models (LLMs) are artificial intelligence (AI) systems that can understand and generate human language by analyzing and processing large amounts of text. In a new essay, a Carnegie Mellon University researcher critiques an article on using LLMs to code hate speech and offers a nuanced look at the models' limits for analyzing sensitive discourse. The commentary is published in the Journal of Multicultural Discourses.