Commentary on article on coding hate speech offers nuanced look at limits of AI systems

Large language models (LLMs) are artificial intelligence (AI) systems that can understand and generate human language by analyzing and processing large amounts of text. In a new essay, a Carnegie Mellon University researcher critiques an article on LLMs and offers a nuanced look at the models' limits for analyzing sensitive discourse, such as hate speech. The commentary is published in the Journal of Multicultural Discourses.