Study finds bias in language models against non-binary users

What happens when the technology meant to protect marginalized voices ends up silencing them? Rebecca Dorn, a research assistant at USC Viterbi's Information Sciences Institute (ISI), has uncovered how large language models (LLMs) used to moderate online content are failing queer communities by misinterpreting their language.