User language distorts ChatGPT information on armed conflicts, study shows

When asked in Arabic about the number of civilian casualties in the Middle East conflict, ChatGPT gives significantly higher figures than when the same question is posed in Hebrew, according to a new study by the Universities of Zurich and Constance. These systematic discrepancies can reinforce biases around armed conflicts and encourage information bubbles.