New mitigation framework reduces bias in classification outcomes

We use computers to help us make (hopefully) unbiased decisions. The problem is that machine-learning algorithms do not always make fair classifications if human bias is embedded in the data used to train them, which is often the case in practice.
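To make that point concrete, here is a minimal, hypothetical sketch (not the framework described in the study): synthetic data in which a sensitive "group" attribute is penalized in the historical labels, a scikit-learn logistic regression trained on that data, and a demographic-parity gap measured on its predictions. All names, data, and the choice of metric are illustrative assumptions.

```python
# Hypothetical illustration only: a classifier trained on biased historical
# labels reproduces the disparity in its own predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute (0/1) and a legitimate "skill" feature.
group = rng.integers(0, 2, size=n)
skill = rng.normal(loc=0.0, scale=1.0, size=n)

# Historical labels encode human bias: group 1 is penalized regardless of skill.
logits = 1.5 * skill - 1.0 * group
label = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([skill, group])
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, label, group, test_size=0.3, random_state=0
)

clf = LogisticRegression().fit(X_train, y_train)
pred = clf.predict(X_test)

# Demographic-parity gap: difference in positive-prediction rates between groups.
rate_0 = pred[g_test == 0].mean()
rate_1 = pred[g_test == 1].mean()
print(f"positive rate, group 0: {rate_0:.2f}")
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"demographic-parity gap:  {rate_0 - rate_1:.2f}")
```

Running the sketch shows a sizable gap in positive-prediction rates between the two groups, even though the classifier was never told to discriminate; the bias travels from the labels into the model.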