Getting AIs working toward human goals: Study shows how to measure misalignment

Ideally, artificial intelligence agents aim to help humans, but what does that mean when humans want conflicting things? My colleagues and I have developed a way to measure how well the goals of a group of humans and AI agents are aligned.
