A common language to describe and assess human–agent teams

Understanding how humans and AI or robotic agents can work together effectively requires a shared foundation for experimentation. A University of Michigan-led team developed a new taxonomy to serve as a common language among researchers, then used it to evaluate the testbeds currently used to study how human–agent teams perform.
