AI generates data to help embodied agents ground language to 3D world

A new, densely annotated 3D-text dataset called 3D-GRAND can help train embodied AI, such as household robots, to connect language to 3D spaces. The study, led by University of Michigan researchers, was presented at the Computer Vision and Pattern Recognition (CVPR) Conference in Nashville, Tennessee, on June 15 and is published on the arXiv preprint server.