Johns Hopkins research shows artificial intelligence models fall short in predicting social interactions, a skill critical for systems to effectively navigate the real world.
Humans, it turns out, are better than current AI models at describing and interpreting social interactions in a moving scene—skills necessary for self-driving cars, assistive robots, and other technologies that rely on AI systems to navigate the real world.
The research, led by scientists at Johns Hopkins University, finds that artificial intelligence systems fail to understand the social dynamics and context necessary for interacting with people, and suggests the problem may be rooted in the infrastructure of AI systems.
"AI for a self-driving car, for example, would need to recognize the intentions, goals, and actions of human drivers and pedestrians. You would want it to know which way a pedestrian is about to start walking, or whether two people are in conversation versus about to cross the street," said lead author Leyla Isik, an assistant professor of cognitive science at Johns Hopkins. "Any time you want an AI to interact with humans, you want it to be able to recognize what people are doing. I think this sheds light on the fact that these systems can't right now."
Read more at Johns Hopkins University
Photo Credit: Icsilviu via Pixabay