Joyce Chai: Teaching Robots New Tasks Through Language Instructions

19 May 2017
CITEC, 1.204


Enabling situated communication and collaboration with artificial agents (e.g., robots) presents many challenges. Humans and artificial agents have different levels of linguistic, world, and task knowledge, as well as mismatched capabilities for perceiving and reasoning about the shared environment and joint tasks. These differences significantly erode the common ground between humans and robots, making language-based communication extremely difficult. To address this problem, we have developed several collaborative models for language processing that aim to mediate perceptual differences between humans and robots. We have also developed approaches that allow robots to continuously acquire knowledge about the environment, actions, and tasks by communicating with humans. In this talk, I will give an introduction to this research effort, focusing in particular on task learning through interactive language instructions.

Brief Bio:

Joyce Chai is a Professor in the Department of Computer Science and Engineering at Michigan State University. She received a Ph.D. in Computer Science from Duke University in 1998. Her research interests include natural language processing, situated dialogue agents, information extraction and retrieval, and intelligent user interfaces. Her recent work has focused on grounded language processing to facilitate situated communication with robots and other artificial agents. She served as Program Co-chair for the Special Interest Group on Discourse and Dialogue (SIGDIAL) in 2011, the ACM International Conference on Intelligent User Interfaces (IUI) in 2014, and the North American Chapter of the Association for Computational Linguistics – Human Language Technologies (NAACL-HLT) in 2015. She is a recipient of a National Science Foundation CAREER Award (2004) and the Best Long Paper Award at the Annual Meeting of the Association for Computational Linguistics (ACL) in 2010.