Keynote Talk 9:00-10:30 a.m.: Michael Beetz | Automated Models of Everyday Activity
Moderated by Thomas Schack
Abstract: Recently we have witnessed the first robotic agents performing everyday manipulation activities such as loading a dishwasher and setting a table. While these agents successfully accomplish specific instances of these tasks, they only perform them within the narrow range of conditions for which they have been carefully designed. They are still far from achieving the human ability to autonomously perform a wide range of everyday tasks reliably in a wide range of contexts. In other words, they are far from mastering everyday activities. Mastering everyday activities is an important step for robots to become the competent (co-)workers, assistants, and companions who are widely considered a necessity for dealing with the enormous challenges our aging society is facing. Modern simulation-based game technologies give us, for the first time, the opportunity to acquire the commonsense and naive physics knowledge needed for the mastery of everyday activities in a comprehensive way. In this talk, I will describe AMEvA (Automated Models of Everyday Activities), a special-purpose knowledge acquisition, interpretation, and processing system for human everyday manipulation activity that can automatically (1) create and simulate virtual human living and working environments (such as kitchens and apartments) with a scope, extent, level of detail, physics, and photo-realism that facilitates and promotes the natural and realistic execution of human everyday manipulation activities; (2) record human manipulation activities performed in the respective virtual reality environment as well as their effects on the environment, and detect force-dynamic states and events; (3) decompose and segment the recorded activity data into meaningful motions and categorize the motions according to action models used in cognitive science; and (4) represent the interpreted activities symbolically in KnowRob using first-order time interval logic.
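To make step (3) concrete, the sketch below segments a recorded activity stream into coarse motions (reach, grasp, transport, release) using detected force-dynamic contact events. It is an illustrative toy example only, with assumed event names and data layout, not the actual AMEvA or KnowRob representation:

    # Illustrative sketch: segmenting a recorded activity stream by
    # force-dynamic events, in the spirit of AMEvA step (3). Event names,
    # motion categories, and the data layout are assumptions for illustration.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Event:
        time: float   # seconds since start of the recording
        kind: str     # e.g. "contact_started", "contact_ended"
        hand: str     # "left" or "right"
        obj: str      # object the hand touched or released

    def segment(events: List[Event], end_time: float):
        """Split the recording into motion segments delimited by
        force-dynamic events and assign a coarse category to each."""
        segments = []
        t_prev, holding = 0.0, None
        for ev in sorted(events, key=lambda e: e.time):
            # The phase before a contact change is a reach (hand empty)
            # or a transport (hand carrying an object).
            segments.append((t_prev, ev.time, "transport" if holding else "reach"))
            if ev.kind == "contact_started":
                holding = ev.obj
                segments.append((ev.time, ev.time, f"grasp({ev.obj})"))
            elif ev.kind == "contact_ended":
                segments.append((ev.time, ev.time, f"release({holding})"))
                holding = None
            t_prev = ev.time
        segments.append((t_prev, end_time, "transport" if holding else "idle"))
        return segments

    # Example: one pick-and-place episode recorded in virtual reality.
    log = [
        Event(1.2, "contact_started", "right", "cup"),
        Event(3.8, "contact_ended", "right", "cup"),
    ]
    for start, end, label in segment(log, end_time=5.0):
        print(f"{start:5.1f}-{end:5.1f}  {label}")

In the full system, segments of this kind would then be mapped to the action models from cognitive science mentioned above and asserted into KnowRob as symbolic facts.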
Keynote Talk 10:30 a.m.-12:00 p.m.: Selma Sabanovic | We Are Not Alone: Identifying and Addressing Group, Organizational, and Societal Factors in the Design and Evaluation of Robots
Abstract: Robots are expected to become ubiquitous in the near future, working alongside people in everyday environments to provide various societal benefits. In contrast to this broad-ranging social vision for robotics applications, evaluations of robots and studies of human-robot interaction have largely focused on more constrained contexts, typically dyadic interactions in laboratories. As a result, we have a limited understanding of how robots are perceived, adopted, and supported in open-ended, natural social circumstances, and of how to design robots for non-dyadic social interactions. In this talk, I will present three strands of research that take a broader, group-based view of human-robot interaction (HRI) and of the study and design of robots for everyday life.
The first strand of our research goes beyond a dyadic understanding of human-robot interaction by studying group effects on HRI in lab and field experiments. Group effects are well established in social psychology and suggest that intergroup interactions are more aggressive and negative than ingroup interactions. Translated to HRI, this means that interactions between humans and robots could be more negative if robots are seen as outgroup members, and more positive if robots are seen as ingroup members. Our research explores whether and under what circumstances these group effects transfer to HRI, as well as how they can be used to reduce negative attitudes towards robots.
A second strand of research involves a series of situated studies of eldercare robots in nursing homes and retirement communities. These studies show that such technologies affect, and are affected by, not just the individuals directly interacting with the robot but also the broader organizational context in which the robots are placed. I will describe the social effects and factors we observed in studies of the socially assistive seal-like robot Paro in small-group interaction and in voluntary public use in a retirement community, and will discuss the possibility of designing such robots for the community rather than just for individuals.
A third strand of our research involves exploring how robots might fit into the complex social dynamics of users’ homes. This includes making sense of users’ perceptions of themselves and of robots, of their interactions with each other, and of the broader cultural and social forces that affect their lives. This strand explores the use of participatory design as a way of incorporating broader social meanings and interaction effects in HRI design.
My discussion of these three strands of research will focus on constructing a broader social perspective on studying HRI and developing interaction requirements for new robotic systems.
Platform Talk 1:00-2:00 p.m.: Lorenzo Natale | iCub Research Platform and Research Perspectives
Abstract: The iCub is a humanoid robot shaped like a four-year-old child. It is available as an open platform and was designed to support research in embodied cognition. The iCub was originally designed by a consortium of 11 partners and has since been adopted by many laboratories worldwide (more than 30 copies have been built so far). The first version of the iCub was designed to crawl on all fours and was equipped with hands for dexterous manipulation and an articulated head and eyes. The sensory system included cameras, an inertial unit, and force/torque (F/T) sensors on the arms and legs. Over the past ten years the robot has been improved in several respects: the sensory system was enriched with tactile sensors covering the legs and the upper body, and the legs were redesigned to support walking.
In this talk I will provide an overview of the iCub platform and illustrate the design choices made to support research on artificial cognition, focusing on the sensory system and the software architecture. I will then review my recent work on object perception and manipulation using tactile and visual feedback, illustrating with examples how the platform supports the study of perception in artificial systems.
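The iCub's software architecture is built around the YARP middleware, which exposes the robot's sensors and actuators as named network ports that separate processes can connect to. As a rough, illustrative sketch (the port names and data handling here are assumptions for illustration; the real interfaces are documented with the platform), reading one of the robot's tactile streams from Python might look like this:

    # Minimal sketch of reading an iCub sensory stream over YARP.
    # Port names and module setup are assumed for illustration only;
    # consult the iCub/YARP documentation for the actual interfaces.
    import yarp

    yarp.Network.init()

    # Local port that will receive data published by the robot.
    port = yarp.BufferedPortBottle()
    port.open("/reader/skin")

    # Connect a (hypothetical) robot-side tactile port to our reader.
    yarp.Network.connect("/icub/skin/left_hand", "/reader/skin")

    for _ in range(10):
        bottle = port.read()  # blocking read of one tactile frame
        if bottle is not None:
            values = [bottle.get(i).asFloat64() for i in range(bottle.size())]
            print("taxels:", len(values), "max value:", max(values, default=0.0))

    port.close()
    yarp.Network.fini()

The same port-based mechanism carries camera images and motor interfaces, which is what lets perception and manipulation experiments like those discussed in the talk be assembled from loosely coupled modules.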