Keynote Talks

Keynote Talk 9:00-10:30 a.m.: Ulrich Rückert | Cognitronics: Resource-efficient Architectures for Cognitive Systems

Abstract:

Mapping brain-like structures and processes into electronic substrates has recently seen a revival with the advent of deep-submicron CMOS technology, which will offer 1000 processor cores on a chip in the near future. The basic idea is to exploit the massive parallelism of such circuits and to create low-power and fault-tolerant information-processing systems. Aiming at overcoming the big challenges of deep-submicron CMOS technology (power wall, reliability, and design complexity), bio-inspiration offers attractive alternative routes to embedded artificial intelligence. The challenge is to understand, design, build, and use new architectures for nanoelectronic systems that unify the best of brain-inspired information-processing concepts and of nanotechnology hardware, including both algorithms and architectures. This talk will give an overview of our experiences in designing brain-inspired architectures for nanoelectronics and their application in cognitive robotics.

Keynote Talk 10:30 a.m.-12:00 p.m.: Terrence C. Stewart | Neural Cognitive Architectures with Nengo and the Neural Engineering Framework

Abstract:

We have been developing large-scale models of biological brains. To do this, we have been taking high-level algorithms (each expressing a theory of what function a particular part of the brain performs) and translating them into detailed low-level neural models that implement that algorithm. By combining modules of this type, we created Spaun, the first large-scale biological neural network model capable of performing multiple cognitive tasks. In the process of building such models, we have been developing an overall neural cognitive architecture based on the modules and structures of the mammalian brain. In this talk, I will discuss these modules, how they are implemented, how they interact, and how the resulting simulations can be applied in robotic domains.
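For readers unfamiliar with this workflow, the minimal Python sketch below uses the open-source Nengo library the talk refers to, and shows the core Neural Engineering Framework idea: a high-level function is declared on a connection, and Nengo solves for spiking-neuron parameters that approximate it. The specific function (squaring a sine wave) and all parameter values are illustrative choices, not taken from Spaun.

```python
import numpy as np
import nengo

model = nengo.Network(label="NEF squaring example")
with model:
    # High-level signal: a 1 Hz sine wave.
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))

    # Two populations of spiking neurons, each representing a scalar.
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)

    nengo.Connection(stim, a)
    # Declare the high-level function; Nengo solves for decoding
    # weights so the spiking population approximates x**2.
    nengo.Connection(a, b, function=lambda x: x ** 2)

    probe = nengo.Probe(b, synapse=0.01)  # filtered decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)

# sim.data[probe] now holds the decoded estimate of sin(2*pi*t)**2.
```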

Moderation by Helge Ritter

 

Platform Talk

Keynote Talk 9:00-10:30 a.m.: Tamim Asfour | Engineering Humanoid Robots that Learn from Humans and Sensorimotor Experience

Moderation by Sebastian Wrede

Abstract:

Humanoid robotics has made significant progress and will continue to play a central role in robotics research and many applications of the 21st century. Engineering complete humanoid robots, which are able to learn from human observation and sensorimotor experience, to predict the consequences of actions, and to exploit the interaction with the world to extend their cognitive horizon, remains a research grand challenge. In this talk, I will present recent results and discuss future directions of research, which combine robotics, machine learning, and computer vision to speed up learning and facilitate intuitive programming of complex robotics tasks. I will discuss how a robot motion alphabet and a robot internet of skills can be learned from human observation, and how statistical language models, whose words are robot poses and whose sentences represent sequences of poses, can be learned from human motion data and used to generate complex robot motions (a minimal sketch of this idea follows below). I will introduce the concept of “Object-Action Complexes”, a co-joint representation of perception-action dependencies, which emphasizes that objects and actions are tightly coupled, intertwined, and even equivalent. Based on such representations, generative mechanisms can be defined that use existing robot experience together with new observations to supplement the robot's knowledge with missing information about object-, action-, as well as planning-relevant entities, leading to improved learning and prediction capabilities.
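As a rough illustration of the "language model over poses" idea mentioned above, the sketch below trains a bigram model whose "words" are discretized pose symbols and samples a new pose sequence from it. The pose labels, corpus, and function names are hypothetical; the models actually learned from human motion-capture data are far richer.

```python
import random
from collections import Counter, defaultdict

# Toy corpus: each "sentence" is a sequence of discretized pose symbols
# (hypothetical labels; real systems cluster motion-capture frames into poses).
corpus = [
    ["stand", "reach", "grasp", "lift", "place"],
    ["stand", "reach", "grasp", "lift", "carry", "place"],
    ["stand", "bend", "grasp", "lift", "place"],
]

# Count bigram transitions between consecutive poses.
transitions = defaultdict(Counter)
for sentence in corpus:
    for prev, nxt in zip(sentence, sentence[1:]):
        transitions[prev][nxt] += 1

def sample_motion(start="stand", max_len=10):
    """Generate a pose sequence by sampling from the bigram model."""
    seq = [start]
    while len(seq) < max_len and transitions[seq[-1]]:
        poses, counts = zip(*transitions[seq[-1]].items())
        seq.append(random.choices(poses, weights=counts)[0])
    return seq

print(sample_motion())  # e.g. ['stand', 'reach', 'grasp', 'lift', 'place']
```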

Keynote Talk 10:30 a.m.-12:00 p.m.: Jochen J. Steil | Robot Learning – Science and Fiction

Abstract:

Robot learning is a hot topic in the current heated discussion about the digital society and the future of AI. The talk will discuss some of the common anthropomorphic misconceptions about robot learning in this context and illustrate through examples what robot learning currently can and cannot do. It will further tackle the fields of medical applications and production technology and discuss related ethical implications. Based on this evaluation, the talk will argue for the somewhat paradoxical stance that robot learning is concurrently under- and overestimated, and that it is both science and fiction.

Platform Talk

Keynote Talk 9:00-10:30 a.m.: Michael Beetz | Automated Models of Everyday Activity

Moderation by Thomas Schack

Abstract: Recently we have witnessed the first robotic agents performing everyday manipulation activities such as loading a dishwasher and setting a table. While these agents successfully accomplish specific instances of these tasks, they only perform them within the narrow range of conditions for which they have been carefully designed. They are still far from achieving the human ability to autonomously perform a wide range of everyday tasks reliably in a wide range of contexts. In other words, they are far from mastering everyday activities. Mastering everyday activities is an important step for robots to become the competent (co-)workers, assistants, and companions who are widely considered a necessity for dealing with the enormous challenges our aging society is facing. Modern simulation-based game technologies give us for the first time the opportunity to acquire the commonsense and naive physics knowledge needed for the mastery of everyday activities in a comprehensive way.

In this talk, I will describe AMEvA (Automated Models of Everyday Activities), a special-purpose knowledge acquisition, interpretation, and processing system for human everyday manipulation activity that can automatically (1) create and simulate virtual human living and working environments (such as kitchens and apartments) with a scope, extent, level of detail, physics, and photo-realism that facilitates and promotes the natural and realistic execution of human everyday manipulation activities; (2) record human manipulation activities performed in the respective virtual reality environment as well as their effects on the environment, and detect force-dynamic states and events; (3) decompose and segment the recorded activity data into meaningful motions and categorize the motions according to action models used in cognitive science; and (4) represent the interpreted activities symbolically in KnowRob using first-order logic.
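To make step (3) above concrete, here is a minimal sketch of one common segmentation heuristic: splitting a recorded hand trajectory wherever the hand's speed drops below a threshold. This is a generic illustration under assumed inputs, not AMEvA's actual segmentation pipeline, and the threshold value is a placeholder.

```python
import numpy as np

def segment_by_velocity(positions, dt, v_thresh=0.05):
    """Split a recorded hand trajectory into candidate motion segments.

    positions: (T, 3) array of hand positions sampled every dt seconds.
    Returns a list of (start_idx, end_idx) index pairs during which the
    hand speed stays above v_thresh (m/s), i.e. candidate motions.
    """
    # Finite-difference speed between consecutive samples.
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    moving = speed > v_thresh

    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                   # motion begins
        elif not m and start is not None:
            segments.append((start, i))  # motion ends
            start = None
    if start is not None:
        segments.append((start, len(moving)))
    return segments
```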

Keynote Talk 10:30 a.m.-12:00 p.m.: Selma Sabanovic | We Are Not Alone: Identifying and Addressing Group, Organizational, and Societal Factors in the Design and Evaluation of Robots

Abstract: Robots are expected to become ubiquitous in the near future, working alongside people in everyday environments to provide various societal benefits. In contrast to this broad-ranging social vision for robotics applications, evaluations of robots and studies of human-robot interaction have largely focused on more constrained contexts, chiefly dyadic interactions in laboratories. As a result, we have a limited understanding of how robots are perceived, adopted, and supported in open-ended, natural social circumstances, and of how to design robots for non-dyadic social interactions. In this talk, I will present three strands of research that take a broader, group-based view of human-robot interaction (HRI) and of the study and design of robots for everyday life.

The first strand of our research goes beyond a dyadic understanding of human-robot interaction by studying group effects on HRI in lab and field experiments. Group effects are well established in social psychology and suggest that intergroup interactions are more aggressive and negative than ingroup interactions. Translated to HRI, this means that interactions between humans and robots could be more negative if robots are seen as outgroup members, and more positive if robots are seen as ingroup members. Our research explores whether and under what circumstances these group effects transfer to HRI, as well as how they can be used to reduce negative attitudes towards robots.

A second strand of research involves a series of situated studies of eldercare robots in nursing homes and retirement communities. These studies show that such technologies have effects on, and are affected by, not just the individuals directly interacting with the robot, but also the broader organizational context the robots are embedded in. I will describe the social effects and factors we observed in studies of the socially assistive seal-like robot Paro in small-group interaction and in voluntary public use in a retirement community, and will discuss the possibility of designing such robots for the community rather than just for individuals.

A third strand of our research explores how robots might fit into the complex social dynamics of users’ homes. This includes making sense of users’ perceptions of themselves as well as of robots, of their interactions with each other, and of the broader cultural and social forces that affect their lives. This strand investigates the use of participatory design as a way of incorporating broader social meanings and interaction effects into HRI design.

My discussion of these three strands of research will focus on constructing a broader social perspective on studying HRI and developing interaction requirements for new robotic systems.

 

Platform Talk 1:00-2:00 p.m.: Lorenzo Natale | iCub Research Platform and Research Perspectives

Abstract:

The iCub is a humanoid robot shaped like a four-year-old child. It is available as an open platform and was designed to support research in embodied cognition. The iCub was originally designed by a consortium of 11 partners and later adopted by many laboratories worldwide (more than 30 copies have been built so far). The first version of the iCub was designed to crawl on all fours and was equipped with hands for dexterous manipulation and an articulated head and eyes. The sensory system included cameras, an inertial unit, and F/T sensors on the arms and legs. Over the past ten years the robot has been improved in several respects: the sensory system was enriched with tactile sensors covering the legs and the upper body, and its legs were redesigned to support walking.
 
In this talk I will provide an overview of the iCub platform, to illustrate the design choices that were made to better support research on artificial cognition, focusing on the sensory system and the software architecture. I will review my recent work on object perception and manipulation using tactile and visual feedback, to illustrate with examples how the platform supports the study of perception in artificial systems.
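The iCub's software architecture is built around the open-source YARP middleware, in which sensors, controllers, and applications communicate through named ports. The sketch below shows that pattern at its simplest via YARP's Python bindings; it assumes a running YARP name server, and the sensor port name is illustrative rather than an actual iCub port.

```python
import yarp

# A minimal sketch, assuming the YARP Python bindings are installed
# and a YARP name server is running.
yarp.Network.init()

port = yarp.BufferedPortBottle()
port.open("/tutorial/reader")

# Connect our reader to a (hypothetical) sensor port published by the robot.
yarp.Network.connect("/icubSim/touch", "/tutorial/reader")

bottle = port.read()  # blocking read of one message
if bottle is not None:
    print("received:", bottle.toString())

port.close()
yarp.Network.fini()
```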

Keynote Talk 9:00-10:30 a.m.: Yiannis Demiris | Beyond Shared Autonomy: Personalisation and Developmental Aspects
 

Abstract:

As humans and robots increasingly co-exist in home and rehabilitation settings for extended periods of time, it is crucial to factor in the participants’ constantly evolving profiles and adapt the interaction to the personal characteristics of the individuals involved, moving beyond the standard shared autonomy paradigm. In this talk, I will describe our computational architectures for enabling human-robot interaction in joint tasks, and discuss the related computational problems, including attention, perspective taking, prediction of forthcoming states, machine learning, and personalised shared autonomy. I will give examples from human-robot collaboration in musical tasks and from robotic wheelchairs jointly controlled with disabled children and adults, among others.
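For context, the "standard shared autonomy paradigm" the abstract refers to is often implemented as a linear blend of the human's command and the robot's assistive command; personalisation then amounts to adapting the blending weight to the individual. The sketch below illustrates that textbook baseline with hypothetical helper names; it is not the architecture presented in the talk.

```python
import numpy as np

def blended_command(u_human, u_robot, alpha):
    """Linear-blending shared control: the executed command mixes
    human input and the robot's assistive policy.
    alpha in [0, 1]: 1.0 = full human control, 0.0 = full autonomy."""
    u_human, u_robot = np.asarray(u_human), np.asarray(u_robot)
    return alpha * u_human + (1.0 - alpha) * u_robot

def adapt_alpha(recent_errors, base=0.5, gain=0.1):
    """Crude personalisation hook (hypothetical): grant the user more
    authority as their recent disagreement with the assistive policy
    shrinks. recent_errors are normalised to [0, 1]."""
    return float(np.clip(base + gain * (1.0 - np.mean(recent_errors)), 0.0, 1.0))

# e.g. wheelchair velocity commands (forward speed, turn rate)
alpha = adapt_alpha([0.2, 0.1, 0.15])
print(blended_command([0.4, 0.1], [0.3, 0.0], alpha))
```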

Keynote Talk 10:30 a.m.-12:00 p.m.: Giulio Sandini | Humanizing Robots

Abstract:

In recent years robot technology has advanced dramatically, producing machines able to move like a human while being faster, stronger, and more resilient than humans are. The variety of humanoid robots being built and, to some extent, commercialized has increased enormously since Honda announced the first humanoid robot 30 years ago. Since then the complexity and the performance of these robots have been steadily increasing, and nowadays we can claim that more and more sensing and motion abilities of robots are approaching those of humans. Moreover, the computational power of today’s computers and the possibility to process gargantuan amounts of data have created the impression that the science-fiction world described by Asimov, where humans and robots co-exist and collaborate, is not very far away. Is this true? Is there some major missing ingredient we have to develop? What is the role of robotics research in this endeavour? Does it still make sense to think of robotics as an engineering activity waiting for the technological solutions required to fulfil Asimov’s dream, or should robotics get involved head-on in actively seeking the knowledge which is still missing? During the talk I will argue that robots interacting with humans in everyday situations, even if motorically and sensorially very skilled and extremely clever in action execution, are still very primitive in their ability to understand actions executed by others, and that this is the major obstacle to the advancement of social robotics. I will argue that the reason this is happening is rooted in our limited knowledge about ourselves and the way we interact socially. I will also argue that robotics can serve a crucial role in advancing this knowledge by joining forces with the communities studying the cognitive aspects of social interaction and by co-designing robots able to establish a mutual communication channel with the human partner to discover and fulfil a shared goal (the distinctive mark of human social interaction).

Platform Talk 1:00-2:00 p.m.: Robert Haschke | Bi-Manual Shadow Robot Hand Setup

 

Keynote Talk 3:00-4:30 p.m. in CITEC 0.007: Angelo Cangelosi | Developmental Robotics for Language Learning, Trust and Theory of Mind

Abstract:

This talk presents recent research on the development of language and theory of mind skills for communication and trust in developmental robots and human-robot interaction (Cangelosi & Schlesinger 2015). From a communication point of view, ample theoretical and experimental research on action and language processing and on number learning and gestures clearly demonstrates the role of embodiment in cognition and language processing. In psychology and neuroscience this evidence constitutes the basis of embodied cognition, also known as grounded cognition (Pezzulo et al. 2012; Borghi & Cangelosi 2014). Another key developmental milestone is the acquisition of theory of mind (ToM) and its role in the building of trust, which also benefits from a situated, embodied approach. During the talk we will first present examples of developmental robotics models and experimental results from iCub experiments on the embodiment biases in early word acquisition and grammar learning (Morse et al. 2015; Morse & Cangelosi 2017) and experiments on pointing gestures and finger counting for number learning (De La Cruz et al. 2014). We will then present a novel developmental model, and experiments, on ToM and its use for autonomous trust behavior in robots. The implications of such embodied approaches for embodied cognition, intersubjectivity, and robot companion applications will also be discussed.