Team of Bielefeld for RoboCup@HOME
The Team of Bielefeld (ToBI) was founded in 2009. The RoboCup activities are embedded in a long-term research history on human-robot interaction with laypersons in regular home environments. The overall research goal is to provide a robot with capabilities that enable the interactive teaching of skills and tasks through natural communication in previously unknown environments. This work is based on the BIRON platform developed in the Applied Informatics Group within the Home Tour scenario.
The challenge is two-fold. On the one hand, we need to understand the communicative cues of humans and how they interpret robotic behavior. On the other hand, we need to provide technology that is able to perceive the environment, detect and recognize humans, navigate in changing environments, localize and manipulate objects, and initiate and understand spoken dialog. Thus, it is important to go beyond typical command-style interaction and to support mixed-initiative learning tasks.
Team Leader: Sven Wachsmuth, Leon Ziegler, Sebastian Meyer zu Borgsen
Team Members: Kai Harmening, Christian Klarhorst, Martin Holland, Andreas Langfeld, Leroy Rügemer, Christian Witte, Niksa Rasic, Suchit Sharma, Ankit Kariryaa
The current platform is based on the research platform GuiaBot by MobileRobots, customized and equipped with sensors that allow analysis of the current situation. It comprises two piggyback laptops that provide the computational power needed to run the system autonomously and in real time for HRI.
The robot base is a PatrolBot which is 59cm in length, 48cm in width, and weighs approx. 45 kilograms with batteries. It reaches a maximum translational speed of 1.7 meters per second and rotates at more than 300 degrees per second. The drive is a two-wheel differential drive with two passive rear casters for balance. Inside the base there is a 180 degree laser range finder with a scanning height of ~30cm above the floor (SICK LMS, see image on the left). In contrast to most other PatrolBot bases, ToBI does not use an additional internal computer. The piggyback laptops have Core2Duo processors with 2GB main memory and are running Ubuntu Linux. The cameras used for person and object detection/recognition are 2MP CCD firewire cameras (Point Grey Grasshopper, see image on the left). One is facing down for object detection/recognition, the second camera is facing up for face detection/recognition. For room classification and 3D object positions ToBI is equipped with an optical imaging system for real-time 3D image data acquisition.
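The motion figures above follow from standard differential-drive kinematics. As a minimal sketch (the function name and the 0.48 m track width are assumptions for illustration, not values from the platform's documentation), the robot's linear and angular velocity can be derived from the two wheel speeds:

```python
import math

def diff_drive_twist(v_left, v_right, track_width):
    """Forward kinematics of a two-wheel differential drive.

    v_left, v_right: ground speeds of the two wheels in m/s
    track_width: lateral distance between the wheels in m (assumed value below)
    Returns (v, omega): linear velocity in m/s and angular velocity in rad/s.
    """
    v = (v_right + v_left) / 2.0          # body moves at the mean wheel speed
    omega = (v_right - v_left) / track_width  # speed difference turns the base
    return v, omega

# Spinning in place: wheels driven in opposite directions at 1.25 m/s,
# with a hypothetical 0.48 m track width
v, omega = diff_drive_twist(-1.25, 1.25, 0.48)
print(v, math.degrees(omega))
```

With these assumed numbers the base turns in place at roughly 298 degrees per second, which is consistent with the quoted 300+ degrees of rotation per second.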
Additionally, the robot is equipped with a Katana IPR 5 degrees-of-freedom (DOF) arm (see image, second from bottom on the right); a small and lightweight manipulator driven by 6 DC motors with integrated digital position encoders. The end-effector is a sensor-gripper with distance and touch sensors (6 inside, 4 outside), allowing it to grasp and manipulate objects up to 400 grams throughout the arm's envelope of operation. The upper part of the robot houses a touch screen (~15in) as well as the system speaker. The on-board microphone has a hyper-cardioid polar pattern and is mounted on top of the upper part of the robot. The overall height is approximately 140cm.
Who is Who:
The following video shows the performance of team ToBI in the "Who-is-Who" task, where known and unknown persons must be found in the arena, the faces of unknown persons must be learned and afterwards must be re-identified autonomously by the robot.
Walk 'n' Talk:
This video shows team ToBI doing the "Walk 'n' Talk" task, which is split into two parts: first, a team member introduces the locations of certain objects (e.g. a TV set) to the robot; after that the robot has to find these places again - autonomously.
For interaction with humans in a real-world scenario, a visual understanding of the environment is indispensable. Finding an object in an unknown space with a mobile robot is one of the main challenges in domestic service robotics. This video presents an object search behavior for a mobile robot that reduces the search space by applying a novel kind of spatial attention system. Different visual cues are mapped in a SLAM-like manner in order to identify hypotheses for possible object locations. These locations are scanned for known objects using a recognition method consisting of two complementary pathways --- a detector measuring color distributions and a classifier using an SVM with a Pyramid Matching Kernel. The usefulness of the proposed approach was shown by conducting an evaluation with the domestic robot BIRON in a real-world apartment scenario.
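The color-distribution pathway mentioned above can be illustrated with a common histogram-comparison scheme. The sketch below is not the team's actual detector; it assumes a simple quantized RGB histogram and the Bhattacharyya coefficient as a similarity measure, with all function names and the acceptance threshold chosen for illustration:

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Quantize RGB pixels (N x 3 array, values 0-255) into a
    normalized 3D color histogram, flattened to one vector."""
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient between two normalized histograms
    (1.0 means identical distributions, 0.0 means disjoint support)."""
    return float(np.sum(np.sqrt(h1 * h2)))

# Compare a candidate image region against a stored object model
# (random pixels stand in for real image data here)
rng = np.random.default_rng(0)
model = color_histogram(rng.integers(0, 256, size=(500, 3)))
candidate = color_histogram(rng.integers(0, 256, size=(500, 3)))
score = bhattacharyya(model, candidate)
# Accept the object hypothesis when similarity exceeds a tuned threshold
is_match = score > 0.7
```

In a two-pathway setup, such a cheap color score can serve as a fast filter for the attention-driven location hypotheses, with the SVM classifier confirming or rejecting the surviving candidates.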