Spatial Movement Concepts in Human-Robot Shared Environments
SpaCon is situated in the research area of human-robot interaction (HRI) with a mobile robot.
Our investigation focuses on spontaneous nonverbal interaction in narrow spaces. One key scenario for studying such nonverbal cues is making room for one another in spatial bottlenecks. The goal is to investigate gross body motion (positional shifts, rotational and translational motion with the entire body) that can be interpreted as communicational cues (prompts) of a human interacting with a mobile robot, and vice versa. Therefore, HRI studies are conducted to provide the necessary data. Subsequently, a computational model of spatial passing concepts in narrow spatial configurations for HRI will be developed.
Interesting and fundamental research questions within the scope of the project are whether humans use gross body motion to communicate spontaneously and whether they understand this kind of communication when signaled by a robot.
The SpaCon project is inspired by bodily communication in human-human interaction. Consider narrow spatial configurations in office buildings and people’s homes (e.g. a doorway, hallway, or a small room). Humans are well capable of understanding their mutual intentions when they have to nonverbally negotiate the (personal) space around them. In particular, this is the case when interaction partners have to make room for each other in passing-by situations, in which one interaction partner blocks the other’s way because there is only space for one person/robot to pass at a time.
Bodily communication mechanisms such as spatial prompting (defined by Anders Green, 2009) and proxemics (E.T. Hall, 1969, and others) are a natural means of communication in human-human interaction, but their presence and role in human-robot interaction (HRI) have so far mostly been investigated with regard to hand gestures and verbal prompts. Whether a person spontaneously uses body motion to communicate (prompt) with a robot, and whether a robot would be able to make itself understandable this way, has not been investigated on the level of interaction.
Research questions within the scenario’s scope are:
- Do humans spontaneously communicate with a mobile robot via body motion within the scenario’s scope? And what characteristics do these body motions have?
- Are these body prompts detectable and interpretable by a mobile robot?
- Do humans understand this way of communication when displayed by a mobile robot? And what are the important characteristics that allow humans to understand the “body” motion of a mobile robot as communication?
To answer these questions, two human-robot interaction studies were conducted with the following robots:
BIRON (=BIelefeld Robot companiON)
A composition of the GuiaBot and PatrolBot platforms by MobileRobots; so far, a 180° laser rangefinder has been used to detect people automatically.
Description of study: Spatial Bottleneck – hallway
BIRON was used in an exploratory study conducted to find out which behavior constitutes the most appropriate passing strategy and how participants express their wish to pass by.
BIRON blocked the way in a corridor at Bielefeld University and initiated a variety of defensive and more offensive passing strategies towards approaching persons. 59 participants took part. Data from an external camera and a questionnaire form the corpus that is analyzed.
Snoopy (LTH, Lund University, Sweden): a Pioneer P3-DX by MobileRobots; 360° laser coverage is used to detect people automatically.
Description of study: Spatial Bottleneck – narrow way out of a room
Snoopy took part in a qualitative and exploratory study in which 32 participants had to guide the robot around three rooms in an office environment. The scenario of interest, implicitly created here, was a situation in which the robot blocked the way out of a room: the participant unknowingly navigated the robot between themselves and the door frame in order to show the printer and other items in the printer room. The guiding study was conducted to observe, describe, and classify spontaneous and communicational body movements from a human toward a robot in a blocking situation.
The corpus to analyze consists of laser data (person and robot), questionnaire data (before and after the run), and video data from an external camera and an onboard camera.
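As an illustration of how a person might be extracted from such laser data, the following sketch segments a 2D scan into range-continuous clusters and keeps clusters of leg-like width. This is a minimal, hypothetical detector: the jump threshold and leg-width bounds are assumptions for illustration, not parameters of the actual systems used in the studies.

```python
import math

def detect_legs(ranges, angle_min=0.0, angle_inc=math.radians(1),
                jump=0.15, min_width=0.05, max_width=0.25):
    """Segment a 2D laser scan into clusters and return the (x, y)
    centroids of clusters whose width is leg-sized.
    `ranges` holds one distance (in metres) per beam."""
    # Convert each beam to a Cartesian point in the sensor frame.
    pts = [(r * math.cos(angle_min + i * angle_inc),
            r * math.sin(angle_min + i * angle_inc))
           for i, r in enumerate(ranges)]

    # Group consecutive beams whose range difference is small
    # (i.e. beams likely hitting the same surface).
    clusters, current = [], [0]
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) < jump:
            current.append(i)
        else:
            clusters.append(current)
            current = [i]
    clusters.append(current)

    # Keep clusters whose end-to-end width matches a human leg.
    legs = []
    for c in clusters:
        (x0, y0), (x1, y1) = pts[c[0]], pts[c[-1]]
        if min_width <= math.hypot(x1 - x0, y1 - y0) <= max_width:
            cx = sum(pts[i][0] for i in c) / len(c)
            cy = sum(pts[i][1] for i in c) / len(c)
            legs.append((cx, cy))
    return legs
```

Tracking the centroid of such leg clusters over successive scans yields the positional data from which the steps and sways reported below could, in principle, be measured.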
The next paragraphs comprise selected results. Please consult the listed papers for more information.
Spatial Bottleneck – hallway
Different human-like motion patterns of how to avoid/react to an approaching human were tested in a situation in which the robot BIRON blocked a hallway. Participants were asked which avoiding motion they preferred as a reaction to their presence when BIRON was blocking their way. They preferred a backward and sideways motion of the robot (54.5 %) over a forward sideways motion (9.1 %), no motion (29.1 %), and straight forward as well as straight backward movements (7.3 %). This conforms to the results of the actual study, in which participants experienced the robot's backward and side movements: participants interpreted the motion pattern in which the robot turned around and moved to one side as a reaction to their presence in 78 % of the cases. This means that 78 % of the participants understood this motion pattern as a communicative signal that BIRON was explicitly reacting to their presence.
Furthermore, the questionnaire allowed gathering free-form ideas from the participants about how BIRON's movements could be made more predictable, especially in "making room for each other" situations. Verbal output and audio signals were each suggested in 20 % of the cases. Remarkably, in 60 % of the cases participants suggested putting indicators (like car indicators) on the robot to signal its direction of travel. Indicators are a simple but very effective signal that helps a person predict what the robot is going to do, even before the robot starts moving.
Spatial Bottleneck – narrow way out of a room
First results show that humans communicate with a robot via positional and rotational movements. These movements prompt the robot to move out of the room.
Movements toward the robot were directed at its front, its left, or its right: the participant took steps covering 5–50 cm toward the robot, which are certainly detectable via robotic sensors (laser, camera). Three basic types of movements occurred:
a) A step toward the robot (43.75 %),
b) A step to the participant’s left (25 %),
c) A step to the participant’s right (12.50 %).
The other two patterns are repetitions of those above:
d) Swaying/moving to the left and right (9.38 %) and
e) Swaying/moving back and forth (9.38 %).
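The five patterns above can be sketched as a simple classifier over tracked displacement vectors of the participant. This is an illustrative sketch only: the body-frame convention, the category names, and the 5–50 cm step thresholds taken from the observation above are assumptions about how such data might be labeled, not the project's actual classification procedure.

```python
def classify_prompt(displacements, min_step=0.05, max_step=0.50):
    """Classify a sequence of (forward, lateral) displacement vectors
    (metres, in the participant's body frame: +forward is toward the
    robot, +lateral is to the participant's left) into one of the
    observed movement patterns."""
    def label(dx, dy):
        # Dominant axis decides whether the step is sagittal or lateral.
        if abs(dx) >= abs(dy):
            return 'toward_robot' if dx > 0 else 'backward'
        return 'step_left' if dy > 0 else 'step_right'

    # Keep only displacements in the detectable step range (5-50 cm).
    steps = [label(dx, dy) for dx, dy in displacements
             if min_step <= max(abs(dx), abs(dy)) <= max_step]
    if not steps:
        return 'none'
    if len(set(steps)) == 1:
        return steps[0]                       # patterns (a)-(c)
    if set(steps) <= {'step_left', 'step_right'}:
        return 'sway_lateral'                 # pattern (d)
    if set(steps) <= {'toward_robot', 'backward'}:
        return 'sway_sagittal'                # pattern (e)
    return 'mixed'
```

For example, a single 20 cm forward displacement would be labeled `toward_robot` (pattern a), while alternating left/right displacements would be labeled `sway_lateral` (pattern d).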