How a Robot Reacts to Different Languages

CITEC researcher Marietta Sionti studies to what extent commands can be adapted to multiple languages

For robots and assistive systems to be seamlessly integrated into daily life, it is important that they respond reliably to commands. But this does not always work: most robots are initially programmed only in English. Different languages, however, express movements in different ways – and this can lead to misunderstandings. Dr. Marietta Sionti studies how such misunderstandings can be avoided.

CITEC researcher Dr. Marietta Sionti studies how commands given to robots can be conceptualized unambiguously across different languages. A robot can come in handy by picking up something that has fallen on the floor, or by leaving the room to fetch something. For this to happen, the robot has to receive – and understand – commands. For assistive systems and other intelligent applications to work well, what they are meant to do must be unambiguous to them.

This works well when actions are clearly linked to their respective words or phrases. The robot knows, for instance, that it should spring into action when it receives the corresponding command. To understand how this works, CITEC researcher Dr. Marietta Sionti studies patterns in the human brain. Sionti has been at CITEC for four years and graduated from the CITEC Graduate School. She is a member of the “Neurocognition and Action” research group, which is headed by Professor Dr. Thomas Schack.

Different Languages Express Movements in Different Ways

As a computational linguist, Sionti deals with the question of how the human brain cognitively maps movements, how it connects these movements to the right verb, and how this process can be transferred to robots and other assistive systems. Linking action verbs to the actual corresponding actions in technical systems can become a challenge when different languages are involved.

In her experiments, Dr. Marietta Sionti uses EEG and eye-tracking technology. “To this point, almost all of these systems have been programmed in one single language, namely English,” says Sionti. This means that commands to them cannot always be easily adapted. “But every language opens up a new perspective on the world. In a certain way, language filters how we communicate actions.” Different languages, for example, often do not express movements in the same way. There are fine distinctions, and because of this, misunderstandings can occur.

On the one hand, all humans have the same sensory organs. For this reason, it can be assumed that they perceive the world in a similar way and express it similarly in language. “On the other hand, there are indeed significant differences in how movements are described,” says Sionti. This is why she is researching how the human brain classifies movements, in order to derive universal patterns from this. “This is a step towards possibly being able to categorize and designate movements at a higher level across different languages.”

Languages differ, for instance, in how sentences are constructed and how words relate to one another. “Many Germanic languages express movements and actions differently than they are expressed in Romance languages,” says Sionti. In Spanish, for example, many verbs encode the direction of movement directly in the word itself (e.g. entrar – to go in; salir – to go out), while German and English usually use a preposition or particle for this. One example is the verb “to go,” which becomes “to go in” or “to go out.” “By contrast, verbs in German and English frequently express the nature and manner of the movement,” says Sionti. While in German one says “I fly to America,” this would be expressed simply as “I go to America” in many Romance languages. Because of these many different forms of expression, misunderstandings can occur with assistive systems or robots.
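The article does not describe how such differences are handled in software, but a minimal sketch can make the idea concrete: verbs from each language are mapped onto a shared, language-neutral action frame, so that a path-in-verb command like the Spanish “entrar” and a verb-plus-particle command like the English “go in” trigger the same robot behavior. All names and lexicon entries below are hypothetical illustrations, not part of Sionti's actual system.

```python
# Hypothetical sketch: normalizing motion commands from different languages
# into one language-neutral action frame. The lexicon is illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ActionFrame:
    """Language-neutral representation of a movement command."""
    action: str            # core action, e.g. "move"
    path: Optional[str]    # direction of movement, e.g. "in", "out"
    manner: Optional[str]  # manner of movement, e.g. "fly"

# Spanish often packs the direction into the verb itself ("entrar" = to go in),
# while English and German express it with a particle or preposition and
# reserve the verb for the manner of movement ("fly", "walk").
LEXICON = {
    ("es", "entrar"):      ActionFrame("move", path="in",  manner=None),
    ("es", "salir"):       ActionFrame("move", path="out", manner=None),
    ("en", "go in"):       ActionFrame("move", path="in",  manner=None),
    ("en", "go out"):      ActionFrame("move", path="out", manner=None),
    ("de", "hineingehen"): ActionFrame("move", path="in",  manner=None),
    ("en", "fly to"):      ActionFrame("move", path="to",  manner="fly"),
}

def normalize(language: str, verb_phrase: str) -> ActionFrame:
    """Map a verb phrase in a given language to its neutral action frame."""
    frame = LEXICON.get((language, verb_phrase))
    if frame is None:
        raise ValueError(f"no mapping for {verb_phrase!r} in {language!r}")
    return frame

# "entrar" and "go in" resolve to the same robot-level action, so the robot
# behaves identically regardless of the command's source language.
assert normalize("es", "entrar") == normalize("en", "go in")
```

In a real system such a lexicon would be far larger or learned from data; the point is only that a shared intermediate representation lets one robot controller serve many languages.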

Studying Common Cognitive Mechanisms

This is why Sionti is studying the cognitive mechanisms by which humans express and categorize movements in various languages. In her research, she uses, for instance, EEG to record brain activity and eye-tracking technology to capture eye movements. “In experiments, test subjects are asked, for instance, to look at a screen and recognize the movements of avatars, to describe these movements with a verb, or to imagine certain actions,” she explains. The researcher hopes that this will allow her to understand what happens in the brain and how certain movements are linked to certain patterns.

The goal is to design better computer systems that can interact in different languages. “This can be a big advantage in multilingual environments such as hospitals, universities, and schools, but also more generally for assistive systems,” says Sionti.

Contact:
Dr. Marietta Sionti, Bielefeld University
Cluster of Excellence CITEC / Neurocognition and Action research group
Telephone: +49 521 106-5128
Email: marietta.sionti@uni-bielefeld.de

Written by: Maria Berentzen