Emotion Attribution to a Non-Humanoid Robot in Different Social Situations
[Figure: the robot used in this research.]

I don’t know about other Aspergers, but I’ve never seen a robot that looks human. I don’t know whether this is a humanoid or a non-humanoid robot, or whether it’s meant to be a dog. I’m not kidding.
Gabriella Lakatos, Márta Gácsi, Veronika Konok, Ildikó Brúder, Boróka Bereczky, Péter Korondi, Ádám Miklósi
Highlights are “capacities” that Aspergers lack, according to you-know-who.
In recent years there has been increasing interest in building companion robots that interact with humans in a socially acceptable way. To interact meaningfully, a robot has to convey intentionality and emotions of some sort in order to increase believability. We suggest that human-robot interaction should be considered a specific form of inter-specific interaction, and that human–animal interaction can provide a useful biological model for designing social robots. Dogs are a promising biological model, since during domestication they adapted to the human environment and came to participate in complex social interactions. In this observational study we propose to design emotionally expressive robot behaviour using the behaviour of dogs as inspiration, and to test these dog-inspired robots with humans in an inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans’ ability to recognize two basic emotions and one secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour (“happiness” and “fear”) and studied whether people attribute the appropriate emotion to the robot and interact with it accordingly. In Experiment 2 we investigated whether participants attribute guilty behaviour to a robot in a relevant context, by examining whether human participants can detect, relying on the robot’s greeting behaviour, that the robot has transgressed a predetermined rule. The results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with its expressed emotional behaviour. The results of Experiment 2 showed that people can recognize, on the basis of its greeting behaviour, whether the robot has transgressed. In summary, our findings show that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot.
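For readers curious how such “dog-inspired” emotional expression might be wired up in a wizard-of-oz setup, here is a minimal sketch. Everything in it is my own assumption, not taken from the paper: the parameter names, the values, and the idea of a hidden operator triggering preset behaviour profiles are all illustrative.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch: parameter names and values are invented for
# illustration; the paper does not specify its control parameters.

class Emotion(Enum):
    HAPPINESS = "happiness"
    FEAR = "fear"

@dataclass(frozen=True)
class BehaviourProfile:
    posture_height: float   # 0.0 = crouched low, 1.0 = fully upright
    approach_speed: float   # m/s; negative = withdrawing from the human
    tail_wag_hz: float      # dog-inspired tail-wagging frequency

# Dog-inspired expressive profiles (illustrative values only):
# a "happy" dog approaches with high posture and a wagging tail;
# a "fearful" dog crouches and withdraws.
PROFILES = {
    Emotion.HAPPINESS: BehaviourProfile(posture_height=1.0,
                                        approach_speed=0.4,
                                        tail_wag_hz=3.0),
    Emotion.FEAR:      BehaviourProfile(posture_height=0.2,
                                        approach_speed=-0.3,
                                        tail_wag_hz=0.0),
}

def wizard_command(emotion: Emotion) -> BehaviourProfile:
    """Return the behaviour profile the hidden operator triggers."""
    return PROFILES[emotion]

if __name__ == "__main__":
    profile = wizard_command(Emotion.FEAR)
    print(profile.approach_speed)  # negative: the robot withdraws
```

The point of a wizard-of-oz design is that participants believe the robot acts autonomously while an unseen operator selects among such canned behaviours, so the expressive repertoire can be tested before any autonomy exists.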
A general requirement for social robots is that they should be able to participate in different kinds of interactions with humans. To interact socially with humans, the robot has to convey intentionality; that is, the human has to think that the robot has goals, beliefs and desires. There is evidence that humans are willing to interpret lifeless objects as social beings, attributing aims, desires, inner states and even personality to them. Designers of artificial agents try to exploit this anthropomorphizing tendency and supply social robots with social cues that lead people to attribute intentions to them.
Many scientists in social robotics agree that the main requirements of a complex social interaction include communication, the recognition and expression of emotions, and some rudimentary form of personality. (Description of a neurotypical human?) These features are widely thought to increase the believability of artificial agents and to enhance people’s long-term engagement with artificial companions.
The importance of representing emotions in artificial agents (or virtual characters) was recognized long ago in art. According to Thomas and Johnston, two of Disney’s core animators, “it has been the portrayal of emotions that has given the Disney characters the illusion of life”. Many robots and virtual agents (e.g. Kismet, Yuppy, Max, Greta, Probo, EDDIE, Feelix) have so far been supplied with affective expressions, either to examine the contribution of emotions to lifelikeness or to observe humans’ perception of robots’ expressive behaviour. It is important to note, though, that most of these studies used only facial expressions to express robotic emotions, and only questionnaires to analyse the recognition rate of the different emotions. Studies of direct human-robot interaction that also analyse humans’ reactions at the behavioural level are relatively rare. One such study showed that subjects tended to feel less lonely and found the agent (Max) more lifelike when it expressed empathy toward them than in situations where the robot showed no emotions or tended to be self-centred.

Once again, cartoon characters and robots “have empathy” but not Asperger humans. It could be that NTs are so childlike that they cannot escape an anthropomorphic interpretation of everything they encounter, and that the contents of their environment must always pay attention to them.

Additionally, electromyography showed that subjects had higher activity of the masseter muscle (one of the muscles of mastication and an indicator of negative valence) when the agent expressed negative empathy (“Schadenfreude”) than when it expressed positive empathy. So now we learn that positive or negative empathy is displayed by a chewing muscle.
Many present-day social robots are built with humanoid embodiments, and their behaviour is designed on the basis of human psychological models. Designers of such humanoid social robots aim to implement human-like behavioural and cognitive features, along with human-like social competencies, using human-human interaction as the model (e.g. developing social relationships with humans, gesturing, speech-based communication). In the last few years impressive improvements have been achieved in this field, and separate studies have shown that humans can successfully recognize emotions displayed by the humanoid face of a robot or virtual agent. (Neurotypicals are easily fooled.) Moreover, some studies provide evidence that emotions expressed by a humanoid robot face evoke similar emotions in humans as well. A recent study by Chaminade et al., however, showed that at the level of neural responses humans react differently to the emotional expressions of a humanoid robot than to those of a human. Again, we have to note that most of these studies have been restricted to facial expressions (rather than multimodal emotional expressions), which in any case require complex technology for both perception and signalling.
In fact, human-human interactions are very complex: they are generally symmetric, develop from birth, and are based on the use of language. Thus it is extremely hard for robot designers to mimic human behaviour successfully, and robots mimicking human behaviour will never be perfect “humans”. This can lead to mismatches between appearance and behaviour, which means that users’ prior, often unrealistic, expectations, mostly based on appearance, will be violated, resulting in a feeling of unease. (How neurotypicals “see” neurodiverse people?) This is the well-known phenomenon of the “uncanny valley”: agents that are very, but not totally, similar to humans induce aversion in people.
Other companion robots are designed with a more pet-like appearance (e.g. PLEO, AIBO, PARO) and have also been used as alternatives to animal-assisted therapy. However, the behavioural repertoire of these pet-like social robots is very limited, and for this reason they have proved less successful than animal pets at maintaining humans’ interest in the long term.
Continued at the link above.